\section{Direct proof of Barb\u{a}lat's Lemma}\label{SEC-I}
In 1959, Barb\u{a}lat formalized the intuitive principle that a function whose integral up to infinity exists, and whose oscillation is bounded, needs to be small at infinity.
\smallskip
\begin{thm}\label{THM-1}(Barb\u{a}lat's Lemma \cite[p.~269]{Barbalat}) Suppose that $f\colon[0,\infty)\rightarrow\mathbb{R}$ is uniformly continuous and that $\lim_{t\rightarrow\infty}\int_0^tf(\tau)\mathrm{d}\tau$ exists. Then $\lim_{t\rightarrow\infty}f(t)=0$ holds.
\end{thm}
\smallskip
Barb\u{a}lat's original proof, as well as its reproductions in textbooks, e.g., Khalil \cite[p.~192]{K}, Popov \cite[p.~211]{Popov} and Slotine, Li \cite[p.~124]{SL}, proceeds by contradiction. Our first aim in this note is to give a direct proof of Theorem \ref{THM-1} which also reveals the essence of the statement and enables us to generalize Barb\u{a}lat's Lemma to vector-valued functions without difficulty. This is based on the following two lemmas.
\smallskip
\begin{lem}\label{LEM-1} Let $f:[0,\infty)\to\mathbb{R}$ be a continuous function. We define $S\colon[0,\infty)\rightarrow\mathbb{R}$ via $S(t):=\sup_{s\geqslant{}t}\bigl|\int_t^sf(\tau)\mathrm{d}\tau\bigr|$ and put $\underline{\omega}(a,b,\delta):=\sup\{|f(x)-f(y)|\; | \; x,\,y\in (a,b) \text{ with } |x-y|\leqslant \delta\}$ for $a<b\leqslant\infty$ and $\delta\geqslant 0$. Then $|f(t)|\leqslant S(t)^{1/2}+\underline{\omega}(t,t+S(t)^{1/2},S(t)^{1/2})$ holds for $t\geqslant 0$.
\end{lem}
\smallskip
\begin{proof} Fix $t\geqslant0$. If $S(t)=0$, then $f(t)=0$ and the assertion follows immediately. Also, if $S(t)=\infty$, then there is nothing to prove. Suppose therefore $0<S(t)<\infty$, put $s=S(t)^{1/2}>0$, and compute
\begin{eqnarray*}
|f(t)|&=&\frac 1s\Bigl|\int_t^{t+s} f(t) \mathrm{d}\tau\Bigr|\\
&\leqslant& \frac 1s\Bigl|\int_t^{t+s} f(\tau) \mathrm{d}\tau\Bigr|+\frac 1s\Bigl|\int_t^{t+s} \bigl(f(t) -f(\tau)\bigr)\mathrm{d}\tau\Bigr|\\
&\leqslant& \frac 1s\Bigl|\int_t^{t+s} f(\tau) \mathrm{d}\tau\Bigr|+\underline{\omega}(t,t+s,s)\\
&\leqslant& \frac 1s S(t)+{\underline{\omega}}(t,t+s,s) \\
&=& \; S(t)^{1/2}+\underline{\omega}(t,t+S(t)^{1/2},S(t)^{1/2})
\end{eqnarray*}
as desired.
\end{proof}
\smallskip
We recall that given a uniformly continuous function $f\colon[0,\infty)\rightarrow\mathbb{R}$, a function $\omega\colon[0,\infty)\rightarrow\mathbb{R}$ is said to be a \emph{modulus of continuity} for $f$ if $\lim_{t\rightarrow 0}\omega(t)=\omega(0)=0$ and $|f(t)-f(\tau)|\leqslant\omega(|t-\tau|)$ for all $t$, $\tau\in[0,\infty)$.
\smallskip
\begin{lem}\label{LEM-2} Let $f:[0,\infty)\to\mathbb{R}$ be uniformly continuous and let $\omega$ be a modulus of continuity for $f$. Consider $S(t)=\sup_{s\geqslant{}t}|\int_t^sf(\tau)\mathrm{d}\tau|$. Then we have $|f(t)|\leqslant S(t)^{1/2}+\omega(S(t)^{1/2})$ for all $t\geqslant0$.
\end{lem}
\begin{proof} Let $f$ and $\omega$ be given. We define $\underline{\omega}(t):=\underline{\omega}(0,\infty,t)$, where $\underline{\omega}(0,\infty,\cdot)$ is the function defined in Lemma \ref{LEM-1}. Then, $\underline{\omega}(t)\leqslant\omega(t)$ holds for all $t\geqslant0$. Since $\underline{\omega}(a,b,t)\leqslant\underline{\omega}(0,\infty,t)$ holds for all $a<b\leqslant\infty$, we can use Lemma \ref{LEM-1} to obtain
$$
|f(t)|\leqslant S(t)^{1/2}+{\underline{\omega}}(t,t+S(t)^{1/2},S(t)^{1/2})\leqslant S(t)^{1/2}+\underline{\omega}(S(t)^{1/2})\leqslant S(t)^{1/2}+\omega(S(t)^{1/2})
$$
as desired.
\end{proof}
We now give the direct proof of Barb\u{a}lat's Lemma.
\medskip
\emph{Proof of Theorem \ref{THM-1}.} Let $f$ be uniformly continuous and $\omega$ be a modulus of continuity for $f$. By the Cauchy criterion for improper integrals, we obtain (with a direct proof!) that $S(t)\rightarrow0$ for $t\rightarrow\infty$. Therefore, Lemma \ref{LEM-2} yields $|f(t)|\leqslant S(t)^{1/2}+\omega(S(t)^{1/2})\rightarrow 0$ for $t\rightarrow\infty$.\hfill\qed
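\smallskip

For completeness, the Cauchy criterion step can be spelled out as follows: writing $F(t):=\int_0^t f(\tau)\mathrm{d}\tau$, the existence of $\lim_{t\rightarrow\infty}F(t)$ means that for every $\varepsilon>0$ there is $T\geqslant0$ with
$$
\Bigl|\int_t^s f(\tau)\mathrm{d}\tau\Bigr|=|F(s)-F(t)|\leqslant\varepsilon\quad\text{for all } s\geqslant t\geqslant T,
$$
and hence $S(t)\leqslant\varepsilon$ for all $t\geqslant T$.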
\medskip
Since Lemmas \ref{LEM-1} and \ref{LEM-2} remain true for functions with values in a Banach space $E$ (the proofs are verbatim the same), we immediately obtain the next generalization.
\smallskip
\begin{thm}\label{THM-12} Let $E$ be a Banach space and suppose that $f\colon[0,\infty)\rightarrow E$ is uniformly continuous such that $\lim_{t\rightarrow\infty}\int_0^tf(\tau)\mathrm{d}\tau$ exists. Then $\lim_{t\rightarrow\infty}f(t)=0$ holds.\hfill\qed
\end{thm}
\section{Barb\u{a}lat's Lemma in a different context}\label{SEC-II}
We pointed out that all proofs of Barb\u{a}lat's Lemma given in the relevant textbooks are indirect. On the other hand, several \textquotedblleft{}alternative versions\textquotedblright{} have recently appeared in the literature whose proofs, or hints for a proof, are based on direct estimates. Tao \cite[Lemma 1]{Tao} states that $\lim_{t\rightarrow\infty}f(t)=0$ holds whenever $f\in L^2(0,\infty)$ and $f'\in L^{\infty}(0,\infty)$. Desoer and Vidyasagar \cite[Ex.~1 on p.~237]{DV} indicate that it is enough to require that $f$ and $f'$ are in $L^{2}(0,\infty)$, and Teel \cite[Fact 4]{Teel} notes that in the latter the Lebesgue exponent $2$ can be replaced by $p\in [1,\infty)$. Here, $f'$ can be interpreted in the sense of distributions or, equivalently, in the sense that $f$ is absolutely continuous with the almost everywhere existing derivative being essentially bounded.
\smallskip
Indeed, the three results extend the classical statement that, for $1\leqslant{}p<\infty$, all functions in the Sobolev space $W^{1,p}(0,\infty)$ tend to zero for $t\to \infty$ (see, e.g., Brezis \cite[Corollary 8.9]{Brezis}) to the \textquotedblleft{}mixed Sobolev space\textquotedblright{}
$$
W^{1,p,q}(0,\infty)=\bigl\{f\:|\:f\in L^p(0,\infty) \text{ and } f'\in L^q(0,\infty)\bigr\}
$$
for $p=2$, $q=\infty$, and $p=q\in[1,\infty)$, respectively. Our first aim in this section is to prove the following common generalization of the results of Tao \cite{Tao}, Desoer and Vidyasagar \cite{DV}, and Teel \cite{Teel}.
\medskip
\begin{thm}\label{THM-2} Let $p\in [1,\infty)$ and $q\in(1,\infty]$. Every function $f\in W^{1,p,q}(0,\infty)$ tends to zero at infinity.
\end{thm}
\smallskip
Notice that our proof below shows that all three alternatives are immediate consequences of the original Barb\u{a}lat Lemma. The latter cannot be applied \emph{a priori}, but it becomes applicable in view of the following lemma.
\smallskip
\begin{lem}\label{LEM-3} Let $p\in [1,\infty)$ and $q\in (1,\infty]$ be arbitrary. A function $f\in W^{1,p,q}(0,\infty)$ is bounded and uniformly continuous. More precisely, $f$ is $\frac{q-1}q$-H\"older continuous if $q<\infty$ and Lipschitz-continuous if $q=\infty$.
\end{lem}
\begin{proof} For the proof let $q'\in[1,\infty)$ be such that $1/q'+1/q=1$ holds, where we use the convention $1/\infty=0$. In particular, we read $(q-1)/q=1/q'=1$ if $q=\infty$. By our assumptions we have
$$
f(y)-f(x)=\int_x^y f'(s)\mathrm{d} s
$$
for almost every $x,y\in [0,\infty)$. Thus, $f$ can be identified with a continuous function satisfying
$$
|f(x)-f(y)|\leqslant \Bigl|\int_x^y f'(s)\mathrm{d} s\Bigr|\leqslant |x-y|^{1/q'}\|f'\|_q.
$$
Here we used H\"older's inequality for $q,q'$ with $1/q+1/q'=1$, so that $f$ is indeed H\"older continuous with exponent $1/q'=(q-1)/q$. Let $r:=p(q-1)/q$. Then we have $r>0$ and
$$
\tfrac{\mathrm{d}}{\mathrm{d} x}|f(x)|^{r+1}=(r+1)|f(x)|^{r-1}f(x)f'(x)
$$
holds. For $x\in [0,\infty)$ we thus obtain
$$
|f(x)|^{r+1}=|f(0)|^{r+1}+(r+1)\int_0^x|f(s)|^{r-1}f(s)f'(s)\,\mathrm{d} s
\leqslant |f(0)|^{r+1}+(r+1)\|f\|_p^{r}\cdot \|f'\|_q,
$$
where the last step is again an application of H\"older's inequality for $q,q'$ with $1/q+1/q'=1$.
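Spelled out, since $rq'=\frac{p(q-1)}{q}\cdot\frac{q}{q-1}=p$, H\"older's inequality gives
$$
\Bigl|\int_0^x|f(s)|^{r-1}f(s)f'(s)\,\mathrm{d} s\Bigr|\leqslant\Bigl(\int_0^{\infty}|f(s)|^{rq'}\,\mathrm{d} s\Bigr)^{1/q'}\|f'\|_q=\|f\|_p^{p/q'}\|f'\|_q=\|f\|_p^{r}\|f'\|_q.
$$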
\end{proof}
\smallskip
Lemma \ref{LEM-3} enables us to employ the original Barb\u{a}lat Lemma to prove Theorem \ref{THM-2}.
\bigskip
\emph{Proof of Theorem \ref{THM-2}.} By Lemma \ref{LEM-3} the function $f$ is bounded and uniformly continuous, hence so is $|f|^p$. Indeed, we have by the mean-value theorem
\begin{equation}\label{eq:meanvalue}
\bigl||f(x)|^p-|f(y)|^p\bigr|\leqslant \sup_{|t|\leqslant \|f\|_\infty}p |t|^{p-1}\cdot |f(x)-f(y)|=p\|f\|_\infty^{p-1}|f(x)-f(y)|.
\end{equation}
This inequality implies the asserted uniform continuity of $|f|^p$. Since $f\in L^p(0,\infty)$ and $|f|^p\geqslant0$, the limit $\lim_{t\rightarrow\infty}\int_0^t|f(\tau)|^p\mathrm{d}\tau$ exists. We can thus apply Barb\u{a}lat's Lemma to $|f|^p$ and obtain $|f(t)|^p\rightarrow0$, i.e., $f(t)\rightarrow0$ for $t\rightarrow\infty$.\hfill\qed
\bigskip
Tao's formulation \cite[3rd paragraph on p.~698]{Tao} might create the erroneous impression that his alternative \cite[Lemma 1]{Tao} uses a weaker assumption than Barb\u{a}lat's Lemma, but has the same conclusion. Our second aim in this section is to illustrate that this is not the case. The following example of a function which satisfies the assumptions of Barb\u{a}lat's Lemma, but not those of \cite[Lemma 1]{Tao}, combines two effects: the difference between the Lebesgue and the improper Riemann integral on the one hand, and that between uniform continuity and having a bounded derivative on the other.
\smallskip
\begin{ex}\label{EXp} Let $f(x)=0$ for $x\in[0,2)$ and $f(x)=(-1)^nf_n(x)$ for $x\in[n,n+1)$ with $n\geqslant2$ and
$$
f_n(x)=\begin{cases}
\:(x-n)^{\frac{1}{2}},&x\in[n,n+\frac{1}{2}n^{-\frac{1}{3}}),\\
\:(n+n^{-\frac{1}{3}}-x)^{\frac{1}{2}},&x\in[n+\frac{1}{2}n^{-\frac{1}{3}},n+n^{-\frac{1}{3}}),\\
\;0,&x\in[n+n^{-\frac{1}{3}},n+1),
\end{cases}
$$
i.e., $f$ looks as follows.
\begin{center}
\input{TIKZ-Pic}
\end{center}
\noindent{}Straightforward computations show that $\lim_{t\rightarrow\infty}\int_0^tf(\tau)\mathrm{d}\tau=\frac{\sqrt{2}}{3}\sum_{n=2}^{\infty}(-1)^n\frac{1}{\sqrt{n}}$ exists and that $f$ is uniformly continuous. On the other hand $f\not\in L^2(0,\infty)$ and $f'\not\in L^{\infty}(a,\infty)$ for any $a\geqslant0$.
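Indeed, by symmetry each bump contributes
$$
\int_n^{n+1}|f(x)|\,\mathrm{d} x=2\int_0^{\frac{1}{2}n^{-1/3}}u^{1/2}\,\mathrm{d} u=\frac{4}{3}\Bigl(\frac{n^{-1/3}}{2}\Bigr)^{3/2}=\frac{\sqrt{2}}{3}\frac{1}{\sqrt{n}},
$$
so that the limit above exists by the alternating series criterion, while $\int_n^{n+1}f(x)^2\,\mathrm{d} x=2\int_0^{\frac{1}{2}n^{-1/3}}u\,\mathrm{d} u=\frac{1}{4}n^{-2/3}$ is not summable over $n$, whence $f\not\in L^2(0,\infty)$.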
\end{ex}
\medskip
For given $1\leqslant p<\infty$, the function $f$ in Example \ref{EXp} can easily be modified such that $f\not\in L^p(a,\infty)$ holds. With some additional work, it is also possible to construct a single $f$ such that $f\not\in L^p(a,\infty)$ is true for all $1\leqslant p<\infty$. Finally, in all these cases $f$ can also be changed into a $C^{\infty}$--function; $|f'|$ is then bounded on any finite interval, but unbounded at infinity. Thus, for every $p\in[1,\infty)$ and $q\in(1,\infty]$ there is a function $f$ to which Barb\u{a}lat's Lemma can be applied, but which fails the assumptions of Theorem \ref{THM-2}.
\medskip
Concerning the other direction, it is easy to construct a function $f$ that satisfies the condition of \cite[Lemma 1]{Tao}, i.e., $f\in L^2(0,\infty)$ and $f'\in L^{\infty}(0,\infty)$, but whose improper Riemann integral $\int_0^{\infty}f(t) \mathrm{d} t$ does not exist. Thus, Tao's alternative is incomparable with the original Barb\u{a}lat Lemma. The same is true for the statement of Theorem \ref{THM-2} whenever $p\not=1$. Only in the case $p=1$ is Theorem \ref{THM-2} a special case of Barb\u{a}lat's Lemma. Indeed, let $f\in W^{1,1,q}$ with $q\in(1,\infty]$. By Lemma \ref{LEM-3}, $f$ is uniformly continuous and bounded. Therefore, $f\in L^1(0,\infty)$ implies that the improper Riemann integral $\int_0^{\infty}f(t)\mathrm{d} t$ exists.
\bigskip
\section{Rates of convergence}\label{SEC-III}
In this section we use the methods of Section \ref{SEC-I} to derive estimates for the speed of decay in the previous versions of Barb\u{a}lat's Lemma so as to make the statement quantitative. We start with a modification of Theorem \ref{THM-1}.
\smallskip
\begin{thm}\label{THM-3} Suppose that for $f\colon[0,\infty)\rightarrow\mathbb{R}$ the limit $\lim_{t\rightarrow\infty}\int_0^tf(\tau)\mathrm{d}\tau$ exists, and that $f$ is H\"older continuous of order $\alpha\in(0,1]$, i.e., $\omega(\tau)=c\tau^{\alpha}$ is a modulus of continuity for a constant $c\geqslant0$. Then we have $|f(t)|\leqslant(1+c)S(t)^{\alpha/(1+\alpha)}$ for $t\geqslant0$, where $S(t)=\sup_{s\geqslant{}t}|\int_t^sf(\tau)\mathrm{d}\tau|$.
\end{thm}
\begin{proof} It is enough to repeat the proof of Lemmas \ref{LEM-1} and \ref{LEM-2} but with $s=S(t)^{1/(1+\alpha)}$.
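Indeed, with this choice of $s$ the computation in Lemma \ref{LEM-1} gives
$$
|f(t)|\leqslant\frac{S(t)}{s}+\omega(s)=S(t)^{\alpha/(1+\alpha)}+c\,S(t)^{\alpha/(1+\alpha)}=(1+c)S(t)^{\alpha/(1+\alpha)}.
$$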
\end{proof}
\smallskip
Next, we specialize to the situation of Theorem \ref{THM-2}.
\smallskip
\begin{cor}\label{COR-1} Let $f\in W^{1,p,q}(0,\infty)$ for some $p\in [1,\infty)$ and $q\in (1,\infty]$. Then we have $|f(t)|^p\leqslant(1+p\|f\|^{p-1}_{\infty}\|f'\|_{q})S(t)^{(q-1)/(2q-1)}$ for $t\geqslant0$, where $S(t)=\int_t^{\infty}|f(\tau)|^p\mathrm{d}\tau$.
\end{cor}
\begin{proof} By Lemma \ref{LEM-3}, under our assumptions, $f$ is bounded, and $\tau\mapsto \|f'\|_{q}\tau^{(q-1)/q}$ is a modulus of continuity for $f$. As in equation \eqref{eq:meanvalue} in the proof of Theorem \ref{THM-2}, we conclude that $\omega(\tau)=p\|f\|_\infty^{p-1}\|f'\|_{q}\tau^{(q-1)/q}$ is a modulus of continuity of $|f|^p$. It is therefore enough to apply Theorem \ref{THM-3} to the latter function with $\alpha={(q-1)}/{q}$ to obtain the assertion, because then $\alpha/(1+\alpha)=(q-1)/(2q-1)$.
\end{proof}
\smallskip
We point out that Corollary \ref{COR-1} contains the quantitative versions of the results of Tao \cite[Lemma 1]{Tao}, Desoer and Vidyasagar \cite[Ex.~1 on p.~237]{DV}, and Teel \cite[Fact 4]{Teel}.
\bigskip
We finish this short note by illustrating how Barb\u{a}lat's Lemma and its variations are typically used in the control theoretic literature, for instance, to obtain (asymptotic) stability of solutions of ordinary differential equations. The next example is taken from Hou, Duan, Guo \cite[Example 3.1]{HDG}.
\smallskip
\begin{ex}\label{EX} Consider the system
$$
\begin{cases}
\; \dot{e}(t) = -e(t)+\theta(t)\omega(t),\\
\; \dot{\theta}(t) = -e(t)\omega(t),
\end{cases}
$$
with $\omega$ bounded and continuous. For a solution $(e,\theta)$ of this system we define $V=e^2+\theta^2$ and compute $\dot{V}(t)=-2e^2(t)\leqslant0$ for every $t\geqslant0$. Thus, $V$, and therefore $e$ and $\theta$, are bounded. We put $f:=2e^2$ and in view of the positivity of $V$ and $f$ we obtain
$$
\int_0^Rf(t)\mathrm{d}{}t \, = \int_0^R-\dot{V}(t)\mathrm{d}{}t = -V(R)+V(0)\leqslant V(0),
$$
so that $f\in L^1(0,\infty)$. Since $\dot{f}(t)=4\dot{e}(t)e(t)=-4e(t)^2+4\theta(t)e(t)\omega(t)$ holds and $\theta,e,\omega$ are bounded, we conclude $\dot{f}\in L^{\infty}(0,\infty)$. That is, $f\in W^{1,1,\infty}(0,\infty)$, and by Theorem \ref{THM-2} it follows that $f(t)\rightarrow0$ for $t\rightarrow\infty$. From the definition of $f$ we finally obtain that also $e(t)\rightarrow0$ for $t\rightarrow\infty$.
\end{ex}
\medskip
{
\footnotesize
{\sc Acknowledgements. }The authors would like to thank the referees and the editor for their careful work and their valuable comments.
}
\bibliographystyle{amsplain}
\section{Introduction}
A complex, or dusty, plasma is a suspension of nanometer to micrometer size particles of solid matter in a gas-discharge plasma \cite{Ivlev_book}. The particles become charged due to the collection of electrons and ions from the plasma and, through their interaction and external confinement, self-organize into liquid-like or solid-like structures. Complex plasmas are excellent model systems which allow the study of various plasma-specific and generic phenomena at the level of individual particles. Their advantages include the possibility of directly and relatively easily observing virtually undamped dynamics of the particles in real time. Due to the low neutral gas damping rate, the particle inertia becomes important, which distinguishes complex plasmas from such model systems as colloidal suspensions \cite{Ivlev_book}. Complex plasmas were successfully used to study transport phenomena \cite{Nunomura:2006,Nosenko:04PRL_visc,Gavrikov:2005,Hartmann:2011,Nosenko:08PRL_therm}, phase transitions \cite{Thomas:1996,Nosenko:2009,Melzer:2013}, as well as waves and instabilities \cite{Nunomura:2002,Piel:2002,Zhdanov:2003,Avinash:2003,Couedel:2010}.
Recently, the scope of complex plasmas as model systems was extended to include active matter systems. Active matter is a collection of active particles, each of which can extract energy from their environment and convert it into directed motion, thereby driving the whole system far from equilibrium \cite{Elgeti:2015,Bechinger:2016}. Active matter has some intriguing physical properties and potentially a number of applications in catalysis, chemical sensing, and health care. Recent research trends in the field of active matter include systems consisting of active particles with inertia \cite{Loewen:2020,Caprini:2021} and mixtures of active and regular (passive) particles \cite{Hauke:2020}. Complex plasmas are ideally suited for experiments in both of these subfields.
A particle in a complex plasma can become active, that is, achieve self-propulsion, via several mechanisms. First, nonreciprocal interparticle interaction due to the plasma wake effect \cite{Melzer:1996,Lampe:2000,Ivlev:2015} can, under certain conditions, lead to particle self-propulsion. Examples include channeling particles \cite{Du:2014} and spinning particle pairs (``torsions'') \cite{Nosenko:2015}. Second, a particle can be driven by a phoretic force, e.g., the photophoretic force from the illumination laser \cite{Du:2017,Wieben:2018}. Third, in extreme cases a particle can be propelled by the ``rocket force'' due to the ablation and removal of the particle material by powerful laser irradiation \cite{Krasheninnikov:2010,Nosenko:2010}. It was recently shown that polymer microspheres coated on one side with a thin layer of platinum (the so-called Janus particles (JP) \cite{Walther:2013,Bechinger:2016}) become active when suspended in a radio-frequency (rf) argon plasma \cite{Nosenko:2020PRR_JP}. The emphasis was on the behaviour of single JPs, which were shown to be {\it circle swimmers} moving along characteristic looped trajectories. In Ref.~\cite{Arkar:2021}, single polymer microparticles partially coated with iron and suspended in an rf argon plasma were shown to move along complex jerky trajectories.
In this paper, we experimentally study a single-layer complex plasma composed of a mixture of regular melamine formaldehyde (MF) microspheres and active Janus particles similar to those used in Ref.~\cite{Nosenko:2020PRR_JP}. We find stark differences in structure and dynamics compared with a similar single layer consisting only of regular MF microspheres.
\section{Experimental method}
The experiments described in this paper were carried out in a modified Gaseous Electronics Conference (GEC) radio-frequency (rf) reference cell \cite{Couedel:2022}. Plasma was produced by a capacitively coupled rf discharge in argon at $13.56$~MHz. The gas pressure was $p_{\rm Ar}=1.66$~Pa, the rf discharge power was $P_{\rm rf}=20$~W \cite{Hargis:1994}.
The particle sample used in our experiments was a mixture of regular melamine formaldehyde (MF) microspheres and active Janus particles. It was prepared using the method described in Ref.~\cite{Nosenko:2020PRR_JP}. MF microspheres \cite{microparticles} with a diameter of $9.19\pm0.09$~$\mu$m and mass $6.14\times10^{-13}$~kg were dispersed in isopropanol. A drop of the suspension was placed on a Si wafer and allowed to dry up. Unlike in Ref.~\cite{Nosenko:2020PRR_JP}, where the particles formed a monolayer on the wafer surface, here the amount of particles was larger and they formed a thicker layer. The wafer with particles was then placed in a sputter deposition machine and coated with a $\approx 10$~nm layer of platinum. Only the particles in the upper layer received the coating (on one side). Given this deposition technique, only a small fraction of all particles (a few percent) received metal coating resulting in their conversion into Janus particles. All particles were then separated from the wafer by a sharp blade.
The particles were injected into the plasma from a manual dispenser mounted in the upper flange. They were suspended in the plasma sheath above the lower rf electrode, where they formed a single layer. After injection, the particle suspension was cleaned using a standard procedure \cite{Du:2012}, where the discharge power was gradually reduced until larger particles and agglomerations of particles fell down to the rf electrode; the discharge power was then restored. The neutral-gas damping rate for the particles (which has the physical meaning of the collision frequency of the neutral gas atoms with particles) was calculated using the Epstein expression \cite{Epstein:1924} $\gamma=\delta N_gm_g\overline{v}_g(\rho_pr_p)^{-1}$, where $N_g$, $m_g$, and $\overline{v}_g$ are the number density, mass, and mean thermal speed of gas atoms and $\rho_p$, $r_p$ are the mass density and radius of the particles, respectively. With leading coefficient $\delta=1.39$ for the diffuse reflection of gas atoms from the particle, this gave $\gamma=2.12~{\rm s}^{-1}$.
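As a rough cross-check, the quoted damping rate can be reproduced numerically; in the following minimal sketch the gas temperature $T=293$~K is our assumption, while all other numbers are taken from the text above.
\begin{verbatim}
# Minimal cross-check of the Epstein damping rate (assumes T = 293 K;
# the remaining numbers are quoted in the text).
import math

k_B   = 1.380649e-23            # Boltzmann constant, J/K
T     = 293.0                   # assumed gas temperature, K
p     = 1.66                    # argon pressure, Pa
m_g   = 39.948 * 1.66054e-27    # argon atom mass, kg
r_p   = 0.5 * 9.19e-6           # particle radius, m
m_p   = 6.14e-13                # particle mass, kg
rho_p = m_p / (4.0/3.0 * math.pi * r_p**3)          # mass density

N_g = p / (k_B * T)                                  # gas number density
v_g = math.sqrt(8.0 * k_B * T / (math.pi * m_g))     # mean thermal speed
delta = 1.39                                         # diffuse reflection

gamma = delta * N_g * m_g * v_g / (rho_p * r_p)
print(gamma)  # ~2.1 1/s, close to the quoted 2.12 1/s; the small
              # difference comes from the assumed gas temperature
\end{verbatim}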
The particles were illuminated by a horizontal laser sheet which had a Gaussian profile in the vertical direction with a standard deviation $\sigma\simeq75~\mu$m (corresponding to a full width at half maximum of $175~\mu$m) \cite{Couedel:2010}. The illumination laser had a wavelength of $\lambda=660$~nm and a variable output power of up to $100$~mW. The particles were imaged from above using the Photron FASTCAM mini WX100 camera equipped with the Nikon Micro-Nikkor $105$-mm lens fitted with a matched bandpass interference filter. This $4$-Megapixel, monochrome, $12$-bit per pixel camera has onboard memory of $16$~GB, which allowed recording of up to $2726$ frames. The camera frame rate was set to $125$ frames per second, resulting in the maximum recording duration of $21.8$~s. Experimental data were analysed in the following way. In each frame, the particle coordinates were calculated with subpixel resolution using a moment method \cite{SPIT}. Then individual particles were traced from frame to frame and their velocities were calculated from their displacements between frames.
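For illustration, a minimal sketch of such a moment-method (intensity-weighted centroid) locator is given below; the threshold value and function names are illustrative placeholders, not the parameters of the actual analysis code of Ref.~\cite{SPIT}.
\begin{verbatim}
# Sketch of a moment-method particle locator; threshold is illustrative.
import numpy as np
from scipy.ndimage import label, find_objects

def locate_particles(frame, threshold=20):
    """Return subpixel (x, y) positions of bright blobs in one frame."""
    labels, n = label(frame > threshold)
    positions = []
    for sl in find_objects(labels):
        patch = np.where(frame[sl] > threshold, frame[sl], 0).astype(float)
        total = patch.sum()
        ys, xs = np.mgrid[sl[0], sl[1]]          # global pixel coordinates
        positions.append((float((xs * patch).sum() / total),
                          float((ys * patch).sum() / total)))
    return positions
\end{verbatim}
Velocities then follow from the displacements of matched positions between consecutive frames, divided by the frame interval.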
\section{Results and discussion}
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{fig1}
\caption {\label {Fig_traj} Trajectories of (a) regular MF and (b) mixed Janus particles suspended as a single layer in rf plasma sheath, during $0.36$~s (time is color-coded from purple to red). Panels (c) and (d) show respective pair correlation functions $g(r)$. The argon pressure was $p_{\rm Ar}=1.66$~Pa, the rf discharge power was $P_{\rm rf}=20$~W, the illumination laser power was $P_{\rm laser}=14$~mW. The data were recorded after injecting the particles and cleaning the suspension.
}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.86\columnwidth]{fig2}
\caption {\label {Fig_MSD} (a) Mean-squared displacement MSD(t) of the mixed Janus particles (upper curves) and regular MF particles (lower curves). (b),(c) Dynamical exponent $\alpha$ for the mixed Janus particles and regular MF particles, respectively. The left (right) error bars are for $t<2$~s ($t>2$~s). The illumination laser power was $P_{\rm laser}=14$~mW (blue down triangles and lines), $76$~mW (green circles and lines), and $99$~mW (red up triangles and lines). The inertial delay time $\tau_m=\gamma^{-1}$ is shown by vertical arrows. The argon pressure was $p_{\rm Ar}=1.66$~Pa, the rf discharge power was $P_{\rm rf}=20$~W.
}
\end{figure}
As expected, the regular MF particles formed a two-dimensional triangular lattice (plasma crystal) in our experimental conditions (argon pressure $p_{\rm Ar}=1.66$~Pa, rf discharge power $P_{\rm rf}=20$~W), see Fig.~\ref{Fig_traj}(a). The lattice consisted of $\approx1900$ particles and was highly ordered, as evidenced by the pair correlation function for particles $g(r)$ with the high first and split second peaks, see Fig.~\ref{Fig_traj}(c). The lattice contained, however, a few energetic particles that locally disturbed it, which is similar to previous experiments, e.g. Refs.~\cite{Du:2017,Nosenko:2006}. These were most probably particles with slightly different sizes or irregular shapes, possibly damaged particles (called ``abnormal'' particles in Ref.~\cite{Du:2017}). They moved intermittently with high kinetic energy \cite{Nosenko:2006}; their trajectories were often irregular \cite{Du:2017}. They transferred a part of their kinetic energy to the neighboring particles via collisions. There were up to $10$ such ``active centers'' in the plasma crystal, which were distributed non-homogeneously (probably due to the electric field inhomogeneity). Otherwise, the lattice was stable, in particular with respect to the mode-coupling instability (MCI) \cite{Couedel:2010}.
In contrast, the suspension of mixed Janus particles did not crystallize. Instead, the particles moved around energetically, colliding with each other, see Fig.~\ref{Fig_traj}(b). This is similar to the experiment of Ref.~\cite{Nosenko:2020PRR_JP}, where Janus particles moved around in characteristic curly trajectories and did not form an ordered lattice. The pair correlation function for mixed Janus particles $g(r)$, see Fig.~\ref{Fig_traj}(d), indicates a highly disordered (gas-like) state. These observations suggest that there must be some kind of energy input or external drive on the Janus particles. In Ref.~\cite{Nosenko:2020PRR_JP}, it was found that the individual Janus particles behave as {\it circle swimmers} when illuminated by a laser. The driving force on the Janus particles was identified as the photophoretic force caused by the illumination laser. In our experiment, the mixture of active Janus particles and passive MF particles appears rather homogeneous. The energy influx into the particle system due to the activity of Janus particles is effectively redistributed to the passive MF particles due to the interparticle interactions. Therefore, distinguishing between the two particle sorts is not straightforward.
To characterize the apparently random particle motion, we used their mean-squared displacement,
\begin{equation}\label{MSD}
{\rm MSD}(t)=\langle|{\bf r}_i(t)-{\bf r}_i(t_0)|^2\rangle,
\end{equation}
where ${\bf r}_i(t)$ is the position of the $i$-th particle at time $t$. The brackets denote the average over $10$ different times $t_0$ separated by $0.4$~s (i.e., $\simeq\gamma^{-1}$) and over all particles. The ${\rm MSD}(t)$ was measured for the whole particle suspension, which in the case of regular MF particles included ordered crystalline domains and also energetic particles. The ${\rm MSD}(t)$ of the mixed Janus particles as well as regular MF particles are shown in Fig.~\ref{Fig_MSD}(a). The mean-squared displacement of the mixed Janus particles scales as ${\rm MSD}(t)\propto t^{\alpha}$ with $\alpha=2$ at small times $t\ll\gamma^{-1}$ indicating ballistic motion. Here, the particle inertia is important due to the low gas damping rate. At later times, the dynamical exponent defined as \cite{Wang:2018,Hanes:2012}
\begin{equation}\label{alpha}
\alpha(t)=\frac{{\rm d\,ln}({\rm MSD}(t))}{{\rm d\,ln}(t)}
\end{equation}
declines, finally reaching the value of $\approx 0.56$, see Fig.~\ref{Fig_MSD}(b). ($\alpha$ was further smoothed using a Stineman function; the error bars were calculated as the r.m.s. residuals of the respective fits.) We ascribe this to the combined effect of the Janus particle propensity to move in circular trajectories and external confinement. Note that no superballistic regime ($\alpha>2$) was observed.
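For illustration, the evaluation of ${\rm MSD}(t)$ and $\alpha(t)$ from tracked trajectories can be sketched as follows; the trajectory array layout and the lag range are our assumptions.
\begin{verbatim}
# Sketch of the MSD and dynamical-exponent evaluation; `traj` is
# assumed to have shape (n_frames, n_particles, 2).
import numpy as np

def msd(traj, origins, max_lag):
    """Average |r_i(t0+lag) - r_i(t0)|^2 over particles and origins."""
    out = np.zeros(max_lag)
    for lag in range(1, max_lag):
        d = traj[[t0 + lag for t0 in origins]] - traj[origins]
        out[lag] = np.mean(np.sum(d**2, axis=-1))
    return out

dt = 1.0 / 125.0                                  # frame interval, s
origins = [int(k * 0.4 / dt) for k in range(10)]  # t0 spaced by 0.4 s
m = msd(traj, origins, max_lag=1000)
t = dt * np.arange(1, 1000)
alpha = np.gradient(np.log(m[1:]), np.log(t))     # d ln MSD / d ln t
\end{verbatim}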
The dynamical exponent $\alpha(t)$ for the regular MF particles is shown in Fig.~\ref{Fig_MSD}(c). It starts from a value below $2$, reaches $\simeq2$, and then declines. At later times, $\alpha(t)$ depends strongly on the illumination laser power $P_{\rm laser}$, varying from $\simeq2$ for the lowest $P_{\rm laser}$ to $\simeq1$ for the highest $P_{\rm laser}$. This behavior is due to the intermittent effect of the energetic particles. Since the energetic particles have much higher velocities than the particles in the crystalline areas, the MSD of the whole particle suspension is dominated by a few energetic particles and their immediate neighbors. Since the effect of the energetic particles is intermittent, the observed trends in MSD cannot be reliably extrapolated to longer times.
To compare our results with theory and computer simulations, we note that our experimental system is an ensemble of regular (passive) particles with an addition of a small amount of active Langevin particles (self-propelled particles with inertia \cite{Loewen:2020,Caprini:2021,Sprenger:2021}) placed in a weak horizontal and strong vertical confinement (this situation is known as active doping \cite{Bechinger:2016}). The particles are thus confined to a plane with little out-of-plane motion (resulting in a quasi-2D system), but with 3D rotations. They interact with each other via a screened-Coulomb potential which is approximated reasonably well by the Yukawa potential \cite{Kompaneets_PhD}.
The simplest model of active particles with inertia is the active Ornstein-Uhlenbeck model \cite{Loewen:2022,Caprini:2021}. For a single active particle in a harmonic confinement it predicts that, in the general case, the dynamical
exponent in MSD takes on the following values as time progresses \cite{Loewen:2022}: $\alpha(t)=2,4,3,1,0$. In our single-layer system of mixed Janus particles, $\alpha(t)$ declined from $2$ at small times to $\approx 0.56$ at the maximum recorded time of $17.8$~s, see Fig.~\ref{Fig_MSD}(b). The apparent lack of superballistic regime ($\alpha>2$) is probably explained by the small value of the dimensionless particle mass $\tilde{m}=\tau_m/\tau=(\gamma\tau)^{-1}$ in the present experimental conditions, due to the relatively large activity persistence time $\tau$, see Fig.~3(a) in Ref.~\cite{Loewen:2022}. At later times, $\alpha(t)$ showed signs of stabilization around the value of $\approx 0.56$, see Fig.~\ref{Fig_MSD}(b). Ref.~\cite{Loewen:2022} predicted an oscillatory regime at later times, where a particle would oscillate in the harmonic confinement; this regime is characterized by constant MSD and $\alpha=0$. In the present experiments, the oscillatory regime was probably suppressed by the collisions between particles, which interrupted the particle oscillations.
The crowded-environment situation \cite{Bechinger:2016} provides the opposite limiting case for our experimental system. It is instructive to compare our results to the 2D random Lorentz gas model, where active particles move ballistically or diffuse through a random lattice of fixed repelling obstacles. This model provides an idealized description of dynamical systems consisting of two sorts of particles, one fast and the other slow. In Ref.~\cite{Morin:2017}, the dynamics of self-propelled colloidal rollers placed on a flat substrate with randomly distributed stationary repelling microposts was studied experimentally. The colloidal rollers were $4.8$-$\mu$m diameter polystyrene beads immersed in hexadecane and made motile by Quincke electrorotation; the microposts were produced by conventional UV lithography. Subdiffusive motion of particles was observed at later times and the dynamical exponent $\alpha$ declined with increasing density of the obstacles down to values $\alpha\approx0$, indicating a localization transition. In Ref.~\cite{Voigtmann:2009}, a molecular-dynamics simulation of the equimolar binary mixture of purely repulsive soft-interacting spheres with the size ratio of $0.35$ and equal masses was performed. Smaller particles were observed to diffuse faster than the larger ones; they showed subdiffusive motion with long-time $\alpha=0.2$--$1$ and a localization transition at a higher density. In the present experiment, we observed subdiffusive motion of the mixed Janus particles with $\alpha=0.56\pm0.27$, but no localization transition. We ascribe this behaviour to the Janus particle propensity to move in circular trajectories \cite{Nosenko:2020PRR_JP} and to their external confinement.
The magnitude of ${\rm MSD}(t)$ of the mixed Janus particles is $1-2$ orders of magnitude larger than that of the regular MF particles, which indicates larger displacements and larger average speeds of the Janus particles, apparently due to their activity. For the regular MF particles, the magnitude of ${\rm MSD}(t)$ gets larger for higher illumination laser power. Unexpectedly, for the Janus particles the dependence is opposite: the ${\rm MSD}(t)$ magnitude gets smaller for higher laser power. We will address this finding in more detail below.
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\columnwidth]{fig3}
\caption {\label {Fig_T} Mean kinetic energy $\langle E_k \rangle$ of (a) mixed Janus particles and (b) regular MF particles as a function of the illumination laser power measured at three different times: (blue squares) after injecting the particles and cleaning the suspension, (purple circles) after a waiting time of $40$~min, (green diamonds) after a waiting time of $180$~min. The power-law fits are shown to highlight the trends.
}
\end{figure}
To clarify the effect of the illumination laser power on the particle motion, we measured the mean kinetic energy of the particles $\langle E_k\rangle$ (averaged over all particles in the suspension) as a function of the illumination laser power at three different times: after injecting the particles and cleaning the suspension, after a waiting time of $40$~min, and after a waiting time of $180$~min. The results are shown in Fig.~\ref{Fig_T}. The mean kinetic energy $\langle E_k\rangle$ of mixed Janus particles indeed {\it decreases} when the laser power is increased at all measurement times, see Fig.~\ref{Fig_T}(a). On the other hand, $\langle E_k\rangle$ increases for longer waiting times. The mean kinetic energy of the regular MF particles increases with the laser power for the waiting times of $0$~min and $40$~min, see Fig.~\ref{Fig_T}(b). The increase of $\langle E_k \rangle$ is due to the increased total area of ``active centers''. In the crystalline and active areas themselves, the $\langle E_k \rangle$ does not in fact depend much on the laser power. For the longest waiting time of $180$~min, however, $\langle E_k \rangle$ decreases when the laser power is increased, see Fig.~\ref{Fig_T}(b).
The observed dependence of $\langle E_k\rangle$ on the illumination laser power can be explained in the following way. Two oppositely directed driving forces act on a Janus particle \cite{Nosenko:2020PRR_JP}: asymmetric ion drag force $F_i$ and the photophoretic force $F_{\rm ph}$. The ion drag force arises due to the momentum transfer from the ion flow in the vicinity of the particle and includes the collection and orbital parts \cite{Khrapak:2002,Nosenko:2007PoP}. We speculate that it is asymmetric for a Janus particle due to different electric properties of its Pt-coated and uncoated halves. Here, the component of $F_i$ {\it parallel} to the Janus particle axis of symmetry is considered, which comes on top of the main part of $F_i$, which is directed toward the rf electrode. The photophoretic force acts on a nonuniform object immersed in a neutral gas when their temperatures are not equal \cite{Mackowski:1989,Horvath:2014,Du:2017}. The Pt-coated side of a Janus particle is expected to have a higher temperature than the other side. Indeed, in the experiments of Ref.~\cite{Nosenko:2010}, MF particles with thin Pd coating absorbed the laser radiation more effectively than regular MF particles. Based on the observed dependence of $\langle E_k\rangle$ on $P_{\rm laser}$, we conjecture that $F_i>F_{\rm ph}$ for the Janus particles in the experimental conditions of the present work. Therefore, when the laser power is increased and $F_{\rm ph}$ becomes larger, the net force declines. For the regular MF particles, the driving force reduces to the photophoretic force only \cite{footnote,Soong:2010}, leading to the weakly rising dependence of $\langle E_k\rangle$ on the illumination laser power for the waiting times of $0$~min and $40$~min, see Fig.~\ref{Fig_T}(b). For the longest waiting time of $180$~min, however, the dependence becomes falling similarly to the mixed Janus particles. Whether the proposed model or some other particle propulsion mechanism (e.g., preferential plasma sputtering of one of the particle sides) is at work can be verified in future experiments, for example by looking at the scaling of the particle self-propulsion force with the gas pressure, discharge power, and the particle size.
The temporal variation trend of $\langle E_k\rangle$ may be due to the {\it in-situ} plasma deposition of a non-uniform patchy metal film on the surface of suspended particles similar to that observed in Ref.~\cite{Kononov:2021}. The acquired coating would in fact produce an imperfect Janus particle \cite{Kononov:2021}, leading to the falling dependence of $\langle E_k\rangle$ on the illumination laser power for the MF particles for the longest waiting time of $180$~min. For both regular MF and mixed Janus particles, $\langle E_k\rangle$ increased for longer waiting times, presumably because all suspended particles received more {\it in-situ} metal coating with time. Another reason for the gradually rising $\langle E_k\rangle$ may be the continuing damage of the particle surface due to plasma sputtering.
To summarize, a system consisting of micron-size melamine formaldehyde microspheres, some of which were coated on one side with a thin layer of platinum (Janus particles), suspended as a single layer in an rf argon plasma, was studied experimentally. Due to the self-propulsion of the Janus particles, the system became active and did not form an ordered lattice, unlike a similar system without inclusion of Janus particles in the same experimental conditions. The mean kinetic energy of the particles depended on the illumination laser power and the time the particles spent suspended in plasma. The dynamical exponent $\alpha$ of the particle mean-squared displacement declined from $\alpha=2$ at small times indicating ballistic motion to $\alpha=0.56\pm0.27$ at longer times due to the combined effect of the Janus particle propensity to move in circular trajectories and external confinement. No superballistic regime with $\alpha>2$ was observed. The experimental findings can be explained by an interplay between two oppositely directed driving forces acting on a Janus particle: the asymmetric ion drag force and the photophoretic force.
\section{Acknowledgments}
Thomas Voigtmann is acknowledged for carefully reading the manuscript and helpful discussions.
\section{Author declarations}
The author has no conflicts of interest to disclose.
\section{Data availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction \label{sec:intro}}
Density functional theory (DFT) is one of the most successful
approaches to describe the ground state properties of electronic systems
\cite{PhysRev.136.B864,PhysRev.140.A1133}.
A strong point of DFT is its computational feasibility: it often offers
the best compromise between accuracy and computational cost.
This feasibility derives from the mapping of the fully-interacting problem
onto a non-interacting problem, based on the Hohenberg-Kohn theorems.
In this mapping, all the complexities of the many-body problem
are absorbed in the unknown exchange-correlation functional.
Therefore, the accuracy of the DFT calculations essentially depends
on the approximation of the exchange-correlation functionals.
In the past decades, various exchange-correlation functionals have been developed
to realize accurate description of the electronic ground state
such as local density approximations (LDA) \cite{PhysRevB.23.5048,PhysRevB.45.13244},
generalized gradient approximations (GGA) \cite{PhysRevA.49.2421,PhysRevLett.77.3865},
meta-GGA \cite{PhysRevLett.91.146401,doi:10.1063/1.476577,PhysRevLett.115.036402},
and hybrid functionals \cite{doi:10.1063/1.464913,doi:10.1063/1.472933,doi:10.1063/1.1564060}.
Furthermore, detailed studies clarified
several exact properties of the exact exchange-correlation functional and potential,
such as the asymptotic behavior
of the potential in Coulombic systems
\cite{PhysRevA.30.2745,PhysRevA.29.2322,PhysRevA.49.2421}
and spiky features in molecular dissociation
\cite{PhysRevA.40.4190,Gritsenko1997,doi:10.1063/1.3271392,PhysRevB.93.155146,Burke2006exactconditions}.
However, the systematic improvement of the exchange-correlation functionals and potentials
is still a non-trivial task due to
the highly nonlinear and nonlocal nature of the density functional
\cite{doi:10.1063/1.1390175,Medvedev49}.
In contrast to DFT, wavefunction theories \cite{szabo1989book} such as the configuration
interaction and the coupled-cluster method
offer a formal possibility to straightforwardly improve
the accuracy up to the exact solution
by increasing the size of the search space,
although the required computational costs can easily become infeasible.
Furthermore, in different fields,
various methods have been developed for an accurate description of the electronic
structure, such as the \textit{GW} method
\cite{Aryasetiawan_1998,AULBUR20001,RevModPhys.74.601,vanSetten2013} and
the quantum Monte Carlo method
\cite{RevModPhys.73.33,PhysRevB.84.245117,PhysRevB.98.075122,PhysRevLett.120.025701}.
However, such accurate approaches require huge computational costs
and they become infeasible for large systems.
Because DFT and wavefunction theory are built on different
principles,
the two approaches often have complementary strengths and weaknesses.
For example, DFT with conventional approximations tends to capture well
the \textit{dynamical} correlation effect \cite{doi:10.1063/1.474864},
which requires many Slater determinants for an accurate description of many-body
wavefunctions, while DFT
suffers from dramatic failures in describing
the \textit{static} correlation \cite{Cohen792,PhysRevLett.102.066403,doi:10.1063/1.3271392,Ess2011},
which requires a few (but more than one) Slater determinants
for an accurate description of many-body wavefunctions.
On the other hand, the wavefunction theory
can naturally capture the static correlation effect through the configuration interaction.
Combining the two approaches, one may be able to realize an overall accurate
theoretical description of quantum many-body systems.
The hybrid functional approach \cite{doi:10.1063/1.464913,doi:10.1063/1.472933}
is one successful example,
and it includes a part of nonlocal Fock-like exchange interaction based on
the wavefunction theory.
Importantly, the hybrid functional approach lies beyond the conventional theoretical framework
of DFT; it is based on the generalized Kohn-Sham scheme \cite{PhysRevB.53.3764},
where interacting model systems are introduced to take into account a part
of the electron-electron interaction but the systems are still represented
by a single Slater determinant.
Thus, the generalized Kohn-Sham systems are still described by fully-uncorrelated
wavefunctions.
Another successful example is a combination of
the configuration interaction method and DFT
\cite{GRIMME1996128,Borowski1998,doi:10.1063/1.479866,GRAFENSTEIN1998593,FILATOV1998689,doi:10.1063/1.4804607}
where the Kohn-Sham orbitals based on DFT are used to construct the configuration
interaction approach.
Since the Kohn-Sham orbitals and their orbital energies
may take into account a substantial amount of the dynamical correlation effect,
the configuration interaction based on the DFT approach drastically
improves the description.
In this work, we explore yet another possibility to accurately and efficiently
combine DFT and wavefunction
theory, introducing a mapping between a fully-interacting many-body system
and an \textit{effectively-interacting} Kohn-Sham system instead of
the non-interacting Kohn-Sham system.
Fromager \textit{et al.} have achieved the connection between a fully-interacting system
and a fictitious interacting system in terms of the range separation in
multiconfigurational density functional theory \cite{doi:10.1063/1.2566459}.
Effectively-interacting systems are generally not described by
a single Slater determinant but require a correlated wavefunction.
Therefore, the wavefunction theory may be naturally introduced within
the formal theoretical framework of DFT. By optimally choosing the effective interaction,
the electronic correlation may be efficiently described by
the combination of DFT and wavefunction theory.
To explore in more detail the properties of effectively-interacting Kohn-Sham systems,
we investigate what the exact exchange-correlation potential looks like for
such a correlated reference Kohn-Sham system, employing one-dimensional
two-electron systems.
The paper is organized as follows: In Sec.~\ref{sec:methods}
we first introduce a mapping between a fully-interacting system and
an effectively-interacting Kohn-Sham system.
Then, we describe effective interactions that are used in this work.
In Sec.~\ref{sec:numerics}, numerical methods to compute exact exchange-correlation
potentials of effectively-interacting Kohn-Sham systems are described.
In Sec.~\ref{sec:results},
we investigate exact exchange-correlation potentials
and properties of effectively-interacting Kohn-Sham systems
for the one-dimensional helium atom, hydrogen molecules,
and a heteronuclear diatomic molecule.
Finally, our findings are summarized in Sec.~\ref{sec:summary}.
\section{Methods \label{sec:methods}}
We introduce a mapping from a fully interacting many-body system to
an effectively-interacting Kohn-Sham system. For this purpose, we first
consider a fully-interacting $N$-electron system.
The ground state of
the electronic system is described by the following Schr\"odinger equation
\be
\hat H \Psi(\bm r_1,\cdots, \bm r_N) = E_{gs} \Psi(\bm r_1,\cdots,\bm r_N),
\label{eq:exact-se}
\end{eqnarray}
with the non-relativistic many-body Hamiltonian given by
(atomic units used unless stated otherwise)
\be
\hat H = \sum_{i=1}^N \left [
-\frac{1}{2}\bm{\nabla}^2_i +v_{ext}(\bm{r}_i) \right ]
+\frac{1}{2}\sum_{i\neq j} W(\bm r_i-\bm r_j),
\label{eq:many-body-ham}
\end{eqnarray}
where $v_{ext}(\bm r)$ is a one-body external potential and $W(\bm r)$ is
the electron-electron interaction.
Then, we introduce an effectively-interacting Kohn-Sham system that
satisfies the following interacting Kohn-Sham equation
\be
\hat H_{KS} \Phi_{KS}(\bm r_1,\cdots,\bm r_N) = E_{KS} \Phi_{KS}(\bm r_1,\cdots,\bm r_N)
\label{eq:ks-eq}
\end{eqnarray}
with the interacting Kohn-Sham Hamiltonian
\be
\hat H_{KS} = \sum_{i=1}^N \left [
-\frac{1}{2}\bm \nabla^2_i +v_{KS}(\bm r_i) \right ]
+\frac{1}{2}\sum_{i\neq j} W_{eff}(\bm r_i-\bm r_j),\nonumber \\
\label{eq:ks-ham}
\end{eqnarray}
where $v_{KS}(\bm r)$ is the one-body Kohn-Sham potential, and $W_{eff}(\bm r)$
is an arbitrary effective interaction.
Here, the Kohn-Sham potential $v_{KS}(\bm r)$ is introduced
such that the ground-state density of the fully-interacting system
is reproduced by that of the Kohn-Sham system
\be
\rho(\bm r)&=&N\int d\bm r_2\cdots d\bm r_N |\Psi(\bm r,\bm r_2,\cdots,\bm r_N)|^2 \nonumber \\
&=&N\int d\bm r_2\cdots d\bm r_N |\Phi_{KS}(\bm r,\bm r_2,\cdots,\bm r_N)|^2.
\end{eqnarray}
Note that here the Kohn-Sham potential $v_{KS}(\bm r)$ is not uniquely determined
since a constant shift of the potential does not affect the ground state density.
Furthermore, the reconstruction of the corresponding energy functional or
exchange-correlation functional is not trivial.
Levy and Zahariev proposed a simple way to evaluate the exact
interacting ground state energy as a sum of orbital energies based on
the arbitrary constant term in the Kohn-Sham potential
\cite{PhysRevLett.113.113002}.
In this work, we extend this idea to interacting Kohn-Sham systems
and set the arbitrary constant in the Kohn-Sham potential $v_{KS}(\bm r)$
such that the ground state energy of the Kohn-Sham system $E_{KS}$
is identical to that of the fully-interacting system $E_{gs}$.
The Hohenberg-Kohn theorems offer one-to-one correspondence
between the ground state density $\rho(\bm r)$ and
the one-body external potential $v_{ext}(\bm r)$ once the interaction $W(\bm r)$
is given \cite{PhysRev.136.B864}.
Because the Hohenberg-Kohn theorems are not limited to the Coulomb interaction
but generally applicable to arbitrary interactions,
there is one-to-one correspondence between the one-body potential $v_{KS}(\bm r)$
and the corresponding ground state electron density once the effective interaction
$W_{eff}(\bm r)$ is defined.
Thus, once both interactions, $W(\bm r)$ and $W_{eff}(\bm r)$, are given,
there is a one-to-one correspondence between $v_{ext}(\bm r)$ and $v_{KS}(\bm r)$
through the common ground state density $\rho(\bm r)$,
resulting in a one-to-one correspondence between the fully-interacting many-body system
and the effectively-interacting Kohn-Sham system.
If the effective interaction is set to zero, $W_{eff}(\bm r)=0$,
this one-to-one mapping is reduced to the conventional Kohn-Sham mapping
between the interacting system and the corresponding
non-interacting Kohn-Sham system.
For later convenience, we decompose the Kohn-Sham potential $v_{KS}(\bm r)$
into the external potential $v_{ext}(\bm r)$,
the residual Hartree potential $v_{R-H}(\bm r)$
and the exchange-correlation potential $v_{xc}(\bm r)$.
Here, we define the residual Hartree potential as
\be
v_{R-H}(\bm r) = \int d\bm r' \Delta W_{res} (\bm r-\bm r')\rho(\bm r'),
\nonumber \\
\label{eq:vh}
\end{eqnarray}
where $\Delta W_{res}(\bm r)$ is the residual interaction defined
as $\Delta W_{res}(\bm r) = W(\bm r)-W_{eff}(\bm r)$.
Note that, if the effective interaction is set to zero, $W_{eff}(\bm r)=0$,
the residual Hartree potential $v_{R-H}(\bm r)$
is reduced to the conventional Hartree potential,
$v_{H}(\bm r) = \int d\bm r' W(\bm r-\bm r')\rho(\bm r')$.
Furthermore, if $W_{eff}(\bm r)=W(\bm r)$,
the residual Hartree potential vanishes.
Therefore, the residual Hartree potential, Eq.~(\ref{eq:vh}),
can be seen as a natural extension of the conventional Hartree potential.
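On a uniform real-space grid, the residual Hartree potential of Eq.~(\ref{eq:vh}) reduces to a discrete convolution. A minimal sketch (with the grid, density, and interaction functions passed in as assumptions) reads:
\begin{verbatim}
# Sketch: residual Hartree potential, Eq. (vh), as a discrete
# convolution on a uniform grid; W and W_eff are passed as callables.
import numpy as np

def v_residual_hartree(rho, x, dx, W, W_eff):
    dW = W(x[:, None] - x[None, :]) - W_eff(x[:, None] - x[None, :])
    return dW @ rho * dx   # v(x_i) = sum_j dW(x_i - x_j) rho(x_j) dx
\end{verbatim}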
Then, we define the exchange-correlation potential
as the rest of the Kohn-Sham potential
\be
v_{xc}(\bm r):= v_{KS}(\bm r)-v_{ext}(\bm r)-v_{R-H}(\bm r).
\label{eq:vxc}
\end{eqnarray}
The exchange-correlation potential in Eq.~(\ref{eq:vxc})
is reduced to the conventional exchange-correlation potential
of the non-interacting Kohn-Sham system if the effective interaction $W_{eff}(\bm r)$
is set to zero. Thus, Eq.~(\ref{eq:vxc}) can be also seen
as an extension of the conventional exchange-correlation potential.
In this work, we investigate effectively-interacting
Kohn-Sham systems and their exact exchange-correlation potentials
to explore a possibility to combine DFT and wavefunction
theory. To practically elucidate the effectively-interacting Kohn-Sham systems,
we consider the example of one-dimensional spin-$1/2$ two-electron systems.
As the electron-electron interaction, we employ the one-dimensional
soft Coulomb potential
\be
W(x) = \frac{1}{\sqrt{x^2 + \sigma^2}},
\end{eqnarray}
where $\sigma$ is a softening parameter, which is set to $0.5$~a.u.
As reference (interacting and non-interacting) Kohn-Sham systems,
we consider three kinds of systems in this work.
The first one is a \textit{non-interacting} Kohn-Sham system,
where the effective interaction $W_{eff}(x)$ is set to zero.
Note that this choice is nothing but the conventional non-interacting Kohn-Sham system
or standard DFT \cite{PhysRev.140.A1133}.
The second one is a \textit{$1/4$-interacting} system,
where the effective interaction is set to the quarter of
the bare soft Coulomb interaction, $W_{eff}(x)=W(x)/4$.
The third one is a \textit{long-range interacting} system,
where
$W_{eff}(x)=\mathrm{erf}(\sqrt{x^2+\sigma^2}/a_0)W(x)$ with
the Bohr radius $a_0$.
Thus, the short-range part of the Coulomb interaction is ignored
in the long-range interacting system.
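For reference, the bare and effective interactions introduced above can be written compactly as follows (a minimal sketch in atomic units; the function names are ours):
\begin{verbatim}
# Soft-Coulomb interaction and the three effective interactions
# (atomic units; sigma = 0.5 a.u., a0 = 1 a.u.).
import numpy as np
from scipy.special import erf

SIGMA, A0 = 0.5, 1.0

def W(x):                      # bare soft-Coulomb interaction
    return 1.0 / np.sqrt(x**2 + SIGMA**2)

def W_eff_noninteracting(x):   # conventional Kohn-Sham reference
    return np.zeros_like(np.asarray(x, dtype=float))

def W_eff_quarter(x):          # 1/4-interacting reference
    return 0.25 * W(x)

def W_eff_longrange(x):        # long-range interacting reference
    return erf(np.sqrt(x**2 + SIGMA**2) / A0) * W(x)
\end{verbatim}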
\section{Numerical details \label{sec:numerics}}
Here, we describe numerical procedures to compute
the exact exchange-correlation potentials of the Kohn-Sham systems.
For the non-interacting two-particle Kohn-Sham system,
one can easily compute the exact Kohn-Sham potential as
\be
v_{KS}(x) = \frac{1}{2} \frac{1}{\sqrt{\rho(x)}}\frac{\partial^2}{\partial x^2}
\sqrt{\rho(x)}+E_{gs}.
\end{eqnarray}
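Numerically, this expression can be evaluated directly from a density sampled on a uniform grid, e.g., by second-order finite differences. The following minimal sketch assumes $\rho>0$ on interior grid points; the additive constant is fixed by the energy convention described above.
\begin{verbatim}
# Sketch: exact Kohn-Sham potential of the non-interacting two-electron
# system from the density; valid on interior points where rho > 0
# (the wrap-around boundary values from np.roll should be discarded).
import numpy as np

def v_ks_noninteracting(rho, dx, e_shift=0.0):
    s = np.sqrt(rho)
    d2s = (np.roll(s, -1) - 2.0*s + np.roll(s, 1)) / dx**2
    return 0.5 * d2s / s + e_shift
\end{verbatim}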
For interacting Kohn-Sham systems, we employ the following
iterative scheme to obtain the exact Kohn-Sham potential
that reproduces the target ground-state density $\rho^{target}(x)$:
\begin{description}
\item[(i)] Start from an initial guess of the Kohn-Sham potential, $v^{(i=0)}_{KS}(x)$.
In this work, we employ the external potential $v_{ext}(x)$
as the initial guess.
\item[(ii)]
Compute the ground-state density $\rho^{(i)}(x)$,
solving the interacting Kohn-Sham equation, Eq.~(\ref{eq:ks-eq}),
with the trial potential $v^{(i)}_{KS}(x)$.
\item[(iii)] Then, evaluate the deviation from the target density by
\be
r^{(i)}_{error} = \int dx \left| \rho^{(i)}(x)-\rho^{target}(x) \right|^2.
\end{eqnarray}
\item[(iv)] If the error $r^{(i)}_{error}$ is larger than a given threshold $\eta$,
the trial Kohn-Sham potential is updated by the following formula
\be
v^{(i+1)}_{KS}(x)=v^{(i)}_{KS}(x)+\alpha_i \frac{\rho^{(i)}(x)-\rho^{target}(x)}
{\rho^{(i)}(x)+\rho^{target}(x)+\epsilon},
\end{eqnarray}
where $\alpha_i$ is a mixing parameter, and $\epsilon$ is a small positive number.
In this work, we set $\epsilon$ to $10^{-8}$~a.u., and choose
the mixing parameter $\alpha_i$ such that
the error of the updated density $r^{(i+1)}_{error}$ in the next iteration
becomes smaller than that of the previous iteration $r^{(i)}_{error}$.
In practical calculations, we set the initial value of $\alpha_0=0.1$.
In the iterative procedure, we employ an acceptance-rejection procedure to determine
the value of $\alpha_i$. At each iteration, the initial guess of $\alpha_i$ is evaluated
as $\alpha_i = 1.1\alpha_{i-1}$. If the computed error $r^{(i+1)}_{error}$ is smaller
than the previous error $r^{(i)}_{error}$, the guess value of $\alpha_i$ is accepted.
If the computed error is not smaller than the previous error, the guess value of
$\alpha_i$ is rejected, and the new guess value is set
as a half of the previous guess value. This procedure is recursively repeated
until $r^{(i+1)}_{error}$ becomes smaller than $r^{(i)}_{error}$.
\item[(v)] Repeat the above iterative procedure until the error $r^{(i)}_{error}$
becomes smaller than a given threshold $\eta$.
In this work, we set the threshold $\eta$ to $10^{-9}$ a.u.
(A compact sketch of steps (i)--(v) is given right after this list.)
\end{description}
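The sketch below summarizes the procedure; the helper \texttt{ground\_state\_density}, which would return the ground-state density of Eq.~(\ref{eq:ks-eq}) for a trial potential (e.g., via sparse diagonalization of the two-electron Hamiltonian on a real-space grid), is a hypothetical placeholder.
\begin{verbatim}
# Sketch of the iterative inversion, steps (i)-(v);
# `ground_state_density` is a hypothetical helper.
import numpy as np

def invert_density(rho_target, v_ext, dx, eta=1e-9, alpha0=0.1):
    v_ks, alpha = v_ext.copy(), alpha0          # (i) initial guess
    rho = ground_state_density(v_ks)            # (ii) trial density
    err = np.sum(np.abs(rho - rho_target)**2) * dx   # (iii) error
    while err > eta:                            # (v) convergence test
        alpha_try = 1.1 * alpha
        while True:   # acceptance-rejection choice of the mixing step
            v_try = v_ks + alpha_try * (rho - rho_target) / (
                rho + rho_target + 1e-8)        # (iv) potential update
            rho_try = ground_state_density(v_try)
            err_try = np.sum(np.abs(rho_try - rho_target)**2) * dx
            if err_try < err:
                break
            alpha_try *= 0.5                    # reject, halve the step
        v_ks, rho, err, alpha = v_try, rho_try, err_try, alpha_try
    return v_ks
\end{verbatim}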
Note that similar iterative approaches to compute the exact Kohn-Sham potential have been
proposed \cite{PhysRevA.47.R1591,Nielsen2018}.
However, any stable approach can be employed in this work
because the problems are small, and accurate results can be obtained
at reasonable computational cost.
\section{Results \label{sec:results}}
\subsection{1D Helium atom \label{subsec:1d-he}}
First, we investigate the effectively-interacting Kohn-Sham systems
of the one-dimensional helium atom. To describe the helium atom, we employ the following
external potential
\be
v^{He}_{ext}(x)=-\frac{2}{\sqrt{x^2+\sigma^2}}.
\end{eqnarray}
Figure~\ref{fig:he_vxc}~(a) shows the exact ground-state density of the helium atom,
obtained by numerically solving the two-dimensional Schr\"odinger equation
with the conjugate gradient method.
Figure~\ref{fig:he_vxc}~(b) shows
the exact exchange-correlation potentials with respect to the three different interacting
Kohn-Sham systems. The different effective interactions
provide substantially different exchange-correlation potentials.
One may see that the long-range tail of the exact exchange-correlation potential
of the long-range interacting system (blue-dashed line) decays
faster than those of the non-interacting and
the $1/4$-interacting Kohn-Sham systems.
To clearly compare the exchange-correlation potentials
of the different Kohn-Sham systems,
Figure~\ref{fig:he_vxc}~(c) shows
the exchange-correlation force field, $-dv_{xc}(x)/dx$.
One can clearly see that the exchange-correlation force field
of the long-range interacting
Kohn-Sham system decays much faster than those of
the other systems. Furthermore, the long-range interacting
system has the smallest force field over the whole spatial region.
These features indicate a possibility to control
the asymptotic behaviors of the exchange-correlation potential by
choosing a suitable effective interaction used to set the reference interacting
Kohn-Sham system.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{exact_xc_potential.pdf}
\caption{\label{fig:he_vxc}
(a) The exact ground-state density $\rho(x)$ of the one-dimensional
helium atom. (b) The exact exchange-correlation potentials for
the effectively-interacting Kohn-Sham systems: the red-solid line shows
the results for the non-interacting system, $W_{eff}(x)=0$;
the green-dashed line shows that for the $1/4$-interacting system,
$W_{eff}(x)=1/(4\sqrt{x^2+\sigma^2})$;
and the blue-dotted line shows that for the long-range interacting system,
$W_{eff}(x)=\mathrm{erf}(\sqrt{x^2+\sigma^2}/a_0)/\sqrt{x^2+\sigma^2}$.
(c) The force field $-dv_{xc}(x)/dx$ of each exchange-correlation
potential in the panel (b).
}
\end{figure}
Then, we investigate details of the asymptotic behavior
of the exchange-correlation potentials.
Figure~\ref{fig:he_vxc_ahympt} shows the long-range behavior
of the exact exchange-correlation potentials shown in Fig.~\ref{fig:he_vxc}~(b).
In Fig.~\ref{fig:he_vxc_ahympt}~(a), $v_{xc}(x)$ of the non-interacting system
is shown as the red line, while an analytic curve,
$-1/x+c$, is described by the red circles. Thus,
$v_{xc}(x)$ of the non-interacting system has $-1/x$ asymptotics.
This behavior is known as the asymptotic behavior of the exchange-correlation
potential of a Coulombic system \cite{PhysRevA.49.2421}.
In Fig.~\ref{fig:he_vxc_ahympt}~(b), $v_{xc}(x)$ of the $1/4$-interacting system
is shown as the green line, while the analytic curve,
$-\frac{3}{4x}+c$, is described by the green circles.
Therefore, $v_{xc}(x)$ of the $1/4$-interacting system
has $-\frac{3}{4x}$ asymptotics, which is different from
the conventional asymptotics of the exchange-correlation potential.
The $-\frac{3}{4x}$ asymptotics corresponds to the asymptotic behavior
of the residual interaction of the $1/4$-interacting system,
$\Delta W_{res}(x)=W(x)-W_{eff}(x)=3/(4\sqrt{x^2+\sigma^2})$.
Therefore, the asymptotic behavior of $v_{xc}(x)$
of an effectively-interacting Kohn-Sham system can be
characterized by that of the residual interaction.
In Fig.~\ref{fig:he_vxc_ahympt}~(c), $v_{xc}(x)$ of the long-range-interacting system
is shown as the blue line, while the analytical curve,
$a\times \exp[-bx]+c$, is described by the blue circles.
One sees that the asymptotic decay of $v_{xc}$ is slower than
that of the residual interaction,
$\Delta W_{res}(x)=W(x)-W_{eff}(x)=\mathrm{erfc}(\sqrt{x^2+\sigma^2}/a_0)/\sqrt{x^2+\sigma^2}$.
This fact can be understood from the asymptotic behavior of
the electron density: the residual interaction decays
so fast that the asymptotics of $v_{xc}(x)$ is dominated
by that of the electron density, in order to
correctly remove the self-interaction error due to
the local density.
This feature may indicate that
the local density approximation for the exchange-correlation
functional may work well for the long-range interacting Kohn-Sham system,
because the exchange-correlation potential vanishes once
the density vanishes, and the nonlocality in the exchange-correlation
potential is expected to be milder than
that of the non-interacting Kohn-Sham system.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{exact_xc_potential_asym.pdf}
\caption{\label{fig:he_vxc_ahympt}
Asymptotic behaviors of the exact exchange-correlation
potentials for (a) the non-interacting, (b) $1/4$-interacting
and (c) the long-range-interacting Kohn-Sham systems.}
\end{figure}
Due to the effective interaction $W_{eff}(x)$,
the ground-state wavefunction of the interacting Kohn-Sham Hamiltonian
in Eq.~(\ref{eq:ks-eq}) is not generally described by a single
Slater determinant, but instead requires multiple Slater determinants
for an accurate description.
Therefore, in the interacting Kohn-Sham system,
the electronic correlation can be treated separately:
A part of the correlation can be treated as the explicit
correlation in the many-body Kohn-Sham wavefunction,
while the other part can be implicitly treated
in the exchange-correlation potential, $v_{xc}(x)$, or the corresponding
density functional $E_{xc}[\rho(x)]$.
This separation makes it possible to combine DFT
and wavefunction theory in order to efficiently
describe the electronic correlation. For example, the static correlation
may be efficiently treated by the multi-configuration interaction
as the explicit correlation in the correlated wavefunction, while
the dynamical correlation may be treated by DFT through
the exchange-correlation functional.
To explore such a possibility,
we next investigate the explicit correlation
in the effectively-interacting Kohn-Sham wavefunctions.
For this purpose, we consider the eigendecomposition of the one-body
reduced density matrix
\be
\rho_{1RDM}(x,x')&=&2\int dx_2 \Psi_{KS}(x,x_2)\Psi^*_{KS}(x',x_2) \nonumber \\
&=&\sum_{i}n_i \phi_i(x)\phi^*_i(x'),
\end{eqnarray}
where eigenvectors $\phi_i(x)$ are known as natural orbitals \cite{PhysRev.97.1474},
and eigenvalues $n_i$ are seen as their occupations.
Here, we assume that
the occupation numbers are arranged in decreasing order
$n_i\ge n_{i+1}$.
Since we treat the spatial part of
the spin-singlet wavefunction, $\Psi_{KS}(x,x')$,
natural occupations are restricted as $0\le n_i \le 2$.
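As a minimal numerical sketch (assuming the two-electron spatial wavefunction is sampled as a hypothetical matrix \texttt{Psi}, with \texttt{Psi[i,j]}$\,=\Psi_{KS}(x_i,x_j)$ on a uniform grid of spacing \texttt{dx}), the natural occupations can be obtained by diagonalizing the discretized one-body reduced density matrix:
\begin{verbatim}
import numpy as np

def natural_occupations(Psi, dx):
    # rho(x, x') = 2 * int dx2 Psi(x, x2) * conj(Psi(x', x2))
    rho_1rdm = 2.0 * Psi @ Psi.conj().T * dx
    # eigenvalues of the discretized Hermitian kernel are the n_i;
    # the extra dx accounts for the integral in the eigenproblem
    n = np.linalg.eigvalsh(rho_1rdm * dx)
    return np.sort(n)[::-1]   # decreasing order, 0 <= n_i <= 2
\end{verbatim}
For a normalized wavefunction, the occupations returned by this sketch sum to the number of electrons, i.e., two.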
The occupation distribution is deeply linked to the number of
configurations required to accurately describe a correlated
electronic wavefunction
\cite{PhysRev.97.1474,szabo1989book}.
If only a small number of orbitals have substantial occupations
and the others have negligible occupations,
the correlated system can be described by
a small number of Slater determinants.
In contrast, if a larger number of orbitals have substantial
occupations, a larger number of configurations are required.
Electronic correlation in the first case is called
\textit{static} correlation, while that in the latter case
is called \textit{dynamical} correlation.
Figure~\ref{fig:natural_occ} shows the distribution of the occupations $n_i$
of the correlated ground-state wavefunction of the one-dimensional
helium atom. The red circles show the occupation distribution
of the fully-interacting system, the green squares show that of
the $1/4$-interacting Kohn-Sham system,
and the blue triangles show that of the long-range interacting Kohn-Sham system.
As seen from the figure, the occupations of the higher natural orbitals
are significantly suppressed in the effectively-interacting Kohn-Sham systems,
compared with the fully-interacting problem.
Thus, a large part of the electronic correlation is transferred from
the explicit correlation in the many-body wavefunction to the exchange-correlation
functional.
Furthermore, in Fig.~\ref{fig:natural_occ},
the long-range interacting system shows a particularly rapid decrease of the occupations
of the higher natural orbitals.
This fact indicates that, in the long-range interacting Kohn-Sham system,
the significant part of the dynamical correlation is transferred to
the exchange-correlation functional, while the static correlation
is treated as the explicit correlation in the reference correlated wavefunction.
Importantly, we note that the long-range interacting Kohn-Sham system has the weakest
exchange-correlation potential with the fastest asymptotic decay
among all the investigated Kohn-Sham systems (see Fig.~\ref{fig:he_vxc}).
Therefore, this fact clearly demonstrates that
a proper choice of the effective interaction
makes it possible to efficiently decompose the electronic correlation
into the exchange-correlation functional part
and the explicit wavefunction correlation part, resulting in
an efficient description of the electronic correlation
based on a combination of DFT and wavefunction theory.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{natual_occ.pdf}
\caption{\label{fig:natural_occ}
Distribution of the natural occupations over a few tens of
natural orbitals. The results of the fully-interacting system
(red circle), the $1/4$-interacting system (green square),
and the long-range interacting system (blue triangle) are shown.
}
\end{figure}
\subsection{1D H$_2$ molecule \label{subsec:1d-h2}}
Next, we investigate the effectively-interacting Kohn-Sham systems
of the one-dimensional hydrogen molecule.
To describe the hydrogen molecule,
we employ the following external potential
\be
v^{H_2}_{ext}(x)=-\frac{1}{\sqrt{(x-\frac{R}{2})^2+\sigma^2}}
-\frac{1}{\sqrt{(x+\frac{R}{2})^2+\sigma^2}}, \nonumber \\
\end{eqnarray}
where $R$ is the distance between the hydrogen atoms.
In this work, we set $R$ to $5$~a.u.
Figure~\ref{fig:vxc_h2}~(a) shows the exact ground state
electron density of the one-dimensional hydrogen molecule,
obtained by numerically solving the two-dimensional Schr\"odinger equation
with the conjugate gradient method.
At the midpoint between the two hydrogen atoms,
the electron density becomes close to zero.
Figure~\ref{fig:vxc_h2}~(b) shows the exact exchange-correlation potentials
of the non-interacting and the effectively-interacting Kohn-Sham systems.
The exact exchange-correlation potential of the non-interacting system
(red-solid line) shows a spiky structure at the center.
This spiky structure was investigated in previous works
on strongly correlated systems
\cite{doi:10.1063/1.3271392,PhysRevB.93.155146,PhysRevA.54.1957,PhysRevB.90.241107}.
One sees that the peak structure is strongly suppressed
in the $1/4$-interacting Kohn-Sham system (green-dashed line).
Furthermore, the exchange-correlation potential
of the long-range interacting Kohn-Sham system
(blue-dotted line) is very smooth around the central region.
This fact indicates that, while the strong correlation effect
is encoded in the spiky structure
in the non-interacting Kohn-Sham system,
in the effectively-interacting Kohn-Sham systems it
is directly taken care of through the explicit correlation
of the reference wavefunction.
Therefore, by properly choosing the effective interaction
of the Kohn-Sham system,
one can transfer the electronic correlation effect from
the exchange-correlation functional/potential
to the explicit correlation in the wavefunction in order to
reduce the complexity of developing accurate approximations
for the remaining exchange-correlation functional and potential.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{exact_xc_potential_h2.pdf}
\caption{\label{fig:vxc_h2}
(a) The exact ground-state density $\rho(x)$ of the one-dimensional
hydrogen molecule.
(b) The exact exchange-correlation potentials for
the effectively-interacting Kohn-Sham systems: the results for
the non-interacting system (red-solid),
the $1/4$-interacting system (green-dashed),
and the long-range interacting system (blue-dotted) are shown.
}
\end{figure}
\subsection{1D heteronuclear diatomic molecule \label{subsec:1d-hetero-molecule}}
Finally, we investigate the effectively-interacting Kohn-Sham systems of the one-dimensional
heteronuclear diatomic molecule.
In order to describe the heteronuclear diatomic molecule,
we employ the following external potential
\be
v^{HM}_{ext}(x)=-\frac{1-\delta}{\sqrt{(x-\frac{R}{2})^2+\sigma^2}}
-\frac{1+\delta}{\sqrt{(x+\frac{R}{2})^2+\sigma^2}}, \nonumber \\
\end{eqnarray}
where $R$ is the internuclear distance, and $\delta$ is the charge imbalance
between the two nuclei.
In this work, we set $R$ to $5$~a.u.\ and $\delta$ to $0.2$~a.u.
Figure~\ref{fig:vxc_hetero_molec}~(a) shows the exact ground state
electron density of the one-dimensional heteronuclear diatomic molecule,
obtained by numerically solving the two-dimensional Schr\"odinger equation
with the conjugate gradient method.
Reflecting the charge imbalance between the two nuclei, the electron density $\rho(x)$
shows an asymmetric structure.
Figure~\ref{fig:vxc_hetero_molec}~(b) shows the exact exchange-correlation
potentials of the effectively-interacting Kohn-Sham systems.
The exchange-correlation potential of the non-interacting Kohn-Sham system
(red-solid line) asymptotically approaches different values in the positive and negative
$x$ regions. The energy difference of the asymptotic values reflects the difference
of the ionization potentials of the two atoms. This feature is known as a \textit{step}
of the exact exchange-correlation potential in heteroatomic molecules
\cite{doi:10.1063/1.3271392}.
As seen from Fig.~\ref{fig:vxc_hetero_molec}~(b), the exchange-correlation potentials
of the $1/4$-interacting Kohn-Sham system (green-dashed line)
and that of the long-range interacting Kohn-Sham system (blue-dotted line) show
the weaker step feature than that of the non-interacting system (red-solid line).
To quantify the size of the step feature, we evaluate the difference of
the potential at $x=-8$~a.u.\ and $x=8$~a.u.,
$\Delta_{s}=v_{xc}(x=-8~\textrm{a.u.})-v_{xc}(x=8~\textrm{a.u.})$.
The step size of the non-interacting Kohn-Sham system is $\Delta_s = 0.47$~a.u.,
and that of the $1/4$-interacting Kohn-Sham system is $\Delta_s = 0.39$~a.u.
The long-range interacting Kohn-Sham system provides the smallest step size,
$\Delta_s = 0.15$~a.u.
Thus, one can clearly conclude that the effectively-interacting Kohn-Sham systems
significantly reduce the step feature of the exchange-correlation potential by
transferring a part of electronic correlation from the exchange-correlation potential
to the explicit correlation in the reference wavefunction.
Consistently with the above findings, the long-range interacting Kohn-Sham system
has the largest reduction of the complex feature of the exchange-correlation potential.
Thus, an effective interaction based on range separation is suggested to be a key
to achieving an efficient description of static and dynamical correlation with
the combined theory of DFT and wavefunction theory.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{exact_xc_potential_h2_frac_charge.pdf}
\caption{\label{fig:vxc_hetero_molec}
(a) The exact ground-state density $\rho(x)$ of the one-dimensional
heteronuclear diatomic molecule.
(b) The exact exchange-correlation potentials for
the effectively-interacting Kohn-Sham systems: the results for
the non-interacting system (red-solid),
the $1/4$-interacting system (green-dashed),
and the long-range interacting system (blue-dotted) are shown.
Horizontal black-dotted lines show the values of each potential $v_{xc}(x)$
evaluated at $x=8$~a.u.
}
\end{figure}
\section{Summary and Outlook\label{sec:summary}}
In order to explore a possibility to combine DFT and wavefunction
theory, we investigated a mapping from a fully interacting
problem to an effectively-interacting problem instead of
the conventional mapping to the non-interacting Kohn-Sham system.
To elucidate such a mapping, we considered three kinds of
effectively-interacting Kohn-Sham systems.
One is the usual non-interacting Kohn-Sham system.
The second one is the $1/4$-interacting
Kohn-Sham system, where the effective interaction is set to the quarter
of the full interaction. The last one is the long-range interacting
Kohn-Sham system, where the short-range part of the full interaction
is ignored.
To practically investigate the properties of the effectively interacting
Kohn-Sham system, we first investigated the exact
exchange-correlation potentials of the one-dimensional
helium atom. As a result, we found that
the asymptotic behavior of the exchange-correlation potential
is determined by that of the residual interaction,
which is defined as the difference of the full and the effective
interactions.
This fact further indicates a possibility to construct a good
local density approximation for an effectively-interacting Kohn-Sham system
by optimally choosing a short-range residual interaction,
because the exchange-correlation potential vanishes as the density vanishes
and the nonlocal density dependence is expected to be suppressed.
Next, we evaluated the occupation distribution of the natural orbitals of
the ground-state wavefunction of the effectively-interacting Kohn-Sham systems.
As a result, we found that the occupations of the higher natural orbitals
are significantly suppressed in the effectively-interacting Kohn-Sham systems,
especially in the long-range interacting Kohn-Sham system.
This fact indicates that the effectively-interacting Kohn-Sham systems
offer an efficient decomposition of the electronic correlation
into the dynamical correlation in the DFT part and the static correlation
in the wavefunction part.
Then, we investigated the exact exchange-correlation potentials
of the one-dimensional hydrogen molecule.
Consistently with the previous works \cite{doi:10.1063/1.3271392,PhysRevB.93.155146,PhysRevA.54.1957,PhysRevB.90.241107},
we observed the spiky feature of the exchange-correlation potential.
Once the effective interaction is turned on, the spiky feature is
strongly suppressed. Furthermore, in the long-range interacting Kohn-Sham system,
the spiky feature completely vanishes and the exchange-correlation potential
becomes smooth at the center of the molecule.
Finally, we investigated the exact exchange-correlation potentials of
the one-dimensional heteronuclear diatomic molecule in order to study
the step feature of the exchange-correlation potential,
which reflects the different ionization potentials
of the two atoms \cite{doi:10.1063/1.3271392}.
Consistently with the above analysis, we found that the difficult step feature
is significantly reduced in the effectively-interacting Kohn-Sham systems,
compared with the non-interacting Kohn-Sham system.
In particular, the long-range interacting Kohn-Sham system shows the smallest step feature,
indicating the effectiveness of the concept of the range separation
in the effectively-interacting Kohn-Sham systems.
Based on the above findings, we can conclude that
the effectively-interacting Kohn-Sham approach
can open a way to efficiently describe the electronic correlation effect
by the combination of DFT and wavefunction theory,
decomposing the electronic correlation effect into
the DFT part and the wavefunction theory part.
Another important fact is that the interacting Kohn-Sham scheme can be reduced
to the hybrid functional by applying the Hartree-Fock approximation to
the interacting Kohn-Sham equation, Eq.~(\ref{eq:ks-eq}).
Therefore, the interacting Kohn-Sham scheme offers a possibility
to improve the hybrid functional approximation within the formal theoretical framework
of DFT, by adding the explicit correlation in the many-body Kohn-Sham wavefunctions.
In this work, we limited ourselves to the theoretical analysis of one-dimensional
two-electron systems in order to investigate the exact properties of
the interacting Kohn-Sham systems.
However, importantly, the above findings can be straightforwardly extended to
multi-dimensional many-electron systems
and open a possibility to construct robust and efficient density functionals.
For example, based on the weak nonlocality of the exchange-correlation
potential of the interacting Kohn-Sham system, an accurate LDA may be
constructed from the homogeneous electron gas with a suitable effective interaction.
Furthermore, inspired by the \textit{GW} method and the screened hybrid-functional approach,
a screening effect can be incorporated into an effective interaction of the Kohn-Sham system,
and an accurate description of solid-state materials may be realized.
These extensions of the present work to multi-dimensional many-electron systems are
under way.
\begin{acknowledgments}
This work was supported by the European Research Council (ERC-2015-AdG694097),
the Cluster of Excellence `Advanced Imaging of Matter' (AIM),
and JST-CREST under Grant No. JP-MJCR16N5.
Support by the Flatiron Institute, a division of the Simons Foundation is acknowledged.
S.A.S. gratefully acknowledges the fellowship from the Alexander von Humboldt Foundation.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
The most basic parameter that can be used to describe a telescope is its primary aperture
size. In many cases, that parameter is such an inseparable part of a telescope's identity,
it is in fact part of the name, or at least cited in the same breath as its proper name - the 5-m Hale,
the 3.5-m WIYN, etc.
As first explored by \citet{mei78,mei79a,mei79b}, this parameter can be linked to another fundamental
parameter - that of cost.
Many additional parameterizations can be utilized to specify the capabilities and performance of a
given telescope, such as moving mass, instrument suite, and site, but aperture size
is matched only by choice of operational wavelength as a fundamental cost driver. As described
in \citet{mei78}, a proportionality of cost to aperture size that scales as the 2.8 power
(cost $ \propto D^{2.8}$) was
found to be true to first order.
In this manuscript, we will explore the impact that
an entire generation of telescopes since then has had upon the aperture-cost power law.
\section{Ground-Based Telescopes}
Data used in this analysis, together with their online cost references
and other backing information, can be found in Table 1; we have made every effort to obtain the
publicly published cost data
that most accurately reflect the telescope construction cost.
For each telescope, the cost data point was intended to be inclusive of telescope
mirror, structure, enclosure, and other essential
site work based on these references, and exclusive of instrumentation
and operations cost.
Cost data were normalized to year 2000 US dollars using the standard federal
tables for inflation adjustment for the past century. There are unfortunately as many
acronyms as there are telescopes, and rather than expand them all here,
the reader is encouraged to reference those links he or she has an interest in.
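As a brief aside, the year-2000 normalization amounts to a single deflator multiplication; a hedged one-line sketch (where \texttt{cpi} is a hypothetical dictionary of index values by year) is:
\begin{verbatim}
def to_year2000_usd(cost, year, cpi):
    # scale a nominal cost to year-2000 US dollars
    return cost * cpi[2000] / cpi[year]
\end{verbatim}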
We should note that the costs cited herein are potentially a bit `soft', in that in many cases,
a telescope's initial construction is followed by a period (sometimes years) in which the
operation of the aperture is optimized. In many cases this optimization is improving
the performance of the telescope beyond its initial specifications, but in a few
cases this commissioning phase is needed just to meet the original design goals. For
that latter case, the operation costs of that extended commissioning phase should
be included in the true aperture cost, but we are unable to precisely do such accounting
here.
\subsection{Pre-1980}
Large telescopes built prior to 1980 had certain basic characteristics typically in common. These
characteristics include:
\begin{itemize}
\item Equatorial mounts - Even the massive 5-m Hale has an equatorial mount, with an axis parallel to the
Earth's rotational axis. The period 1970-1980 saw the first breaks with tradition on this point, with the
6-m Soviet (now Russian) SAO telescope.
\item Slow optical systems - F/ratios were typically greater than 3, and never less than 2.5.
\item Thick mirrors - Some lightweighting was incorporated into these mirrors, but thermal inertia
and the resultant mirror seeing remain a substantial problem for these apertures.
\end{itemize}
As a result of the first two points above, such
designs had substantial impact upon tube length, and thus upon
enclosure size and attendant expense.
\subsection{Post-1980}
Large telescopes built after 1980 had the following basic characteristics in common:
\begin{itemize}
\item Alt-az mounts - Advances in computer control and optomechanical devices now allow for the more compact
mounting provided by alt-az mounts
\item Fast optical systems - Of the major ($>$2.5m) apertures built since 1980, not a single one had an
f/ratio greater than 2.5, and none since 1989 have been greater than 1.8.
\item Thin mirrors - Often these primaries are coupled with active control systems to dynamically
compensate for changes in the angle between the pointing vector and the gravity vector.
\end{itemize}
A special class of telescopes in the post-1980 era are the {\it giant segmented mirror} (GSM)
{\it telescopes}. Beginning with the Keck I telescope, optical systems in excess of 8.4-m have begun to be available
to the astronomical community. Currently operational GSM telescopes are Keck I, Keck II, and the
Hobby-Eberly telescope, with the GTC, SALT, and LBT apertures all under construction. These
telescopes all have effective apertures in excess of 9-m.
A second special class appearing in the post-1980 era were large telescopes that
made special efforts in trading operation capability for increased aperture size. Both the
Hobby-Eberly and SALT telescopes have eliminated structural elevation pointing for
simplified design and reduced cost, and the liquid mercury telescope of Univ. British Columbia
is restricted to zenith pointing for even greater cost savings, much like the Arecibo 305
meter radio telescope.
\input{vanBelle.tab1.tex}
\begin{figure}\label{fig_cost_dia}
\epsscale{1.0}
\plotone{fig1_040524b.eps}
\caption{Cost versus aperture diameter for optical telescopes built before
and after 1980. For the pre-1980 fit, cost $\propto D^{2.77}$, and for the
post-1980 fit (exclusive of the giant segmented mirrors), cost $\propto D^{2.45}$. The two
limited operations telescopes plotted are the UBC 6-m liquid mercury telescope and
the 9-m (effective) HET.}
\end{figure}
\subsection{Discussion on Ground-Based Apertures}
As seen in Figure 1, there appears to be a clear progression of cost with telescope size.
An examination of the apertures built prior to 1980 shows cost $\propto D^{2.77}$, which
as would be expected is consistent with \citet{mei78}. For those monolithic apertures
built since 1980, the cost-aperture power law is slightly shallower, with
cost $\propto D^{2.46}$, but still significantly greater than merely scaling with telescope
area, $D^2$. The GSM telescopes that have been built appear to drop below the post-1980
line, just as the post-1980 line drops below the pre-1980 line.
Our interpretation of this offset in the power law intercept is the cost-reducing
impact of fundamentally new technologies.
a combination of telescope mounting and faster optical systems, reducing the overall
size of the telescopes. For the advance associated with GSMs, the improvement is
the cost reduction associated with fabrication of segmented versus monolithic
primary mirrors. There are unfortunately not enough data points to determine
if the GSMs will also follow a cost $\propto D^{2.46}$ power law; however, we naively
expect the cost-aperture relationship to generally adhere to this slope.
As such, we
may easily predict general costs for future apertures built
using technology associated with the current family of GSMs. We expect a 30-m
telescope to cost roughly \$1.4 billion, and a 100-m telescope
to cost roughly \$26 billion, {\it using current GSM technology}. If, as can be reasonably
postulated, advances in telescope construction technology can be applied to
the next generation of large apertures, reductions of 2-3$\times$ can be expected with
each new family of technology, as seen in the progression from pre-1980 to post-1980 to
GSMs. A \$600M, 30-m telescope can be reasonably argued to be only a single technology
generation away. However, following this same reasoning, a \$2B, 100-m telescope is
probably a full three technology generations away from being realized.
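For reference, a minimal sketch of the power-law extrapolation used for these estimates is given below; the anchor point (a 30-m aperture at \$1.4 billion, from the GSM figures above) and the slope of 2.46 are taken from the text, while the function itself is purely illustrative.
\begin{verbatim}
def aperture_cost(D, D_ref=30.0, cost_ref=1.4e9, slope=2.46):
    # cost in year-2000 USD for an aperture of diameter D (meters)
    return cost_ref * (D / D_ref) ** slope

print(aperture_cost(100.0) / 1e9)   # ~27, i.e. roughly the $26B above
\end{verbatim}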
\section{Space-Based Telescopes}
Unfortunately, there are only a few operational examples of space-based
telescopes. The obvious candidate is the Hubble Space Telescope (HST). Of NASA's other
three `Great Observatories', only the
Space Infrared Telescope Facility (SIRTF) has a mirror design that lends itself
to comparison within this context. A full accounting of flight designs is appropriate
within the context of categorizing the approaches to space telescopes:
\begin{itemize}
\item Delivery to orbit - Both HST and SIRTF are examples of this category of space-based
mission.
\item Assembly in orbit - Given the payload shroud constraints of a $\sim$5m diameter on
even the largest of launch vehicles, a number of spacecraft that have flown or are
in the planning stages take advantage of a ground-based construction, with a space-based
assembly stage. This can be as simple as an autonomous unfurling, or a more complicated
and drawn out assembly phase prior to operations.
It is worth noting that there are two obvious classes of telescope in this category - those
that benefit from {\it robotic} assembly, and (particularly within the context
of the space station) those that would be the product
of {\it human} assembly. Of surprise to many, there are three clear examples of at least the
robotic assembly approach to
date: the 8-m VSOP and 12-m commercial MBSat radio antennas, both of which have flown, and
the 6.5-m near-infrared JWST, which has not flown but is
clearly committed to this approach and will be orbited within the next ten years.
\item Fabrication in orbit - This approach is, at present, somewhat more fanciful than the previous two,
potentially making use of some sort of {\it in situ} resource utilization (and as a result,
bypassing the limitations of launch vehicle lift restrictions). Although the most promising in
terms of ultimate aperture size, we will only mention this approach here in passing, for the
sake of completeness, due to its gross technical immaturity.
\end{itemize}
A further complication worth considering is the prospect of {\it on-orbit servicing}, which
can be applied equally to all three categories above.
Given the small number of examples to date for space-based telescopes, no general inference can
be drawn from the relationship between telescope cost and aperture size for these apertures.
Indeed, the similar relative costs of Hubble and JWST - on the order of \$1 to \$2 billion
each - would indicate, within the simple confines of the
rough analysis presented herein for ground-based apertures,
that telescope cost is independent of size. Instead,
our assessment is that the predominant phenomenon at play is rapid technological development
as it impacts aperture size, rather than simple scaling of a single family of technology.
\section{General Discussion}
{\it Ground-based Telescopes.} There are two key factors that affect the aperture-cost scaling
law for ground-based telescopes:
\begin{itemize}
\item Environment - Environment manifests itself in two significant
ways for ground-based telescopes.
Inclement weather is the first of these two ways - the telescope must be protected from
precipitation and other hazards associated with being open to the air. For all major optical
telescopes to date,
this is accomplished by construction of a telescope enclosure, typically
a dome. The second weather factor is wind - acceptably low wind velocities do not preclude
operation under transparent conditions, but wind shake can significantly degrade telescope
performance. For most optical telescopes, an enclosure can also mitigate the effect
of wind upon the telescope structure, typically a co-rotating dome.
For optical
telescopes, as the aperture grows, the dome grows as $\sim (\textrm{f/ratio} \times D)^3$. Reduction of telescope
f/ratios over time have improved the situation over the past twenty years, but
this factor ultimately can be no smaller than $\sim D^3$.
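For instance, at fixed $D$, reducing the f/ratio from f/3 to f/1.8 shrinks the enclosed volume, and hence the rough dome cost, by a factor of $(3/1.8)^3 \approx 4.6$.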
A common mistake at many observatories is the assumption that the dome is a simple
element of the overall observatory, not worth a great deal of thought or investment; the
result is often years of expensive maintenance headaches and/or operational limitations.
Recent illustrations accompanying proposals for a 30m-class telescope compare the aperture size
to that of a baseball diamond; this is a particularly illustrative example, noting that
recent retractable roof baseball stadiums have been built and are worthwhile
enclosures to consider when trying to approximate price. While larger than the
enclosure for a 30m telescope (in that they have to enclose an outfield and
grandstands), they are also significantly simpler in that they only retract, and do not
have to rotate. A recent example of this sort of venue in Seattle was built for \$600M
(telescope not included).
\item Gravity - Observational pointing access to the sky is typically achieved through orienting
the telescope structure in two axes, frequently elevation \& azimuth or right ascension \&
declination.
(The Hobby-Eberly and Liquid Mercury Telescope are notable exceptions to
this observation, and have traded significant operational flexibility for
economic advantage, as seen in Figure 1.) Changing a
telescope's elevation or declination alters the angle
between the telescope's pointing vector and the local gravity vector. Since the telescope
must maintain its alignments throughout all pointings, the structure must be tolerant
of this variable angle. As such, the telescope structure often grows as a hemisphere
behind the aperture it supports - the growth, and cost, of this structure will scale
as $\sim D^3$. Clever design of this structure can reduce the power law to something
closer to the square of the aperture diameter, but consistency of the
$D^{2.7}$ aperture-cost scaling law indicates there are perhaps limits to cleverness
dictated by modern construction materials and techniques.
\end{itemize}
Both of these factors affect the relevant power law for ground-based telescopes.
Elimination of the telescope dome for the largest of the new telescopes is certainly an option
(and actively under consideration for some of the larger apertures proposed),
although it will clearly multiply the deleterious effect of wind shake on the telescope
backing structure and push the cost of the backing structure back towards $\sim D^3$.
These two ever-present ground-based factors will push the aperture-cost scaling law away
from $\sim D^2$ and towards $\sim D^3$.
{\it Space-based Telescopes.}
As with ground-based apertures, there are two key factors that affect the aperture-cost scaling
law for space-based telescopes:
\begin{itemize}
\item Structural stability - As with ground-based telescopes, a space-based telescope's backing
structure will be responsible for maintaining the unique shape of the primary mirror, regardless
of pointing. However, given the absence of a significant gravitational field, the
structure may be designed primarily for aperture alignment rather than support against an
external field. There will be no changing external force to cope with as the aperture points
to different portions of the sky. As such, it is our expectation that the
structure will be a primarily 2-dimensional assembly and that the
aperture-cost law associated with maintaining optical figure will scale as $\sim D^2$.
\item Environment - For structures of significant size in space, an important consideration
that begins to impact operational considerations is the space `weather', primarily
due to the sun. Particulate solar wind, radiation pressure, and heating effects
of the solar environment will all have to be accounted for.
It is likely that large telescopes in space will need a shield between the primary aperture
and the sun. This shield,
while notionally as large or even larger than the aperture itself, will also manifest
itself as a primarily 2-dimensional structure. Also, given the substantially relaxed
requirements for such a shield to maintain a given shape, it can be a fairly
gossamer structure. Such a shield provides the additional benefit of cooling
of the telescope; this type of structure is already a part of the baseline JWST design
and is not considered to be a significant cost driver.
The cost of this structure should also scale as $\sim D^2$.
\end{itemize}
It is also worth noting that certain expensive design drivers for ground-based telescopes are
not necessarily present in putative space-based designs. For example, since a dome is no longer
enclosing the telescope structure, a driver for relatively fast focal ratios (and difficult
to fabricate parabolas) is removed.
Overall, our expectation is that ground-based telescope costs will continue
to scale as $\sim D^{2.5}$. Improvements in technology will provide one-time
shifts in the zero-point of the aperture-cost relationship, with no impact
upon slope. In contrast to the ground-based case, we expect space-based
apertures to have a much slower aperture-cost relationship, growing as
slowly as $\sim D^{2.0}$. The difference in slopes has a striking consequence:
{\bf At some given aperture size, it will be just as expensive to deliver an operational
space-based or ground-based telescope}. This equality is {\it independent} of the
obvious advantages a space-based aperture has over its ground-based counterpart.
Isolating the cross-over point of the two power laws will be of particular interest,
in that it points to the size domain that will be exclusively inhabited
by space-based apertures.
At the present, using these putative values for the power law slopes, and starting
from the points established by the current generation of GSMs for the ground-based case,
and JWST for the space-based case, the cross-over point appears to lie at the
300m filled aperture size, at a cost of \$100 billion. This is a completely
unrealistic sum for any telescope. However, if we advance from this starting point
and move forward two technology generations for both space-based and ground-based telescopes -
with the attendant shift in power law intercepts - our cross-over location shifts to
a 120m filled aperture at a cost of \$10 billion. This is still quite a speculative
sum, but getting to be significantly more realistic. If there is a more rapid advance
in space technology than in ground-based telescope technology (which these authors
do not think untenable given the relative levels of investment),
and two generations of space-based observatory technology evolve
for every one on the ground, our cross-over shifts to 70-m, \$3 billion. Given
these sorts of possible scenarios, it is our expectation that the largest aperture built upon
the ground will be in the region of 100-m.
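A minimal sketch of how such a cross-over is located is given below; the two coefficients are back-solved from the quoted 300-m/\$100 billion crossing purely for illustration, and the slopes are those discussed above.
\begin{verbatim}
def crossover(k_ground, p_ground, k_space, p_space):
    # aperture D (m) and cost ($) where the curves k*D**p intersect
    D = (k_space / k_ground) ** (1.0 / (p_ground - p_space))
    return D, k_ground * D ** p_ground

D_x, cost_x = crossover(k_ground=8.1e4, p_ground=2.46,
                        k_space=1.1e6, p_space=2.0)
print(D_x, cost_x)   # ~290 m at ~$9e10, i.e. near 300 m / $100B
\end{verbatim}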
Above and beyond the initial cost of an observational facility,
there are two additional aspects of telescope finances that are not being examined in great detail
in this simple analysis, but they bear mentioning here:
\begin{itemize}
\item Instrumentation - A substantial portion of the cost of any operational observatory is
its instrumentation. For ground-based apertures, this can be an evolving suite of instruments
with various specialized specifications and design goals. For these facilities, and for those
space-based observatories with on-orbit servicing, ongoing instrumentation upgrades represent an
ongoing cost of the facility.
\item Operational Costs - For ground-based observatories, this number can run annually from 5\% to 30\%
of the overall initial construction cost. There are two aspects of this cost that
can be specifically identified here: first, that of ongoing maintenance, and second, that of
the actual observing done with the facility.
\end{itemize}
For those space-based observatories that do not benefit from on-orbit servicing, some of these
costs simply do not appear - new instrumentation does not need to be developed, nor does daily
maintenance need to be physically performed upon the spacecraft(s). However, this potentially translates
into limitations in terms of instrument capability and mission lifetime, particularly in relation
to ground-based facilities, so the actual benefit or penalty of these considerations is
not entirely clear.
\section{Conclusions}
We have shown that the telescope cost scaling law of $\sim D^{2.77}$ first
noted in \citet{mei78} for ground-based telescopes is slightly shallower
for the apertures that have been built since 1980, at $\sim D^{2.46}$,
but remains generally true. We have also
presented arguments in support of a similar, but notably shallower, scaling law for space-based
telescopes, closer to $\sim D^{2.0}$. An important implication of these two power laws is their
intersection - this point defines a telescope that will be equally expensive to build on the
ground or in space. This point is independent of the advantages to be gained in siting
the aperture in space versus on the ground.
This is particularly interesting as the astronomical community contemplates construction
of ground-based apertures that are up to 100m in size. Given the limits of public and private
support for construction of new telescopes, it would be prudent for the community to carefully
consider the directions they take their more ambitious technology development efforts. These
investments should be directed with consideration of whether
those efforts will eventually dead-end, as in the ground-based case, or have
substantial growth options.
Current thinking in terms of
`overwhelmingly' large apertures will of course eventually give way to thoughts about
even larger instruments capable of achieving science goals with fundamental implications
for astrophysical discovery.
These goals include continental mapping of
nearby exosolar terrestrial planets, a complete sub-pc catalog of star formation within the
local group, and surveys of the early universe at $z>10$.
\acknowledgments
We acknowledge fruitful discussions with a large number of individuals in the field,
most of whom expressed varying degrees of healthy skepticism at our premise.
Portions of this work were
performed at the California Institute
of Technology under contract with the National Aeronautics and
Space Administration.
\section{Introduction}
\label{sec:intro}
Modern high performance computing~\mbox{(HPC)} systems are characterized by a large number of computing resources and their heterogeneity.
Efficiently exploiting HPC systems along these two characteristics represents a significant challenge for parallel applications.
Increasing the number of computing resources assigned to a parallel application can reduce the execution time.
However, such a reduction is not guaranteed due to the management of parallelism and communication overheads.
Also, heterogeneity adds several challenges in managing the assigned computing resources.
Executing parallel applications on heterogeneous resources requires that the effects of the lower performance computing resources do not dominate the performance of the other computing resources.
The parallel spin-image algorithm~\mbox{(PSIA)}~\cite{eleliemy2016loadbalancing} is a parallel version of the well known \mbox{spin-image} algorithm~\mbox{(SIA)}~\cite{CVJohnson}.
The \mbox{SIA} is widely used in different domains, such as face detection~\cite{choi2013angular}, object recognition~\cite{johnson1999using}, 3D map registration~\cite{mei2013new}, and 3D database retrieval systems~\cite{assfalg20043d}.
The main limitation of the \mbox{SIA} is its computational time complexity and, consequently, its execution time.
The \mbox{PSIA}~\cite{eleliemy2016loadbalancing} is introduced to overcome this limitation of the \mbox{SIA}.
However, the \mbox{PSIA}~\cite{eleliemy2016loadbalancing} only employs a static load balancing technique to distribute the process of the \mbox{spin-image} generation among the available computing resources.
Moreover, to the best of our knowledge, the scalability of the \mbox{PSIA} has not yet been studied.
Similar to most scientific applications, the main source of parallelism in the \mbox{PSIA} is a loop, which for \mbox{PSIA} consists of independent iterations.
Efficient loop scheduling techniques are, therefore, needed for parallelizing and executing this loop on parallel computing systems.
Among the loop scheduling techniques, the dynamic loop scheduling~\mbox{(DLS)} techniques have been shown to be most effective in optimizing the execution of parallel loop iterations by scheduling them during execution~\cite{taxonomy,Plata,IBto,IBtog}.
This work proposes a novel version of \mbox{PSIA}, namely \mbox{EPSIA}, that integrates a number of \mbox{DLS} techniques, such as self scheduling~\mbox{(SS)}~\cite{tang1986processor}, guided self scheduling~\mbox{(GSS)}~\cite{polychronopoulos1987guided}, and factoring~\mbox{(FAC)}~\cite{hummel1992factoring}.
\mbox{EPSIA} is generic enough to integrate different \mbox{DLS} techniques.
The performance of the \mbox{PSIA} and the performance of the \mbox{EPSIA} are evaluated via different scalability experiments and on different homogeneous and heterogeneous computing resources.
The achieved performance of the \mbox{EPSIA} under different \mbox{DLS} techniques is studied.
The remainder of this work is organized as follows.
In Section~\ref{sec:background}, a background of the \mbox{PSIA} and the \mbox{DLS} techniques used in this work is provided.
The most relevant work in the literature, concerning the performance optimizations of the \mbox{SIA}, is also reviewed in Section~\ref{sec:background}.
In Section~\ref{sec:proposed}, the proposed \mbox{EPSIA} that can use different \mbox{DLS} techniques is introduced.
The experimental setup and the information needed to reproduce this work is presented in Section~\ref{sec:experimentsetup}.
In Section~\ref{sec:results}, the results of executing the proposed \mbox{EPSIA} are compared with the results of executing the \mbox{PSIA} on homogeneous and heterogeneous computing resources to derive their weak and strong scalability profiles, respectively.
In Section~\ref{sec:conclusion}, the conclusion of this work and the potential future work are outlined.
\section{Background and Related Work}
\label{sec:background}
This section describes the PSIA and the most relevant work concerning the PSIA optimization.
\subsection{Parallel \mbox{Spin-Image} Algorithm}
\label{subsec:psia}
The spin-image algorithm~\mbox{(SIA)} was originally introduced in 1997 by Johnson~\cite{CVJohnson}.
It converts a \mbox{3D} object to a set of \mbox{2D} images which are considered as a shape descriptor for that \mbox{3D} object.
The crux of the SIA is the process of generating the \mbox{2D} images.
The \mbox{spin-image} generation process can be explained as a process of spinning a sheet of paper through a \mbox{3D} object.
When a sheet of paper spins around a certain oriented point through a \mbox{3D} object, other oriented points of that object are pasted onto that sheet of paper.
After a complete cycle around the oriented point, the spinning sheet of paper represents a \mbox{spin-image} generated at that oriented point.
In \figurename{~\ref{fig:spin}}, taken from~\cite{CVJohnson}, the \mbox{spin-image} generation process is illustrated using eight animated frames.
Three parameters characterize the~\mbox{SIA}:~$W$,~$B$, and~$S$, and are included in Table~\ref{tab:notation}.
$W$~denotes the number of pixels in a row or column of the generated \mbox{spin-image} and is similar to the width of the spinning sheet of paper.
The \mbox{SIA} assumes square \mbox{spin-images} with equal widths and heights.
$B$~is a factor of the 3D mesh resolution, used to determine the storage capacity of each cell on the spinning sheet of paper.
Increasing~$B$ means that many oriented points will be pasted to the same cell on the spinning sheet of paper.
Consequently, the effect of individual oriented points on the generated \mbox{spin-image} will be reduced.
$S$~is a constraint for the \mbox{spin-image} generation process.
If the angle between $np_i$ and $np_j$, the normal vectors of the two oriented points $P_i$ and $P_j$, respectively, is greater than $S$, then the oriented point $P_j$ does not contribute to the generated \mbox{spin-image} at $P_i$.
In \figurename{~\ref{fig:angle}}, $\theta$ is the angle between the two normal vectors $np_i$ and $np_j$.
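As a small illustration (assuming unit normal vectors stored as NumPy arrays and $S$ given in radians), the support-angle test reads:
\begin{verbatim}
import numpy as np

def contributes(np_i, np_j, S):
    # P_j contributes to the spin-image at P_i only if the angle
    # between the two (unit) normals does not exceed S
    cos_theta = np.clip(np.dot(np_i, np_j), -1.0, 1.0)
    return np.arccos(cos_theta) <= S
\end{verbatim}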
\begin{figure}
\includegraphics[width=\columnwidth, clip,trim=0.3cm 15cm 0cm 1cm]{newfigures/spinning.pdf}
\caption{An analogy of the \mbox{spin-image} generation process with eight animation frames (from~\cite{CVJohnson})}
\label{fig:spin}
\end{figure}
\begin{table}
\centering
\vspace{-0.35cm}
\caption{Glossary of Notation}
\label{tab:notation}
\begin{tabular*}{.9\columnwidth}{l|l}
\textbf{Symbol} & \textbf{Description} \\
\hline
$M$ & \begin{tabular}[c]{@{}l@{}} Number of oriented points \end{tabular} \\ \hline
$N$ & \begin{tabular}[c]{@{}l@{}}Number of \mbox{spin-images}\\ $1 \leq N \leq M$ \end{tabular}\\ \hline
$P_i$ & \begin{tabular}[c]{@{}l@{}} An oriented point with a known normal vector where \\ a \mbox{spin-image} can be generated, $0$ $\leq$ $i$ $\textless$ $M$ \end{tabular} \\ \hline
$OP$ & \begin{tabular}[c]{@{}l@{}}Set of all oriented points that belong to a \mbox{3D} object\\ \{$P_{i}$ $|$ $0$ $\leq$ $i$ $\textless$ $M$\} \end{tabular} \\ \hline
$np_i$ & A 3D vector that represents the normal vector of $P_i$\\ \hline
$\theta$ & \begin{tabular}[c]{@{}l@{}} The angle between two normal vectors $np_i$ and $np_j$,\\ $0$ $\leq$ $i$,$j$ $\textless$ $M$ \end{tabular}\\ \hline
$W$ & \begin{tabular}[c]{@{}l@{}} The number of pixels in a row or column of the \\generated \mbox{spin-image} where the generated \mbox{spin-image}\\ is assumed to be a square matrix \end{tabular}\\ \hline
$B$ & \begin{tabular}[c]{@{}l@{}} A factor of the 3D mesh resolution that is used to \\determine the storage capacity of the\\ generated \mbox{spin-image}, 0 $\textless$ $B$ $\leq$ 10 \end{tabular}\\ \hline
$S$ & \begin{tabular}[c]{@{}l@{}}Maximum allowed angle $\theta$ between $P_i$ and $P_j$, where \\ $P_j$ contributes to the generated \mbox{spin-image} at $P_i$ \end{tabular}\\ \hline
$WO$ &\begin{tabular}[c]{@{}l@{}} The number of workers used to generate the \mbox{spin-images} \end{tabular}\\ \hline
$wo_k$ &\begin{tabular}[c]{@{}l@{}} A worker that represents an MPI rank, pinned to a certain \\computing resource (core), $1 \leq k \leq WO$ \end{tabular}\\ \hline
\end{tabular*}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth, clip, trim= 0cm 21.5cm 5cm 0cm]{newfigures/theta.pdf}
\caption{A \mbox{3D} object in which $P_j$ does not contribute to the generated spin-image at $P_i$ due to the fact that $\theta$ is greater than $S$}
\label{fig:angle}
\end{center}
\end{figure}
The time complexity of the \mbox{SIA} is~$O(N M)$.
If~$N$ approximately equals~$M$, \mbox{3D} objects with more than~$100K$ oriented points represent a significant challenge for the \mbox{SIA} in terms of its execution time.
PSIA~\cite{eleliemy2016loadbalancing} exploits the inherent parallelism within the \mbox{SIA} where the calculation of each individual \mbox{spin-image} is independent of other \mbox{spin-image} calculations.
The steps of the \mbox{PSIA} are listed in Algorithm~\ref{algo:spin}.
\begin{figure}
\let\@latex@error\@gobble
\begin{algorithm}[H]
\SetKwInOut{Input}{Input parameters}
\SetKwInOut{Output}{Output parameter}
calculateSpinImages (W, B, S, OP, N)\;
\Input{\mbox{W: image width}, \mbox{B: bin size}, \mbox{S: support angle}, \mbox{OP: list of oriented points}, \mbox{N: number of generated spin-images}}
\Output{spinImages: list of generated spin-images}
spinImages = createSpinImagesList(N)\;
M = getLength(OP)\;
\textbf{Parallel} \For{ i = 0 $\rightarrow$ $N$}
{
tempSpinImage[W, W]\;
init(tempSpinImage)\;
P = OP[i]\;
\For{j = 0 $\rightarrow$ $M$}
{
X = OP[j]\;
$np_i$ = getNormal(P)\;
$np_j$ = getNormal(X)\;
\If{acos($np_i \cdot np_j$) $\le S$}
{
$k$ = $\Bigg \lceil$ $\cfrac{W/2 - np_i \cdot (X-P) }{B}$ $\Bigg \rceil$ \;
\vspace{0.2cm}
$l$ = $\Bigg \lceil$ $\cfrac{ \sqrt{||X-P||^2 - (np_i\cdot(X-P))^2} }{B}$ $\Bigg \rceil$\;
\If{0 $\le$ k $\textless$ W and 0 $\le$ l $\textless$ W}
{ tempSpinImage[k, l]++\; }
}
}
add(spinImages, tempSpinImage)\;
}
\caption{Parallel \mbox{spin-image} algorithm (from~\cite{eleliemy2016loadbalancing}) }
\label{algo:spin}
\end{algorithm}
\end{figure}
There are two experimental setups for executing any implementation of Algorithm~\ref{algo:spin}.
The first setup is when the number of parallel computing resources, $WO$, used in the experiment equals~$N$.
In such a setup, each worker generates precisely one \mbox{spin-image}, i.e., according to Algorithm~\ref{algo:spin}, it executes the code between \mbox{Lines 5-20} \emph{only once}.
In practice, it is not always feasible for the number of workers~$WO$ to equal~$N$, especially when~$N$ approximately equals $M$.
The second setup is when $WO$ is smaller than~$N$.
Each worker generates a certain number of \mbox{spin-images} proportional to the ratio of $N$ divided by $WO$.
This means that each worker executes the code between \mbox{Lines 5-20} of Algorithm~\ref{algo:spin} \emph{more than once}.
In both experimental setups, the performance of the algorithm is dominated by the performance of the slowest worker.
A worker can be the slowest performing worker in two cases: (1)~It has a larger amount of computations than others and/or (2)~It has lower processing capabilities than others.
In \figurename{~\ref{fig:loadim}}, twenty MPI ranks executed the PSIA application to generate 80,000 \mbox{spin-images} and yielded unequal finishing times due to the workload imbalance.
According to Algorithm~\ref{algo:spin}, only certain computing resources will execute Line~16,
based on their evaluation of the condition in Line~15.
The operations in Line~16 represent a memory read and a memory write.
The proportion of the memory operations and the computations performed by each resource determines its delivered performance.
Consequently, the MPI rank with the largest finishing time dominates the performance of the entire process of generating the \mbox{spin-images}.
DLS techniques are the best candidates to address the challenge of the workload imbalance.
\begin{figure}
\centering
\includegraphics[width=\textwidth, clip,trim=0cm 13.4cm 22cm 0cm]{newfigures/load-imbalance.pdf}
\caption{Variation in MPI ranks' finishing times for executing the \mbox{PSIA} application with 20 MPI worker ranks and one MPI master rank to generate 80,000 \mbox{spin-images}}
\label{fig:loadim}
\end{figure}
\subsection{Dynamic Loop Scheduling}
\label{subsec:dls}
In scientific applications, loops are, in general, one of the main sources of parallelism.
Parallel loops are categorized as \mbox{DOALL} and \mbox{DOACROSS} loops~\cite{chen1991empirical}.
The \mbox{DOALL} loops have no dependencies between their iterations while \mbox{DOACROSS} loops consist of iterations that are data-dependent.
As shown in Algorithm~\ref{algo:spin}, there are no dependencies between the iterations of the outer loop~\mbox{(Lines 4-21)}.
Therefore, the \mbox{PSIA} is an example of a \mbox{DOALL} loop.
In this section, the most common and successful dynamic loop scheduling~\mbox{(DLS)} techniques for the \mbox{DOALL} loops are discussed.
The \mbox{DLS} techniques are used to schedule loops with no dependencies between their iterations, or loops where most dependencies between iterations can be eliminated via various loop transformations.
Using \mbox{DLS}, the scheduling decisions are performed during the application execution time~\cite{fann2000intelligent}.
The \mbox{DLS} techniques considered in this work include \mbox{SS}~\cite{tang1986processor}, \mbox{GSS}~\cite{polychronopoulos1987guided}, and \mbox{FAC}~\cite{hummel1992factoring}.
\mbox{SS} assigns a single loop iteration each time to a requesting computing resource.
The main advantage of \mbox{SS} over other \mbox{DLS} techniques is its ability to achieve an optimized load balance between all processing elements.
However, this advantage comes at a very high overhead.
\mbox{GSS} divides the total number of loop iterations into variable-sized chunks of loop iterations.
In each scheduling step, \mbox{GSS} assigns a chunk equal to the remaining loop iterations divided by the total number of processing elements.
\mbox{GSS} is considered as a compromise between \mbox{SS} and \mbox{STATIC}, providing an acceptable load balance at an acceptable scheduling overhead.
\mbox{GSS} has the disadvantage of overloading the first free and requesting computing resource with the first and largest chunk of iterations.
The remaining loop iterations may not be sufficient to ensure a balanced execution among the computing resources.
\mbox{FAC} was designed to handle iterations of variable execution time.
It schedules the loop iterations in batches of $P$ equal-sized chunks, where $P$ is the total number of computing resources.
The reason for selecting the three above-mentioned \mbox{DLS} techniques and STATIC for the present work is to cover a broad spectrum of the performance of the \mbox{PSIA} using the loop scheduling techniques.
\mbox{SS} and \mbox{STATIC} represent the two extreme cases of the \mbox{DLS} techniques.
\mbox{STATIC} has the lowest communication overhead and the lowest ability to balance the execution of the loop iterations among the workers.
SS has the highest communication overhead and the highest ability to balance the execution of the loop iterations among the workers.
The expected performance of \mbox{GSS} and \mbox{FAC} represent intermediate points between \mbox{STATIC} and \mbox{SS}.
Further work is needed and planned as future work to include other more complex \mbox{DLS} techniques.
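To make the differences among these techniques concrete, a hedged sketch of the chunk sizes they produce for $N$ iterations and $P$ workers is given below; these are the textbook formulas (with the common batch factor of $1/2$ for FAC), not the authors' implementation.
\begin{verbatim}
import math

def chunk_sizes(N, P, technique):
    remaining, out = N, []
    while remaining > 0:
        if technique == "SS":
            batch = [1]                         # one iteration per request
        elif technique == "GSS":
            batch = [math.ceil(remaining / P)]  # remaining work over P
        elif technique == "FAC":
            batch = [math.ceil(remaining / (2 * P))] * P
        for c in batch:
            c = min(c, remaining)
            if c:
                out.append(c)
                remaining -= c
    return out

print(chunk_sizes(100, 4, "GSS"))
# [25, 19, 14, 11, 8, 6, 5, 3, 3, 2, 1, 1, 1, 1]
print(chunk_sizes(100, 4, "FAC"))
# [13, 13, 13, 13, 6, 6, 6, 6, 3, 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1]
\end{verbatim}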
\subsection{Related Work}
\label{subsec:related_work}
In~\cite{eleliemy2016loadbalancing}, an empirical approach was used to achieve the best performance of \mbox{PSIA} executing on a heterogeneous computing system that consisted of an Intel~\mbox{CPU} and an Intel Knights Corner~\mbox{(KNC)} co-processor.
The main goal of the work in~\cite{eleliemy2016loadbalancing} was to achieve a load balanced execution of the algorithm between the \mbox{24-core} CPU and the \mbox{64-core} \mbox{KNC}.
The approach taken in~\cite{eleliemy2016loadbalancing} \textit{statically} divides the workload (the generation of \mbox{spin-images}) unequally in such a way that guarantees that the \mbox{CPU} cores and the \mbox{KNC} cores finish the execution at the same time.
In practice, to perform such a static division of the generation of the \mbox{spin-images}, certain information regarding the time to generate each \mbox{spin-image} is required.
This information was obtained by generating each \mbox{spin-image} on the two available computing architectures.
However, the obtained information was only valid for specific computing architectures and for the input data used.
Motivated by the work in~\cite{eleliemy2016loadbalancing}, the present work demonstrates the need for using \textit{dynamic loop scheduling} within~\mbox{PSIA} and extends it into~\mbox{EPSIA}.
\mbox{EPSIA} employs dynamic loop scheduling to execute efficiently \emph{both} on heterogeneous as well as homogeneous computing resources.
For distributed memory architectures (similar to the ones used in this work), the \mbox{DLS} techniques were integrated within a \mbox{master-worker} execution model~\cite{IBto,IBtog}.
The present work differs from~\cite{IBto, IBtog} as follows:
(1)~The~master is a dedicated resource and performs the \mbox{DLS}-based chunk calculations and the work assignment using multiple threads (16 threads);
(2)~There is no communication or work reassignment among the workers;
(3)~The input data is initially replicated in the main memory of all workers;
(4)~The~workers only send the results of calculating all chunks \emph{after} they receive the termination signals from the master.
\section{The Proposed Efficient PSIA}
\label{sec:proposed}
The efficient version of PSIA, proposed in this work and denoted \mbox{EPSIA}, is introduced next.
The \mbox{EPSIA} employs a \mbox{master-worker} execution model.
As shown in \figurename{~\ref{fig:protocol}}, the \mbox{master-worker} communication protocol consists of five steps:
(1)~A free worker \textit{requests} an amount of work (chunk of loop iterations);
(2)~The master \textit{calculates} (according to the selected DLS technique) and \textit{assigns} a chunk of loop iterations to the requesting worker;
(3)~When the worker finishes the assigned chunk of loop iterations, it notifies the master and \textit{requests} another chunk of loop iterations;
(4)~If there are still unexecuted loop iterations, the master calculates and assigns a new chunk of loop iterations to that worker; otherwise it sends a \textit{termination} signal;
(5)~When a worker receives a \textit{termination} signal, it \textit{sends back} the results of executing the assigned chunks to the master.
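As an illustration of how a worker may traverse the five steps above, consider the following minimal C/MPI sketch. The message tags, buffer layouts, and the two helper functions are our own simplifying assumptions and do not reflect the exact \mbox{EPSIA} implementation.
\begin{verbatim}
#include <mpi.h>

#define TAG_REQUEST   1  /* worker -> master: request work (steps 1, 3) */
#define TAG_ASSIGN    2  /* master -> worker: chunk bounds (steps 2, 4) */
#define TAG_TERMINATE 3  /* master -> worker: no iterations left (step 4) */

void compute_spin_images(long start, long end);  /* assumed kernel     */
void send_results_to_master(void);               /* assumed, step (5)  */

void worker_loop(void) {
    int dummy = 0;
    long start_end[2];
    MPI_Status st;
    for (;;) {
        /* steps (1) and (3): request a chunk of loop iterations */
        MPI_Ssend(&dummy, 1, MPI_INT, 0, TAG_REQUEST, MPI_COMM_WORLD);
        MPI_Recv(start_end, 2, MPI_LONG, 0, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &st);
        if (st.MPI_TAG == TAG_TERMINATE)     /* step (4): no work left */
            break;
        compute_spin_images(start_end[0], start_end[1]);
    }
    send_results_to_master();                /* step (5) */
}
\end{verbatim}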
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth, clip, trim= 0cm 6.5cm 0cm 0cm]{newfigures/protocol2.pdf}
\caption{The master-workers communication protocol. The encircled numbers denote the order of the control messages exchanged}
\label{fig:protocol}
\end{center}
\end{figure}
To integrate the \mbox{master-worker} execution model into \mbox{PSIA}, certain changes need to be made to Algorithm~\ref{algo:spin}.
The proposed algorithm is shown in Algorithm~\ref{algo:adaptedcalc}, in which the code parts in blue font color~\mbox{(Lines 1, 2, and 3)} represent the modifications required for Algorithm~\ref{algo:spin} to employ the master-worker execution model.
Recall from Section~\ref{subsec:related_work} that the current work differs from previous work~\cite{IBto,IBtog} as follows: (1)~The master is dedicated to handling the worker requests using multiple threads; (2)~The workers do not communicate with each other; (3)~The input data is replicated; (4)~The results are collected from the workers at the end.
These choices keep \mbox{EPSIA} close to the earlier \mbox{PSIA} implementation and allow a meaningful comparison between the two.
As discussed next in Section~\ref{sec:experimentsetup}, the main memory of recent computing resources satisfies the memory requirements of dense 3D objects.
Therefore, replicating the information of the 3D object and storing the generated \mbox{spin-images} on the worker side result in lightweight messages between the master and the workers.
Moreover, a dedicated master resource offers rapid responses to the workers, especially when executing on a large number of workers.
The integration of the \mbox{master-worker} execution model and of the communication protocol from \figurename{~\ref{fig:protocol}} into \mbox{EPSIA} is described in the two pseudocode algorithms that can be found below.
\begin{figure}
\let\@latex@error\@gobble
\begin{algorithm}[H]
\SetKwInOut{Input}{Inputs}
\SetKwInOut{Output}{Output}
adCalculateSpinImages (W, B, S, OP, M, {\color{blue}spinImages, start, end})\;
\Input{W: \mbox{image width}, B: \mbox{bin size}, S: \mbox{support angle}, \mbox{OP: list of oriented points}, \mbox{M: number of oriented points}, \mbox{spinImages: list of spin-images to be filled}, {\color{blue}\mbox{start, end: bounds of the assigned chunk of loop iterations}}}
\For{{\color{blue}imageCounter = start $\rightarrow$ end}}
{
\color{black}
{\color{blue}P = OP[imageCounter]\;}
tempSpinImage[W, W]\;
init(tempSpinImage)\;
\For{j = 0 $\rightarrow$ $M$}
{
X = OP[j]\;
$np_i$ = getNormal(P)\;
$np_j$ = getNormal(X)\;
\If{acos($np_i \cdot np_j$) $\le S$}
{
$k$ = $\Bigg \lceil$ $\cfrac{W/2 - np_i \cdot (X-P) }{B}$ $\Bigg \rceil$ \;
\vspace{0.2cm}
$l$ = $\Bigg \lceil$ $\cfrac{ \sqrt{||X-P||^2 - (np_i\cdot(X-P))^2} }{B}$ $\Bigg \rceil$\;
\If{0 $\le$ k $\textless$ W and 0 $\le$ l $\textless$ W}
{ tempSpinImage[k, l]++\; }
}
}
add(spinImages, tempSpinImage)\;
}
\caption{Modification of the \mbox{spin-image} calculation for integration with the \mbox{master-worker} execution model and the \mbox{DLS} techniques}
\label{algo:adaptedcalc}
\end{algorithm}
\end{figure}
\begin{algorithm}[H]
\SetKwInOut{Input}{Inputs}
\SetKwInOut{Output}{Output}
generatingSpinImages (OF, W, B, S, N, DM)\;
\Input{\mbox{OF: location of the input data}, \mbox{W: image width}, \mbox{B: bin size}, \mbox{S: support angle}, \mbox{N: number of generated spin-images}, \mbox{DM: DLS technique}}
\Output{spinImages: list of generated spin images}
OP = read3DPoints(OF)\;
scheduledTasks = 0\;
schedulingStep = 0\;
receivedResults = 0\;
startEnd[2]\;
workersCount = getCountOfWorkers()\;
sendToWorkers(OP)\;
\While{scheduledTasks $\textless$ N}
{
requestWork = receiveRequestFromAnyWorker()\;
worker = getSourceOfRequest(requestWork)\;
chunk = getChunk(DM, schedulingStep, N, workersCount)\;
chunk = min(chunk, N - scheduledTasks)\;
startEnd[0] = scheduledTasks\;
startEnd[1] = scheduledTasks + chunk\;
sendResponseToWorker(worker, startEnd, assignWork)\;
scheduledTasks = scheduledTasks + chunk\;
schedulingStep = schedulingStep + 1\;
}
\While{receivedResults $\textless$ workersCount}
{
request = receiveRequestFromAnyWorker()\;
requestType = getRequestType(request)\;
worker = getSourceOfRequest(request)\;
\eIf{requestType = assignWork}
{
sendResponseToWorker(worker, NULL, terminate)\;
}
{
receiveDataFromWorker(worker, tempSpinImages)\;
add(spinImages, tempSpinImages)\;
receivedResults++\;
}
}
\caption{The proposed \mbox{EPSIA} master perspective}
\label{algo:master}
\end{algorithm}
\newpage
\begin{algorithm}[H]
\SetKwInOut{Input}{Inputs}
\SetKwInOut{Output}{Output}
generatingSpinImages (OF, W, B, S, DM)\;
\Input{\mbox{OF: location of the input data}, \mbox{W: image width}, \mbox{B: bin size}, \mbox{S: support angle}, \mbox{DM: DLS technique}}
\Output{\mbox{spinImages: list of generated spin images}}
receiveFromMaster(OP)\;
M = getLength(OP)\;
startEnd[2]\;
sendRequest(assignWork)\;
response = receiveResponseFromMaster()\;
spinImages = createSpinImagesList(M)\;
\While{response = assignWork}
{
startEnd = getResponseData(response)\;
/* as shown in Algorithm~\ref{algo:adaptedcalc} */\;
adCalculateSpinImages(W, B, S, OP, M, spinImages, startEnd[0], startEnd[1])\;
sendRequest(assignWork)\;
response = receiveResponseFromMaster()\;
}
sendDataToMaster(spinImages)\;
\caption{The proposed \mbox{EPSIA} worker perspective}
\label{algo:worker}
\end{algorithm}
\section{Setup of Experiments}
\label{sec:experimentsetup}
This section contains the essential information concerning the experimental setup needed to reproduce the current work.
\subsection{Input Data Set}
\label{subsec:data_set}
As discussed in Section~\ref{subsec:psia}, the time complexity of \mbox{SIA} is~$O(N M)$.
It is, therefore, important to consider \mbox{3D} objects with a high density of \mbox{3D} points.
In Table~\ref{tab:dataset}, the objects of the \mbox{3D} mesh watermarking~\cite{dataset} data set are presented.
The \mbox{3D} mesh watermarking data set consists of ten dense \mbox{3D} objects.
These \mbox{3D} objects vary in size from approximately~$3K$ to approximately~$826K$ points.
Out of the 3D~objects in Table~\ref{tab:dataset}, the \textit{Ramesses} object is considered as the extreme case in terms of \mbox{3D} point density for the \mbox{EPSIA}.
The \textit{Ramesses} object contains the largest number of oriented points, approximately~$826K$,
and is considered for comparing the performance of the proposed~\mbox{EPSIA} and the earlier~\mbox{PSIA}.
Similar to~\cite{eleliemy2016loadbalancing}, the present work considers the three \mbox{spin-image} generation parameters W, B and S to be 5, 0.1 and 2$\pi$, respectively. In addition, the present work considers the number of generated \mbox{spin-images}~N to be 10\% of the total number of oriented points~M~(see Table~\ref{tab:notation})~\cite{eleliemy2016loadbalancing}.
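With these values, the number of generated \mbox{spin-images} for the \textit{Ramesses} object amounts to
\[
N = 0.1 \cdot M \approx 0.1 \cdot 826K \approx 82.6K,
\]
in line with the $80K$ \mbox{spin-images} used in the strong scalability experiments of Section~\ref{sec:results}.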
\begin{table}[!t]
\caption{The 3D objects in the mesh watermarking \mbox{data set}~\cite{dataset}}
\vspace{-0.35cm}
\label{tab:dataset}
\begin{center}
\begin{tabular}{ c | c}
\textbf{Object} & \textbf{Approximate number of points ($\times$ $10^3$)} \\ \hline
Cow & 3 \\ \hline
Casting & 5 \\ \hline
Bunny & 35 \\ \hline
Hand & 37 \\ \hline
Dragon & 50 \\ \hline
Crank & 50 \\ \hline
Rabbit & 71 \\ \hline
Venus & 101 \\ \hline
Horse & 113 \\ \hline
\textbf{Ramesses} & \textbf{826} \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Hardware Platform Specifications}
\label{subsec:hw_specs}
Two different types of computing resources are used in this work to assess and compare the performance of the proposed \mbox{EPSIA} and the earlier \mbox{PSIA}.
The first platform type, denoted Type1, represents a \mbox{two-socket} Intel Xeon~\mbox{E5-2640} node~(20~cores in total) with~\mbox{64 GB~RAM}. Each core has 32~KB and 256~KB as L1 and L2 caches, respectively.
Cores of the same processor socket share 25~MB L3 cache.
The second platform type, denoted Type2, is a standalone Intel Xeon Phi~7210~(64~cores) with~\mbox{96 GB~RAM}. Each core has 32~KB L1 cache. Each tile (two cores) has 1~MB L2 cache.
The platform types Type1 and Type2 are part of a computing cluster that consists of 26~nodes:~22~nodes of Type1 and~4~nodes of Type2.
All nodes are interconnected in a non-blocking fat-tree topology. The network characteristics are: Intel \mbox{OmniPath} fabric, \mbox{100~GBit/s} link bandwidth, and 100~ns~(for homogeneous resources) and 300~ns~(for heterogeneous resources) link latency.
This computing cluster is actively used for research and educational purposes.
Therefore, only eight nodes of Type1 and four nodes of Type2 were dedicated to the present work.
\subsection{Implementation and Execution Details}
\label{subsec:implementation}
The Intel message passing interface library (Intel-MPI, version \mbox{17.0.1}) was used to compile and execute the implementation of the proposed \mbox{EPSIA}.
The \mbox{Intel-MPI} library has the advantage of default pinning of operating system level processes to hardware cores (i.e., process pinning).
Pinning a particular MPI process to a hardware core eliminates the undesired process migration that may be performed by the operating system during execution.
Moreover, to examine the performance of the \mbox{DLS} in one of the worst cases, all master-worker control and data messages exchanged (cf. Fig.~\ref{fig:protocol}) are implemented using MPI point-to-point synchronous communication primitives.
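To illustrate why synchronous primitives represent such a worst case, consider the following C sketch of the master-side chunk assignment (step~(2) of the protocol in \figurename{~\ref{fig:protocol}}); the buffer layout and the tag value are our own assumptions. \texttt{MPI\_Ssend} completes only after the matching receive has started, so every scheduling message exposes its full communication latency to the application.
\begin{verbatim}
#include <mpi.h>

/* Sketch of the master assigning one chunk to a worker; the
   synchronous MPI_Ssend returns only once the worker has posted
   the matching receive. */
void assign_chunk(int worker, long start, long end) {
    long start_end[2] = { start, end };
    MPI_Ssend(start_end, 2, MPI_LONG, worker, /* tag */ 2,
              MPI_COMM_WORLD);
}
\end{verbatim}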
A user-specified machine file is used to map the MPI ranks to the computing resources (cores of nodes of Type1 and Type2).
All computing resources are listed in the machine file in a certain order.
This order indicates the MPI rank assigned to each computing resource during the execution of the application.
When executing on homogeneous resources of Type1 or Type2, where all computing resources are identical, this order has no influence on the performance.
However, when executing on heterogeneous resources of Type1 and Type2, all computing resources of Type2 are listed in the machine file before computing resources of Type1.
The rationale behind this listing is to enable the nodes with the largest number of cores~(Type2) to take the first MPI ranks.
In the next section, the influence of this listing is presented and discussed.
The master~(MPI rank~=~0) is always mapped to a dedicated computing resource.
This computing resource is a core of a dedicated node of Type1.
This dedicated computing resource is always written at the beginning of the machine file.
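As an illustration, a machine file for a heterogeneous run could look as follows; the host names are hypothetical and the comments are added only for readability. What matters is the ordering (the dedicated master resource first, then the Type2 nodes, then the Type1 nodes) and the \texttt{hostname:count} syntax used by Intel MPI machine files.
\begin{verbatim}
master-t1:1     # dedicated master rank (core of a Type1 node)
node01-t2:64    # Type2 (Xeon Phi) nodes listed first ...
node02-t2:64
node01-t1:20    # ... followed by Type1 (Xeon) nodes
node02-t1:20
\end{verbatim}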
Each experiment has been executed fifteen times to obtain certain descriptive measurements, such as the maximum, minimum, average, median, and the first and third quartiles.
\subsection{Reproducibility Information}
\label{subsec:reprod}
To enable reproduction of this work, apart from the information in Sections~\ref{subsec:data_set},~\ref{subsec:hw_specs}, and~\ref{subsec:implementation}, the source code of the proposed \mbox{EPSIA} is available upon request from the authors under the GNU Lesser General Public License~\mbox{(LGPL)}.
In addition to the raw results which are already available online\footnote{https://c4science.ch/diffusion/3863/}, an Easybuild\footnote{http://easybuild.readthedocs.io} configuration file is provided to guarantee the usage of a toolchain that is similar to the one used for this work.
The code was compiled and executed using the Intel MPI~version~17.0.1.
The \mbox{O3} compilation flag was used for execution on Type1 nodes.
In addition, the \mbox{xCommon-AVX512} compilation flag was used for execution on Type2 nodes.
All parallel computing nodes use CentOS Linux release~7.2.1511 as operating system.
The open grid scheduler/grid engine version 2011.11p1 is the batch system of the HPC platform where all experiments were performed. The network file system (NFS) version 4 (NFS4) is configured and used for the HPC platform.
\section{Experimental Results and Evaluation}
\label{sec:results}
In this section, the results of executing the \mbox{EPSIA} on homogeneous and heterogeneous computing resources are discussed and compared to the results of executing the \mbox{PSIA}.
\subsection{Performance of~\mbox{EPSIA} vs.~\mbox{PSIA} on Homogeneous Computing Resources}
\label{subsec:perfomancewod}
In this section, the performance of the \mbox{PSIA} is compared to the performance of the proposed \mbox{EPSIA} for two scalability profiles: weak and strong.
As discussed in Section~\ref{subsec:related_work}, the \mbox{PSIA} \textit{statically} divides and assigns the \mbox{spin-image} calculations to the available computing resources.
In all experiments, the \mbox{PSIA} is referred to as \mbox{PSIA-STATIC}.
\mbox{EPSIA-SS}, \mbox{EPSIA-GSS} and \mbox{EPSIA-FAC} denote the proposed \mbox{EPSIA} code parallelized with the three \mbox{DLS} techniques: \mbox{SS}, \mbox{GSS}, and \mbox{FAC}, respectively.
\subsubsection{Weak Scalability}
\label{subsubsec:weawod}
For conducting weak scalability experiments, the number of the generated \mbox{spin-images} and the number of the computing resources are increased such that their ratio is kept constant at $8K$ \mbox{spin-images} per computing node.
The number of the generated \mbox{spin-images} in this ratio represents approximately~1\% of the total \mbox{spin-images} that can be generated from the \textit{Ramesses} object.
This work percentage is selected to result in a suitable, yet representative, execution time per experiment, given that each experiment has been executed fifteen times.
A comparison between the \emph{parallel execution time} of the proposed \mbox{EPSIA} and the PSIA achieved by executing them on different node counts of the two platform types is presented in \figurename{~\ref{fig:wst1}~and~\ref{fig:wst2}}.
The execution time of the \mbox{PSIA-STATIC} is significantly higher than that of \mbox{EPSIA-SS}, \mbox{EPSIA-GSS}, and \mbox{EPSIA-FAC}.
For \mbox{PSIA-STATIC} on \mbox{Type1} nodes, increasing the number of the generated \mbox{spin-images} from~$8K$ to~$64K$ (i.e., by a factor of 8) and increasing the number of the computing resources from~$20$ to~$160$ (i.e., by a factor of 8) result in an undesired performance degradation.
Specifically, the execution time increased from~21 to 25~seconds, an almost~20\% increase.
The \mbox{EPSIA-SS} did not exhibit such performance degradation.
Specifically, the execution time increased from~20 to 20.5~seconds, an increase of only about~2.5\%.
Similarly to the performance on \mbox{Type1} nodes, for \mbox{PSIA-STATIC} on \mbox{Type2} nodes, increasing the number of the generated \mbox{spin-images} from~$8K$ to~$32K$ (i.e., by a factor of 4) and increasing the number of the computing resources from~$64$ to~$256$ (i.e., by a factor of 4) result in an undesired performance degradation.
In particular, the execution time increased from~30 to 35~seconds, approximately a~17\% increase.
Executing the \mbox{EPSIA-SS} on \mbox{Type2} nodes resulted in poorer performance than the execution on \mbox{Type1} nodes.
In particular, the execution time increased from~27.5 to 30~seconds, approximately a~9\% increase.
However, \mbox{EPSIA-SS} still outperforms all other versions of the proposed \mbox{EPSIA}.
\begin{figure}
\centering
\includegraphics[width=\textwidth, clip,trim=0cm 2.5cm 0cm 0cm]{newfigures/t1-weak.pdf}
\caption{Scalability of the proposed \mbox{EPSIA} and the earlier \mbox{PSIA} on homogeneous computing resources of Type1. The number of generated \mbox{spin-images} per computing node is~$8K$.}
\label{fig:wst1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth, clip,trim=0cm 2.5cm 0cm 0cm]{newfigures/t2-weak.pdf}
\caption{Scalability of the proposed \mbox{EPSIA} and the earlier \mbox{PSIA} on homogeneous computing resources of Type2. The number of generated \mbox{spin-images} per computing node is~$8K$.}
\label{fig:wst2}
\end{figure}
Such a difference in the performance of the different \mbox{EPSIA} versions and \mbox{PSIA-STATIC} can be explained by the load imbalance caused by the static division and the static assignment of the \mbox{spin-image} generation in \mbox{PSIA-STATIC}.
In general, the \mbox{SS} algorithm incurs high communication overhead caused by the large volume\footnote{Depending on the input data distribution strategy, which can be either centralized, partitioned, or replicated.} and/or number of messages\footnote{At least equal to the total number of parallel tasks within the application.} between the master and the worker.
In this work, however, the input data is \emph{replicated} and the master exchanges only lightweight messages (a few bytes per message) with the workers to indicate the chunk sizes they need to execute.
The number of such lightweight messages corresponds to the total number of chunks of tasks calculated by the dynamic loop scheduling algorithm and is different across DLS techniques.
The superiority of the \mbox{EPSIA-SS} over the other two \mbox{EPSIA} versions can be explained by its fine-grain self-scheduled task assignment design, by the high speed of the network infrastructure used in the experiments, and by the use of a multithreaded master process on a dedicated computing node.
Together, these aspects result in a more balanced execution time among the computing resources and, hence, in a shorter parallel execution time for \mbox{EPSIA-SS}.
\subsubsection{Strong Scalability}
\label{subsubsec:strwod}
To perform strong scalability experiments, the number of generated spin-images is kept constant while the number of the computing resources is increased.
The number of generated \mbox{spin-images} is set at $80K$, which represents approximately~10\% of the total \mbox{spin-images} that can be generated from the \textit{Ramesses} object.
A comparison between the \emph{parallel cost} of executing the proposed \mbox{EPSIA} and the earlier \mbox{PSIA} on Type1 and Type2 nodes is presented in \figurename{~\ref{fig:st1}~and~\ref{fig:st2}}, respectively.
The parallel cost is calculated as the number of the computing resources used to execute a parallel application multiplied by the total parallel execution time of that application.
The selection of parallel cost as a performance metric (over the parallel execution time) is due to the fact that it reflects the benefits of using additional computing resources versus the time needed to execute the parallel algorithm.
A good strong scalability profile of a program corresponds to an almost constant parallel cost for any number of computing resources.
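In symbols, denoting by $P$ the number of computing resources and by $T_{\mathrm{par}}(P)$ the parallel execution time, the parallel cost reads
\[
\mathrm{cost}(P) = P \cdot T_{\mathrm{par}}(P),
\]
which remains constant precisely when $T_{\mathrm{par}}$ decreases linearly in $P$, i.e., under ideal strong scaling.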
It can be seen in \figurename{~\ref{fig:st1}~and~\ref{fig:st2}} that \mbox{PSIA-STATIC} does not exhibit a good strong scalability profile for either computing resource type.
Similar to the weak scalability results in Section~\ref{subsubsec:weawod}, the three versions of the proposed \mbox{EPSIA} outperform \mbox{PSIA-STATIC}.
\begin{figure}
\centering
\includegraphics[width=\textwidth, clip,trim=0cm 2.5cm 0cm 0cm]{newfigures/t1-strong.pdf}
\caption{Scalability of the proposed \mbox{EPSIA} and the earlier \mbox{PSIA} on homogeneous computing resources of Type1. The number of generated \mbox{spin-images} is $80K$.}
\label{fig:st1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth, clip,trim=0cm 2.5cm 0cm 0cm]{newfigures/t2-strong.pdf}
\caption{Scalability of the proposed \mbox{EPSIA} and the earlier \mbox{PSIA} on homogeneous computing resources of Type2. The number of generated \mbox{spin-images} is $80K$.}
\label{fig:st2}
\end{figure}
The performance advantage of \mbox{EPSIA-SS} over \mbox{EPSIA-GSS} and \mbox{EPSIA-FAC} is attributed to the small message sizes exchanged between the master and the workers and to the high speed of the network infrastructure used in the experiments.
The performance gap between \mbox{EPSIA-SS} and the other two versions, \mbox{EPSIA-GSS} and \mbox{EPSIA-FAC}, can be explained similarly to the corresponding gap in the weak scalability experiments in Section~\ref{subsubsec:weawod}.
The performance gap may, however, be reduced in certain other cases where the network infrastructure has a lower performance than the one used in this work.
In both weak (Section~\ref{subsubsec:weawod}) and strong (Section~\ref{subsubsec:strwod}) scalability experiments, the \mbox{EPSIA-SS} achieves a speedup of approximately 1.26 on the largest number of computing resources, compared to the performance of \mbox{PSIA-STATIC} on \mbox{Type1} nodes.
On \mbox{Type2} nodes, \mbox{EPSIA-SS} achieves a speedup of approximately 1.16 compared to \mbox{PSIA-STATIC}.
\subsection{Performance of~\mbox{EPSIA} vs.~\mbox{PSIA} on Heterogeneous Computing Resources}
\label{subsec:perfomancewd}
The performance of the weak scalability and the strong scalability experiments executed on heterogeneous computing resources is shown in \figurename{~\ref{fig:hybridw}~and~\ref{fig:hybridws}}, respectively.
These performance results are very similar to the results obtained on homogeneous computing resources.
\mbox{EPSIA-GSS} exhibits an interesting behavior on heterogeneous computing resources compared to that on homogeneous computing resources.
In particular, its performance is close to the performance of \mbox{PSIA-STATIC}.
This is due to the order in which the available Type1 and Type2 resources request work from the master.
As discussed in Section~\ref{subsec:dls}, the \mbox{GSS} algorithm assigns the largest chunk of loop iterations to the first requesting worker.
Recall from Section~\ref{subsec:implementation} that the heterogeneous worker computing resources listed in the machine file used in this work commence with Type2 followed by Type1.
Also, the master is a dedicated computing resource (core) mapped on a separate node of Type1 and it is always written in the machine file before the worker computing resources of Type1 and Type2.
This listing of resources in the machine file is meant to enable the use of the computing nodes with the largest number of computing cores, i.e., 64~cores for Type2 compared to 20~cores for Type1.
Changing this listing may enhance the performance of \mbox{EPSIA-GSS} without changing the overall trend of the results, in which \mbox{PSIA-STATIC} and \mbox{EPSIA-SS} perform the worst and the best, respectively.
In both the weak and the strong scalability experiments, the \mbox{EPSIA-SS} achieves a speedup of approximately 2 on the largest number of computing resources, compared to \mbox{PSIA-STATIC} on nodes of \mbox{Type1} and \mbox{Type2}.
\begin{figure}
\centering
\includegraphics[width=\textwidth, clip,trim=0cm 1.5cm 0cm 0cm]{newfigures/t1t2-weak-a.pdf}
\caption{Scalability of the proposed \mbox{EPSIA} and the earlier \mbox{PSIA} on heterogeneous computing resources of Type1 and Type2, respectively. The number of generated \mbox{spin-images} per computing node is~$8K$.}
\label{fig:hybridw}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth, clip,trim=0cm 1.5cm 0cm 0cm]{newfigures/t1t2-strong-a.pdf}
\caption{Scalability of the proposed \mbox{EPSIA} and the earlier \mbox{PSIA} on heterogeneous computing resources of Type1 and Type2, respectively. The number of generated \mbox{spin-images} is $80K$.}
\label{fig:hybridws}
\end{figure}
\section{Conclusion and Future Work}
\label{sec:conclusion}
The static assignment of the spin-image generation tasks using \mbox{PSIA}~\cite{eleliemy2016loadbalancing} causes severe load imbalance during execution.
The load imbalance worsens when executing the \mbox{PSIA} on heterogeneous computing resources.
By employing dynamic loop scheduling (DLS) techniques and the master-worker execution model, the proposed \mbox{EPSIA} reduces the load imbalance when executing on homogeneous and on heterogeneous computing resources and delivers higher performance at larger scales than the previous work.
The proposed \mbox{EPSIA} employs three different \mbox{DLS} techniques: \mbox{SS}, \mbox{GSS}, and \mbox{FAC}.
For the largest problem size ($80K$~\mbox{spin-images}), \mbox{EPSIA-SS} outperforms the earlier \mbox{PSIA} by a factor of 1.2~and 2~on homogeneous and heterogeneous computing resources, respectively.
Due to the high speed network used in this work, the \mbox{EPSIA-SS} shows the best performance.
Further investigation is planned to assess the performance of the proposed \mbox{EPSIA} across different hardware setups.
Also, additional and more complex \mbox{DLS} techniques will be integrated with the \mbox{EPSIA}.
As discussed in Section~\ref{subsec:perfomancewd}, the performance of the \mbox{EPSIA-GSS} is affected on heterogeneous computing resources by the type of resource requesting work in the initial chunk allocations.
Further work is, therefore, needed to understand the effects of different resource listings in the machine file on the performance of \mbox{EPSIA}.
\section*{Acknowledgment}
This work is in part supported by the Swiss National Science Foundation in the context of the ``Multi-level Scheduling in Large Scale High Performance Computers'' (MLS) grant, number 169123.
The moduli space $\sM_g$ of curves of genus $g$ is known to be unirational for $g \leq 14$ \cite{Severi, SernesiUnirationality, ChangRanUnirationality, VerraUnirationality}, while for $g=22$ or $g \geq 24$ it is proved to be of general type \cite{HarrisMumfordKodaira, EisenbudHarrisKodaira, FarkasGeometry, FarkasBirational}. For the cases in between, only partial results are available: $\sM_{23}$ has positive Kodaira dimension \cite{FarkasGeometry}, $\sM_{15}$ is rationally connected \cite{ChangRanKodaira, BrunoVerraRationally} and $\sM_{16}$ is uniruled \cite{ChangRanSlope, FarkasBirational}.
Similarly, the unirationality of Hurwitz spaces $\sH_{g,d}$ parameterizing $d$-sheeted branched simple covers of the projective line by smooth curves of genus $g$ is of fundamental interest. For small values of $d$ or $g$ they are proven to be unirational, but for larger values few results are known. See Section \ref{unirationalHurwitzSpaces} for a discussion on the known results.
In this paper we introduce a correspondence between (general) curves $C$ in $\PP^4$ with fixed genus and degree, together with a hypersurface $X\supset C$, and the space of certain matrix factorizations on $X$. This leads to a new technique to construct curves in $\PP^4$, which has been positively used by Schreyer \cite{SchreyerMatrix} in the particular case of curves of genus $15$ and degree $16$.
The goal of this paper, in addition to showing how matrix factorizations can be used to construct curves in $\PP^4$, is to use this technique to prove new positive results. Our main contribution is the following
\begin{theorem*}[Theorem \ref{unirationalityThm}]
$\sH_{12,8}$ is unirational.
\end{theorem*}
To prove this result, we construct explicitly a unirational dominant family of curves of genus 12 and degree 14 in $\PP^4$ by means of matrix factorizations, showing thus that the Brill--Noether space $\sW^4_{12,14}$ is unirational. A general point $(C,L)$ in $\sW^4_{12,14}$ gives rise to a point $(C,K_C-L)$ in $\sW^1_{12,8}$ and conversely, whence the unirationality of $\sW^1_{12,8}$ and $\sH_{12,8}$.
The study of the correspondence between curves and matrix factorizations in another particular case leads to a very cheap proof of the following
\begin{theorem*}[Corollary \ref{unirulednessThm}]
$\sW^1_{13,9}$ is uniruled.
\end{theorem*}
The same method yields a proof of the uniruledness of $\sW^1_{12,8}$, already implied by the previous theorem, and of $\sW^1_{11,7}$ and $\sW^1_{10,6}$, already known to be unirational \cite{GeissUnirationality,GeissThesis}.
In Section \ref{unirationalHurwitzSpaces} we will formulate some speculations and questions about the range of unirational Hurwitz spaces, which partly motivates our study; we remark that the unirationality of $\sH_{12,8}$ and the uniruledness of $\sW^1_{13,9}$ fit perfectly into the picture.
Matrix factorizations can be used constructively more in general. We present a way to construct unirational families of curves of genus $g \in [16,20]$; even though these families will be far from being dominant on $\sM_g$, such concrete examples offer the chance to prove some other results. For instance, we are able to prove the following
\begin{theorem*}[Theorem \ref{generalQuartic}]
A general cubic hypersurface in $\PP^4$ contains a family of dimension $2d$ of curves of genus $g$ and degree $d$ for
\[
(g,d) \in \{(12,14), (13,15)\}.
\]
A general quartic hypersurface in $\PP^4$ contains a $d$-dimensional family of curves of genus $g$ and degree $d$ for
\[
(g,d) \in \{(16,17), (17,18), (18,19), (19,20), (20,20)\}.
\]
\end{theorem*}
The construction of our families of curves of genus $g \in [16,20]$ relies on considering particular rational surfaces arising when trying to adapt our technique to these specific cases. Other instances of results which can be proved by looking at specific examples concern the structure of the syzygies of general curves of particular genera and degrees, as mentioned in Theorem \ref{constructionThm}.
In the paper, we will often need to exhibit a concrete example to prove that some open conditions are generally satisfied. Our explicit constructions are performed by means of the software \Mac \cite{M2} and run best over a finite field. Semicontinuity arguments will ensure the existence of suitable examples over the rational or the complex field as well, as explained in Remark \ref{posCharSuffices}. For the supporting documentation regarding the computational proofs contained in this paper, we will always refer to \cite{SchreyerTanturriCode}. \newline
The paper is structured as follows: in Section \ref{unirationalHurwitzSpaces} we survey the known results about the unirationality of Hurwitz spaces and we present some questions and speculations about what kind of general behavior can be expected. In Section \ref{matrixFact} we recall some basic definitions and general facts about matrix factorizations and we explain, starting with a motivating example, the correspondence between particular matrix factorizations and curves in $\PP^4$.
The key point of the correspondence is the Reconstruction Theorem \ref{reconstructionThm}. In Section \ref{uniruledness} we prove Theorem \ref{constructionThm}, which gives us an effective method to produce curves in $\PP^4$ starting from suitable matrix factorizations; moreover, we use the previous correspondence to provide a cheap proof of the uniruledness of $\sW^1_{13,9}$ (Corollary \ref{unirulednessThm}). In Section \ref{uniratHurwitz} we prove our main result, Theorem \ref{unirationalityThm}; for this sake, we use particular matrix factorizations arising from suitable auxiliary curves of genus 10 and degree 13. Finally, in Section \ref{families} we construct unirational families of curves of genus $16 \leq g \leq 20$ lying on particular rational surfaces in $\PP^4$.
\begin{ack}
The authors would like to thank the referee for valuable suggestions and remarks.
\end{ack}
\begin{notation*}
In the paper we will use \Mac notation for Betti tables. If a module $M$ has Betti numbers $\beta_{i,j}= \dim \Tor_i^R(M,{K})_j $ over a ring $R$ with base field ${K}$, its Betti table will be written as \rule{0pt}{0pt}
\[
\begin{array}{c|ccccc}
& \rule{0.5ex}{0pt} 0 \rule{0.5ex}{0pt} & \rule{0.5ex}{0pt} 1 \rule{0.5ex}{0pt} & \rule{0.5ex}{0pt} 2 \rule{0.5ex}{0pt} & \dotso \\ \hline
0& \beta_{0,0} & \beta_{1,1} & \beta_{2,2} &\dotso \\
1& \beta_{0,1} & \beta_{1,2} & \beta_{2,3} & \dotso\\
2& \beta_{0,2} & \beta_{1,3} & \beta_{2,4} & \dotso\\
\vdots& \vdots & \vdots &\vdots & \ddots
\end{array}
\]
\end{notation*}
\section{Unirationality of Hurwitz spaces}\label{unirationalHurwitzSpaces}
In this section we briefly survey what we know about the unirationality of the Hurwitz spaces $\sH_{g,d}$. To put the question into the right framework we recall a few facts from Brill--Noether theory.
A general curve $C$ of genus $g$ has a linear system $g^r_d$ of dimension $r$ of
divisors of degree $d$ if and only if the Brill--Noether number
$$ \rho=\rho(g,r,d)=g-(r+1)(g+r-d)$$
is non-negative.
Moreover, in this case, the Brill--Noether scheme
$$W^r_d(C)=\{ L \in \Pic^d(C) \mid \hh^0(L) \ge r+1 \}$$
has dimension $\rho$.
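For instance, in the two cases featuring in our main result, one computes
\[
\rho(12,4,14) = 12-5\cdot(12+4-14) = 2, \qquad \rho(12,1,8) = 12-2\cdot(12+1-8) = 2,
\]
so both $W^4_{14}(C)$ and $W^1_8(C)$ are $2$-dimensional for a general curve $C$ of genus $12$, in accordance with the residual correspondence $L \mapsto K_C-L$ used in the Introduction.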
Recall some notation from \cite{ACGH}:
$$
\sM^r_{g,d} = \{ C \in \sM_g \mid \exists L \in W^r_d(C) \}, \;
$$
$$
\sW^r_{g,d} = \{ (C,L) \mid C \in \sM^r_{g,d}, L \in W^r_d(C) \},
$$
$$
\sG^r_{g,d} = \{ (C,L,V) \mid (C,L) \in \sW^r_{g,d}, V \subset \HH^0(L), \dim V =r+1 \}.
$$
Thus we have natural morphisms
$$
\xymatrix{
\sH_{g,d} \ar[r]^\alpha & \sG^1_{g,d} \ar[r]^\beta & \sW^1_{g,d} \ar[r]^\gamma & \sM^1_{g,d}; \cr}
$$
with our notation, $\alpha$ is a $\PGL(2)$-bundle over the base point free locus, with fibers corresponding to the choices of a basis of $V$, the fibers of $\beta$ are Grassmannians $\GG(2,\HH^0(C,L))$, and the fibers of $\gamma$ are the $W^1_d(C)$. Thus the unirationality of $\sH_{g,d}$ is equivalent to the unirationality of $\sW^1_{g,d}$.
The unirationality of $\sH_{g,d}$ for $2 \le d \le 5$ and arbitrary $g\ge 2$ has been known for a long time. The case $d=5$ is due to Petri \cite{Petri}, with clarification given by the Buchsbaum--Eisenbud structure Theorem \cite{BuchsbaumEisenbud,SchreyerSyzygies}, and independently to B.~Segre \cite{Segre}, with clarification by Arbarello and Cornalba \cite{ArbarelloCornalba}.
The case for $g \le 9$ is due to Mukai:
\begin{theorem}[Mukai \cite{Mukai}]
A general canonical curve $C$ of genus $g=7,8,9$ arises as transversal intersection of a linear space with a homogeneous variety:
\rule{0pt}{0pt}
\begin{center}
\begin{tabular}{ccc}
\midrule
\rule{5pt}{0pt}$7$\rule{5pt}{0pt} & \rule{5pt}{0pt}$C = \PP^{6} \cap {\rm Spinor}^{10} \subset \PP^{15}$\rule{5pt}{0pt} & \rule{5pt}{0pt}isotropic subspaces of $Q^8 \subset \PP^9$\rule{5pt}{0pt} \\
$8$ & $C = \PP^{7} \cap \GG(2,6)^8 \subset \PP^{14}$ & Grassmannian of lines in $\PP^5$ \\
$9$ & $C = \PP^{8} \cap \LL(3,6)^6 \subset \PP^{13}$ & Lagrangian subspaces of $(\CC^6,\omega)$ \\
\midrule
\end{tabular}
\end{center}
\end{theorem}
Structure results for canonical curves of genus $g \le 6$ are classical, see, e.g., \cite{SchreyerSyzygies}.
\begin{corollary}
The moduli spaces $\sM_{g,g}$ of $g$-pointed curves of genus $g$ and the universal Picard varieties $\Pic^d_g$ are unirational for $g\le 9$ and any $d$. The spaces
$ \sM^1_{g,d}$ and $\sH_{g,d}$ are unirational for $g \le 9$ and $d \ge g$.
\end{corollary}
\begin{proof} The argument is the same as in \cite[\textsection 1]{VerraUnirationality}. We can choose $g$ general points $p_1,\ldots,p_g$ in the homogeneous variety and can take $\PP^{g-1}$ as their span. Then the intersection of the homogeneous variety with this $\PP^{g-1}$ gives a smooth curve $C$ of genus $g$
together with $g$ marked points. For the line bundle, we may take $L= \sO_C(\sum_{j=1}^g d_j p_j)$ for integers $d_1, \ldots, d_g$ with $\sum_{j=1}^g d_j =d$.
As for the unirationality of $\sM^1_{g,d}$ for $d\ge g+1$, with $L$ as above we have $\hh^0(C,L) \ge 2$. In case $d=g$, we take $L=\omega_C(-\sum_{j=1}^{g-2} p_j)$, which is a line bundle $L \in W^1_g(C) \setminus W^2_g(C)$ by
Riemann--Roch. The unirationality of $\sH_{g,d}$ then follows. \qedhere
\end{proof}
In the range $d \le 5$ or $g\le 9$, apart from a few cases due to Florian Gei\ss\phantom{ }\cite{GeissThesis}, only the unirationality of $\sH_{9,8}$ needed to be proved. This has recently been established in \cite{DamadiSchreyer}.
\begin{figure}
\begin{scriptsize}
\begin{tabular}{|c|cccc|ccccccccc}
{45}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G}&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell \cr
$\mid$&\bcell&\bcell&\bcell& \bcell{\color{blue}$\mid$} &&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
$\mid$&\bcell&\bcell&\bcell& \bcell{\color{blue}$\mid$} &&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
{40}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G}&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell \\
$\mid$&\bcell&\bcell&\bcell& \bcell{\color{blue}$\mid$} &&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
$\mid$&\bcell&\bcell&\bcell& \bcell{\color{blue}$\mid$} &&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
{36}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G}&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell \\
{35}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G}&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
34&\bcell&\bcell&\bcell& \bcell{\color{blue}P} &&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
{33}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G}&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
32&\bcell&\bcell&\bcell&\bcell {\color{blue}P}&&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell &\rcell \\
{31}&\bcell&\bcell&\bcell&\bcell {\color{blue}P}&\bcell{\color{blue}G}&\rcell &\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
{30}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G}&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
29&\bcell&\bcell&\bcell&\bcell {\color{blue}P}&&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell &\rcell \\
{28}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G}&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
27 &\bcell&\bcell&\bcell&\bcell {\color{blue}P}&\bcell{\color{blue}G}&&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
{26}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G} &&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell{\color{red}$ {\hbox{EH}}$} \\
{25}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G} &&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell{\color{red}$ {\hbox{HM}}$} \\
{24}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G} &&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell{\color{red}$ {\hbox{EH}}$}&\rcell{\color{red}$ {\hbox{EH}}$} \\
{23}&\bcell&\bcell&\bcell&\bcell{\color{blue}P} &\bcell{\color{blue}G}&&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell{\color{red}HM}&\rcell{\color{red}HM}\\
{22}&\bcell&\bcell&\bcell&\bcell {\color{blue}P}&\bcell {\color{blue}G}&&\rcell&\rcell&\rcell&\rcell&\rcell{\color{red}$ {\;\hbox{F}\;}$} &\rcell{\color{red}$ {\;\hbox{F}\;}$}&\rcell{\color{red}$ {\;\hbox{F}\;}$}\\ \hline
21&\bcell&\bcell&\bcell&\bcell {\color{blue}P}&\bcell {\color{blue}G}&&\rcell&\rcell&\rcell&\rcell&\rcell{\color{black}} &\rcell&\rcell\\
$\mid$ &\bcell&\bcell&\bcell&\bcell{\color{blue}$\mid$}&\bcell{\color{blue}$\mid$}&&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
$\mid$ &\bcell&\bcell&\bcell&\bcell{\color{blue}$\mid$}&\bcell{\color{blue}$\mid$}&&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
$\mid$ &\bcell&\bcell&\bcell&\bcell{\color{blue}$\mid$}&\bcell{\color{blue}$\mid$}&&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell&\rcell\\
16&\bcell&\bcell&\bcell&\bcell{\color{blue}P}&\bcell{\color{blue}G}&&\rcell&\rcell$ { \; \;}$&\rcell&\rcell&\rcell&\rcell&\rcell\\
15&\bcell&\bcell&\bcell&\bcell{\color{blue}P}&\bcell{\color{blue}G}&&\rcell&\rcell{\color{violet}V}&\rcell&\rcell&\rcell&\rcell&\rcell\\
14&\bcell&\bcell&\bcell&\bcell{\color{blue}P}&\bcell{\color{blue}G} & &\bcell{\color{blue}$ {\; \hbox{V} \;}$} &\rcell&\rcell&\rcell&\rcell&\rcell&\rcell{\color{red} FV}\\
13&\bcell&\bcell&\bcell&\bcell{\color{blue}P}&{\bcell\color{blue}G}&\bcell {\color{blue}KT}&&\rcell{\color{violet}ST}&\rcell&\rcell&\rcell &\rcell{\color{red} FV}&\rcell{\color{red} CKV}\\
12&\bcell&\bcell&\bcell&\bcell{\color{blue}P}&\bcell{\color{blue}G}&\bcell{\color{blue}$ { \,\hbox{G} \,}$}&\bcell{\color{blue}ST}&\bcell{\color{blue}S}&\rcell&\rcell&\rcell{\color{red} FV}&\rcell{\color{red} CKV} &\rcell{\color{red} CKV}\\
11&\bcell&\bcell&\bcell&\bcell{\color{blue}P}&\bcell{\color{blue}G}&\bcell{\color{blue}G}&\bcell{\color{blue}CR}&&\rcell&\rcell{\color{violet} FV}&\rcell{\color{red} CKV}&\rcell {\color{red} CKV}&\rcell{\color{red} CKV}\\
10&\bcell&\bcell&\bcell&\bcell{\color{blue}P}&\bcell{\color{blue}G}&\bcell{\color{blue}G}&\bcell{\color{blue}KT}&&{\color{violet} FV}&\rcell{\color{red} CKV} &\rcell{\color{red} CKV} &\rcell{\color{red} CKV}&\rcell{\color{red} BFV}\\
\hline
{\bf \color{blue}9}&\bcell&\bcell&\bcell&\bcell{\color{blue}P}&\bcell{\color{blue}G} &\bcell{\color{blue}G}&\bcell{\color{blue}DS}&\bcell{\color{blue}M}&\bcell
{\color{blue}M}&\bcell{\color{blue}M}&\bcell{\color{blue}M}&\bcell{\color{blue}M}&\bcell{\color{blue}M} \\
{\bf \color{blue}8}&\bcell&\bcell&\bcell&\bcell{\color{blue}P}&\bcell{\color{blue}$\mid$} &\bcell {\color{blue}G}&\bcell{\color{blue}M}&\bcell{\color{blue}M}&\bcell{\color{blue}M}&\bcell{\color{blue}M}&\bcell {\color{blue}M}&{\bcell\color{blue}M}&\bcell{\color{blue}M}\\
{\bf \color{blue}7}&\bcell &\bcell&\bcell &\bcell{\color{blue}P}&\bcell{\color{blue}$\mid$} &\bcell{\color{blue}M}&
\bcell{\color{blue}M} & \bcell{\color{blue}M} &\bcell{\color{blue}M}& \bcell{\color{blue}M} &\bcell{\color{blue}M}&\bcell{\color{blue}M} &\bcell{\color{blue}M} \\ \hline
{\bf \color{blue}6}&\bcell {\color{blue}}& \bcell {\color{blue}}&\bcell{\color{blue}} &\bcell{\color{blue}$\,$}&\bcell{\color{blue}$\,$} &\bcell
{\color{blue}}&\bcell &\bcell&\bcell&\bcell&\bcell&\bcell&\bcell \\
{\bf \color{blue}$\mid$}&\bcell {\color{blue}$\;$}&\bcell &\bcell&\bcell&\bcell&\bcell&\bcell&\bcell&\bcell&\bcell&\bcell&\bcell &\bcell \\
{\bf \color{blue}1}&\bcell {\color{blue}$\;$}&\bcell &\bcell&\bcell&\bcell&\bcell&\bcell&\bcell&\bcell&\bcell&\bcell&\bcell &\bcell \\ \hline
$ g\;\slash\; d$& {\bf \color{blue}2}& {\bf \color{blue}3} & {\bf \color{blue}4} & {\bf \color{blue}5} & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\ \hline
\end{tabular}
\end{scriptsize}
\caption{
Color coding indicates where $\sW^1_{g, d}$ is known to be {\color{blue}unirational}, {\color{violet} uniruled}
or {\color{red} not unirational}. \label{Fig1}
Results are due to {\color{blue}M}ukai ($g \le 9$), {\color{blue}P}etri or B.~Segre ($d=5$) \cite{Mukai, Petri, Segre},
{\color{red}E}isenbud, {\color{red}H}arris, {\color{red}M}umford, {\color{red}F}arkas, {\color{red}B}ini, {\color{red}C}asalaina-Martin, {\color{red}K}ass, {\color{red}F}ontanari and {\color{red}V}iviani \cite{BiniFontanariViviani, CasalainaKassViviani, EisenbudHarrisKodaira, FarkasGeometry, FarkasBirational, FarkasVerra, HarrisMumfordKodaira}, {\color{blue}C}hang and {\color{blue}R}an, {\color{blue}V}erra, {\color{blue}G}ei\ss , {\color{blue}D}amadi and {\color{blue}S}chreyer, {\color{blue}S}chreyer and {\color{blue}T}anturri, {\color{blue}K}eneshlou and {\color{blue}T}anturri \cite{ChangRanUnirationality, ChangRanKodaira, ChangRanSlope, DamadiSchreyer, GeissThesis, GeissUnirationality, KeneshlouTanturri, SchreyerComputer, VerraUnirationality}.
}
\end{figure}
Outside the range $d\le 5$ or $g\le 9$ there are only finitely many pairs $(g,d)$ for which $\sH_{g,d}$ is known to be unirational.
\begin{question} Are there only finitely many pairs $(g,d)$ with $g\ge 10$ and $d\ge 6$ such that $\sH_{g,d}$
is unirational?
\end{question}
In particular, we may ask
\begin{question} Are the genera $g$ such that $\sH_{g,6}$ is unirational bounded?
\end{question}
Florian Gei\ss \ \cite{GeissUnirationality} proved the unirationality of $\sH_{g,6}$ for the values $g \in \{9,\ldots,28,30,31,33,35,36,40,45\}$ using models of curves in $\PP^1\times \PP^2$ of bidegree $(6,d_2)$ and liaison, $d_2=d_2(g)$ being the minimal number such that $\rho(g,2,d_2) \ge 0$. His proof actually shows the unirationality of a covering space of $\sW^1_{g,6}$.
\begin{question} Are the genera $g$ such that $\sH_{g,7}$ is unirational bounded?
\end{question}
\begin{question} Is $g=14$ the largest genus such that $\sH_{g,8}$ is unirational? In other words, is Verra's case \cite{VerraUnirationality}
extremal? Is $g=12$ the largest genus such that $\sH_{g,9}$ is unirational?
\end{question}
If all these questions have an affirmative answer, then the range of pairs $(g,d)$ such that $\sW^1_{g,d}$ and $\sH_{g,d}$ are not unirational
has roughly the shape indicated in Figure \ref{Fig1} by the color red.
\section{Matrix factorizations and the Reconstruction Theorem}
\label{matrixFact}
\subsection{Matrix factorizations}
Matrix factorizations were introduced by Eisenbud in his seminal paper \cite{EisenbudHomological}. We recall here some basic facts and properties for matrix factorizations over the special case of a polynomial ring $S=K[x_0,\ldots,x_n]$, which is the case of interest for the paper. Any module will be assumed to be finitely generated.
Let $f \in S$ be a nonzero homogeneous form of degree $s$. A \emph{matrix factorization} of $f$ (or on the hypersurface $\Vi(f)$) is a pair $(\varphi, \psi)$ of maps
\[
\varphi: G \to F, \quad \qquad \psi: F \to G(s),
\]
where $F=\bigoplus_{\ell=1}^r S(-a_\ell)$ and $G=\bigoplus_{\ell=1}^{r'} S(-b_\ell)$ are free $S$-modules, satisfying $\psi\circ \varphi = f \cdot \id_G$ and $\varphi(s)\circ \psi = f \cdot \id_F$. This condition forces the two matrices representing the maps to be square, i.e., $r=r'$.
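A minimal example may help to fix ideas: for the rank-two quadric $f=x_0x_1$, so $s=2$, the $1\times 1$ matrices
\[
\varphi = (x_0)\colon S(-1)\to S, \qquad \psi = (x_1)\colon S \to S(1)
\]
form a matrix factorization of $f$, since both $\psi\circ\varphi$ and $\varphi(2)\circ\psi$ are multiplication by $x_0x_1$.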
If $(\varphi,\psi)$ is a matrix factorization, then $\coker \varphi$ is a maximal Cohen--Macaulay module (MCM for short) on the hypersurface ring $S/f$. Conversely, a finitely generated MCM $S/f$-module $M$ has a minimal free resolution over $S$
\[
0 \longleftarrow M \longleftarrow F\longleftarrow G \longleftarrow 0;
\]
multiplication by $f$ on this complex is null homotopic
\[
\xymatrix{
0 & \ar[l] M \ar[d]_0& \ar[l] F \ar[d]_f \ar@{.>}[dr]^-{\exists\psi}& \ar[l]_\varphi G \ar[d]^f & \ar[l] 0 \\
0 & \ar[l] M(s) & \ar[l] F(s) & \ar[l]^{\varphi(s)} G(s) & \ar[l] 0 \\
}
\]
and yields therefore a matrix factorization $(\varphi,\psi)$. As an $S/f$-module, $M$ has the infinite 2-periodic resolution
\[
\xymatrix{
0 &\ar[l] M & \ar[l] \overline F& \ar[l]_{\overline \varphi} \overline G & \ar[l]_-{\overline{\psi}(-s)} \overline F(-s) &\ar[l]_-{\overline{\varphi}(-s)} \overline G(-s) & \ar[l]_-{\overline{\psi}(-2s)} \ldots \\
}
\]
where $\overline F=F \tensor S/f$ and $\overline G=G \tensor S/f$. In particular, this sequence is exact, and the dual sequence corresponding to the transposed matrix factorization
$(\psi^t,\varphi^t)$ is exact as well.
If $N$ is an arbitrary $S/f$-module, then its minimal free resolution eventually becomes 2-periodic. If
$$
0 \longleftarrow N \longleftarrow F_0\longleftarrow F_1 \longleftarrow \ldots \longleftarrow F_c \longleftarrow 0
$$
is a minimal free resolution of $N$ of length $c$ as an $S$-module, then the Shamash construction \cite{Shamash} produces a (not necessarily minimal) free resolution of $N$ of the form
\[
0 \leftarrow N \leftarrow \overline F_0\leftarrow \overline F_1 \leftarrow
\begin{array}{c}
\overline F_2 \\
\oplus\\
\overline{F_0}(-s)
\end{array}
\leftarrow
\begin{array}{c}
\overline F_3 \\
\oplus\\
\overline{F_1}(-s)
\end{array}
\leftarrow
\begin{array}{c}
\overline F_4 \\
\oplus\\
\overline{F_2}(-s)\\
\oplus\\
\overline{F_0}(-2s)
\end{array}
\leftarrow
\ldots ,
\]
which becomes 2-periodic after the $(c-1)$-th step. This construction allows us to control to some extent the degrees of the entries of the corresponding minimal matrix factorization of $f$ induced by an $S/f$-module $N$, if we know the Betti numbers of $N$ as an $S$-module. The Shamash construction has the following peculiarity: at the $i$-th step
\begin{equation}
\label{shamashConstr}
\xymatrix{\displaystyle\bigoplus_{j\ge 0} \overline F_{i-1-2j}\left(-js \right) & \ar[l] \displaystyle\bigoplus_{j\ge 0} \overline F_{i-2j}(-js)
}
\end{equation}
the components $\overline F_{i-1-2j}(-js) \leftarrow \overline F_{i-2j}(-js)$ are inherited from the maps $F_{i-1-2j} \leftarrow F_{i-2j}$ in the resolution of $N$ over $S$ for any $j$, while the component
\begin{equation}
\label{shamashzero}
\xymatrix{\displaystyle\bigoplus_{j\geq 1} \overline F_{i-1-2j}\left(-js \right) & \ar[l] \overline F_{i}
} \quad \mbox{ is the zero map}.
\end{equation}
\subsection{Curves and matrix factorizations}
An easy way to produce matrix factorizations on a hypersurface $X=\Vi(f)$ in $\PP^4$ is to consider a module $N$ over $S=\KK[x_0,\dotsc,x_4]$ annihilated by $f$. A matrix factorization of $f$ is given by the periodic part of a minimal free resolution of $N$ as a module over $S_X:=S/f$.
Our motivating example will be a general curve of genus $12$ and degree $14$ in $\PP^4$.
\begin{proposition}
\label{expected1412}
Let $C$ be a general linearly normal non-degenerate curve of genus $12$ and degree $14$ in $\PP^4$. Then $C$ is of maximal rank, and the homogeneous coordinate ring $S_C=S/I_C$ and the section ring $\Gamma_*(\sO_C):=\oplus_{n \in \ZZ}\HH^0(\sO_C(n))$ have minimal free resolutions with the following Betti tables:
\begin{equation}
\label{res1412}
\bettif{
0& 1 & . & & & \\
1& & . & & & \\
2& & 4 & & & \\
3& & 5 & 18& 12& 2
}
\qquad \qquad
\bettit{
0& 1 & & & \\
1& . & & & \\
2& 2 & 14 & 15 & 2 \\
3& & & & 2
}
\end{equation}
In particular, the cubic threefolds containing $C$ form a $\PP^3$. The minimal resolution of $\Gamma_*(\sO_C)$ as a module over the homogeneous coordinate ring of a cubic threefold $X \supset C$ is eventually 2-periodic with Betti numbers
\[
\begin{array}{c|ccccccc}
& \rule{0.5ex}{0pt} 0 \rule{0.5ex}{0pt} & \rule{0.5ex}{0pt} 1 \rule{0.5ex}{0pt} & \rule{0.5ex}{0pt} 2 \rule{0.5ex}{0pt} & \rule{0.5ex}{0pt} 3 \rule{0.5ex}{0pt} & \rule{0.5ex}{0pt} 4 \rule{0.5ex}{0pt} & \dotso \\ \hline
0& 1 & & & \\
1& . & & & \\
2& 2 & 13 & 15 & 2 \\
3& & & 2 & 15 & 15 & \dotso\\
4& & & & & 2 & \dotso
\end{array}
\]
\begin{proof}
We assume that the maps $\HH^0(\PP^4,\sO_{\PP^4}(n))\rightarrow \HH^0(\PP^4,\sO_{C}(n))$ are of maximal rank, i.e., $C$ has maximal rank. Since $\sO_C(n)$ is non-special for $n \geq 2$, by Riemann--Roch we can compute the Hilbert function of the homogeneous coordinate ring of $C$ and therefore the numerator of its Hilbert series
\[
(1-t)^5H_C(t)=1-4t^3-5t^4+18t^5-12t^6+2t^7.
\]
Thus, we expect the Betti table of $S/I_C$ to look like the first one in (\ref{res1412}). Analogously, the numerator of the Hilbert series of $\Gamma_*(\sO_C)$ under the maximal rank assumption is
\[
(1-t)^5H_{\Gamma_*(\sO_C)}(t)=1+2t^2-14t^3+15t^4-2t^5-2t^6
\]
and the expected Betti table is the second one in (\ref{res1412}).
To show that the Betti tables are indeed the expected ones and that, a posteriori, a general curve $C$ is of maximal rank, we only need to exhibit a concrete example, which we construct via matrix factorizations as explained in the proof of Theorem \ref{unirationalityThm} and summarized in Algorithm \ref{algorithmUnirat}. The function \texttt{verifyAssertionsOfThePaper(1)} of \cite{SchreyerTanturriCode} produces the \Mac code needed to verify all the above assertions. Another family of examples can be obtained as explained in Corollary \ref{unirulednessThm}.
A free resolution of $\Gamma_*(\sO_C)$ as a module over the cubic hypersurface ring $S_X$ can be obtained via the Shamash construction, from which we can deduce the Betti numbers of the minimal $S_X$-resolution:
$$\beta^{S_X}_{1,3}(\Gamma_*\sO_C)=\beta^S_{1,3}(\Gamma_*\sO_C)-1$$
since the equation of $X$ is superfluous over $S_X$, and $\beta^{S_X}_{2,5}(\Gamma_*\sO_C)=\beta^{S_X}_{3,5}(\Gamma_*\sO_C)=2$ follows from (\ref{shamashzero}).
\end{proof}
\end{proposition}
\begin{remark}
\label{posCharSuffices}
Throughout the paper we will sometimes need to exhibit explicit examples of modules defined over the rationals $\QQ$ or complex numbers $\CC$ satisfying some open conditions on their Betti numbers. Our constructions will involve only linear algebra, especially Gr\"obner basis computations, and will depend only on the choice of some parameters; a choice of rational values for the parameters thus gives rise to modules over $\QQ$, hence over $\CC$.
An ultimate goal would be to perform the computations over the function field $\QQ(t_1,\ldots,t_N)$, where $N$ is the number of free parameters.
This however is out of reach for computer algebra systems today.
We have implemented our constructions using the computer algebra system \Mac \cite{M2}. A priori it would be possible to perform these computations over $\QQ$, but this might require too much time, so instead we work over a finite prime field $\mathbb{F}_p$. We can view our choice of the initial parameters in $\mathbb{F}_p$ as the reduction modulo $p$ of some choices of parameters in $\ZZ$. Then, the so-obtained module $M_p$ can be seen as the reduction modulo $p$ of a family of modules defined over a neighborhood $\Spec \ZZ[\frac{1}{b}]$ of $(p) \in \Spec \ZZ$ for a suitable $b \in \ZZ$ with $ p \nmid b$.
If $M_p$ satisfies our open conditions, then by semicontinuity the generic fiber $M$ satisfies the same open conditions, and so does the general element of the family over $\QQ$ or $\CC$.
\end{remark}
Let $C$ be a curve as in Proposition \ref{expected1412}. We can consider $M=\Gamma_*(\sO_C)$ as an $S_X$-module, where $X$ is a generally chosen cubic threefold containing $C$. If $C$ is general, the periodic part of its minimal free resolution yields, up to twist, a matrix factorization of the form
\[
\xymatrix{
S^{15} \oplus S^2(-1) &
S^2(-1) \oplus S^{15}(-2)
\ar[l]_-{\psi} &
S^{15}(-3) \oplus S^{2}(-4) \ar[l]_-\varphi.
}
\]
\begin{definition}[Shape of a matrix factorization]
We will call the Betti numbers of the minimal periodic resolution
\begin{equation*}
\begin{array}{ccccc}
15 & 2 & & \\
2 & 15 & 15 & 2 \\
& & 2 & 15 & \ldots \\
\end{array}
\end{equation*}
the \emph{shape} of the matrix factorization. When the degree $s$ of the hypersurface containing the curve is fixed (in the current example, $s=3$), the shape of a matrix factorization is determined by the Betti numbers $\beta(\psi)$ of $\psi$. In the current case they are
\begin{equation}
\label{shape1412}
\begin{array}{cc}
15 & 2 \\
2 & 15
\end{array}
\end{equation}
\end{definition}
In general, starting from a curve $C$ in $\PP^4$ contained in a (smooth) hypersurface $X$, the 2-periodic part of a minimal resolution of the section module $\Gamma_*(\sO_C)$ over $S_X$ will produce a matrix factorization. The shape is uniquely determined for a general pair $C \subset X \subset \PP^4$ in a component of the Hilbert scheme of pairs. For a given pair, different choices of the resolution yield equivalent matrix factorizations. They all define the same sheaf $\sF=(\coker \varphi)^\sim$ on $X$, which turns out to be an ACM vector bundle, see e.g.\ \cite[Proposition 2.1]{CasanellasHartshorneGorenstein}.
We have thus established one direction of the correspondence between curves and matrix factorizations. In what follows we will see that, to some extent, it is possible to recover the original curve from the matrix factorizations it induces.
\subsection{Monads and the Reconstruction Theorem}
Let us consider a pair $(C,X)$ of a general curve $C$ of degree 14 and genus 12 in $\PP^4$ and a general (smooth) cubic hypersurface $X=\Vi(f)\supset C$. The curve induces, up to twist, a matrix factorization of shape (\ref{shape1412})
\begin{equation*}
{\small
\xymatrix{
\sO_X^{15}(-1) \oplus \sO_X^2(-2) &
\ar[l]_-\psi \sO_X^2(-2) \oplus \sO_X^{2+13}(-3) &
\ar[l]_-\varphi \sO_X^{15}(-4) \oplus \sO_X^{2}(-5).
}
}
\end{equation*}
Here, we have distinguished in $\sO_X^{2+13}(-3)$ the two copies coming directly from the third step of the resolution of $\Gamma_*{\sO_C}$ as an $S$-module, see the Shamash construction (\ref{shamashConstr}). The map $\psi$ can be regarded as a block matrix, with a zero submatrix $\sO_X^2(-2) \leftarrow \sO_X^2(-2) \oplus \sO_X^{2}(-3)$ by (\ref{shamashzero}).
Let $\sF = (\coker \varphi)^{\sim}$; we can form a complex
\begin{equation}
\label{monad812}
\xymatrix{
0 &\ar[l]
\sO_X^2(-2) & \ar[l]_{\qquad \alpha}
\sF & \ar[l]_{\beta \phantom{abcdefg1}}
\sO_X^{2}(-2) \oplus \sO_X^{2}(-3) & \ar[l]
0.
}
\end{equation}
We claim that this complex is a monad for the ideal sheaf $\sI_{C/X}$, i.e., $\alpha$ is surjective, $\beta$ injective and $\ker \alpha / \image \beta \cong \sI_{C/X}$. In other words, we can recover the original curve $C$ from the complex. The claim is a special case of the following
\begin{theorem}[Reconstruction Theorem]
\label{reconstructionThm}
Let $C\subset \PP^4$ be a non-degenerate linearly normal curve of genus $g$ and degree ${d} \ge g$ not contained in any quadric
and let $X=\Vi(f)$ be a smooth hypersurface of degree $s$ containing $C$. Let $F_{\bullet}$ and $\overline G_{\bullet}$ be minimal free resolutions of
$\Gamma_*(\sO_C)$ over $S$ and $S/f$ respectively, let $\varphi$ denote the syzygy map $\overline G_3 \leftarrow \overline G_4$ and $\sF=(\coker \varphi)^{\sim}(s)$. Then the complex of vector bundles on $X$
\begin{equation}
\label{monad}
\xymatrix{
0 &\ar[l]
(\overline{F'_0})^{\sim} &\ar[l]_{\phantom{abcd}\alpha}
\sF &\ar[l]_{\beta \phantom{ab}}
\left(\overline{F_3}(s)\right)^{\sim} &\ar[l]
0,
}
\end{equation}
where the maps are induced by $\overline G_{\bullet}$ via the Shamash construction and $F'_0$ is the complement of $S$ in $F_0=S \oplus F'_0$, is a monad for the ideal sheaf of $C$ on $X$, i.e., $\beta$ is injective, $\alpha$ is surjective, and $\ker \alpha/\image \beta \cong \sI_{C/X}$.
If $s\ge 4$ the monad is uniquely determined by $\sF$.
\end{theorem}
\begin{proof}
Since ${d} \ge g$ the line bundle $\sO_C(2)$ is non-special. It follows that $\Gamma_*(\sO_C)$ has Betti table
$$
\bettit{
0& 1 & & & \\
1& . & & & \\
2& \beta_{0,2} & \beta_{1,3}& \beta_{2,4} & \beta_{3,5} \\
3& \beta_{0,3} & \beta_{1,4} & \beta_{2,5} & \beta_{3,6} \\
}
$$
Indeed, $\beta_{1,2}=0$ by assumption. Since ${\rm Hom}(F_{\bullet},S(-5))$ resolves $\Gamma_*(\omega_C)$, we must have $\beta_{3,n}=0$ for
$n-5\ge 2$, because $\HH^0(\omega_C(-2))=0$. So $\Gamma_*\sO_C$ is 3-regular and non-zero Betti numbers can only occur in the indicated range.
Let us assume $s=3$. The Shamash resolution starts with the Betti numbers
$$
\bettit{
0 & 1 & \phantom{\beta_{3,6} + \beta_{1,3}} & \phantom{\beta_{3,6} + \beta_{1,3}} & \phantom{\beta_{3,6} + \beta_{1,3}} \\
1& \phantom{\beta_{3,6} + \beta_{1,3}} & & \phantom{\beta_{2,4} +}\phantom{a} 1 \phantom{ab} & \\
2& \beta_{0,2} & \beta_{1,3} & \beta_{2,4} + 0 \phantom{ab} & \beta_{3,5} \phantom{ + \beta_{1,2} a} \\
3& \beta_{0,3} & \beta_{1,4} & \beta_{2,5}+\beta_{0,2} & \beta_{3,6} + \beta_{1,3}\\
4& & & 0 \phantom{ab} + \beta_{0,3}& 0 \phantom{ab} + \beta_{1,4} \\
}
$$
We see that, in the induced map $\overline{F_1}\leftarrow\overline{F_0}(-3)$, there is a non-zero constant $1\times 1$ submatrix; this means that in this case the Shamash resolution is always non-minimal, and in a minimal resolution a cancellation occurs, causing $\beta_{1,3}$ to decrease by one. Such cancellation corresponds to the equation $f$ of $X$ in $S$ becoming superfluous in $S/f$.
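As a concrete instance of this bookkeeping, take $(g,{d})=(12,14)$, where $F_0=S \oplus S^2(-2)$, $F_1=S^{14}(-3)$, $F_2=S^{15}(-4)$ and $F_3=S^2(-5)\oplus S^2(-6)$. Recall that for a cubic the Shamash resolution has terms $\overline G_k=\bigoplus_{i\ge 0}F_{k-2i}(-3i)$; after the cancellation of one $S(-3)$ summand (and its $2$-periodic recurrences), the minimal $S_X$-resolution reads
\[
\overline G_1 = S^{13}(-3),\quad
\overline G_2 = S^{15}(-4)\oplus S^{2}(-5),\quad
\overline G_3 = S^{2}(-5)\oplus S^{15}(-6),\quad
\overline G_4 = S^{15}(-7)\oplus S^{2}(-8),
\]
whose periodic part, twisted by $4$, is exactly the matrix factorization of shape (\ref{shape1412}) displayed above.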
By definition, the map $\overline G_2 \leftarrow \overline G_3$ factorizes through $\sF$. As
\[
F_0=S \oplus S^{\beta_{0,2}}(-2) \oplus S^{\beta_{0,3}}(-3), \qquad
F_3=S^{\beta_{3,5}}(-5) \oplus S^{\beta_{3,6}}(-6),
\]
the complex (\ref{monad}) has the form
$$
0 \leftarrow \sO^{\beta_{0,2}}_X(-2)\oplus \sO^{\beta_{0,3}}_X(-3) \leftarrow \sF \leftarrow \sO^{\beta_{3,5}}_X(-2)\oplus \sO^{\beta_{3,6}}_X(-3) \leftarrow 0.
$$
It is indeed a complex because of (\ref{shamashzero}). We claim that the first map is surjective, the second one is injective and that the homology in the middle is isomorphic to $\sI_{C/X}$.
The first claim follows since the cokernel of the composition
$$
\sO^{\beta_{0,2}}_X(-2)\oplus \sO^{\beta_{0,3}}_X(-3) \leftarrow \sF \leftarrow \sO^{\beta_{1,3}-1}_X(-3)\oplus \sO^{\beta_{1,4}}_X(-4),
$$
where the ``${}-1$'' represents the missing equation of $X$ over $S/f$, coincides by construction with the sheafification, restricted to $X$, of the cokernel of $F'_0 \leftarrow F_1$; this cokernel is a module of finite length (a submodule of the Hartshorne--Rao module of $C$), hence its sheafification is zero.
Let $\sG:=\ker(\alpha)$. Being the sheafification of an MCM module over $X$, the sheaf $\sF$ is a vector bundle, and $\sG$ is a vector bundle as well. It remains to show that
$$
\xymatrix{\sG &\ar[l]_-{\gamma} \sO^{\beta_{3,5}}_X(-2)\oplus \sO^{\beta_{3,6}}_X(-3)
}
$$
is injective and a presentation of $\sI_{C/X}$. To see this, we apply the functor $\sHom(-, \omega_X)$ to $\gamma$ and obtain $$
\xymatrix{\sG^*(-2) \ar[r]^-{\gamma^*(-2)} & \sO^{\beta_{3,5}}_X\oplus \sO^{\beta_{3,6}}_X(1).
}
$$
The cokernel of this map coincides by construction with the cokernel of the dual of the sheafification of the last map of $F_{\bullet}$
$$
\xymatrix{
\sO_X^{\beta_{2,4}}(-1)\oplus \sO_X^{\beta_{2,5}} \ar[r] & \sO_X^{\beta_{3,5}} \oplus \sO_X^{\beta_{3,6}}(1),
}
$$
which is a presentation of $\omega_C$ by duality on $\PP^4$.
Since
\begin{align*}
\rank \sF &= \rank F_0-\rank F_1+\rank F_2+\rank F_0\cr
&=\rank F_3+\rank F'_0+1,
\end{align*}
we have $\rank \sG = \beta_{3,5} + \beta_{3,6} +1$. Hence both $\gamma^*(-2)$ and $\gamma$ drop rank in expected codimension $2$; applying again $\sHom(-, \omega_X)$ to $\gamma^*(-2)$ we get that $\gamma$ is injective and by the Hilbert--Burch Theorem \cite[Theorem 20.15]{Eisenbud} it fits into an exact complex
\[
\xymatrix{
0 &
\ar[l] \sO_C(\ell) &
\ar[l] \sO_X(\ell) &
\ar[l] \sG &
\sO^{\beta_{3,5}}_X(-2)\oplus \sO^{\beta_{3,6}}_X(-3) \ar[l]_-{\gamma} &
0 \ar[l]
}
\]
for some $\ell$. By applying again $\sHom(-, \omega_X)$ to this last exact sequence one gets that $\gamma^*(-2)$ is a presentation of $\omega_C(-\ell)$, hence $\ell=0$.
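In the running example $(g,{d})=(12,14)$ this bookkeeping gives
\[
\rank \sF = 3-14+15+3=7, \qquad \rank \sG = 7-2 = 5 = \beta_{3,5}+\beta_{3,6}+1,
\]
in accordance with the rank $7$ bundle attached to a matrix factorization of shape (\ref{shape1412}) appearing in Proposition \ref{reconstruction1310}.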
The argument for $s\ge 4$ is similar, the only difference being that the second and third term in the Shamash resolution of $\Gamma_*(\sO_C)$ differ in their twist. For example, the third term is
$S^{\beta_{3,5}}(-5)\oplus S^{\beta_{3,6}}(-6) \oplus S^{\beta_{1,3}}(-3-s)\oplus S^{\beta_{1,4}}(-4-s)$, and we see that for $s\ge 4$ the monad is uniquely determined by $\sF$.
\end{proof}
\section{General matrix factorizations and uniruledness}
\label{uniruledness}
In the last section we saw how, from a matrix factorization induced by a curve $C$, it is possible to recover $C$ itself. Within this section we will show how we can use the Reconstruction Theorem to actually \emph{construct} new curves on a hypersurface $X$, starting from a general matrix factorization $(\psi,\varphi)$ on $X$. The key point for proving this result is exhibiting, case by case in the range of interest of the paper
\[
(g,{d}) \in \{(12,14), (13,15), (16,17), (17,18), (18,19), (19,20), (20,20)\},
\] a concrete example satisfying some open conditions.
This leads naturally to the strategy of constructing (unirational) families of matrix factorizations on a hypersurface $X$ to approach the problem of constructing projective curves. In the range of interest for this paper, this strategy turns out to be effective because of the following considerations.
On the one hand, it could happen that a general hypersurface of the appropriate degree does not contain a curve with the prescribed genus and degree. As anticipated in the introduction and proved in the last section, Theorem \ref{generalQuartic} shows that this does not happen, so we can start with a general choice of $X$.
On the other hand, the space of matrix factorizations of a given shape on a general hypersurface may very well have many components. This leads to the following
\begin{question}
Is the space of matrix factorizations of shape (\ref{shape1412}) on a general cubic hypersurface irreducible?
\end{question}
Other similar questions can of course be formulated for the other cases of interest. Our approach will be to construct unirational families of matrix factorizations, dominant on some component (or union of components) of this space; we will then show that the points in this component give rise to the desired curves. This last claim requires further explanation, see Remark \ref{grassmannians} below.
\subsection{Constructing new curves from matrix factorizations}
\begin{remark}
\label{grassmannians}
Let $(\psi, \varphi)$ be a general matrix factorization of shape (\ref{shape1412}) over a cubic hypersurface $X$; in particular, we have a map
\[
\xymatrix{
\sO_X^{15}(-1) \oplus \sO_X^2(-2) &
\ar[l]_-\psi \sO_X^2(-2) \oplus \sO_X^{15}(-3).
}
\]
In Theorem \ref{reconstructionThm} we constructed, from a matrix factorization induced by a curve, a complex (\ref{monad812}). To build a similar complex, we need a rank 2 subbundle $\sO_X^2(-3)$ of $\sO_X^{15}(-3)$ such that the composition
\[
\xymatrix{
\sO_X^2(-2) &
\ar[l]_-\delta\sO_X^{15}(-3) &
\ar@{_{(}->}[l] \sO_X^2(-3),
}
\]
where $\delta$ is the map extracted from $\psi$, is zero. The map $\delta$ is represented by a matrix of linear forms and has a kernel of dimension at least $5=15-2\cdot \hh^0(\mathcal{O}_{\mathbb{P}^4}(1))$. Having a kernel of dimension precisely $5$ is an open condition on the space of matrix factorizations, which is satisfied by the concrete examples we construct in \cite{SchreyerTanturriCode}. This means that, for a given general matrix factorization, we get a complex for any choice of $\sO_X^2(-3)$ inside $\ker \delta$, which corresponds to the choice of a point $p \in \GG(2,5)$. A general choice of $p$ produces a complex (\ref{monad812}); Theorem \ref{constructionThm} below will show that such a complex is a monad for a smooth curve of genus 12 and degree 14.
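The bound on $\dim\ker\delta$ is plain linear algebra on coefficients: a scalar column vector $v\in k^{15}$ satisfies $\delta v=0$ as a vector of linear forms exactly when it lies in the nullspace of the flattened $10\times 15$ coefficient matrix of $\delta$. The following minimal sketch (ours, in Python, with random placeholder data instead of an actual matrix factorization) illustrates the computation.
\begin{verbatim}
import numpy as np

# Hypothetical stand-in for the 2 x 15 block delta of psi: D[i, j, k]
# is the coefficient of the variable x_k in the linear form delta[i, j].
rng = np.random.default_rng(0)
D = rng.integers(-3, 4, size=(2, 15, 5)).astype(float)

# delta * v = 0 (as a vector of linear forms) is a linear condition on v:
# flatten the (row, variable) pairs into a (2*5) x 15 scalar matrix.
A = D.transpose(0, 2, 1).reshape(10, 15)
kernel_dim = 15 - np.linalg.matrix_rank(A)
print(kernel_dim)  # always >= 15 - 10 = 5; equality is the open condition
\end{verbatim}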
The situation is very similar in the case of curves of genus 13 and degree 15. Here we have to choose again a rank 2 subbundle $\sO_X^2(-3)$ inside the kernel, which is now $3$-dimensional in general. This yields many choices parameterized by $\GG(2,3)=\PP^2$. Again, a general choice produces a monad and a smooth curve.
\end{remark}
\begin{theorem}
\label{constructionThm}
Let $(g,{d})$ be in
\[
\{(12,14), (13,15), (16,17), (17,18), (18,19), (19,20), (20,20)\}
\]
and let ${H}_{{d},g}$ be the component of the Hilbert scheme $\Hilb_{{d} t+1-g}(\PP^4)$ dominating $\sM_g$. Let $C \in {H}_{{d},g}$ be a general point, i.e., a general curve of genus $g$ and degree ${d}$ in $\PP^4$.
\begin{enumerate}
\item The quotient ring $S/I_C$ and the section module $\Gamma_*(\sO_C)$ have expected resolutions, i.e., their Betti tables correspond to the ones listed in Table \ref{expectedresolutions} below.
\item Let $s = \min \{s' \mid \hh^0(\sI_C(s')) \neq 0 \}$ and consider a general hypersurface $X$ with equation $f \in (I_C)_s$. The minimal free $S/f$-resolution of $\Gamma_*(\sO_C)$ is eventually 2-periodic and gives rise to a matrix factorization of $f$ of shape as in Table \ref{shapesmonads}.
\item For each choice of $(g,{d})$ above, let $s$ be the (expected) minimum degree of a hypersurface containing a general curve of genus $g$ and degree ${d}$, and let $X$ be a general hypersurface of degree $s$. There is a component of the space of matrix factorizations on $X$ of shape corresponding to $(g,{d})$ in Table \ref{shapesmonads} whose general element gives rise to a complex of the form (\ref{monad}), which turns out to be a monad for $\sI_{C'/X}$, the ideal sheaf in $X$ of a smooth curve $C'$ of genus $g$ and degree ${d}$.
\end{enumerate}
\begin{proof}
As in Proposition \ref{expected1412}, we can compute the expected Betti tables of the $S$-resolutions of $S/I_C$ and $\Gamma_*(\sO_C)$. These are summarized in Table \ref{expectedresolutions}. In Table \ref{shapesmonads} we list the expected shapes of the matrix factorizations and the corresponding monads we can construct.
For a matrix factorization, giving rise to a monad for the ideal sheaf of a smooth curve with the right genus and degree is an open condition. When the complex is not uniquely determined, i.e.\ for $s=3$ (see Remark \ref{grassmannians}), it is an open condition on the space of complexes, which is parametrized by a rational variety. To prove the third part of the Theorem, it is thus sufficient to explicitly construct, for each of the aforementioned cases, a matrix factorization of the given shape and a complex of the form (\ref{monad}) which is a monad for a smooth curve with the assigned genus and degree. The fact that a general hypersurface $X$ of the appropriate degree $s$ contains such a curve will be proved in Theorem \ref{generalQuartic} and relies again on the computation of explicit examples.
The function \texttt{verifyAssertionsOfThePaper(2)} of \cite{SchreyerTanturriCode} provides the \Mac code useful to
produce, for each pair $(g,{d})$, a matrix factorization on a hypersurface $X$ of degree $s$ such that
\begin{itemize}
\item the shape of the matrix factorization is as listed in Table \ref{shapesmonads};
\item a complex built from the matrix factorization, according to the numerology of the expected resolution of the section module of a general curve and the Reconstruction Theorem \ref{reconstructionThm}, is a monad for a smooth curve $C$ of genus $g$ and degree ${d}$;
\item $S/I_C$ and $\Gamma_*{\sO_C}$ have expected resolutions as in Table \ref{expectedresolutions}, and $\Gamma_*{\sO_C}$ induces a matrix factorization on a general supporting hypersurface $X'$ of degree $s$ of shape as in Table \ref{shapesmonads}.
\end{itemize}
To prove the first two points of the Theorem, which correspond to open conditions on ${H}_{{d},g}$, it is sufficient to check the last assertion on a particular example.
We use different constructions to explicitly exhibit a matrix factorization satisfying the statements. For $g=12$ or $g=13$, the procedure followed can be found in Corollary \ref{unirulednessThm}. For $g=12$, an alternative way is to use curves of genus 10 and degree 13, as explained in Proposition \ref{auxiliaryE}. For $g \geq 16$, see Section \ref{familiesOfCurves}. As mentioned in Remark \ref{posCharSuffices}, it is sufficient to run our constructions over a finite field.\qedhere
{\small
\begin{table}[h!bt]
\caption{Expected Betti tables.}
\label{expectedresolutions}
\begin{tabular}{ccc} \toprule
$(g,{d})$ & $\beta_{i,j}(S/I_C)$ & $\beta_{i,j}(\Gamma_*(\sO_C))$\\ \midrule
\rule{1.2ex}{0ex}$(12,14)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 1 & . & & & \\
1& & . & & & \\
2& & 4 & & & \\
3& & 5 & 18& 12& 2
}$\rule{1.2ex}{0ex} &
\rule{1.2ex}{0ex}$\bettit{
0& 1 & & & \\
1& . & & & \\
2& 2 & 14 & 15 & 2 \\
3& & & & 2
}$ \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(13,15)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 1 & . & & & \\
1& & . & & & \\
2& & 2 & & & \\
3& & 12 & 27& 17& 3
}$\rule{1.2ex}{0ex} &
\rule{1.2ex}{0ex}$\bettit{
0& 1 & & & \\
1& . & & & \\
2& 3 & 17 & 18 & 3 \\
3& & & & 2
}$ \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(16,17)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 1 & . & & & \\
1& & . & & & \\
2& & . & & & \\
3& & 17 & 29& 13& \\
4& & & & 1& 1
}$\rule{1.2ex}{0ex} &
\rule{1.2ex}{0ex}$\bettit{
0& 1 & & & \\
1& . & & & \\
2& 4 & 19 & 18 & 1 \\
3& & & & 3
}$ \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(17,18)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 1 & . & & & \\
1& & . & & & \\
2& & . & & & \\
3& & 14 & 18& & \\
4& & & 2 & 10& 3
}$\rule{1.2ex}{0ex} &
\rule{1.2ex}{0ex}$\bettit{
0& 1 & & & \\
1& . & & & \\
2& 5 & 22 & 21 & 2 \\
3& & & & 3
}$ \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(18,19)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 1 & . & & & \\
1& & . & & & \\
2& & . & & & \\
3& & 11 & 7& & \\
4& & & 17 & 19& 5
}$\rule{1.2ex}{0ex} &
\rule{1.2ex}{0ex}$\bettit{
0& 1 & & & \\
1& . & & & \\
2& 6 & 25 & 24 & 3 \\
3& & & & 3
}$ \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(19,20)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 1 & . & & & \\
1& & . & & & \\
2& & . & & & \\
3& & 8 & & & \\
4& & 4 & 32& 28& 7
}$\rule{1.2ex}{0ex} &
\rule{1.2ex}{0ex}$\bettit{
0& 1 & & & \\
1& . & & & \\
2& 7 & 28 & 27 & 4 \\
3& & & & 3
}$ \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(20,20)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 1 & . & & & \\
1& & . & & & \\
2& & . & & & \\
3& & 9 & .& & \\
4& & & 26 & 24& 6
}$\rule{1.2ex}{0ex} &
\rule{1.2ex}{0ex}$\bettit{
0& 1 & & & \\
1& . & & & \\
2& 6 & 24 & 21 & . \\
3& & & & 4
}$ \rule{1.2ex}{0ex}
\\ \bottomrule
\end{tabular}
\end{table}
}
{\small
\begin{table}[h!bt]
\caption{Shapes of the matrix factorizations and corresponding monads.}
\label{shapesmonads}
\begin{tabular}{ccc} \toprule
$(g,{d})$ & shape of $\psi$ & monad\\ \midrule
$(12,14)$ &
$\begin{array}{cc}
15 & 2 \\
2 & 15
\end{array}$ &
$\xymatrix{
\sO_X^2(-2) \oplus \sO_X^{2}(-3) \ar@{^{(}->}[r] &
\sF \ar@{->>}[r] &
\sO_X^{2}(-2)
}$
\\ \midrule
$(13,15)$ &
$\begin{array}{cc}
18 & 3 \\
3 & 18
\end{array}$ &
$\xymatrix{
\sO_X^3(-2) \oplus \sO_X^{2}(-3) \ar@{^{(}->}[r] &
\sF \ar@{->>}[r] &
\sO_X^{3}(-2)
}$
\\ \midrule
$(16,17)$ &
$\begin{array}{cc}
19 & 1 \\
. & 3 \\
4 & 19
\end{array}$ &
$\xymatrix{
\sO_X(-1) \oplus \sO_X^{3}(-2) \ar@{^{(}->}[r] &
\sF \ar@{->>}[r] &
\sO_X^{4}(-2)
}$
\\ \midrule
$(17,18)$ &
$\begin{array}{cc}
22 & 2 \\
. & 3 \\
5 & 22
\end{array}$ &
$\xymatrix{
\sO_X^2(-1) \oplus \sO_X^{3}(-2) \ar@{^{(}->}[r] &
\sF \ar@{->>}[r] &
\sO_X^{5}(-2)
}$
\\ \midrule
$(18,19)$ &
$\begin{array}{cc}
25 & 3 \\
. & 3 \\
6 & 25
\end{array}$ &
$\xymatrix{
\sO_X^3(-1) \oplus \sO_X^{3}(-2) \ar@{^{(}->}[r] &
\sF \ar@{->>}[r] &
\sO_X^{6}(-2)
}$
\\ \midrule
$(19,20)$ &
$\begin{array}{cc}
28 & 4 \\
. & 3 \\
7 & 28
\end{array}$ &
$\xymatrix{
\sO_X^4(-1) \oplus \sO_X^{3}(-2) \ar@{^{(}->}[r] &
\sF \ar@{->>}[r] &
\sO_X^{7}(-2)
}$
\\ \midrule
$(20,20)$ &
$\begin{array}{cc}
22 & . \\
. & 4\\
6 & 24
\end{array}$ &
$\xymatrix{
\sO_X^{4}(-2) \ar@{^{(}->}[r] &
\sF \ar@{->>}[r] &
\sO_X^{6}(-2)
}$
\\ \bottomrule
\end{tabular}
\end{table}
}
\end{proof}
\end{theorem}
\begin{remark}
Theorem \ref{constructionThm} holds also in the case of curves of genus 15 and degree 16; the study of that particular case allowed the first author to construct some unirational families of such curves and to show the uniruledness of $\sW^4_{16,15}$ \cite{SchreyerMatrix}. The case of genus 16 and degree 17 was already the topic of the master's thesis \cite{MuellerThesis}.
\end{remark}
\begin{remark}
We expect Theorem \ref{constructionThm} to hold in other circumstances as well. Our interest in the cases above has the following reasons.
The first two cases correspond to the Brill--Noether spaces $\sW^4_{12,14}$ and $\sW^4_{13,15}$, which by Serre duality are birational to $\sW^1_{12,8}$ and $\sW^1_{13,9}$ respectively.
The remaining cases are motivated by a (so far unsuccessful) attempt of proving the unirationality of the moduli space $\sM_g$ for $g \geq 16$. We have chosen ${d}$ such that $\rho(g,4,{d})=g - 5\hh^1(\sO_C(1))$ takes the minimal non-negative value. See Section \ref{families} for further details.
There are cases in which we do not expect the Theorem to hold, at least not in the formulation above. For instance, consider
the family of curves of genus 14 and degree 16 in $\PP^4$ which are contained in cubic hypersurfaces. These curves form a divisor $\sD$ in
$\sW^4_{14,16}$. Their matrix factorizations have the shape
\[
\begin{array}{cc}
21 & 4 \\
4 & 21
\end{array}
\]
and we would need a rank 2 subbundle inside the kernel of the map corresponding to the last row of the Betti table above. As the kernel of a general such map is just 1-dimensional, we believe that a general matrix factorization of this shape is not induced by any curve in $\sD$.
\end{remark}
\subsection{Uniruledness results}
A consequence of Remark \ref{grassmannians} and of Theorem \ref{constructionThm} is that, if $(g,{d})=(12,14)$ or $(13,15)$, for a fixed general matrix factorization (in the sense of Theorem \ref{constructionThm}) on a general cubic hypersurface $X$ of shape as in Table \ref{shapesmonads} we have a rational map
\begin{equation}
\label{rationalChoice}
\xymatrix{
\sV \ar@{-->}[r] & \sW^4_{g,{d}},
}
\end{equation}
where $\sV$ is $\GG(2,5)$, $\PP^2$ respectively.
\begin{corollary}\label{unirulednessThm}
The spaces $\sW^4_{12,14}$ and $\sW^4_{13,15}$, as well as the corresponding $\sW^1_{12,8}$ and $\sW^1_{13,9}$, are uniruled.
\end{corollary}
\begin{proof}
Take a general point in $\sW^4_{12,14}$ or $\sW^4_{13,15}$ and choose an embedding $C \subset \mathbb{P}^4$. Consider a general cubic hypersurface $X$ containing $C$ and the induced matrix factorization on $X$. The induced rational map (\ref{rationalChoice}) sends a general choice of the monad to a point of $\sW^4_{12,14}$ or $\sW^4_{13,15}$ respectively. The image of this map is a rational variety; if it is not a point, then it contains a rational curve which passes through $C$ and whose points parametrize points of $\sW^4_{12,14}$ or $\sW^4_{13,15}$ respectively, whence the conclusion. For the map (\ref{rationalChoice}), being non-constant is an open condition on the space of matrix factorizations, hence it is sufficient to check it on a concrete example.
To construct the two necessary examples, we start from a $g$-nodal rational curve $C'$ of genus $g$ having a $g^1_{2g-2-{d}}=|D|$ (see \cite{BoppMaster,BoppCode}). We embed $C'$ in $\PP^4$ via $|K_{C'}-D|$ and obtain a singular curve $C' \subset \PP^4$ of genus $g$ and degree ${d}$. We consider the matrix factorization on a cubic hypersurface obtained from $C'$ and choose a random point in $\sV$. We check that the resulting curve $C$ is smooth; since $C'$, regarded as a point of $\overline {\sW}^4_{g,{d}}$, lies in the boundary, the map is not constant. An implementation of the code is provided by the function \texttt{verifyAssertionsOfThePaper(2)} in \cite{SchreyerTanturriCode}.
By passing to the Serre dual linear systems, this yields the uniruledness of the corresponding spaces $\sW^1_{12,8}$ and $\sW^1_{13,9}$ as well. \qedhere
\end{proof}
\section{A unirational Hurwitz space}
\label{uniratHurwitz}
Our aim is to use all the machinery developed so far to construct a unirational family of curves dominating $H_{14,12}$, the component of the Hilbert scheme of curves of genus 12 and degree 14 in $\PP^4$ which dominates $\sW^4_{12,14}$. By considering the dual models, this will imply the unirationality of $\sW^1_{12,8}$ and $\sH_{12,8}$.
The idea is to use Theorem \ref{constructionThm}. If we manage to produce a large enough unirational family of general matrix factorizations, we can hope that the space of curves we obtain is dominant. In other words, we translate the problem of constructing curves with fixed invariants into the problem of constructing matrix factorizations on cubic threefolds with an assigned shape.
\subsection{Betti tables and auxiliary modules}
\label{bettiCandidates}
Let us fix a cubic form $f \in S$. A matrix factorization of $f$ with shape (\ref{shape1412}) might be hard to construct. Nonetheless, the Shamash construction gives us a way to partially predict the shape of a matrix factorization arising as the 2-periodic part of the resolution of an arbitrary $S/f$-module $N$, provided that we know the Betti numbers $\beta_{i,j}(N)$ of $N$ as an $S$-module. Thus, a possible approach is to construct auxiliary $S$-modules $N$ giving rise over $S/f$ to a matrix factorization of $f$ with the desired shape.
For such an $N$, what should its Betti table $\beta_{i,j}(N)$ look like? If we assume that no cancellation occurs when taking the minimal part of the Shamash resolution, i.e., that the Shamash resolution is already minimal, a prescribed shape imposes linear conditions on the entries of a table $\beta_{i,j}$ filled with natural numbers. For instance, if we assume $\pd N < 5$, for the shape (\ref{shape1412}) such a table has the following form, up to twist:
\[
\bettif{
0& \beta_{0,0} & \beta_{1,1} & . & . & . \\
1& \beta_{0,1} & \beta_{1,2} & \beta_{2,3} & \beta_{3,4} & . \\
2& . & . & \beta_{2,4} & \beta_{3,5} & \beta_{4,6} \\
3& . & . & .& . & \beta_{4,7}
}
\quad
\mbox{s.t.}
\quad
\left\{
\begin{array}{l}
\beta_{0,0}+\beta_{2,3}+\beta_{4,6} = 15\\
\beta_{0,1}+\beta_{2,4}+\beta_{4,7} = 2\\
\beta_{1,1}+\beta_{3,4}=2\\
\beta_{1,2}+\beta_{3,5}=15
\end{array}
\right.
\]
It turns out that only a finite number of candidate Betti tables are allowed. As the transpose of a matrix factorization is again a matrix factorization, we could as well consider Betti tables giving rise to matrix factorizations with the dual shape
\[
\begin{array}{cc}
2 & .\\
15 & 15 \\
. & 2 \\
\end{array}
\]
We might also tolerate cancellations, i.e., we might assume that the Shamash resolution is not minimal; this makes the number of candidate Betti tables become infinite. However, we can always limit our search to finitely many cases, fixing for instance the entries of the tables in which we allow cancellations and an upper bound for their number.
By doing this, we end up with a list of tables; we can further limit our search to the ones lying in the Boij--S\"oderberg cone, i.e., tables $\beta_{i,j}$ for which there exists a rational number $q \in \mathbb{Q}$ and an $S$-module $M'$ such that $q\cdot\beta_{i,j}=\beta_{i,j}(M')$. It is of course convenient to let a computer deal with all the possibilities.
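As a rough indication of how such a computer search can be set up, the following sketch (ours, not the implementation of \cite{SchreyerTanturriCode}) enumerates the nonnegative integer tables of the above form; the zero pattern, the minimality requirements and the Boij--S\"oderberg filter are applied afterwards.
\begin{verbatim}
from itertools import product

# Enumerate tables satisfying the four linear constraints imposed by the
# shape (15 2 / 2 15), assuming pd N < 5 and no cancellations.
candidates = []
for b00, b23, b01, b24, b11, b12 in product(
        range(16), range(16), range(3), range(3), range(3), range(16)):
    b46 = 15 - b00 - b23
    b47 = 2 - b01 - b24
    b34 = 2 - b11
    b35 = 15 - b12
    if min(b46, b47, b34, b35) >= 0:
        candidates.append((b00, b01, b11, b12,
                           b23, b24, b34, b35, b46, b47))
print(len(candidates))  # a finite list, to be filtered further
\end{verbatim}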
\begin{example}
\label{exampleList}
A list of tables satisfying the aforementioned conditions can be produced by a \Mac computation, whose implementation is provided by the function \texttt{verifyAssertionsOfThePaper(3)} in \cite{SchreyerTanturriCode}. An example of a table in this list is
\begin{equation}
\label{auxiliaryBetti}
\bettif{
0& 1 & . & & & \\
1& & . & & & \\
2& & 5 & & & \\
3& & 2 & 15& 11& 2
}
\end{equation}
\end{example}
Suppose there exists an auxiliary $S$-module $N$ whose resolution $F_{\bullet}$ has Betti numbers (\ref{auxiliaryBetti}), and consider a cubic form $f$. If we apply the Shamash construction to get a resolution of $N$ over $S/f$, it is easy to see that the induced map $\overline{F_0}(-3) \rightarrow \overline{F_1}$ has a non-zero invertible part, hence the expected shape of the induced matrix factorization is (\ref{shape1412}).
The following proposition shows that such an auxiliary module $N$ exists and its induced matrix factorization has indeed the expected shape.
\begin{proposition}
\label{auxiliaryE}
Let $E$ be a general curve of genus 10 and degree 13 in $\PP^4$ and $X=\Vi(f)$ a general cubic threefold containing it. Then the Betti table of $S/I_E$ is (\ref{auxiliaryBetti}), the matrix factorization induced by $S/I_E $ on $X$ has shape (\ref{shape1412}) and is general enough in the sense of Theorem \ref{constructionThm}, i.e., it can be used to construct curves of genus 12 and degree 14.
\end{proposition}
\begin{proof}
For such a curve $E$, all the statements correspond to open conditions and it is sufficient to check them on a particular example. An implementation of its construction is provided by the function \texttt{verifyAssertionsOfThePaper(4)} in \cite{SchreyerTanturriCode}; an explanation of the procedure used is to be found in the proof of Theorem \ref{unirationalityThm} and in Algorithm \ref{algorithmUnirat}.
\end{proof}
\subsection{Unirationality of \texorpdfstring{$\sH_{12,8}$}{H 12 8}}
Summarizing, we can use general curves $E$ of genus 10 and degree 13 to get curves $C$ of genus 12 and degree 14. Moreover, this construction is unirational, which means that a unirational family of $E$'s yields a unirational family of $C$'s. Thus, we can focus on the former in our attempt to construct a family dominating the latter.
\begin{theorem}
\label{unirationalityThm}
The spaces $\sW^4_{12,14}$ and $\sH_{12,8}$ are unirational.
\end{theorem}
\begin{proof}
Let $H_{13,10} \subset \Hilb_{13t+1-10}(\PP^4)$ and $H_{14,12} \subset \Hilb_{14t+1-12}(\PP^4)$ denote the components whose general elements are linearly normal non-degenerate smooth curves of degree and genus $({d},g)=(13,10)$ or $(14,12)$ respectively.
These components dominate $\sW^4_{10,13}$ and $\sW^4_{12,14}$.
We will exhibit a unirational family of curves $C$ in $H_{14,12}$ by explicitly constructing a dominant family of curves $E$.
To do that, suppose we have a unirational parameterization of $\sM_{10,5}$, the moduli space of curves of genus 10 with 5 marked points; start from a curve $E$ and an effective divisor $D$ of degree 5. By Riemann--Roch, the linear system $|K_E-D|$ embeds $E$ as a curve of degree 13 in $\PP^4$. The construction dominates $H_{13,10}$, and via matrix factorizations this unirational parameterization induces a unirational family in $H_{14,12}$.
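Indeed, $\deg(K_E-D)=2g-2-\deg D = 18-5=13$, and since $\hh^0(D)=1$ for a general such $D$, Riemann--Roch gives
\[
\hh^0(K_E-D)=\hh^0(D)-\deg D+g-1 = 1-5+10-1 = 5,
\]
so $|K_E-D|$ indeed maps $E$ to $\PP^4$.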
A unirational parameterization of $\sM_{10,5}$ can be constructed as follows. In \cite{GeissUnirationality}, a dominant unirational family of $6$-gonal curves $E$ of genus $10$ is constructed by means of liaison of curves in $\PP^1 \times \PP^2$. We can moreover modify the last step of the construction (see Algorithm \ref{algorithmUnirat} below) to impose $E$ to pass through five unirationally chosen points.
Thus we have produced a unirational family of curves in $H_{14,12}$, whose general element is a smooth irreducible curve of maximal rank with expected Betti table as in Proposition \ref{expected1412}. The corresponding code is implemented in the function \texttt{randomCurveGenus12Degree14InP4} of \cite{SchreyerTanturriCode}, along the lines of Algorithm \ref{algorithmUnirat}. It remains to prove that the family of curves constructed from pairs $(E,X)$ with $E \in H_{13,10}$ and $X \in \PP(\HH^0(\sI_E(3)))$ via matrix factorizations dominates $H_{14,12}$. For this it suffices to prove that
we can recover $E$ from a matrix factorization $(\varphi,\psi)$ of shape (\ref{shape1412}).
\begin{proposition}
\label{reconstruction1310}
Let $E \in H_{13,10} $ be a general curve of genus $10$ and degree $13$,
let $X$ be a general cubic containing $E$ and let $\sF$ be the rank $7$ vector bundle on $X$ associated to the matrix factorization induced by $N=S/I_E$, i.e., $\sF$ is the image of $\psi$
$$
\xymatrix{ \sO^{15}_X(-3) \oplus \sO^2_X(-4)& &\ar[ll]_{ \begin{pmatrix} \psi_{11} & \psi_{12} \cr 0 & \psi_{22} \cr \end{pmatrix} } \sO^2_X(-4) \oplus \sO^{15}_X(-5)}.
$$
There exists an exact complex induced by the Shamash construction
$$
0 \leftarrow \sI_{E/X} \leftarrow \sO^4_X(-3) \oplus \sO^2_X(-4) \leftarrow \sF \leftarrow \sO^2_X(-4) \leftarrow 0;
$$
moreover, for a general choice of a quotient $\sO_X^4(-3) \leftarrow \sO^{15}_X(-3)$ which composes to zero with the component
$\psi_{11}$ of $\psi$, the complex
\begin{equation}
\label{resolutionE'}
\sO^4_X(-3) \oplus \sO^2_X(-4) \leftarrow \sF \leftarrow \sO^2_X(-4) \leftarrow 0
\end{equation}
is a locally free resolution of the ideal sheaf of a smooth curve $E'\in H_{13,10}$ on $X$.
Let $(\psi,\varphi)$ be a given general matrix factorization on $X$ of shape (\ref{shape1412}) and let $\sF$ be the image of $\psi$. Then the choice of the quotient $q$ as above corresponds to the choice of a point in $\PP^4$; for a general such choice, (\ref{resolutionE'}) is a locally free resolution of the ideal sheaf of a smooth curve $E'\in H_{13,10}$ on $X$.
\end{proposition}
\begin{proof} The first step is just reversing the Shamash construction of the $S_X$-resolution of $N=S/I_E$.
Since $X$ is smooth the kernel of the map $ \sI_{E/X} \leftarrow \sO^4_X(-3) \oplus \sO^2_X(-4)$ is already a vector bundle $\sG$ on $X$. The bundle $\sF$ surjects onto $\sG$ with the image of $\sF \leftarrow \sO^2_X(-4)$ contained in the kernel. Since the kernel of the map
$\sG \leftarrow \sF$ is a rank 2 vector bundle of the same degree as $\sO^2_X(-4)$, the induced map between the kernel and $\sO^2_X(-4)$ is an isomorphism.
The fact that, for a given (general) matrix factorization, a general choice of the quotient $q$ yields a complex (\ref{resolutionE'}) which is a locally free resolution of a smooth curve $E'\in H_{13,10}$ is an open condition both on matrix factorizations and in $\PP^4$. It is thus sufficient to check it computationally on an explicit example, as can be done with the code provided by the function \texttt{verifyAssertionsOfThePaper(5)} in \cite{SchreyerTanturriCode}.
\end{proof}
Finally, to conclude with the unirationality of $\sH_{12,8}$, we note that a general point in $\sW^4_{12,14}$ gives as Serre dual model a point in $\sW^1_{12,8}$ and conversely. Moreover, the choice of a basis of the pencil is rational, and thus we get a unirational family of $\PP^1$-coverings of degree $8$. The locus of curves in $\sH_{12,8}$ having a smooth component of the Brill--Noether locus of expected dimension is open and contains the points we explicitly construct, hence our family is dominant. This completes the proof of Theorem \ref{unirationalityThm}.
The function \texttt{randomGenus12Degree8CoverOfP1} in \cite{SchreyerTanturriCode} is an implementation of the above unirational construction and produces a random canonical curve of genus 12 together with two hyperplanes in $\PP^{11}$ cutting out a $g^1_8$. \qedhere
\end{proof}
\begin{remark}
Let
$M^{15 \; 2}_ {\;2 \; 15}(X)$
denote the component, in the space of equivalence classes of matrix factorizations of shape (\ref{shape1412}) on a given cubic $X$, whose general element is induced by a curve $C \in H_{14,12}$. Above we have established a unirational correspondence between spaces of curves on $X$
$$\xymatrix{ \{C \subset X \} \ar[rd]_{\GG(2,5)} && \ar[dl]^{\PP^4} \{ E \subset X \} \cr
& M^{15 \; 2}_ {\;2 \; 15}(X) &\cr
}
$$
whose fibers are open subsets of a $\GG(2,5)$ or $\PP^4$ respectively.
We may interchange the roles of $C$ and $E$: since $S_C$ and $\Gamma_*(\sO_E)$ have Betti tables
$$
\bettif{0& 1 & . & & & \\
1& & . & & & \\
2& & 4 & & & \\
3& & 5 & 18& 12& 2
} \quad \hbox{ and } \quad
\bettit{
0& 1 & & & \cr
1 & . & & & \cr
2& 2 & 15 & 18 & 5 \cr
3 & & & & 1\cr
}
$$
they both lead to matrix factorizations on $X$ of shape $$\begin{matrix} 15 & 2 \cr 5 & 18\end{matrix}\,.$$ By the Reconstruction Theorem \ref{reconstructionThm}, and the same argument as in Proposition \ref{reconstruction1310}, we get another correspondence
$$\xymatrix{ \{C \subset X \} \ar[rd]_{\GG(2,5)} && \ar[dl]^{\PP^4} \{ E \subset X \} \cr
& M^{15 \; 2}_ {\; 5 \; 18}(X) &\cr
}.
$$
We believe that this symmetry can be explained by the fact that curves $C \in H_{14,12}$ are linked to curves $E \in H_{13,10}$ via a complete intersection of three cubics:
$$ \deg C + \deg E = 27=3^3 \hbox{ and } g_C-g_E =\frac{1}{2}(C-E).((9-5)H)=2.$$
This fact yields a correspondence
$$\xymatrix{
& \ar[dl]_{\PP^3} \{ \hbox{c.i.}\, C \cup E \} \ar[dr]^{\GG(3,5)}& \cr
H_{14,12}&& H_{13,10} \cr
}
$$
and a simpler proof that $H_{14,12}$ is unirational, as further shown in \cite[Remark 3.3]{KeneshlouTanturri}.
\end{remark}
\begin{algorithm}
\label{algorithmUnirat}
Summarizing, the following construction yields a unirational parameterization of $\sW^4_{12,14}$. The first four steps are a slight modification of the construction in \cite{GeissUnirationality}. The algorithm is implemented by the function \texttt{randomCurveGenus12Degree14InP4} in \cite{SchreyerTanturriCode}.
\begin{enumerate}
\item On $\PP^1 \times \PP^2$, start with a rational curve of degree 4 together with 3 general lines. Call $E''$ their union.
\item Choose two general forms $g_i \in \HH^0(\sI_{E''}(4,2))$ and construct $E'$ as the linkage of $E''$ on the complete intersection defined by $g_1, g_2$.
\item Choose unirationally five general points $\{p_j\}$ in $\PP^1 \times \PP^2$ and choose, in the $7$-dimensional space $\HH^0(\sI_{E'}(3,3))$, two general forms $f_i$ vanishing on each $p_j$.
\item Construct $E$ as the linkage of $E'$ in the complete intersection defined by $f_1, f_2$. By construction, $E$ passes through $p_j$, is a general curve of genus $10$ and $D=p_1+\ldots+p_5$ is a general effective divisor of degree $5$ on $E$.
\item Embed $E$ via $|K_E-D|$ into $\PP^4$. The curve $E \subset \PP^4$ is a general curve of genus $10$ and degree $13$.
\item Choose a general cubic hypersurface $X \supset E$ and consider the matrix factorization on $X$ induced by $S/I_E$.
\item Choose a general point $p \in \GG(2,5)$ as in Remark \ref{grassmannians}, construct the monad (\ref{monad}) and the corresponding curve $C \subset X$, which is a curve of genus 12 and degree 14.
\end{enumerate}
\end{algorithm}
\section{Families of curves on rational surfaces}
\label{families}
In this section, we show how matrix factorizations can be used to construct unirational families of curves of genus $g$ and degree ${d}$ in $\PP^4$, with $(g,{d})$ belonging to
\[
\{(16,17), (17,18), (18,19), (19,20), (20,20)\}.
\]
The main motivation for the choice of these cases is that the unirationality of the corresponding moduli spaces of curves is still unknown. One would like to produce a unirational family of projective curves which dominates the underlying moduli space of curves. As a general expectation, curves with fixed genus and lower degree should be easier to construct; the degree ${d}$ considered for each $g$ above is chosen as the minimum such that the Brill--Noether number $\rho(g,4,{d}) \geq 0$.
\subsection{Explicit construction}
We can try to mimic the technique used in Section \ref{bettiCandidates} and look for auxiliary modules whose Betti tables satisfy certain conditions.
A list of candidate Betti tables can be produced with the same technique and implementation used in Example \ref{exampleList}. Alternatively, the function \texttt{precompiledListOfCandidates} in \cite{SchreyerTanturriCode} prints precomputed lists for each genus $g \in [16,20]$.
For instance, the lists contain the tables reported in Table \ref{bettiauxiliary}. All of them correspond to modules $N$ supported on a curve which will be denoted by $Z$. We will assume that $\sL=\widetilde N$ is a line bundle on $Z$.
{\small
\begin{table}[h!bt]
\caption{Betti tables for auxiliary modules}
\label{bettiauxiliary}
\begin{tabular}{ccc} \toprule
$(g,{d})$ & $\beta_{i,j}(N)$ & $(\codim \supp N, \deg N)$ \\ \midrule
\rule{1.2ex}{0ex}$(16,17)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 6 & 10 & 3 & & \\
1& & 3 & & & \\
2& & 1 & 13& 9 & 1
}$\rule{1.2ex}{0ex} &
$ (3, 19)$
\\ \midrule
\rule{1.2ex}{0ex}$(17,18)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 6 & 10 & 3 & & \\
1& & 3 & & & \\
2& & 2 & 16& 12 & 2
}$\rule{1.2ex}{0ex} &
$(3,18)$
\\ \midrule
\rule{1.2ex}{0ex}$(18,19)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 6 & 10 & 3 & & \\
1& & 3 & & & \\
2& & 3 & 19& 15 & 3
}$\rule{1.2ex}{0ex} &
$(3,17)$
\\ \midrule
\rule{1.2ex}{0ex}$(19,20)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 6 & 10 & 3 & & \\
1& & 3 & & & \\
2& & 4 & 22& 18 & 4
}$\rule{1.2ex}{0ex} &
$(3,16)$
\\ \midrule
\rule{1.2ex}{0ex}$(20,20)$\rule{1.2ex}{0ex} & \rule{1.2ex}{0ex}$\bettif{
0& 6 & 10 & 3 & & \\
1& & 4 & & & \\
2& & & 16& 14& 3
}$\rule{1.2ex}{0ex} &
$(3,16)$
\\ \bottomrule
\end{tabular}
\end{table}
}
The first row in these Betti tables is independent of $(g, {d})$ and the corresponding complex over $S$, dualized and sheafified,
\begin{equation}
\label{monadY}
\xymatrix{
0 \ar[r] &
\sO_{\PP^4}^6(-4) \ar[r] &
\sO_{\PP^4}^{10}(-3) \ar^-{\alpha}[r] &
\sO_{\PP^4}^{3}(-2) \ar[r] &
0
}
\end{equation}
could be a monad for the ideal sheaf of a surface $Y\subset \PP^4$.
Two families of smooth surfaces of this kind are known:
\begin{itemize} \item the Alexander surfaces $Y$ \cite{Alexander}, $\PP^2$ blown up in $10$ general points embedded
via the linear system $|13L-\sum_{i=1}^{10} 4E_i|$, where $L$ is the strict transform of a general line in $\PP^2$ and $E_i$ are the exceptional divisors corresponding to the 10 blown-up points, and
\item the blow-ups $Y'$ of Enriques surfaces in a single point embedded by $|H-E|$, where $H$ is a Fano polarization and $E$ the exceptional divisor \cite{AureRanestad}.
\end{itemize}
Both surfaces have degree $9$, $K_Y^2=-1$, sectional genus $\pi=6$, and Hartshorne--Rao module $\HH^1_*(\sI_Y) =\coker(S^{10}(-3)\to S^3(-2))$, a module with Hilbert series $3t^2+5t^3+t^4$. They differ by the Betti numbers of their Hartshorne--Rao modules,
which are
$$
\begin{tabular}{c|cccccc}
& 0 & 1 & 2 & 3 & 4 & 5 \cr\hline
2 & 3 & 10 & 6 \cr
3 & & & 15 &26 & 15 & 3 \cr
4 & & & 1& 3& 3& 1\cr
\end{tabular}
\qquad \hbox{ and } \qquad
\begin{tabular}{c|cccccc}
& 0 & 1 & 2 & 3 & 4 & 5 \cr\hline
2 & 3 & 10 & 6 \cr
3 & & & 15 &25 & 12 & \cr
4 & & & & & & 1\cr
\end{tabular}
$$
respectively. Hence also $S_Y$ and $S_{Y'}$ have different Betti tables:
$$
\begin{tabular}{c|cccccc}
& 0 & 1 & 2 & 3 & 4 \cr\hline
0 & 1 & .\cr
1& & .\cr
2 & & .\cr
3 & & . \cr
4 & & 15 &26 & 15 & 3 \cr
5 & & 1& 3& 3& 1\cr
\end{tabular}
\qquad \hbox{ and } \qquad
\begin{tabular}{c|cccccc}
& 0 & 1 & 2 & 3 & 4 \cr\hline
0 & 1 & . &\cr
1& & .\cr
2 & & .\cr
3 & & .\cr
4& & 15 &25 & 12 & \cr
5 & & & & & 1\cr
\end{tabular}
$$
The rational surface $Y$ has a 6-secant line and contains no $(-1)$-line, while the Enriques surface has no 6-secant line and contains one $(-1)$-line.
For further details, see \cite{DeckerEinSchreyer}.
\begin{proposition}\label{intersectionXY}
If $C$ is a curve of genus $g$ and degree ${d}$ obtained via matrix factorizations from an auxiliary module $N$ with Betti table as in Table \ref{bettiauxiliary} such that
\begin{enumerate}
\item $\sL= \widetilde N$ is a line bundle on a curve ${Z}$ different from $C$, and
\item (\ref{monadY}) is a monad for a smooth surface $Y$ of degree $9$ as above,
\end{enumerate}
then $C$ lies on $Y$. More precisely,
if $f \in (I_C)_4$ is any quartic which annihilates $N$ and $X=\Vi(f)$ the corresponding hypersurface, then
$$Y \cap X=C \cup {Z}.$$
\begin{proof}
Since $Y$ does not lie on any quartic, the intersection $Y\cap X$ is proper
and the sequence (\ref{monadY}) restricted to $X$
\begin{equation}
\label{monadYX}
\xymatrix{
0 \ar[r] &
\sO_{X}^6(-4) \ar[r] &
\sO_{X}^{10}(-3) \ar[r] &
\sO_{X}^{3}(-2) \ar[r] &
0
}
\end{equation}
is a monad for the ideal sheaf $\sI_{Y\cap X/X}$ of $Y\cap X$ on $X$. We claim that (\ref{monadYX}) is a subcomplex of the sheafified dual of the suitably twisted linear strand in the Shamash resolution of $N$.
For example, let us focus on the case $(g,{d})=(16,17)$. The dual linear strand reads
$$ 0 \to
\sO_{X}^{0+1}(-5) \to
\sO_{X}^{6+13}(-4) \to
\sO_{X}^{10+9}(-3) \to
\sO_{X}^{3+1}(-2) \to 0
$$
and the maps from a first to a second summand are all zero by (\ref{shamashzero}). Thus, we get a commutative diagram of monads
$$ \xymatrix{
0 \ar[r] & \sO_{X}^6(-4) \ar[r] \ar[d] & \sO_{X}^{10}(-3) \ar[r] \ar[d] & \sO_{X}^{3}(-2) \ar[r] \ar[d]& 0 \cr
0 \ar[r] & \sO_{X}^3(-2)\oplus \sO_X(-1) \ar[r] & \sF \ar[r] & \sO_{X}^{4}(-2) \ar[r] & 0 \cr
}$$
where the first vertical map is up to sign a component of the dual of the first map of the $S_X$-resolution of $N$, and the third one is the inclusion induced by the Shamash resolution of $N$. The map on homology gives us a map $\sI_{Y\cap X/X} \to \sI_{C/X}$
between torsion free sheaves,
whose double dual is a map $\sO_X \to \sO_X$. Thus, to conclude that $C$ is a component of $Y \cap X$, it suffices to prove that $\sI_{Y\cap X/X} \to \sI_{C/X}$ is not the zero map. Let $\sJ$ and $\sK$ denote the kernels in the monads. We get a diagram
$$
\xymatrix{
0 \ar[r] & \sO_{X}^6(-4) \ar[r] \ar[d] & \sJ \ar[r] \ar[d] & \sI_{Y\cap X/X} \ar[r] \ar[d]& 0 \cr
0 \ar[r] & \sO_{X}^3(-2)\oplus \sO_X(-1) \ar[r] & \sK \ar[r] & \sI_{C/X} \ar[r] & 0 \cr
}
$$
of exact sequences.
If the map on the right were zero, we would get a homotopy $\sJ \to \sO_{X}^3(-2)\oplus \sO_X(-1)$, which, since $\HH^1(\sO_X(n))=0$ for all $n$, would lift to a
map $\sO_X^{10}(-3) \to \sO_{X}^3(-2)\oplus \sO_X(-1)$ such that
$$
\xymatrix{
\sO_{X}^6(-4) \ar[r] \ar[d] & \sO_X^{10}(-3) \ar[dl] \cr
\sO_{X}^3(-2)\oplus \sO_X(-1) & \cr
}
$$
commutes. But this contradicts the fact that the map
$$
\xymatrix{
S_X^6 & \ar[l] S_X^{10}(-1)\oplus S_X^3(-2) \oplus S_X(-3)
}
$$
is the first map in the minimal free resolution of $N$ as
an $S_X$-module.
Therefore, $C$ is a component of $Y\cap X$. The curve ${Z}$ is also contained in $Y\cap X$. Since
$$\deg C + \deg {Z} = \deg C + \deg N =36=\deg Y \deg X$$
there are no further components, and $C \cup {Z} = Y\cap X$. The proof for the other pairs $(g,{d})$ is similar.
\end{proof}
\end{proposition}
\subsection{Families of curves on rational surfaces}
\label{familiesOfCurves}
We have two ways to tackle the construction of our curves $C$: we could try to produce a module $N$ having a Betti table as in Table \ref{bettiauxiliary}, then induce a matrix factorization and get a curve as described in the previous sections. A key observation is that the line bundle $\sL$ on the curve $Z$ coincides with $\left.\omega_Y(1)\right|_Z$.
This approach works, and led us to discover Proposition \ref{intersectionXY} and the fact that some of the desired curves $C$ lie on Alexander surfaces. An implementation of the construction of curves on Alexander surfaces via matrix factorizations is provided by the function \texttt{verifyAssertionsOfThePaper(6)} in \cite{SchreyerTanturriCode}.
A second, more convenient approach is to look for our desired curves $C$ directly on these surfaces, e.g., the Alexander surfaces $Y$. The genus and the degree of $C$ impose conditions on the divisor class $[C] =a_0L-\sum a_iE_i \in \Pic(Y)$.
By maximizing the dimension of the linear systems, we can maximize the dimension of the corresponding unirational families of curves. In Table \ref{dimunirationalfam} we list the linear systems achieving the maximal dimension; a general element in such linear systems is a curve which satisfies all our assertions, as one can verify by computing a single randomly chosen example, see the code provided by the function \texttt{verifyAssertionsOfThePaper(7)} in \cite{SchreyerTanturriCode}. In particular this proves the first two assertions of Theorem \ref{constructionThm}.
{\small
\begin{table}[h!bt]
\caption{Unirational families of curves on the Alexander surface}
\label{dimunirationalfam}
\begin{tabular}{ccc} \toprule
$(g,d)$ &linear system & dimension\\\midrule
\rule{1.2ex}{0ex}$(16,17)$\rule{1.2ex}{0ex}
& $21L-\sum_{i=1}^{4}7E_i-\sum_{j=5}^{10}6E_j$
& \rule{1.2ex}{0ex} 26 \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(17,18)$\rule{1.2ex}{0ex}
& $22L-\sum_{i=1}^{8}7E_i-6E_9-5E_{10}$
& \rule{1.2ex}{0ex} 27 \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(18,19)$\rule{1.2ex}{0ex}
& $19L-\sum_{i=1}^{7}6E_i-\sum_{j=8}^{10}5E_j$
& \rule{1.2ex}{0ex} 29 \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(19,20)$\rule{1.2ex}{0ex}
& $20L-7E_1-7E_2-\sum_{i=3}^{8}6E_i-5E_9-5E_{10}$
& \rule{1.2ex}{0ex} 30 \rule{1.2ex}{0ex}
\\ \midrule
\rule{1.2ex}{0ex}$(20,20)$\rule{1.2ex}{0ex}
& $20L-7E_1-\sum_{i=2}^{9}6E_i-5E_{10}$
& \rule{1.2ex}{0ex} 31 \rule{1.2ex}{0ex}
\\ \bottomrule
\end{tabular}
\end{table}
}
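For instance, for $(g,{d})=(16,17)$ one can verify the numerology directly: intersecting with the hyperplane class $H=13L-\sum_{i=1}^{10}4E_i$ of $Y$ gives
\[
{d}=C\cdot H = 21\cdot 13-4\,(4\cdot 7+6\cdot 6)=17,
\qquad
g=\binom{20}{2}-4\binom{7}{2}-6\binom{6}{2}=16,
\]
and, assuming $|C|$ is non-special, its dimension is $\binom{23}{2}-1-4\binom{8}{2}-6\binom{7}{2}=14$. One consistent accounting for the stated dimension of the family is then to add the $20$ parameters for the ten blown-up points and subtract $\dim\operatorname{PGL}_3=8$, giving $14+20-8=26$.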
Unfortunately, the so-constructed unirational families are far from being dominant on the corresponding moduli spaces.
Curves of the same degree and genus on a blown-up Enriques surface give at best families of the same dimension.
There are many other possible choices of a candidate Betti table of $N$.
For instance, for $g \geq 16$, other even simpler rational surfaces show up and we can
produce other examples of curves lying on them. Unfortunately,
none of the unirational families we have been able to construct is dominant. Nonetheless, there is no reason why one should not be able to realize bigger families of projective models via matrix factorizations starting from different Betti tables, the biggest obstacle being of course the construction of suitable auxiliary modules $N$.
\subsection{Curves lying on a general hypersurface}
We conclude by showing that, even though the examples of curves of genus $g \geq 16$ are far from being general as projective models, we can still use them, as well as the examples of curves with lower genera constructed in the previous sections, to prove that a general hypersurface contains a whole family of them.
\begin{theorem}
\label{generalQuartic}
A general cubic hypersurface in $\PP^4$ contains a family of dimension $2{d}$ of curves of genus $g$ and degree ${d}$ for
\[
(g,{d}) \in \{(12,14), (13,15)\}.
\]
A general quartic hypersurface in $\PP^4$ contains a ${d}$-dimensional family of curves of genus $g$ and degree ${d}$ for
\[
(g,{d}) \in \{(16,17), (17,18), (18,19), (19,20), (20,20)\}.
\]
\begin{lemma}
\label{eulerNormal}
Let $C$ be a curve of genus $g$ and degree ${d}$ in $\PP^n$ and $X$ a hypersurface of degree $s$ containing it. Then
\[
\chi(\sN_{C/X})={d} (n+1-s)+(1-g)(n-4).
\]
\begin{proof}
The Euler sequence of $\PP^n$ restricted to $C$ yields
\[
\chi(\left.\sT_{\PP^{n}}\right|_{C})=(n+1)({d}+1-g)-1+g.
\]
Since $\left.\sN_{X/\PP^{n}}\right|_{C} \cong \sO_{C}(s)$, from the sequence defining $\sN_{X/\PP^{n}}$ restricted to $C$ we get
\[
\chi(\left.\sT_{X}\right|_{C}) = (n+1)({d}+1-g)-1+g - ({d} s+1-g).
\]
The conclusion follows by looking at the short exact sequence defining $\sN_{C/X}$.
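Explicitly, since $\chi(\sT_C)=\deg \sT_C+1-g=3-3g$, the last step reads
\[
\chi(\sN_{C/X})=\chi(\left.\sT_{X}\right|_{C})-\chi(\sT_C)
=(n+1){d}-{d} s+(n+1)(1-g)-5(1-g)
={d}(n+1-s)+(1-g)(n-4).
\]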
\end{proof}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{generalQuartic}]
Let $C$ be a general curve in $\PP^4$ of genus $g$ and degree ${d}$, and let $X$ be a general hypersurface of degree $s$ containing it, with $s$ chosen according to $(g,{d})$ as in the statement of the Theorem. By Lemma \ref{eulerNormal}, $\chi(\sN_{C/X})={d} (5-s)$.
We claim that $\hh^{1}(\sN_{C/X})=0$. It is sufficient to check this vanishing on one example for each pair $(g,{d})$, as can be done with the \Mac code provided by the function \texttt{verifyAssertionsOfThePaper(8)} in \cite{SchreyerTanturriCode}, and conclude by semicontinuity. Hence, $\hh^0(\sN_{C/X})={d}(5-s)$.
Let $\sT_s$ be the space of threefolds of degree $s$ containing a general curve $C$ of genus $g$ and degree ${d}$, up to projective equivalence. Let $m:=\hh^0(\PP^4,\sI_C(s))-1=\binom{4+s}{4}-s{d}+g-2$. We have
\[
\dim(\sT_s)=\dim \sM_g + \rho({d},4,g) + m - \hh^0(\sN_{C/X}) = \binom{4+s}{4}-25. \qedhere
\]
\end{proof}
\end{theorem}
\section{Introduction}
Recently, the dialog response selection task, whose goal is to select proper responses given a dialog history (often a multi-turn conversation) and a set of candidates, has attracted increasing attention.
So far, cross-encoder models have become dominant in dialog response selection research \cite{Gu2020SpeakerAwareBF,Whang2021DoRS,Xu2021LearningAE,Su2021DialogueRS,han-etal-2021-fine}.
In order to capture the rich interactions between context and candidate, such cross-encoder models jointly encode them.
However, in inference, these cross-encoder models must compute the matching degree for each and every possible context-response pair by feeding it into large neural models,
which is infeasible to run over millions of candidates in practice.
Therefore, they can only be used as a rerank module in a recall-then-rerank pipeline framework, that is, ranking a rather small set of candidates recalled from the full candidate pool by a fast recall module.
The recall-then-rerank pipeline faces two limitations in real-world scenarios: (1) the errors in the recall module are propagated to the rerank module, so the accuracy of powerful rerank models may not be fully exploited; (2) widely adopted recall modules, such as BM25 and TF-IDF \cite{Robertson2009ThePR}, usually rely on context-context similarity for response selection, which precludes the use of abundant nonparallel corpora that contain only unpaired sentences.
In this paper, we explore an end-to-end framework for the dialog response selection task that contains a dense retrieval model.
Different from the pipeline framework, it discards the recall module and directly searches for proper responses in a full candidate pool that may contain millions of candidates.
Concretely, we use two decoupled encoders to encode the context and the response separately, and obtain their matching degree via the inner product of their semantic representations.
To compensate for the drawback of losing interaction between dialog context and response,
we build the decoupled encoders with the BERT model \cite{devlin-etal-2019-bert} and employ two effective training strategies to train them.
The first one is in-batch negative sampling, which optimizes the shared semantic space of context and response using a contrastive loss. The second one is fine-grained data augmentation, which feeds a large number of augmented samples, cut from the multi-turn conversation context, into the DR-BERT model during training.
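To make the first strategy concrete, a minimal sketch of the in-batch negative sampling objective (our simplified illustration, not the exact training code) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(ctx_vecs, rsp_vecs):
    # ctx_vecs, rsp_vecs: (B, H) outputs of the two decoupled encoders
    logits = ctx_vecs @ rsp_vecs.t()   # (B, B) inner-product similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    # diagonal entries are the gold pairs; the other B-1 responses in
    # the batch serve as negatives for each context
    return F.cross_entropy(logits, labels)
\end{verbatim}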
We name our proposed \textbf{D}ense \textbf{R}etrieval model for dialog response selection DR-BERT.
This end-to-end framework has the following advantages: (1) due to the decoupled context and response encoders, the representations of all candidates in the corpus can be pre-computed, indexed, and searched efficiently using off-the-shelf vector search toolkits \cite{JDH17,Karpukhin2020DensePR}; (2) without the requirement of a recall module,
it can select responses from a large-scale nonparallel corpus that is not in context-response form, which could potentially improve response quality further.
To test our proposed DR-BERT model and end-to-end framework, we conduct experiments in two settings: (1) re-rank experiment: following previous works, baseline models are required to rank a given candidate set, and information retrieval metrics are used to test their performance automatically; (2) full-rank experiment: baselines are required to search for proper responses in a full candidate pool, and the quality of the retrieved responses is measured by human evaluation.
Furthermore, in the full-rank experiment, in addition to the training corpus, we also examine our end-to-end framework by adding two nonparallel corpora separately.
Extensive re-rank experimental results demonstrate that the DR-BERT model achieves performance comparable to that of the previous state-of-the-art cross-encoder models, and even significantly outperforms them on two datasets.
Besides, full-rank experimental results prove the superiority of the end-to-end framework over the recall-then-rerank pipeline framework. Finally, our contributions can be summarized as follows:
\begin{itemize}
\item We explore the end-to-end framework for dialog response selection task, which directly selects proper responses from the whole corpus.
\item We propose the DR-BERT model, which works coherently with our end-to-end framework. In addition, two training strategies are introduced to further improve our model's performance.
\item Extensive experiments and in-depth analyses are conducted to further reveal the merits of our proposed approach.
\end{itemize}
\section{Related Works}
\subsection{Dialog Response Selection Models}
Based on the adopted model structure, we can divide previous dialog response selection models into two categories: interaction-based and representation-based models \cite{Tao2021ASO}. Both consist of two important components: the encoding function $f$ and the aggregation function $\rho$.
\noindent \textbf{Interaction-based models}.
$f$ is first used to encode the context-response pair $(\{c_{i,j}\}_{j=0}^m, r_i)$ and to collect the rich interaction matching information between context and response, where $i$ denotes the $i$-th training sample in the dataset and $m$ is the number of utterances in the multi-turn dialog history.
Then, $\rho$ is utilized to generate the final matching score between conversation context and candidate based on the collected interaction information.
In earlier works \cite{wu-etal-2017-sequential,Yuan2019MultihopSN}, the encoding function $f$ is built with RNNs, CNNs, and self-attention networks, and the aggregation function $\rho$ is usually built with RNNs.
For recent works, i.e., cross-encoder models \cite{Whang2020AnED,Gu2020SpeakerAwareBF,Xu2021LearningAE,han-etal-2021-fine}, $f$ is a heavy pre-trained language model (PLM), which processes the concatenation of the multi-turn conversation context and response. $\rho$ is a nonlinear transformation followed by a sigmoid function.
In practice, interaction-based models must compute the matching degree for every context-response pair, which results in slow inference.
This problem is more serious for recent cross-encoder models, due to their heavy PLM-based encoding function $f$.
\noindent \textbf{Representation-based models}.
$f$ first encodes the context and the response into their semantic representations separately using neural models.
Then, a similarity function $\rho$, such as the dot product or cosine similarity, is used to obtain the matching degree.
Previous works \cite{Lowe2015TheUD,Zhou2016MultiviewRS} build $f$ using CNNs, RNNs, and self-attention networks. In this paper, we construct $f$ with the widely used pre-trained language model BERT, and use the inner product as the similarity function $\rho$ to generate matching degrees.
\subsection{Dialog Response Selection Framework}
So far, the most popular dialog response selection framework is the recall-then-rerank pipeline framework, which consists of cascaded recall and rerank modules. The recall module, e.g., TF-IDF or BM25, first retrieves a coarse-grained candidate set from the whole corpus based on the semantic correlation and the word overlap between a given query and each candidate's context. Then, the rerank module ranks the coarse-grained candidate set using heavy ranking models, for example, cross-encoder models. However, because the size of the coarse-grained candidate set is relatively small, the overall performance of the recall-then-rerank pipeline framework cannot be guaranteed. In this paper, we explore the end-to-end framework for dialog response selection, which directly selects proper responses from the whole corpus in an end-to-end fashion.
\begin{figure*}[h]
\center{\includegraphics[width=\textwidth, height=6.2cm] {img1.pdf}}
\caption{
The overview of our DR-BERT training, offline index, and online inference. $c$ and $r$ represent the conversation context and the response, respectively. $V_{c}$ and $V_{r}$ denote their representations in the semantic space. Please note that the offline index may come from a nonparallel corpus, which enables DR-BERT to select responses from a huge nonparallel corpus during online inference.}
\label{img:overview}
\end{figure*}
\section{Methodology}
The framework of DR-BERT is shown in Figure \ref{img:overview}. It separately encodes the context $c_i$ and the response $r_i$ into two vectors $V_{c_i}$ and $V_{r_i}$ via two decoupled encoders, and obtains their matching score via the inner product. Since the encoders are independent, we can pre-compute the representations of all possible responses and cache them as an index. During online inference, with the cached index, response selection reduces to Maximum Inner Product Search (MIPS). With well-designed data structures and search algorithms (e.g., inverted indexes and learning to hash), the retrieval can be done efficiently.
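To make the dual-encoder computation concrete, the following minimal sketch (our illustration, not the authors' released code) encodes a context and two candidate responses with two BERT towers and scores them via the inner product; the checkpoint name and the choice of the [CLS] vector as the pooled representation are our assumptions.
\begin{verbatim}
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert_ctx = AutoModel.from_pretrained("bert-base-uncased")  # BERT_ctx
bert_res = AutoModel.from_pretrained("bert-base-uncased")  # BERT_res

@torch.no_grad()
def encode(model, texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    # [CLS] hidden state as the sentence representation (our assumption).
    return model(**batch).last_hidden_state[:, 0]

context = ["hi [SEP] hello, how can i help you?"]  # turns joined by [SEP]
responses = ["you can reset it in the settings.", "i like pizza."]

V_c = encode(bert_ctx, context)      # shape (1, 768)
V_r = encode(bert_res, responses)    # shape (2, 768)
scores = V_c @ V_r.T                 # inner-product matching degrees, (1, 2)
\end{verbatim}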
In the following, we will first introduce the whole training process and two training strategies of DR-BERT in Section \ref{sec:dr-bert training}. The offline index and online inference process will be described in Section \ref{sec:online_inf}. Finally, we introduce some implementation details in Section \ref{sec:imp_detail}.
\iffalse
\subsection{Overview}
Suppose that a dialog response selection corpus $\mathcal{D}=\{(c_i, r_i)\}_{i=1}^N$ contains $N$ training samples, where $c_i=[u_{i,1},u_{i,2},...,u_{i,m_i}]$ denotes a multi-turn conversation that has $m_i$ utterances, $r_i$ denotes the ground-truth response of $c_i$. In this paper, our goal is to build an end-to-end framework for dialog response selection task, which contains our proposed DR-BERT model. As shown in Figure \ref{img:overview}, there exist two important procedures in our proposed end-to-end framework: DR-BERT training and online inference. DR-BERT training utilizes two training strategies to optimize the DR-BERT model effectively: fine-grained data augmentation and in-batch negative sampling with contrastive loss. Online inference could search responses from the whole corpus with the help of the cached index.
\fi
\subsection{DR-BERT Training}
\label{sec:dr-bert training}
Before training, we first employ the proposed fine-grained data augmentation strategy to supply varied training samples for the given dialog response selection corpus $\mathcal{D}=\{(c_i,r_i)\}_{i=1}^{|\mathcal{D}|}$. During training, we randomly sample a batch of context-response pairs $\{(c_i,r_i)\}_{i=1}^n$ from the corpus $\mathcal{D}$ in each training step, where $n$ is the batch size. Then, the context and response BERT encoders (BERT$_{ctx}$ and BERT$_{res}$) separately encode the dialog context and the response into their corresponding embeddings $\{(V_{c_i},V_{r_i})\}_{i=1}^n$, and the matching degree matrix $\mathcal{M}$ is obtained via the inner product. Finally, based on $\mathcal{M}$, in-batch negative sampling is used to construct the contrastive loss that optimizes the DR-BERT model.
The details of the two training strategies are as follows.
\subsubsection{Fine-grained Data Augmentation}
Accurate and effective semantic representations are the key to the strong performance of the DR-BERT model, so it is essential to feed more context-response pairs into the BERT$_{ctx}$ and BERT$_{res}$ models during training. We therefore design this training strategy to generate fine-grained training samples by cutting each original training sample into multiple pieces. Specifically, given one multi-turn conversation context $[c_{i,1},c_{i,2},...,c_{i,m_i}]$ in $\mathcal{D}$, where $m_i$ is the number of utterances, we cut it into $k$ fine-grained training samples, where $k$ is the fine-grained degree: the last $k$ utterances are regarded, in turn, as ground-truth responses, and their preceding utterances serve as the conversation contexts, which yields $k$ augmented training samples $\{([c_{i,1},...,c_{i,m_i-j}],c_{i,m_i-j+1})\}_{j=1}^k$. These fine-grained training samples do not exist in $\mathcal{D}$ and are beneficial for the generalization of the context and response BERT encoders in the DR-BERT model.
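A minimal sketch of this augmentation procedure is given below (our illustration; the function name and the guard that keeps at least one utterance in the context are our assumptions).
\begin{verbatim}
# Sketch of fine-grained data augmentation: a dialog session
# [u_1, ..., u_m] is cut into up to k extra (context, response) pairs.
def fine_grained_augment(session, k=5):
    """session: list of utterances [u_1, ..., u_m]."""
    samples = []
    m = len(session)
    # The last k utterances serve, in turn, as ground-truth responses;
    # everything before each of them is the corresponding context.
    for j in range(1, min(k, m - 1) + 1):
        context, response = session[: m - j], session[m - j]
        samples.append((context, response))
    return samples

session = ["u1", "u2", "u3", "u4", "u5", "u6"]
for ctx, res in fine_grained_augment(session, k=3):
    print(ctx, "->", res)
# [u1..u5] -> u6, [u1..u4] -> u5, [u1..u3] -> u4
\end{verbatim}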
\subsubsection{In-batch Negative Sampling with Contrastive Loss}
Given a training sample $(c_i, r_i)$ and its label $y_i$, previous representation-based models \cite{Lowe2015TheUD,Zhou2016MultiviewRS} leverage the binary cross-entropy loss or the triplet margin loss to optimize the models. Different from this traditional way, in this paper we treat dialog response selection as a semantic textual similarity (STS) task \cite{Gao2021SimCSESC} and optimize the shared context and response semantic space using in-batch negative sampling with a contrastive loss.
Specifically, the matching degree matrix $\mathcal{M}=\{s_{i,j}\}_{i,j=1}^n$ is first obtained, where $s_{i,j}=f(V_{c_i},V_{r_j})$ is the similarity between $c_i$ and $r_j$ and $f$ is the inner product similarity function.
Then, for each training sample $(V_{c_i},V_{r_i})$ in the batch, the ground-truth responses of the other training samples are treated as negative samples \cite{Humeau2020PolyencodersAA}, and the contrastive loss is constructed to optimize the DR-BERT model:
\begin{equation}
L=-\frac{1}{n}\sum_{i=1}^n\log\frac{e^{f(V_{c_i},V_{r_i})}}{\sum_{j=1}^n e^{f(V_{c_i},V_{r_j})}}
\label{eq:1}
\end{equation}
The central idea of in-batch negative sampling with contrastive loss is to pull a dialog context and its appropriate responses closer together and to push inappropriate responses away in the shared semantic space. After training, the appropriate responses are distributed closely around the given dialog context, which facilitates efficient online retrieval.
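Since Eq. \ref{eq:1} is exactly a cross-entropy over the rows of $\mathcal{M}$ with the diagonal entries as positives, it can be sketched in a few lines of PyTorch (our illustration with toy shapes, not the authors' released code).
\begin{verbatim}
# Minimal sketch of Eq. (1): in-batch negatives with contrastive loss.
import torch
import torch.nn.functional as F

n, d = 4, 768
V_c = torch.randn(n, d, requires_grad=True)   # context embeddings
V_r = torch.randn(n, d, requires_grad=True)   # response embeddings

M = V_c @ V_r.T                    # matching matrix, s_ij = <V_ci, V_rj>
labels = torch.arange(n)           # the diagonal pairs are the positives
loss = F.cross_entropy(M, labels)  # = mean of -log softmax(s_ii), Eq. (1)
loss.backward()
\end{verbatim}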
\subsection{Offline Index and Online Inference}
\label{sec:online_inf}
After training the DR-BERT model, we can construct the end-to-end framework for online inference. First, the following steps are conducted to build the offline index. Given a large number of responses $\{r_i\}_{i=1}^{\mathcal{N}}$ collected from the parallel and nonparallel corpora, where $\mathcal{N}$ is the number of responses in the corpus, the response encoder BERT$_{res}$ first converts them into the corresponding semantic embeddings $\{V_{r_i}\}_{i=1}^{\mathcal{N}}$. Then, the cached index is built from the sentences and their embeddings $\{(V_{r_i},r_i)\}_{i=1}^{\mathcal{N}}$.
For online inference, when we receive a dialog history $\{c_{i}\}_{i=1}^m$ that contains the user's last utterance, the context encoder BERT$_{ctx}$ in DR-BERT first encodes it into the context semantic embedding $V_{c}$. Then, a simple matrix multiplication is used to calculate the matching degrees between $V_{c}$ and $\{V_{r_i}\}_{i=1}^{\mathcal{N}}$, and the response $r_k$ with the highest matching degree is returned to the user.
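The following sketch illustrates the offline index and online inference steps with FAISS using exact inner-product search (our illustration; the embeddings are random stand-ins for the BERT$_{res}$/BERT$_{ctx}$ outputs).
\begin{verbatim}
# Offline index + online inference with FAISS (exact MIPS).
import faiss
import numpy as np

d, N = 768, 100_000
V_r = np.random.rand(N, d).astype("float32")  # response embeddings

index = faiss.IndexFlatIP(d)                  # exact inner-product search
index.add(V_r)                                # offline: build cached index

V_c = np.random.rand(1, d).astype("float32")  # online: context embedding
scores, ids = index.search(V_c, 10)           # top-10 candidate responses
print(ids[0], scores[0])
\end{verbatim}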
\subsection{Implementation Details}
\label{sec:imp_detail}
For DR-BERT training, following previous works, we use the bert-base-chinese checkpoint for the Chinese datasets and the bert-base-uncased checkpoint for the English dataset. Before fine-tuning our proposed DR-BERT model on these datasets, we also conduct a domain-specific post-training procedure to warm up the two BERT models \cite{han-etal-2021-fine}. To ensure a fair comparison, we directly leverage the post-training checkpoints of the previous work \cite{han-etal-2021-fine}. More details of the post-training checkpoints in our study are described in Appendix \ref{appendix_post_train}. The fine-grained degree in our proposed fine-grained data augmentation strategy is 5. The DR-BERT model is optimized by the AdamW optimizer \cite{Kingma2015AdamAM} with a learning rate of 5e-5 and a batch size of 64. For online inference, the FAISS package \cite{JDH17} is used to build and search the cached index. Due to the page limitation, more details of the hyper-parameters in our implementation can be found in Appendix \ref{appendidx_hyper}.
\iffalse
\begin{algorithm}[h]
\small
\caption{\small{\textbf{DR-BERT Training}}}
\label{alg:1}
\LinesNumbered
\KwIn{Dataset $\mathcal{D}=\{(c_i,r_i)\}_{i=1}^{|\mathcal{D}|}$; DR-BERT model that contains context encoder BERT$_{ctx}$ and response encoder BERT$_{res}$;}
\KwOut{Trained DR-BERT Model}
\For{training step t=1, ...}{
Randomly sample one batch of context-response pairs $\mathcal{B}_t=\left\{\left(c_{i}, r_{i}\right)\right\}_{i=1}^{|\mathcal{B}_t|}$\ from $\mathcal{D}$\;
Invoke \textit{fine-grained data augmentation}, generates $k\times |\mathcal{B}_t|$ fine-grained augmented training samples\;
Construct a new training batch $\mathcal{B}_t'$ contains samples in $\mathcal{B}_t$ and augmented samples\;
\For{$(c_i, r_i) \in \mathcal{B}_t'$}{
$e_{c_i}=$ BERT$_{ctx}(c_i)$\;
$e_{r_i}=$ BERT$_{res}(r_i)$\;
}
Collect semantic embeddings $\{(e_{c_i}, e_{r_i}\}_{i=1}^{|\mathcal{B}_t'|}$, and optimize the model by using Eq. \ref{eq:1}.
}
\end{algorithm}
\fi
\begin{table*}[h]
\small
\renewcommand{\arraystretch}{1.2}
\begin{center}
\resizebox{0.85\textwidth}{!}{
\begin{tabular}{c|ccc|ccc|ccc|ccc}
\hlinewd{0.7pt}
\multirow{2}{*}{\textbf{Dataset}}
& \multicolumn{3}{c|}{\textbf{Ubuntu}}
& \multicolumn{3}{c|}{\textbf{Douban}}
& \multicolumn{3}{c|}{\textbf{E-commerce}}
& \multicolumn{3}{c}{\textbf{RRS}} \\
\cline{2-13}
&Train&Val&Test&Train&Val&Test&Train&Val&Test&Train&Val&Test\\ \hline
size &1M&500K&500K&1M&50K&6.6K&1M&10K&10K&0.4M&5K&1K\\
pos:neg &1:1&1:9&1:9&1:1&1:1&1.2:8.8&1:1&1:1&1:9&1:1&1:9&1.2:8.8 \\
avg turns &10.13&10.11&10.11&6.69&6.75&6.45&5.51&5.48&5.64&5&5&5 \\
\hlinewd{0.75pt}
\end{tabular}
}
\caption{The statistics of four multi-turn response selection datasets in this paper.}
\label{tab:dataset}
\end{center}
\end{table*}
\section{Experiments}
To test our proposed DR-BERT model and end-to-end framework, we conduct two experiments: the re-rank and the full-rank experiment. For the re-rank experiment, we follow the previous evaluation protocol \cite{Gu2020SpeakerAwareBF,Su2021DialogueRS,han-etal-2021-fine}, which tests whether the baseline models can accurately rank a small candidate set given the corresponding dialog context. For the full-rank experiment, baselines are required to select proper responses from the whole corpus, and the quality of these responses is measured by human evaluation.
It should be noted that the previous datasets are either domain-specific or hard to understand, which makes human annotation in the full-rank experiment difficult.
Thus, we also release a high-quality multi-turn dialogue response selection corpus,
named \textbf{R}estoration200k for \textbf{R}esponse \textbf{S}election (RRS),
and all of our human annotations in the full-rank experiments are collected on its test set.
\subsection{Re-rank Experiment}
\subsubsection{Datasets}
In addition to three widely used benchmarks for the dialog response selection task, (1) the Ubuntu corpus \cite{Lowe2015TheUD}, (2) the Douban corpus \cite{wu-etal-2017-sequential}, and (3) the E-commerce corpus \cite{zhang-etal-2018-modeling}, we also test the baseline models on our released RRS corpus. The statistics of these datasets are shown in Table \ref{tab:dataset}, and more details can be found in Appendix \ref{appendix_dataset}.
Our released RRS corpus is built from a high-quality Chinese open-domain dialogue dataset, Restoration200K \cite{pan-etal-2019-improving}. All dialogue sessions in Restoration200K are already annotated by humans, so the corpus is easy to understand.
Following previous works \cite{wu-etal-2017-sequential}, we further process this dataset for the response selection task: (1) \textbf{Train set}: for each context-response pair in the train set, we randomly sample a response from the train set as its negative sample; (2) \textbf{Validation set}: given one context-response pair, we collect 9 extra hard negative samples using a BM25 recall module\footnote{\url{https://www.elastic.co/cn/elasticsearch/}}; (3) \textbf{Test set}: we first sample 1,200 context-response pairs and then search 15 hard negative samples for each multi-turn context using the BM25 recall module. To ensure the quality of our test set, we hire 3 annotators to re-label the 16 candidates (1 ground truth and 15 hard negatives) for each context, so that ``false negative'' samples among the 15 hard negatives can be effectively detected. Specifically, the annotators are required to classify each candidate into 3 categories: (a) positive; (b) negative; (c) hard to judge.
Each candidate receives three labels, and the majority label is taken as the final decision. After discarding all ``hard to judge'' samples and the sessions lacking enough negative samples, we keep 1,000 valid sessions as the test set, each consisting of 10 candidates. It should be noted that the Fleiss's Kappa \cite{Fleiss1971MeasuringNS} of the labeling is 0.53, which indicates relatively high agreement among the annotators.
\begin{table*}[tb]
\small
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{3.2pt}
\scalebox{0.83}{
\begin{tabular}{cccccccccccccccc}
\hlinewd{0.75pt}
\multirow{2}{*}{\textbf{Models}}&\multicolumn{6}{c}{\textbf{Douban}}&\multicolumn{3}{c}{\textbf{Ubuntu}}&\multicolumn{6}{c}{\textbf{RRS}}\\
\cmidrule(lr){2-7}
\cmidrule(lr){8-10}
\cmidrule(lr){11-16}
&MAP&MRR&P@1&R$_{10}$@1&R$_{10}$@2&R$_{10}$@5&\rm R$_{10}$@1&\rm R$_{10}$@2&\rm R$_{10}$@5&MAP&MRR&P@1&R$_{10}$@1&R$_{10}$@2&R$_{10}$@5\\
\hlinewd{0.75pt}
DualLSTM&-&-&-&-&-&-&0.638&0.784&0.949&-&-&-&-&-&-\\
Multi-View&-&-&-&-&-&-&0.662&0.801&0.951&-&-&-&-&-&-\\
SMN&0.529&0.569&0.397&0.233&0.396&0.724&0.726&0.847&0.961&0.487&0.501&0.309&0.281&0.442&0.723\\
DUA&0.551&0.599&0.421&0.243&0.421&0.780&0.752&0.868&0.962&-&-&-&-&-&-\\
DAM&0.550&0.601&0.427&0.254&0.410&0.757&0.767&0.874&0.969&0.511&0.534&0.347&0.308&0.457&0.751\\
IoI&0.573&0.621&0.444&0.269&0.451&0.786&0.796&0.894&0.974&-&-&-&-&-&-\\
ESIM&-&-&-&-&-&-&0.796&0.874&0.975&-&-&-&-&-&-\\
MSN&0.587&0.632&0.470&0.295&0.452&0.788&0.800&0.899&0.978&0.550&0.563&0.383&0.343&0.498&0.798\\
IMN&0.570&0.615&0.433&0.262&0.452&0.789&0.794&0.889&0.974&-&-&-&-&-&-\\
\hline
BERT&0.591&0.633&0.454&0.280&0.470&0.828&0.817&0.904&0.977&0.632&0.644&0.460&0.414&0.618&0.882\\
SA-BERT&0.619&0.659&0.496&0.313&0.481&0.847&0.855&0.928&0.983&0.660&0.670&0.488&0.444&0.653&0.922 \\
Poly-encoder&0.608&0.650&0.475&0.299&0.494&0.822&0.882&0.949&0.990&0.718&0.729&0.589&0.536& 0.702&0.922\\
UMS$_{BERT+}$&0.625&0.664&0.499&0.318&0.482&0.858&0.876&0.942&0.988&-&-&-&-&-&-\\
BERT-SL&-&-&-&-&-&-&0.884&0.946&0.990&-&-&-&-&-&-\\
SA-BERT+HCL&0.639&0.681&0.514&0.330&0.531&0.858&0.867&0.940&0.992&0.671&0.683&0.503&0.454&0.659&0.917\\
BERT-FP&\textbf{0.644}&0.680&0.512&0.324&\textbf{0.542}&\textbf{0.870}&0.911&\textbf{0.962}&\textbf{0.994}&0.698&0.708&0.546&0.493&0.699&0.919\\
\hline
DR-BERT&\textbf{0.644}&\textbf{0.683}&\textbf{0.517}&\textbf{0.334}&0.528&0.869&\textbf{0.912}&0.959&0.992&\textbf{0.754}& \textbf{0.767}&\textbf{0.644}&\textbf{0.585}&\textbf{0.736}& \textbf{0.942}\\
\hlinewd{0.75pt}
\end{tabular}}
\caption{The re-rank experimental results on Douban corpus, Ubuntu corpus, and our released RRS corpus.}
\label{tab:main_result}
\end{table*}
\subsubsection{Evaluation Metrics}
Following previous works \cite{Xu2016IncorporatingLK,wu-etal-2017-sequential,zhang-etal-2018-modeling,pan-etal-2019-improving,han-etal-2021-fine}, we use information retrieval metrics to evaluate the baselines automatically: (1) recall at position $k$ among 10 candidates (R$_{10}$@k); (2) mean average precision (MAP); (3) mean reciprocal rank (MRR); (4) precision at one (P@1).
\subsubsection{Baseline Models}
According to whether pre-trained language models (PLMs) \cite{devlin-etal-2019-bert,Liu2019RoBERTaAR} are used, we divide the previous works into two categories: non-PLM-based models and PLM-based models.
\textbf{Non-PLM-based Matching Models}.
Before PLMs became dominant in NLP research, previous works constructed matching models using CNNs, RNNs, and stacked self-attention networks. This kind of model includes, but is not limited to, DualLSTM \cite{Lowe2015TheUD}, Multi-View \cite{Zhou2016MultiviewRS}, SMN \cite{wu-etal-2017-sequential}, DUA \cite{zhang-etal-2018-modeling}, DAM \cite{Zhou2018MultiTurnRS}, IoI \cite{Tao2019OneTO}, ESIM \cite{Chen2019SequentialMM}, MSN \cite{Yuan2019MultihopSN}, and IMN \cite{Gu2020UtterancetoUtteranceIM}.
\textbf{PLM-based Matching Models}.
Recently, with the wide application of PLMs to many downstream NLP tasks, more and more PLM-based matching models, i.e., cross-encoders, have been proposed, which simply post-train and fine-tune PLMs on these response selection benchmarks. With the help of the powerful natural language understanding capability of PLMs, the state-of-the-art performance has been refreshed repeatedly. This kind of model includes, but is not limited to, BERT \cite{Whang2020AnED}, SA-BERT \cite{Gu2020SpeakerAwareBF}, Poly-encoder \cite{Humeau2020PolyencodersAA}, UMS$_{BERT+}$ \cite{Whang2021DoRS}, BERT-SL \cite{Xu2021LearningAE}, SA-BERT+HCL \cite{Su2021DialogueRS}, and BERT-FP \cite{han-etal-2021-fine}. Our proposed DR-BERT model is also a PLM-based matching model, which contains two decoupled BERT-based encoders.
\subsubsection{Re-rank Experimental Results}
The re-rank experimental results on Douban corpus, Ubuntu corpus, and our released RRS corpus are shown in Table \ref{tab:main_result}. Due to the page limitation, the results on the E-Commerce corpus can be found in Appendix \ref{appendidx_ubuntu}.
From these experimental results, it can be observed that our proposed DR-BERT model achieves performance comparable to previous state-of-the-art cross-encoder models, such as SA-BERT+HCL and BERT-FP, on the Douban and Ubuntu datasets, and even significantly outperforms them on the E-Commerce and our RRS datasets. Given that DR-BERT completely discards the fine-grained interaction between the conversation context and the candidate, this observation exceeds our expectations, and it suggests that our proposed training strategies effectively remedy the drawback of DR-BERT's non-interaction architecture.
\subsection{Full-rank Experiment} \label{sec:full-rank-experiment}
We design two experimental settings for the full-rank experiment: the base and nonparallel settings. In the base setting, the searched corpus only contains the samples collected from the RRS train set. In the nonparallel setting, two extra nonparallel corpora are added to the corpus of the base setting separately.
\subsubsection{Datasets}
To collect effective human annotations, we choose our proposed high-quality RRS corpus for the full-rank experiment. In the base setting, the corpus for the pipeline framework contains the context-response pairs collected from the RRS train set, while the corpus for the end-to-end framework only contains the ground-truth responses in the RRS train set. In the nonparallel setting, two nonparallel corpora are added: (a) the in-dataset nonparallel corpus contains 866,085 single sentences collected from the multi-turn conversation contexts in the RRS train set; (b) the out-dataset nonparallel corpus contains over 3.75 million single sentences crawled from Douban group, a famous Chinese social website.
\subsubsection{Evaluation Metrics}
In the full-rank experiment, we report the average human score for each baseline. The human annotation protocol in our study is as follows: 8 annotators are hired to label 800 dialogue sessions. Each session contains a multi-turn dialog history and multiple candidates generated by the baselines. Based on informativeness, fluency, and correlation, the annotators are required to rate each candidate on a five-point scale, where 1, 3, and 5 indicate complete uncorrelation, ordinary correlation, and excellent performance, respectively; 2 and 4 are used in unsure cases. It should be noted that the Fleiss's Kappa \cite{Fleiss1971MeasuringNS} of this five-point scale labeling is 0.58, which indicates relatively high agreement among the annotators. More details about the meanings of these scores can be found in Appendix \ref{appendix_hep}.
\subsubsection{Baseline Frameworks}
\noindent \textbf{BM25} \cite{Robertson2009ThePR}:
without the cross-encoder rerank model, only the BM25 recall module is used in the recall-then-rerank pipeline framework.
\noindent \textbf{BERT-FP} \cite{han-etal-2021-fine}:
without the recall module, only the state-of-the-art BERT-FP cross-encoder model is used in the pipeline framework; it ranks all of the responses in the corpus.
\noindent \textbf{BM25+BERT-FP}:
it jointly employs the BM25 recall module and BERT-FP rerank model to select the response from the whole corpus.
\noindent \textbf{DR-BERT}:
our end-to-end framework is used to search the proper responses, which consists of the DR-BERT model and cached index.
\noindent \textbf{DR-BERT+BERT-FP}:
it replaces the BM25 recall module with our proposed DR-BERT model in the recall-then-rerank pipeline framework.
It should be noted that the size of the recalled candidate set is 100 for both pipeline baselines (BM25+BERT-FP and DR-BERT+BERT-FP).
\subsubsection{Full-rank Experimental Results}
\noindent \textbf{Base Setting.}
The experimental results of the full-rank base setting can be found in Table \ref{tab:full_rank}, and we can draw the following conclusions:
\begin{itemize}
\item Our proposed end-to-end framework significantly outperforms the pipeline baselines, such as BM25+BERT-FP and DR-BERT+BERT-FP. There are two reasons for this observation: (1) DR-BERT ranks candidates more accurately than the BERT-FP model, which has already been proved in the re-rank experiment; (2) the end-to-end framework directly matches against all of the responses in the corpus, and can thus find better candidates that do not appear in the candidate set provided by the recall module.
\item To our surprise, the quality of the responses selected by the BERT-FP model is extremely poor (the average human score is very close to 1, i.e., completely uncorrelated responses). It should be noted that this phenomenon can also be observed for other cross-encoder models. After a careful check, we suggest that this is because many hard negative candidates in the full candidate pool easily confuse the cross-encoder models and obtain higher scores than the ground truth. This observation also reveals the vulnerability of cross-encoders.
\item Removing the fine-grained data augmentation training strategy leads to performance degradation, which further proves its effectiveness.
\end{itemize}
\begin{table}[h]
\small
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{3.2pt}
\begin{center}
\scalebox{0.9}{
\begin{tabular}{cc}
\hlinewd{0.75pt}
\textbf{Baselines} & \textbf{Avg. Human Scores (1-5)} \\ \hlinewd{0.75pt}
BM25 & 2.35 \\
BERT-FP & 1.59 \\
BM25+BERT-FP & 3.11 \\
DR-BERT+BERT-FP & 3.27 \\
DR-BERT w/o. FG & 3.49 \\
DR-BERT & \textbf{3.52} \\ \hlinewd{0.75pt}
DR-BERT+in-dataset & 3.57 \\
DR-BERT+out-dataset & \textbf{3.65} \\ \hlinewd{0.75pt}
\end{tabular}
}
\caption{Full-rank experimental results on our released high-quality RRS test set. FG denotes the fine-grained data augmentation training strategy.}
\label{tab:full_rank}
\end{center}
\end{table}
\noindent \textbf{Nonparallel Setting.}
As shown in the last three rows in Table \ref{tab:full_rank},
it can be found that both the in-dataset and the out-dataset nonparallel corpora improve the performance of the end-to-end framework. Besides, DR-BERT with the out-dataset nonparallel corpus achieves the best average human score. This observation suggests that, in real applications, the end-to-end framework can be liberated from the requirement of a parallel corpus and achieve better performance with the help of a large-scale nonparallel corpus, for example, a non-conversational corpus.
\section{Analysis}
\subsection{Ablation Study}
\begin{table*}[tb]
\small
\begin{center}
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{3.2pt}
\scalebox{0.83}{
\begin{tabular}{cccccccccccccccc}
\hlinewd{0.75pt}
\multirow{2}{*}{\textbf{Models}}&\multicolumn{6}{c}{\textbf{Douban}}&\multicolumn{3}{c}{\textbf{Ubuntu}}&\multicolumn{6}{c}{\textbf{RRS}}\\
\cmidrule(lr){2-7}
\cmidrule(lr){8-10}
\cmidrule(lr){11-16}
&MAP&MRR&P@1&R$_{10}$@1&R$_{10}$@2&R$_{10}$@5&\rm R$_{10}$@1&\rm R$_{10}$@2&\rm R$_{10}$@5&MAP&MRR&P@1&R$_{10}$@1&R$_{10}$@2&R$_{10}$@5\\
\hlinewd{0.75pt}
DR-BERT&\textbf{0.644}&\textbf{0.683}&\textbf{0.517}&\textbf{0.334}&\textbf{0.528}&\textbf{0.869}&\textbf{0.912}&\textbf{0.959}&\textbf{0.992}&\textbf{0.754}& \textbf{0.767}&\textbf{0.644}&\textbf{0.585}&\textbf{0.736}& \textbf{0.942}\\
w/o. FG&0.622&0.665&0.498&0.316&0.512&0.832&0.886&0.951&0.991&0.732&0.743&0.607&0.549& 0.719&0.934\\
w/o. CL&0.616&0.655&0.487&0.309&0.501&0.819&0.888&0.943&0.988&0.678&0.690&0.540&0.484&0.655&0.888\\
\hlinewd{0.75pt}
\end{tabular}
}
\end{center}
\caption{Ablation study on Douban corpus, Ubuntu corpus, and RRS corpus. FG denotes the fine-grained data augmentation. CL denotes the in-batch negative sampling with contrastive loss.}
\label{tab:ablation_study}
\end{table*}
In this section, we conduct an ablation study to analyze the contributions of the proposed training strategies: fine-grained data augmentation (FG) and in-batch negative sampling with contrastive loss (CL). It should be noted that if CL is removed, the triplet margin loss with fewer negative samples is used to optimize the DR-BERT model.
Due to the page limitation, more details of the triplet margin loss in our ablation study can be found in Appendix \ref{appendix:triplet_margin_loss}.
As shown in Table \ref{tab:ablation_study}, the performance decreases sharply when either training strategy is removed, which proves their effectiveness. Besides, with respect to the contribution, we can conclude that CL $\textgreater$ FG.
\subsection{Hyper-parameter}
Previous works on contrastive semantic representation learning demonstrate that the batch size is a crucial hyper-parameter for the contrastive loss \cite{He2020MomentumCF}. In this part, we study how this key hyper-parameter affects our proposed DR-BERT model. As shown in Figure \ref{img:ablation_study}, we can draw the following conclusions: (1) as the batch size increases during training, the R$_{10}@1$ and P@1 metrics first increase and then decrease, which demonstrates that an excessive number of negative samples is not beneficial; (2) the DR-BERT model achieves the best performance when the batch size is near 64.
\begin{figure}[h]
\center{\includegraphics[width=0.5\textwidth, height=6.8cm] {batch_size-eps-converted-to.pdf}}
\caption{The effect of batch size on R$_{10}@1$ and P@1 metrics. This analysis is conducted on our released RRS Corpus.}
\label{img:ablation_study}
\end{figure}
\subsection{Inference Speed}
\begin{table}[h]
\small
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{3.2pt}
\begin{center}
\subtable[Small corpus contains 0.2M pairs or responses.]{
\scalebox{0.95}{
\begin{tabular}{c| p{1.25cm}<{\centering} | p{1cm}<{\centering}}
\hlinewd{0.75pt}
\textbf{Models} & \multicolumn{2}{c}{\textbf{Avg. Time Cost (ms)}} \\ \hlinewd{0.75pt}
BM25+BERT-FP & \multicolumn{2}{c}{134.91} \\
w/o. BERT-FP & \multicolumn{2}{c}{18.30} \\
w/o. BM25 & \multicolumn{2}{c}{116.44} \\ \hlinewd{0.75pt}
& \textbf{GPU} & \textbf{CPU} \\ \hlinewd{0.75pt}
DR-BERT(brute-force) & \ 33.96 & \ 189.32\ \\
DR-BERT(ANNS-LSH) & - & \ 22.36 \\
DR-BERT(ANNS-IVF) & \textbf{11.19} & 16.24 \\ \hlinewd{0.75pt}
\end{tabular}
}
}
\qquad
\subtable[Big corpus contains 4.6M pairs or responses.]{
\scalebox{0.95}{
\begin{tabular}{c| p{1.25cm}<{\centering} | p{1cm}<{\centering} }
\hlinewd{0.75pt}
\textbf{Models} & \multicolumn{2}{c}{\textbf{Avg. Time Cost (ms)}} \\ \hlinewd{0.75pt}
BM25+BERT-FP & \multicolumn{2}{c}{333.92} \\
w/o. BERT-FP & \multicolumn{2}{c}{169.18} \\
w/o. BM25 & \multicolumn{2}{c}{163.39} \\ \hlinewd{0.75pt}
& \textbf{GPU} & \textbf{CPU} \\ \hlinewd{0.75pt}
DR-BERT(brute-force) & 198.11 & 4,611.78 \\
DR-BERT(ANNS-LSH) & - & 222.83 \\
DR-BERT(ANNS-IVF) & \textbf{11.56} & 23.97 \\
\hlinewd{0.75pt}
\end{tabular}
}
}
\end{center}
\caption{Average time cost in milliseconds on the RRS test set. Because the LSH algorithm is hard to accelerate on GPU devices, we do not report its GPU time cost.}
\label{tab:inference_speed}
\end{table}
In this part, we evaluate the inference speed on the RRS test set to demonstrate the superiority of the end-to-end framework over the pipeline framework. Specifically, we report the average time cost of three baselines: BM25+BERT-FP, DR-BERT(brute-force), and DR-BERT(ANNS-*). DR-BERT(brute-force) obtains the response by conducting a brute-force search over the whole corpus, and DR-BERT(ANNS-*) uses Approximate Nearest Neighbor Search (ANNS) algorithms to speed up the search. In this paper, two classic ANNS algorithms are tested: the inverted index (IVF) and locality sensitive hashing (LSH). As shown in Table \ref{tab:inference_speed}, we can draw the following conclusions:
\begin{itemize}
\item The time cost of BM25+BERT-FP is much larger than that of the DR-BERT methods. Besides, as shown in Table \ref{tab:inference_speed} (b), the time cost of the BM25 recall module increases significantly as the size of the corpus increases, which limits its applications.
\item With the acceleration provided by GPU devices, DR-BERT achieves a faster inference speed than BM25+BERT-FP even when a brute-force search is conducted.
\item The approximate nearest neighbor search algorithms significantly improve the inference speed of DR-BERT. For example, the IVF algorithm reduces the average time cost from 198.11 ms to 11.56 ms on the big corpus (a minimal sketch of such an index is given after this list).
\end{itemize}
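For reference, a minimal IVF variant of the cached index can be sketched as follows (our illustration; the \texttt{nlist} and \texttt{nprobe} values are illustrative, not the exact configuration used in the paper).
\begin{verbatim}
# IVF index sketch: coarse quantization + inverted lists.
import faiss
import numpy as np

d, N = 768, 46_000                 # scaled-down stand-in corpus
xb = np.random.rand(N, d).astype("float32")

quantizer = faiss.IndexFlatIP(d)
index = faiss.IndexIVFFlat(quantizer, d, 256, faiss.METRIC_INNER_PRODUCT)
index.train(xb)                    # learn the coarse centroids
index.add(xb)
index.nprobe = 16                  # inverted lists scanned per query

xq = np.random.rand(1, d).astype("float32")
scores, ids = index.search(xq, 10)
\end{verbatim}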
\section{Conclusions and Future Work}
In this paper, we deeply explore the end-to-end framework for the dialog response selection task and propose the DR-BERT model for it. Besides, two training strategies are used to train DR-BERT effectively: fine-grained data augmentation and in-batch negative sampling with contrastive loss. Extensive experiments demonstrate not only the strong performance of our proposed DR-BERT model but also the clear superiority of the end-to-end framework over the pipeline framework. In the future, we will explore and test more effective training strategies for DR-BERT in the end-to-end framework.
\section{Introduction}
The history of the analysis of invariant properties of dynamical
systems goes back to the 1890s, when the mathematician Lyapunov first
introduced his theory of ordinary differential equations. This theory was
later named Lyapunov theory, or stability theory. It shows
that the stability of a dynamical system, i.e., the property that all trajectories
approach a fixed point as time goes to infinity, can be
established by analyzing the properties of a function called a
Lyapunov candidate function. Stability theory is related to the concept of an invariant set.
Blanchini \cite{Blanchini} provides an excellent survey paper about
the invariance of dynamical systems. The positively invariant set is an important concept that is widely used in
many areas, e.g., control theory, electronic systems, and economics; see, e.g., \cite{Boyd, luen, shen}. Given a set and a dynamical system,
verifying whether the set is an invariant set of the given system is an interesting topic in this field. A general equivalent condition is given
by Nagumo, see, e.g., \cite{nagu}. Explicit conditions for linear systems and some common sets are derived by Horv\'{a}th et al. \cite{bits1, bits2, song1, song2}.
Discrete and continuous systems are usually considered separately for invariant sets.
Preserving invariance from a continuous system to a discrete system by using certain discretization methods is studied by Horv\'{a}th et al. \cite{song3}.
In this paper, we construct special Lorenz cones using Dikin ellipsoids and hyperplanes, and we study the structures of the constructed cones. The novelty
of this method is that it links tools from mathematical optimization to invariant sets. The motivation of the construction is to design more candidate invariant cones
within the positive orthant, which is a common requirement in practical applications.
\emph{Notation and Conventions.} We use the following
notation and conventions to avoid unnecessary repetitions
\begin{itemize}
\item The inertia of a matrix is denoted by inertia$\{Q\}=\{a,b,c\}$, which indicates the number
of positive, zero, and negative eigenvalues of $Q$, respectively.
\item The standard basis in $\mathbb{R}^n$ is denoted by $e_1=(1,0,...,0)^T, e_2=(0,1,...,0)^T,...,
e_n=(0,0,...,1)^T$, and we let $e=(1,1,...,1)^T.$
\item We use $x_{[k]}$ to denote the discrete
state variable in order to distinguish it from the $k$-th coordinate, denoted
by $x_k$, of a vector $x$.
\end{itemize}
\section{Preliminaries}
\subsection{Invariant Sets}
In this paper, the linear discrete and continuous
systems are described as follows:
\begin{equation}\label{eqn:dy2}
x_{k+1}=A_dx_{k},
\end{equation}
\begin{equation}\label{eqn:dy1}
\dot{x}(t)=A_cx(t),
\end{equation}
where $x_{k}, x_{k+1}, x(t)\in\mathbb{R}^{n}$
are the state variables, and $A_d, A_c\in \mathbb{R}^{n\times n}$ are constant coefficient
matrices.
Based on the linear systems above, the invariant sets for the corresponding discrete and continuous forms are introduced.
\begin{definition}\label{definv1}
A set $\mathcal{S}\in\mathbb{R}^n$ is called an \textbf{invariant
set} for
the discrete system (\ref{eqn:dy2}) if $x_{k}\in \mathcal{S}$ implies
$x_{k+1}\in \mathcal{S}$, for all $k\in \mathbb{N}.$
A set $\mathcal{S}\in\mathbb{R}^n$ is called an \textbf{invariant
set} for
the continuous system (\ref{eqn:dy1}) if $x(0)\in \mathcal{S}$ implies
that $x(t)\in\mathcal{S}$, for all $t\geq0$.
\end{definition}
In fact, the
set $\mathcal{S}$ in Definition \ref{definv1} is usually referred to as a
\emph{positively invariant set} in the literature, since it only considers forward, i.e., nonnegative, time. Since we only consider positively
invariant sets in this paper, we call them invariant sets for
simplicity.
\begin{definition}\label{definv123}
An operator $\mathcal{A}$ is called \textbf{invariant} on a set $\mathcal{S}\in\mathbb{R}^n$ if $\mathcal{A}\mathcal{S}\subset\mathcal{S}.$
\end{definition}
An immediate connection between Definitions \ref{definv1} and \ref{definv123} is that a set $\mathcal{S}$ is an invariant
set for the linear discrete system (\ref{eqn:dy2}) if and only if
$A_d$ is invariant on
$\mathcal{S}$, and
$\mathcal{S}$ is an invariant
set for the linear continuous system (\ref{eqn:dy1}) if and only if\footnote{Here recall that
$e^{At}=\sum_{k=0}^\infty\frac{(At)^k}{k!}$.} $e^{A_ct}$
is invariant on
$\mathcal{S}$ for all $t\geq0$.
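As a quick numeric illustration of this connection (our sketch, not a technique from the paper), one can sample points of a candidate set and check whether a given $A_d$ maps them back into the set.
\begin{verbatim}
# Numeric sanity check of invariance: sample points of the set
# S = {x : x^T Q x <= 0, x_n >= 0} and test whether A_d S is in S.
import numpy as np

n = 3
Q = np.diag([1.0, 1.0, -1.0])       # standard Lorenz cone matrix
A_d = 0.5 * np.eye(n)               # scaling maps the cone into itself

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, n))
in_S = (np.einsum("ij,jk,ik->i", pts, Q, pts) <= 0) & (pts[:, -1] >= 0)
S = pts[in_S]                       # sampled points of the cone

y = S @ A_d.T                       # images x_{k+1} = A_d x_k
still = (np.einsum("ij,jk,ik->i", y, Q, y) <= 0) & (y[:, -1] >= 0)
print(still.all())                  # True: the cone is invariant
\end{verbatim}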
\subsection{Hyperplane, Ellipsoid, Lorenz Cone, and Dikin Ellipsoid}
In this subsection, the definitions and formulas of some common types in $\mathbb{R}^n$, namely, hyperplane,
ellipsoid, Lorenz cone, and Dikin ellipsoid are introduced.
\begin{definition}\label{defhyp}
A \textbf{hyperplane}, denoted by $\mathcal{H}\in \mathbb{R}^n,$ is
represented as either
\begin{equation}\label{eqhyp1}
\mathcal{H}=\mathcal{H}(a,\alpha)=\{x\in \mathbb{R}^n\,|\,a^Tx=\alpha,
\alpha\in \mathbb{R}\},
\end{equation}
or
equivalently
\begin{equation}\label{eqhyp2}
\mathcal{H}=\mathcal{H}(x_0,H)=\{x\in \mathbb{R}^n\,|\,x=x_0+Hz, z\in \mathbb{R}^{n-1}\},
\end{equation}
where $x_0$ is a point in $\mathcal{H}$ and the columns of $H$ form a basis, denoted by $h_1,h_2,...,h_{n-1},$ of the
complementary space of $a$, i.e., $H=[h_1,h_2,...,h_{n-1}]\in
\mathbb{R}^{n\times (n-1)}$, $a^TH=0,$ $H^Ta=0,$ and
\emph{span}$\{a,h_1,...,h_{n-1}\}=\mathbb{R}^n.$
\end{definition}
The vector $a$ in (\ref{eqhyp1}) is called the \emph{normal vector} of the hyperplane $\mathcal{H}.$
The matrix $H$ in (\ref{eqhyp2}) is called the \emph{complementary matrix} of the vector $a$. Moreover, if
$h_1,h_2,...,h_{n-1}$ are mutually orthonormal, i.e., $h_i^Th_j=\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta, then we have $H^TH=I_{n-1},$ and we call
$H$ the \emph{orthonormal complementary matrix} of the vector $a$.
Formula (\ref{eqhyp1}) is straightforward and appears in many works. We will use formula (\ref{eqhyp2}) in this paper since it describes a hyperplane as an affine plane.
\begin{comment}
\begin{definition}\label{defheep}
An \textbf{ellipsoid}, denoted by $\mathcal{E}\in \mathbb{R}^n$, centered at the
origin, is represented as follows:
\begin{equation}\label{elli}
\mathcal{E}=\{x\in\mathbb{ R}^n ~|~ x^TQx\leq 1\},
\end{equation}
where $Q\in \mathbb{R}^{n\times n}$ and $Q\succ0$.
\end{definition}
Note that any ellipsoid with nonzero center can be easily mapped to an ellipsoid centered at origin.
\begin{definition}\label{defhyloc}
A \textbf{Lorenz cone}\footnote{A Lorenz cone is also called an
ice cream cone, or a second order cone.}, denoted by $\mathcal{C_L}\in \mathbb{R}^n$,
with vertex at origin, is represented as follows:
\begin{equation}\label{ellicone}
\mathcal{C_L}=\{x\in \mathbb{R}^n~|~x^TQx\leq 0,~ x^TQu_n\leq0\},
\end{equation}
where $Q\in \mathbb{R}^{n\times n}$ is a symmetric nonsingular
matrix with one negative eigenvalue $\lambda_n$, i.e., ${\rm
inertia}\{Q\}=\{n-1,0,1\}$.
\end{definition}
Similar to ellipsoids, any Lorenz cone with nonzero vertex can be easily mapped to a Lorenz cone with vertex at origin.
In particular, let $Q=\tilde{I}=\text{{diag}}\{1,...,1,-1\}$ and $e_n=(0,...,0,1)^T$,
then we have Lorenz cone $ \mathcal{C_L^*}=\{x\in \mathbb{R}^n\;|\; x^T\tilde{I}x\leq 0,
x^Te_n\geq0\}.$ We call $\mathcal{C_L^*}$ the \emph{standard Lorenz cone}. In fact, one can prove that each Lorenz cone can be mapped into
a standard Lorenz cone by certain transformation, e.g., \cite{song1}.
\end{comment}
\begin{definition}\label{def4}
An \textbf{ellipsoid}, denoted by $\mathcal{E}\in\mathbb{R}^n, $ is represented as
\begin{equation}\label{eq16}
\mathcal{E}=\mathcal{E}(Q,p,\rho)=\{x\in \mathbb{R}^n |
x^TQx+2p^Tx+\rho\leq1\},
\end{equation}
where $Q\succ0,$ and $ \rho = p^TQ^{-1}p$. A \textbf{standard
ellipsoid}, denoted by $\mathcal{E}^*\in \mathbb{R}^n$, is
represented as
\begin{equation}\label{eq17}
\mathcal{E}^*=\mathcal{E}(\tilde{Q},\textbf{0},0)=\{x\in
\mathbb{R}^n|a_1x_1^2+a_2x_2^2+...+a_nx_n^2\leq 1\}=\{x\in
\mathbb{R}^n|x^T\tilde{Q}x\leq1\},\label{eq21}
\end{equation}
where $\tilde{Q}=\emph{diag}\{a_1,a_2,...,a_n\}, $ with $ a_i>0$,
for $i=1,2,...,n$.
\end{definition}
\begin{definition}\label{def3}
A \textbf{Lorenz cone}, denoted by $\mathcal{C_L}\in \mathbb{R}^n,$ is represented
as
\begin{equation}\label{eq13}
\mathcal{C_L}=\mathcal{C_L}(Q,p,\rho)=\{x\in
\mathbb{R}^n|x^TQx+2p^Tx+\rho\leq0\},
\end{equation}
where $Q$ is a symmetric matrix with inertia$(Q)=\{n-1,0,1\},$ $p\in
\mathbb{R}^n,$ and $ \rho = p^TQ^{-1}p$. A \textbf{standard Lorenz
cone}, denoted by $\mathcal{C_L}^*\in \mathbb{R}^n,$ is represented
as
\begin{equation}\label{eq14}
\mathcal{C_L^*}=\{x\in \mathbb{R}^n|x_1^2+x_2^2+\cdots+x_{n-1}^2\leq
x_n^2, x_n\geq0\} =\{x\in \mathbb{R}^n|x^T \tilde{I} x\leq0,
x^T\tilde{I} e_n\leq0\}
\end{equation}
where $\tilde{I}=\emph{diag}\{1,1,...,1,-1\}, $ and $
e_n=(0,...,0,1)^T.$
\end{definition}
\begin{remark}
The center of $\mathcal{E}$ in the form of (\ref{eq16}) is
$-Q^{-1}p.$ The vertex of $\mathcal{C_L}$ is $-Q^{-1}p$, and the
axis of $\mathcal{C_L}$ in the form of (\ref{eq13}) is $\{x\in
\mathbb{R}^n|x=-Q^{-1}p+\alpha Pe_n \}$, where $\alpha \in
\mathbb{R}$.
\end{remark}
In fact, one can prove that each Lorenz cone can be mapped into
a standard Lorenz cone by a certain transformation, e.g., \cite{song1}. We notice that a Lorenz cone in the form of (\ref{eq13}) consists of
two branches, one of which is centrosymmetric to the other with
respect to the vertex. A standard Lorenz cone $\mathcal{C_L^*}$ in
the form of (\ref{eq14}) is a convex set and a self-dual cone, i.e.,
its dual cone\footnote{The dual cone of a cone $\mathcal{C}$ is
defined as $\{y\in \mathbb{R}^n~|~y^Tx\geq 0, \text{ for all } x\in
\mathcal{C}\}.$} coincides with itself. Also, the formula for
$\mathcal{C_L^*}\cup (-\mathcal{C_L^*})$ is given as $\{x\in
\mathbb{R}^n|x^T \tilde{I} x\leq0\}.$
The relationships between general and standard ellipsoids and
between general and standard Lorenz cones are given in the following
lemma.
\begin{lemma}
There exists two nonsingular matrices $P, $ and $\tilde{P}$, such
that
\begin{equation}\label{eq18}
\tilde{P}^{-1}\mathcal{E}^*=P^{-1}\mathcal{E}+(QP)^{-1}p,
\end{equation}
where $\mathcal{E}$ and $\mathcal{E}^*$ are defined as (\ref{eq16})
and (\ref{eq17}), respectively. There exists a nonsingular matrix
$\bar{P}$, such that
\begin{equation}\label{eq15}
\mathcal{C_L^*}\cup
(-\mathcal{C_L^*})=\bar{P}^{-1}\mathcal{C_L}+(Q\bar{P})^{-1}p,
\end{equation}
where $\mathcal{C_L}$ and $\mathcal{C_L^*}$ are given as
(\ref{eq13}) and (\ref{eq14}), respectively.
\end{lemma}
\begin{proof} We only present the proof of (\ref{eq18}); the proof of (\ref{eq15})
is analogous. Since $Q\succ0$, there exists an orthogonal matrix
$U$ consisting of the eigenvectors of $Q$, such
that $U^TQU=\text{diag}\{\lambda_1,...,\lambda_n\}$, where $\lambda_1,...,\lambda_n>0$ are the eigenvalues of $Q$. Denoting
$Q_1=\text{diag}\{\frac{1}{\sqrt{\lambda_1}},...,\frac{1}{\sqrt{\lambda_n}}\}$,
we have $Q_1^TU^TQUQ_1=I_n.$ We let $P=UQ_1$ and
$\tilde{P}=\text{diag}\{\frac{1}{\sqrt{a_1}},...,\frac{1}{\sqrt{a_n}}\}$, both of which
are nonsingular; then (\ref{eq16}) and (\ref{eq17}) can be
respectively rewritten as
\begin{equation*}
\mathcal{E}=\{x\in
\mathbb{R}^n~|~(P^{-1}x+P^{T}p)^T(P^{-1}x+P^{T}p)\leq1\},
\text{ and } \mathcal{E}^*=\{x\in
\mathbb{R}^n~|~(\tilde{P}^{-1}x)^T(\tilde{P}^{-1}x)\leq1\}.
\end{equation*}
Noting that $PP^{T}=Q^{-1}$ implies
$P^{T}=(QP)^{-1}$, we deduce (\ref{eq18}) immediately.
\end{proof}
\begin{definition} \label{dikin}
\cite{bert, Terlaky}
A \textbf{Dikin Ellipsoid}, denoted by $\mathcal{E_D}\in
\mathbb{R}^n$, is represented as
\begin{equation}\label{eq31}
\mathcal{E_D}=\Big\{x\in
\mathbb{R}^n\,|\,\sum_{i=1}^n\frac{(x_i-c_i)^2}{c_i^2}\leq1\Big\}=\{x\in
\mathbb{R}^n\,|\,(x-c)^TC^{-2}(x-c)\leq 1\},
\end{equation}
where $c=(c_1,c_2,...,c_n)^T$,
$C=\emph{diag}\{c_1,c_2,...,c_n\}$, and $c_i>0$, for $i=1,2,...,n.$
\end{definition}
The point $c$ is the center of the Dikin ellipsoid according to (\ref{eq31}). In fact, the ellipsoid (\ref{eq31}) was introduced by Dikin and is widely used in designing mathematical optimization
algorithms, e.g., affine scaling interior
point methods \cite{bert,Terlaky}. A common property of every Dikin ellipsoid is
that it lies entirely in the nonnegative orthant of $\mathbb{R}^n$, including its boundary. This is a key property in the design of mathematical optimization algorithms.
\begin{lemma} \cite{bert,Terlaky}
Assume the Dikin ellipsoid $\mathcal{E_D}$ is given as (\ref{eq31}) and let $x\in \mathcal{E_D}$, then $x\geq0$.
\end{lemma}
\begin{proof} For every $i$, we have $\frac{(x_i-c_i)^2}{c_i^2}\leq
\sum_{i=1}^n\frac{(x_i-c_i)^2}{c_i^2}\leq1$. This yields $-c_i\leq
x_i-c_i\leq c_i,$ i.e., $0\leq x_i\leq 2c_i. $
\end{proof}
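This property is easy to check numerically; the following sketch (our illustration) samples points of a Dikin ellipsoid via the parameterization $x=c+Cy$, $\|y\|\leq1$, and verifies their nonnegativity.
\begin{verbatim}
# Numeric check: points of a Dikin ellipsoid are nonnegative.
import numpy as np

rng = np.random.default_rng(1)
c = np.array([0.5, 2.0, 1.0])                  # center, c_i > 0
u = rng.normal(size=(1000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)  # unit directions
r = rng.uniform(size=(1000, 1))                # radii in [0, 1]
x = c + (r * u) * c                            # x = c + C y, ||y|| <= 1
print((x >= 0).all())                          # True: E_D lies in x >= 0
\end{verbatim}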
Let us consider the hyperplane $\mathcal{H}$ given as (\ref{eqhyp1}) with normal vector
$a$, and assume that $\mathcal{H}$ passes through the center of the Dikin ellipsoid
$\mathcal{E_D}$ given as (\ref{eq31}); then we have
\begin{equation}\label{eq112}
\mathcal{H}=\mathcal{H}(a,a^Tc)=\{x\in \mathbb{R}^n\,|\,a^Tx=a^Tc\},
\end{equation}
and the intersection $\mathcal{H}\cap \mathcal{E_D}$ is also an ellipsoid.
\begin{lemma}
Let a hyperplane $\mathcal{H}$ and a Dikin ellipsoid $\mathcal{E_D}$
be given as (\ref{eq112}) and (\ref{eq31}), respectively. Then
$\mathcal{H}\cap \mathcal{E_D}=\{x\in \mathbb{R}^n\,|\,x=c+Hz\}$, where
$ z\in \mathbb{R}^{n-1}$ satisfies
$z^TH^TC^{-2}Hz\leq1, $ and $H$ is a
complementary matrix of the vector $a.$
\end{lemma}
\begin{proof}
According to the second formula of $\mathcal{H}$ given as
(\ref{eqhyp2}), with $x_0=c$, we have $x=c+Hz$.
Substituting $x=c+Hz$ into (\ref{eq31}), this lemma
is immediate.
\end{proof}
\begin{definition}
\cite{wilk} A matrix $D\in \mathbb{R}^{n\times n}$ is called an
\textbf{arrowhead matrix} if it has the following form
\begin{equation}\label{eq8}
D=\left[
\begin{array}{cc}
\alpha& p \\
p^T & B \\
\end{array}
\right],
\end{equation}
where $\alpha\in \mathbb{R}, p\in\mathbb{R}^{n-1},$ and $
B=\emph{diag}\{b_1,b_2,...,b_{n-1}\}.$ Here we assume $b_1\geq b_2\geq
...\geq b_{n-1}.$
\end{definition}
\begin{lemma}\label{lemma11}
\cite{wilk} The following properties of the arrowhead matrix $D$
given as (\ref{eq8}) are true:
\begin{enumerate}
\item The characteristic polynomial of $D$ is
\begin{equation}\label{eq11}
\det(\lambda
I-D)=(\lambda-\alpha)\prod_{k=1}^{n-1}(\lambda-b_k)-\sum_{j=1}^{n-1}|p_j|^2\prod_{k=1,k\neq
j}^{n-1}(\lambda-b_k).
\end{equation}
\item The eigenvalues of $D$ are real and satisfying the following condition
\begin{equation}\label{eqn:arr}
\lambda_1\geq b_1\geq\lambda_2\geq b_2\geq \cdots\geq b_{n-1}\geq
\lambda_n.
\end{equation}
\end{enumerate}
\end{lemma}
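The interlacing property (\ref{eqn:arr}) can be verified numerically as follows (our illustration with a random arrowhead matrix).
\begin{verbatim}
# Numeric check of eigenvalue interlacing for an arrowhead matrix.
import numpy as np

rng = np.random.default_rng(2)
n = 6
alpha, p = rng.normal(), rng.normal(size=n - 1)
b = np.sort(rng.normal(size=n - 1))[::-1]   # b_1 >= ... >= b_{n-1}

D = np.zeros((n, n))
D[0, 0], D[0, 1:], D[1:, 0] = alpha, p, p
D[1:, 1:] = np.diag(b)

lam = np.sort(np.linalg.eigvalsh(D))[::-1]  # lambda_1 >= ... >= lambda_n
# lambda_1 >= b_1 >= lambda_2 >= ... >= b_{n-1} >= lambda_n:
print(np.all(lam[:-1] >= b) and np.all(b >= lam[1:]))   # True
\end{verbatim}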
\section{Construction of Novel Lorenz Cones}
In this section, we will construct some novel standard Lorenz cones and derive the corresponding explicit formulas for some special cases. In particular, these novel standard Lorenz cones are constructed
by using a Dikin ellipsoid as its base\footnote{A set $\mathcal{B}$ is refereed to as a base of a cone $\mathcal{C}$ if for any $x\in \mathcal{C}$, there exists a $\hat{x}\in \mathcal{B}$, such that $x=\lambda \hat{x}$ for some $\lambda>0$.}, or using the intersection of a Dikin ellipsoid
and a hyperplane as its base. Since the Lorenz cones we considered are standard, i.e., the vertices are the origin, we have that the cones are constantly in the positive octant in $\mathbb{R}^n$ including its boundary. Also, the properties of the constructed Lorenz cones, especially the structures of the eigenvalues of the matrices involved in the cone formulas, are studied. The motivation of these novel cones is that they are considered as
candidate invariant sets for dynamical systems in the positive orthant.
\begin{definition}
\cite{Laub} Let $X\in
\mathbb{R}^{m\times n}$. The \textbf{vectorization} of $X$, denoted by $\emph{vec}(X)$, stacks all columns
of $X$ into a single vector, i.e.,
$\emph{vec}(X)=(x_{11},...,x_{m1},x_{12},...,x_{m2},...,x_{1n},...,x_{mn})^T.$
\end{definition}
\begin{lemma}
Let $\{v_i\}_{i=1}^n$ be a basis of
$\mathbb{{R}}^n$. Then the vectors $\emph{vec}(v_iv_j^T), 1\leq i, j\leq n,$ form a
basis of $\mathbb{R}^{n^2}.$
\end{lemma}
\begin{comment}
\begin{proof}
To proof this lemma is equivalent to prove that $\{v_iv_j^T\}$ is a
basis for $R^{n\times n}$ (note that $\mathbb{R}^{n^2}$ means
$\mathbb{R}^{n^2\times 1}$). It is easy to have that $\{e_ie_j^T\}$
is a basis for $R^{n\times n}$. Since $\{v_i\}$ is a basis of
$\mathbb{R}^n$, there exists a nonsingular matrix $P\in
\mathbb{R}^{n\times n}$, such that $Pv_i=e_i$, for $i=1,...,n.$ Now
let $\sum_{i,j}\lambda_{ij}v_iv_j^T=\textbf{0}\in
\mathbb{R}^{n\times n},$ we have
$\sum_{i,j}\lambda_{ij}Pv_iv_j^TP^T=
\sum_{i,j}\lambda_{ij}e_ie_j^T=\textbf{0},$ which implies that
$\lambda_{ij}=0$, for $i,j=1,...,n$. Also, for any $X\in
\mathbb{R}^{n\times n},$ we let $\tilde{X}=P^{-1}XP^{-T}$, then we
have
$X=P\tilde{X}P^T=P(\sum_{i,j}\tilde{x}_{ij}e_ie_j^T)P^T=\sum_{i,j}\tilde{x}_{ij}v_iv_j^T.$
This proof is complete.
\end{proof}
\end{comment}
\begin{corollary}\label{cor1}
Let $a\in \mathbb{{R}}^n$, and $H=[h_1,h_2,...,h_{n-1}]\in
\mathbb{R}^{n\times(n-1)}$ be a complementary matrix of the vector $a$. Then
the following vectors form a basis of $\mathbb{R}^{n^2}$:
\begin{equation}
\emph{vec}(aa^T),~\emph{vec}(ah_i^T), ~\emph{vec}(h_ia^T), \text{ and } \emph{vec}(h_ih_j^T), \text{ where }
1\leq i,j\leq n-1.
\end{equation}
\end{corollary}
\begin{comment}
\begin{definition}
\cite{Laub} Let $M\in \mathbb{R}^{p\times q}, $ $N\in
\mathbb{R}^{m\times n}.$ Then the \textbf{Kronecker product} of $M$
and $N$ denoted by $M\otimes N$ is defined as
\begin{equation*}
M\otimes N =\left[
\begin{array}{ccc}
m_{11}N & \cdots & m_{1q}N \\
\vdots & \ddots & \vdots \\
m_{p1}N & \cdots & m_{pq}N \\
\end{array}
\right]\in \mathbb{R}^{pm\times qn}.
\end{equation*}
\end{definition}
\end{comment}
\begin{lemma}\label{lemma1}
Let $a\in \mathbb{{R}}^n$, and $H=[h_1,h_2,...,h_{n-1}]\in
\mathbb{R}^{n\times(n-1)}$ be a complementary matrix of the vector $a$. Assume
$H^TXH=\textbf{0}\in\mathbb{R}^{(n-1)\times (n-1)},$ where $X\in
\mathbb{R}^{n\times n},$ then there exist $\mu\in\mathbb{R},$ and
$z_1, z_2\in \mathbb{R}^{n-1}$, such that
\begin{equation}\label{lem1:eq0}
X=\mu aa^T+az_1^TH^T+Hz_2a^T.
\end{equation}
If $X$ is further assumed to be symmetric, then $z_1=z_2.$
\end{lemma}
\begin{proof}
According to \cite{Laub}, the equation $H^TXH=\textbf{0}$ can be rewritten as
\begin{equation}\label{lem1:eq1}
(H^T\otimes
H^T)\text{vec}(X)=\text{vec}(H^TXH)=\text{vec}(\textbf{0})\in\mathbb{R}^{(n-1)^2},
\end{equation}
where $M\otimes N$ denotes the Kronecker product of $M$ and $N$, e.g., \cite{Laub}.
According to (\ref{lem1:eq1}), $\text{vec}(X)$ lies in $\text{ker}(H^T\otimes H^T)$, the
kernel of $H^T\otimes H^T$. Since $\text{rank}(H^T\otimes H^T)=\text{rank}(H^T)^2=(n-1)^2$, this kernel has dimension $n^2-(n-1)^2=2n-1$. On the other hand, since $H^Ta=0$, the $2n-1$ vectors
$\text{vec}(aa^T), \text{vec}(ah_i^T),$ and $\text{vec}(h_ia^T),$ for $i=1,...,
n-1$, all lie in $\text{ker}(H^T\otimes H^T)$, and by Corollary \ref{cor1} they are linearly independent; hence they span the kernel. Thus
$\text{vec}(X)$ can be represented as a linear combination of
$\text{vec}(aa^T),\text{vec}(ah_i^T),$ and $\text{vec}(h_ia^T).$
Then (\ref{lem1:eq0}) is immediate by writing this linear
representation in matrix form. If $X$ is a symmetric matrix, then we have $a(z_1-z_2)^TH^T=H(z_1-z_2)a^T$. By the linear independence of $\text{vec}(ah_i^T)$ and $\text{vec}(h_ia^T),$ for $i=1,2,...,n-1,$ we have $z_1=z_2.$
\end{proof}
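The dimension count in this proof can be checked numerically (our illustration).
\begin{verbatim}
# Check: dim ker(H^T kron H^T) = 2n - 1, spanned by vec(aa^T),
# vec(a h_i^T), vec(h_i a^T).
import numpy as np

rng = np.random.default_rng(3)
n = 4
a = rng.normal(size=(n, 1))
# Orthonormal complement H of a, obtained via a QR factorization.
Qfull, _ = np.linalg.qr(np.hstack([a, rng.normal(size=(n, n - 1))]))
H = Qfull[:, 1:]                    # H^T a = 0

K = np.kron(H.T, H.T)               # maps vec(X) to vec(H^T X H)
rank = np.linalg.matrix_rank(K)
print(n**2 - rank == 2 * n - 1)     # kernel dimension 2n - 1: True

# The listed vectors lie in the kernel:
V = [a @ a.T] + [a @ H[:, [i]].T for i in range(n - 1)] \
    + [H[:, [i]] @ a.T for i in range(n - 1)]
print(all(np.allclose(K @ X.reshape(-1, 1), 0) for X in V))  # True
\end{verbatim}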
\begin{lemma}\label{lemma2}
Let a hyperplane $\mathcal{H}$ and a Dikin ellipsoid $\mathcal{E_D}$
be given as (\ref{eq112}) and (\ref{eq31}), respectively, and let the base of
a standard Lorenz cone $\mathcal{C_L}$ be
$\mathcal{H}\cap\mathcal{E_D}$. Let $\mathcal{C_L}$ be represented as
$\mathcal{C_L}=\{x\,|\,x^TQx\leq0\}$. Then the following conditions
hold:
\begin{equation}\label{cond}
H^TDH=\beta H^TQH, ~~c^TQH=0, ~~c^TQc=-\frac{1}{\beta},
\end{equation}
where $D=C^{-2}$, $\beta$ is a positive number, and $H$ is a complementary
matrix of $a$.
\end{lemma}
\begin{proof} We use the representation
$\mathcal{H}=\mathcal{H}(c,H)=\{x\,|\,x=c+Hz\}$. Substituting $x=c+Hz$
into (\ref{eq31}) and into $\mathcal{C_L}=\{x\,|\,x^TQx\leq0\}$,
respectively, we have
\begin{equation}\label{ineq}
z^TH^TDHz-1\leq0 \text{ ~ and ~} z^TH^TQHz+2c^TQHz+c^TQc\leq0.
\end{equation}
Since the two inequalities in (\ref{ineq}) describe the same set of $z$, the two quadratic functions must be positive multiples of each other, and the lemma is immediate by comparing their coefficients.
\end{proof}
\begin{theorem}\label{themmain}
Assume the conditions in (\ref{cond}) hold, then
\begin{equation}\label{eq6}
\gamma
Q=D-\frac{1+c^T\tilde{D}c}{(c^Ta)^2}aa^T-\frac{1}{c^Ta}(ac^TD\tilde{H}+\tilde{H}Dca^T),
\end{equation}
where $\gamma$ is a positive number,
$\tilde{D}=D-(D\tilde{H}+\tilde{H}D)$, and
$\tilde{H}=H(H^TH)^{-1}H^T$.
\end{theorem}
\begin{proof}
Since $\mathcal{C_L}$ is defined as
$\{x\,|\,x^TQx\leq0\}$, the matrix $Q$ is determined only up to a positive scaling, so we let $\beta=1$ in (\ref{cond}) for
simplicity. According to Lemma \ref{lemma1} and the first condition
in (\ref{cond}), there exist $\mu\in\mathbb{R}$ and $ z\in
\mathbb{R}^{n-1}$ such that
\begin{equation}\label{eq1}
Q=D+\mu aa^T+az^TH^T+Hza^T.
\end{equation}
If we substitute (\ref{eq1}) into the second condition in
(\ref{cond}) and note that $H^Ta=0$, $a^TH=0$, and $c^Ta=\alpha$,
then we have
\begin{equation}\label{eq2}
z=-\frac{1}{\alpha}(H^TH)^{-1}H^TDc.
\end{equation}
If we
substitute (\ref{eq1}) and (\ref{eq2}) into the third condition in
(\ref{cond}), then we have
\begin{equation}\label{eq3}
\mu=-\frac{1}{\alpha^2}(1+c^TDc-c^T(D\tilde{H}+\tilde{H}D)c).
\end{equation}
where $\tilde{H}=H(H^TH)^{-1}H^T.$
We then substitute (\ref{eq2}) and (\ref{eq3}) into (\ref{eq1}), and this theorem is
immediate.
\end{proof}
It is interesting to note that (\ref{eq2}) coincides with the least squares solution of
$Hz=-\frac{1}{\alpha}Dc$. Since
$\tilde{H}$ is a symmetric matrix, all the eigenvalues of
$\tilde{H}$ are real numbers. Note that $\tilde{H}^k=\tilde{H}$
holds for any positive integer $k$; choosing $k=2$ shows that the eigenvalues of
$\tilde{H}$ are either $0$ or $1$, i.e., $\tilde{H}$ is the orthogonal projection onto the range of $H$. Since
$\text{rank}(\tilde{H})=\text{rank}(H)=n-1$ (see, e.g., \cite{horn}), we have
inertia$\{\tilde{H}\}=\{n-1,1,0\}$; that is, the matrix $\tilde{H}$ has
eigenvalue 1 with multiplicity $n-1$ and eigenvalue 0 with multiplicity 1.
\begin{remark}
The vectors $h_1,h_2,...,h_{n-1}$ in Lemma \ref{lemma1} are not
necessarily orthonormal. However, if we use the orthonormal basis,
i.e., $H^TH=I_{n-1},$ the previous computation may be simplified.
\end{remark}
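The formula (\ref{eq6}) and the conditions (\ref{cond}) (with $\beta=\gamma=1$) can be verified numerically as follows (our illustration with random data).
\begin{verbatim}
# Build Q from D, c, a via (eq6) and verify the conditions (cond).
import numpy as np

rng = np.random.default_rng(4)
n = 4
c = rng.uniform(0.5, 2.0, size=(n, 1))        # Dikin center, c_i > 0
D = np.diag(1.0 / c[:, 0]**2)                 # D = C^{-2}
a = rng.normal(size=(n, 1))
alpha = (a.T @ c).item()                      # alpha = c^T a

# Orthonormal complement H of a, and the projector \tilde{H}:
Qfull, _ = np.linalg.qr(np.hstack([a, rng.normal(size=(n, n - 1))]))
H = Qfull[:, 1:]
Ht = H @ np.linalg.inv(H.T @ H) @ H.T         # \tilde{H}

Dt = D - (D @ Ht + Ht @ D)                    # \tilde{D}
Q = D - (1 + (c.T @ Dt @ c).item()) / alpha**2 * (a @ a.T) \
    - (a @ c.T @ D @ Ht + Ht @ D @ c @ a.T) / alpha

print(np.allclose(H.T @ D @ H, H.T @ Q @ H))  # H^T D H = H^T Q H
print(np.allclose(c.T @ Q @ H, 0))            # c^T Q H = 0
print(np.isclose((c.T @ Q @ c).item(), -1.0)) # c^T Q c = -1 (beta = 1)
\end{verbatim}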
We now consider the Lorenz cones generated by the origin and
the intersection between Dikin ellipsoids and some special hyperplanes.
\subsection{ {The hyperplane is $\mathcal{H}_i=\mathcal{H}(e_i,
c_i)=\{x\,|\,e_i^Tx=c_i\}.$}}
\begin{theorem}
Let a Dikin ellipsoid $\mathcal{E_D}$ be given as
(\ref{eq31}), let the hyperplane be $\mathcal{H}_i=\mathcal{H}(e_i,
c_i)$, and let the base of the standard Lorenz cone $\mathcal{C_L}$ be $\mathcal{H}_i\cap\mathcal{E_D}$. Let $\mathcal{C_L}$ be represented as
$\mathcal{C_L}=\{x\,|\,x^TQ_ix\leq0\}$. Then
\begin{equation}\label{eq7}
Q_i=D+\frac{n-3}{c_i^2}E_{ii}+\sum_{j=1,j\neq
i}^n\frac{-1}{c_ic_j}(E_{ij}+E_{ji}),
\end{equation}
where $E_{ij}$ denotes the $n\times n$ matrix whose $(i,j)$-th entry is 1 and whose other
entries are 0.
\end{theorem}
\begin{proof}
Note that the normal vector of $\mathcal{H}_i$ is $e_i$, thus the
complementary matrix of $e_i$ can be chosen as
$H=[e_1,...,e_{i-1},e_{i+1},...,e_n].$ Then we have
$\tilde{H}=HH^T=I_n-E_{ii}.$ Thus, we have
\begin{equation}\label{eq512}
\alpha = c_i, ~aa^T=E_{ii},~ ac^T=\sum_{j=1}^nc_jE_{ij},~
ca^T=\sum_{j=1}^nc_jE_{ji}, \text{ and }D\tilde{H}=\tilde{H}D=D-DE_{ii}.
\end{equation}
Substituting (\ref{eq512}) into (\ref{eq6}), the theorem is immediate.
\end{proof}
For example, letting $i=1$ in (\ref{eq7}), we obtain
$Q_1$ explicitly as follows:
\begin{equation}\label{eqq1}
Q_1=\left[
\begin{array}{cccc}
\frac{n-2}{c_1^2} & -\frac{1}{c_1c_2} & \ldots & -\frac{1}{c_1c_n} \\
-\frac{1}{c_1c_2} & \frac{1}{c_2^2} & & \\
\vdots & & \ddots & \\
-\frac{1}{c_1c_n} & & & \frac{1}{c_n^2} \\
\end{array}
\right].
\end{equation}
Note that $Q_1$ is an arrowhead matrix. For any $Q_i$ given as (\ref{eq7}), a symmetric permutation of its rows and columns brings it into arrowhead form. In fact, the matrix $Q_i$ given as (\ref{eq7}) has many interesting properties.
\begin{lemma}\label{lemma12}
For every $Q_i$ given as (\ref{eq7}), we have $\det(Q_i)=-\prod_{k=1}^n\frac{1}{c_k^2}$.
\end{lemma}
\begin{proof} We only consider $Q_1$ given as (\ref{eqq1}). By choosing $\lambda=0$ and $A=Q_1$ in
(\ref{eq11}), we have
\begin{equation*}
\begin{split}
\det(-Q_1)
&=-\frac{n-2}{c_1^2}\prod_{k=2}^n\frac{-1}{c_k^2}-\sum_{j=2}^n\frac{1}{c_1^2c_j^2}\prod_{k=2,k\neq
j}^n\frac{-1}{c_k^2}
=(-1)^n\frac{n-2}{c_1^2}\prod_{k=2}^n\frac{1}{c_k^2}-(-1)^n\frac{n-1}{c_1^2}\prod_{k=2}^n\frac{1}{c_k^2}\\
&=(-1)^{n-1}\frac{1}{c_1^2}\prod_{k=2}^n\frac{1}{c_k^2}
=(-1)^{n-1}\prod_{k=1}^n\frac{1}{c_k^2}
\end{split}
\end{equation*}
Noting that $\det(Q_1)=(-1)^n\det(-Q_1)$, this lemma is
immediate.
\end{proof}
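For $n=2$, for example, (\ref{eqq1}) reduces to
\begin{equation*}
Q_1=\left[
\begin{array}{cc}
0 & -\frac{1}{c_1c_2} \\
-\frac{1}{c_1c_2} & \frac{1}{c_2^2} \\
\end{array}
\right],
\qquad
\det(Q_1)=-\frac{1}{c_1^2c_2^2},
\end{equation*}
in agreement with Lemma \ref{lemma12}.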
\begin{lemma}
For every $Q_i$ given as (\ref{eq7}), we have $\emph{inertia}(Q_i)=\{n-1,1,0\}$, i.e., $Q_i$ has $n-1$ positive eigenvalues, one negative eigenvalue, and no zero eigenvalue.
\end{lemma}
\begin{proof}
Without loss of generality, we only consider $Q_1$. According to the
second statement in Lemma \ref{lemma11}, we have
$\lambda_{n-1}\geq\min_{j=2,...,n}\{c_j^{-2}\}>0,$ so the $n-1$ largest
eigenvalues of $Q_1$ are positive. Also, the determinant of $Q_1$ is
negative according to Lemma \ref{lemma12}; thus $\lambda_n$ is negative. The proof is complete.
\end{proof}
\begin{lemma}
A lower bound and an upper bound of the eigenvalues of $Q_i$ given as (\ref{eq7}) are
\begin{equation}\label{eq22}
\frac{1}{2}\left(\frac{n-2}{c_i^2}+\frac{1}{c_o^2}-\frac{1}{c_i}
\sqrt{\left(\frac{n-2}{c_i}-\frac{c_i}{c_o^2}\right)^2+4\sum_{j=1,j\neq
i}^n\frac{1}{c_j^2}}\right),
\end{equation}
and
\begin{equation}\label{eq12}
\frac{1}{2}\left(\frac{n-2}{c_i^2}+\frac{1}{c_*^2}+\frac{1}{c_i}
\sqrt{\left(\frac{n-2}{c_i}-\frac{c_i}{c_*^2}\right)^2+4\sum_{j=1,j\neq
i}^n\frac{1}{c_j^2}}\right),
\end{equation}
respectively, where $c_o=\max_{j=1,j\neq i}^n\{c_j\}$ and $c_*=\min_{j=1,j\neq
i}^n\{c_j\}$.
\end{lemma}
\begin{proof}
Without loss of generality, we only consider $Q_1$ given as (\ref{eqq1}). In our proof,
we split $Q_1$ into two matrix-valued functions, $Q_1=f(x)+g(x),$ where $x\in
\mathbb{R}$ and $f(x)$, $g(x)$ are as follows:
\begin{equation}
f(x)=\frac{n-2-x}{c_1^2}E_{11}+\sum_{j=2}^n \frac{1}{c_j^2}E_{jj},
\text{ and }
g(x)=\frac{x}{c_1^2}E_{11}+\sum_{j=2}^n\frac{-1}{c_1c_j}(E_{1j}+E_{j1}).
\end{equation}
For any $x\in \mathbb{R},$ we have $\lambda_n(Q_1)\geq\lambda_n(f(x))+\lambda_n(g(x))$ and
$\lambda_1(Q_1)\leq \lambda_1(f(x))+\lambda_1(g(x)).$ Hence,
\begin{equation}
\lambda_1(Q_1)\leq \min_{x\in
\mathbb{R}}\{\lambda_1(f(x))+\lambda_1(g(x))\},
\text{ and } \lambda_n(Q_1)\geq \max_{x\in
\mathbb{R}}\{\lambda_n(f(x))+\lambda_n(g(x))\}.
\end{equation}
Note that $f(x)$ is a diagonal matrix for any $x\in \mathbb{R}$, so its extreme eigenvalues are
$\lambda_1(f(x))=\max\{(n-2-x)c_1^{-2},c_*^{-2}\} \text { and } \lambda_n(f(x))=\min\{(n-2-x)c_1^{-2},c_o^{-2}\}.
$
Note that the rank of $g(x)$ is at most 2 for any $x\in \mathbb{R},$ and its characteristic polynomial is $
\lambda^{n-2}(\lambda^2-{x}c_1^{-2}\lambda-\sum_{j=2}^n(c_1c_j)^{-2}).
$ Thus, we have
\begin{equation}
\lambda_1(g(x))=\frac{1}{2}\left(\frac{x}{c_1^{2}}+\sqrt{\left(\frac{x}{c_1^{2}}\right)^2+4\sum_{j=2}^n\left(\frac{1}{c_1c_j}\right)^{2}}\right),
\end{equation}
and
\begin{equation}
\lambda_n(g(x))=\frac{1}{2}\left(\frac{x}{c_1^{2}}-\sqrt{\left(\frac{x}{c_1^{2}}\right)^2+4\sum_{j=2}^n\left(\frac{1}{c_1c_j}\right)^{2}}\right).
\end{equation}
We now consider the following four cases:
\begin{enumerate}
\item If $\frac{n-2-x}{c_1^2}\geq\frac{1}{c_*^2}$, {i.e.,} $x\leq
n-2-\frac{c_1^2}{c_*^2}$, then
\begin{equation}\label{c3}
\lambda_1(f(x))+\lambda_1(g(x))=\frac{n-2}{c_1^2}+\frac{2}{c_1}\left(\frac{\sum_{i=2}^n\frac{1}{c_i^2}}{\frac{x}{c_1}+\sqrt{\frac{x^2}{c_1^2}+4\sum_{i=2}^n\frac{1}{c_i^2}}}\right).
\end{equation}
\item If $\frac{n-2-x}{c_1^2}\leq\frac{1}{c_*^2}$, {i.e.} $x\geq
n-2-\frac{c_1^2}{c_*^2}$, then
\begin{equation}\label{c4}
\lambda_1(f(x))+\lambda_1(g(x))=\frac{1}{c_*^2}+\frac{1}{2c_1}\left(\frac{x}{c_1}+\sqrt{\frac{x^2}{c_1^2}+4\sum_{i=2}^n\frac{1}{c_i^2}}\right).
\end{equation}
\item If $\frac{n-2-x}{c_1^2}\geq\frac{1}{c_o^2}$, {i.e.,} $x\leq
n-2-\frac{c_1^2}{c_o^2}$, then
\begin{equation}\label{c1}
\lambda_n(f(x))+\lambda_n(g(x))=\frac{1}{c_o^2}-\frac{2}{c_1}\frac{\sum_{j=2}^n\frac{1}{c_j^2}}
{\frac{x}{c_1}+\sqrt{\frac{x^2}{c_1^2}+4\sum_{j=2}^n\frac{1}{c_j^2}}}.
\end{equation}
\item If $\frac{n-2-x}{c_1^2}\leq\frac{1}{c_o^2}$, {i.e.,} $x\geq
n-2-\frac{c_1^2}{c_o^2}$, then
\begin{equation}\label{c2}
\lambda_n(f(x))+\lambda_n(g(x))=\frac{n-2}{c_1^2}-\frac{1}{2c_1}\left(\frac{x}{c_1}+\sqrt{\frac{x^2}{c_1^2}+4\sum_{i=2}^n\frac{1}{c_i^2}}\right).
\end{equation}
\end{enumerate}
Since the functions (\ref{c1}) and (\ref{c4})
are increasing in $x$ and the functions (\ref{c2}) and
(\ref{c3}) are decreasing in $x$, we have
\begin{equation}
\arg\min_{x\in
\mathbb{R}}\{\lambda_1(f(x))+\lambda_1(g(x))\}=\Bigl\{n-2-\frac{c_1^2}{c_*^2}\Bigr\}
\end{equation}
and
\begin{equation}
\arg\max_{x\in
\mathbb{R}}\{\lambda_n(f(x))+\lambda_n(g(x))\}=\Bigl\{n-2-\frac{c_1^2}{c_o^2}\Bigr\}.
\end{equation}
Then substituting $x=n-2-\frac{c_1^2}{c_o^2}$ into (\ref{c1})
(or (\ref{c2})), and substituting $x=n-2-\frac{c_1^2}{c_*^2}$ into (\ref{c3})
(or (\ref{c4})), the lower and upper bounds (\ref{eq22}) and (\ref{eq12}) are immediate.
The proof is complete.
\end{proof}
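To illustrate the lemma, consider the worked example (ours, not needed for the proof) $n=3$ and $c=(1,1,1)^T$, so that $c_*=c_o=1$ and
\begin{equation*}
Q_1=\left[
\begin{array}{rrr}
1 & -1 & -1 \\
-1 & 1 & 0 \\
-1 & 0 & 1 \\
\end{array}
\right],
\end{equation*}
whose eigenvalues are $1+\sqrt{2}$, $1$, and $1-\sqrt{2}$. Here both (\ref{eq22}) and (\ref{eq12}) evaluate to $\frac{1}{2}(2\mp2\sqrt{2})=1\mp\sqrt{2}$, so the lower and upper bounds are attained in this case.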
\subsection{{The hyperplane is $\mathcal{H}=\mathcal{H}(e,
e^Tc)=\{x\in \mathbb{R}^n|e^Tx=c^Te\}.$}}
First of all, we compute an orthonormal basis of the orthogonal
complement of $e$. Obviously,
$\{e_1-e_2,e_1-e_3,...,e_1-e_n\}$ is a basis of the complementary
space of $e.$ From this basis we construct an orthonormal complementary basis of $e$ given as follows:
\begin{equation}
h_i=-\frac{1}{\sqrt{i(i+1)}}\sum_{k=1}^ie_k+\frac{\sqrt{i}}{\sqrt{i+1}}e_{i+1},
\text{~~where~~} i=1,2,...,n-1,
\end{equation}
which can be explicitly written as
\begin{equation}
H=[h_1,h_2,...,h_{n-1}]=\left[
\begin{array}{rrrr}
-\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{12}} & \cdots \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{12}} & \cdots \\
& \frac{\sqrt{2}}{\sqrt{3}} & -\frac{1}{\sqrt{12}} & \cdots \\
& & \frac{\sqrt{3}}{\sqrt{4}} & \cdots \\
& & & \ddots \\
\end{array}
\right].
\end{equation}
For simplicity, we denote the $i$th row of $H$ by $p_i$,
where $i=1,2,...,n.$ Without loss of generality, assuming $i>j$,
we have
\begin{equation}
p_i^Tp_j=-\frac{1}{{i}}+\sum_{k=i+1}^n\frac{1}{(k-1)k}=-\frac{1}{n},
\text{ and }
p_i^Tp_i=\frac{i-1}{{i}}+\sum_{k=i+1}^n\frac{1}{(k-1)k}=1-\frac{1}{n}.
\end{equation}
Thus, we have
\begin{equation}\label{eq23}
HH^T=I-\frac{1}{n}ee^T.
\end{equation}
Since $(HH^T)H=H$, {i.e.,} $(HH^T)h_i=h_i$ for $i\in\mathcal{I}(n-1),$ each $h_i$ is an eigenvector of $HH^T$ with eigenvalue 1.
Also, note that the rank of $HH^T$ is $n-1$; thus the eigenvalues of
$HH^T$ are 1 with multiplicity $n-1$ and 0 with multiplicity 1.
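For example, for $n=3$ the construction gives
\begin{equation*}
H=\left[
\begin{array}{rr}
-\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} \\
0 & \frac{2}{\sqrt{6}} \\
\end{array}
\right],
\qquad
HH^T=\left[
\begin{array}{rrr}
\frac{2}{3} & -\frac{1}{3} & -\frac{1}{3} \\
-\frac{1}{3} & \frac{2}{3} & -\frac{1}{3} \\
-\frac{1}{3} & -\frac{1}{3} & \frac{2}{3} \\
\end{array}
\right]
=I-\frac{1}{3}ee^T,
\end{equation*}
in agreement with (\ref{eq23}).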
\begin{theorem}
Let a Dikin ellipsoid $\mathcal{E_D}$ be in the form of
(\ref{eq31}), and a hyperplane be $\mathcal{H}=\mathcal{H}(e,
e^Tc)$. Then the Lorenz cone $\mathcal{C_L}$ generated by the origin
and $\mathcal{H}\cap\mathcal{E_D}$ is represented as
$\mathcal{C_L}=\{x|x^TQx\leq0\}$, and
\begin{equation}\label{eq26}
Q=D+\frac{n-1}{(e^Tc)^2}ee^T-\frac{1}{e^Tc}(ec^TD+Dce^T)=D+\sum_{i=1}^n\sum_{j=1}^n\left(\frac{n-1}{(e^Tc)^2}-\frac{1}{e^Tc}\left(\frac{1}{c_i}+\frac{1}{c_j}\right)\right)E_{ij}.
\end{equation}
\end{theorem}
\begin{proof}
According to (\ref{eq23}), we have
$\tilde{H}=HH^T=I-\frac{1}{n}ee^T$, which yields $
D\tilde{H}=D-\frac{1}{n}Dee^T, $ and $\tilde{H}D=D-\frac{1}{n}ee^TD
$. Thus $
\tilde{D}=D-(D\tilde{H}+\tilde{H}D)=\frac{1}{n}(Dee^T+ee^TD)-D. $
Then we have
\begin{equation}\label{eq24}
1+c^T\tilde{D}c=1+\frac{1}{n}(c^TDee^Tc+c^Tee^TDc)-c^TDc=1-n+\frac{2\alpha}{n}\sum_{i=1}^n\frac{1}{c_i}.
\end{equation}
\begin{equation}\label{eq25}
\begin{split}
ec^TD\tilde{H}+\tilde{H}Dce^T&=ec^TD-\frac{1}{n}e(c^TDe)e^T+Dce^T-\frac{1}{n}e(e^TDc)e^T\\
&=ec^TD+Dce^T-\left(\frac{2}{n}\sum_{i=1}^n\frac{1}{c_i}\right)ee^T.
\end{split}
\end{equation}
Substituting (\ref{eq24}) and (\ref{eq25}) into (\ref{eq6}), the
theorem is immediate. The proof is complete.
\end{proof}
We now present an interesting result about the eigenvalue and
eigenvector structure of a class of rank-1 matrices.
\begin{lemma}
If the rank of a square matrix $A$ is $1$, then $A$ has at
most one nonzero eigenvalue, and its algebraic multiplicity is $1$.
\end{lemma}
\begin{proof}
The dimension of the kernel of $A$, i.e., of the solution space of $Ax=0$, is $n-1.$ A basis of
this kernel consists of eigenvectors of $A$ corresponding to the eigenvalue $0$.
Also, note that the sum of all eigenvalues is equal to the trace of
$A$, which may be zero; in that case, all eigenvalues are $0$. For
example, for $A=E_{12}$ the eigenvalues are all $0$, with eigenvectors
$e_1,e_3,...,e_n.$
\end{proof}
\begin{corollary}
Let $a,c\in \mathbb{R}^n$ be two vectors with $a^Tc\neq0$. Then the eigenvalues of
$ac^T$ are $a^Tc$ with multiplicity $1$ and $0$ with multiplicity $n-1$.
An eigenvector corresponding to $a^Tc$ is $a$. The eigenvectors
corresponding to $0$ can be chosen as a basis of the complementary space of $c$.
\end{corollary}
\begin{corollary}
Let $a\in \mathbb{R}^n$ be a nonzero vector. Then the matrix $aa^T$ has
rank $1$, and the eigenvalues of $aa^T$ are $\|a\|^2$ with
multiplicity $1$ and $0$ with multiplicity $n-1$. An eigenvector
corresponding to $\|a\|^2$ is $a$. The eigenvectors corresponding to
$0$ can be chosen as a basis of the complementary space of $a$.
\end{corollary}
\begin{corollary}
The eigenvalues of $ee^T$ are $n$ with multiplicity $1$ and eigenvector
$e$, and $0$ with multiplicity $n-1$ with eigenvectors
$e_1-e_2,e_1-e_3,...,e_1-e_n.$
\end{corollary}
\begin{lemma}\label{lemma13}
$\det(\beta I+\alpha ee^T)=(1+n\frac{\alpha}{\beta})\beta^n.$
\end{lemma}
\begin{proof}
Let $T_n=\det(\beta I+\alpha ee^T).$ Expanding the determinant along the first row, we
obtain the following recursion:
\begin{equation}
T_n=(\beta+\alpha)T_{n-1}+\sum_{j=2}^n (-1)^{1+j}\alpha
M_{1j}=(\beta+\alpha)T_{n-1}-(n-1)\alpha^2\beta^{n-2},
\end{equation}
where $M_{1j}$ denotes the corresponding minor.
Let $F_n=\frac{T_n}{\beta^n},$ then
$F_n=\frac{\beta+\alpha}{\beta}F_{n-1}-(n-1)\frac{\alpha^2}{\beta^2}.$
It is easy to prove that $F_n-F_{n-1}=\frac{\alpha}{\beta}.$ Then
this lemma is immediate.
\end{proof}
\begin{corollary}
$\det(I+\alpha ee^T)=1+n\alpha.$
\end{corollary}
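For $n=2$, for instance,
\begin{equation*}
\det\left(\left[
\begin{array}{cc}
1+\alpha & \alpha \\
\alpha & 1+\alpha \\
\end{array}
\right]\right)=(1+\alpha)^2-\alpha^2=1+2\alpha.
\end{equation*}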
\begin{lemma}
If $c_1=c_2=...=c_n=c,$ then the eigenvalues of $Q$ in (\ref{eq26}) are
$\frac{1}{c^2}$ with multiplicity $n-1$ and $-\frac{1}{nc^2}$ with
multiplicity 1.
\end{lemma}
\begin{proof}
Since $c_1=c_2=...=c_n=c,$ we have
$Q=\frac{1}{c^2}(I-\frac{n+1}{n^2}ee^T).$ According to Lemma
\ref{lemma13}, setting $\det(Q-\lambda I )=0$ yields the
characteristic polynomial $(1-\lambda
c^2)^{n-1}(-\frac{1}{n}-\lambda c^2)=0.$ Then the lemma is immediate.
\end{proof}
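For instance, for $n=2$ and $c=1$ we get
\begin{equation*}
Q=I-\frac{3}{4}ee^T=\left[
\begin{array}{rr}
\frac{1}{4} & -\frac{3}{4} \\
-\frac{3}{4} & \frac{1}{4} \\
\end{array}
\right],
\end{equation*}
whose eigenvalues are $\frac{1}{4}+\frac{3}{4}=\frac{1}{c^2}=1$ (eigenvector $e_1-e_2$) and $\frac{1}{4}-\frac{3}{4}=-\frac{1}{nc^2}=-\frac{1}{2}$ (eigenvector $e$).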
\subsection{{Tangent cone}}
\begin{lemma}\label{lemma15}
Let $A,B,C,D$ be points in $\mathbb{R}^n.$ Assume $AC\bot CB, CD\bot AB$, and
$D\in AB$. Then $ \|AB\|^2=\|AC\|^2+\|BC\|^2,
~~\|AC\|^2=\|AD\|\|AB\|, $ and $ \|BC\|^2=\|BD\|\|AB\|. $
\end{lemma}
\begin{proof}
Note that
$\overrightarrow{AB}=\overrightarrow{CB}-\overrightarrow{CA}$ and
$\overrightarrow{CB}\bot\overrightarrow{CA}$, the first equation is
immediate. The remaining two are not trivial to prove, even though they
look natural in 2 or 3 dimensions. We now present a rigorous proof.
We denote $x_A,x_B,x_C,$ and $x_D$ the coordinates of $A,B,C$ and
$D$ in $\mathbb{R}^n$, respectively. Since $D\in AB$, there exists
$0\leq \lambda \leq 1$ such that $x_D=\lambda x_A+(1-\lambda)x_B.$
Since $AC\bot BC $ and $CD\bot AB$, we have
\begin{equation}\label{eq27}
\left(\lambda x_A+(1-\lambda)x_B-x_C\right)^T(x_A-x_B)=0,~~~~~
(x_A-x_C)^T(x_B-x_C)=0.
\end{equation}
Expanding the two equations in (\ref{eq27}) and plugging
$x_A^Tx_B-x_C^Tx_A=x_B^Tx_C-\|x_C\|^2$, which is obtained from the
second equation in (\ref{eq27}), into the first equation in
(\ref{eq27}), we have
\begin{equation}
\lambda=\frac{\|x_B-x_C\|^2}{\|x_A-x_B\|^2}=\frac{\|\overrightarrow{BC}\|^2}{\|\overrightarrow{AB}\|^2}.
\end{equation}
Also, $\overrightarrow{AD}=(1-\lambda)(x_B-x_A)$ and $\overrightarrow{AB}=x_B-x_A$, then
\begin{equation}
\|AD\|\|AB\|=(1-\lambda)(x_B-x_A)^T(x_B-x_A)=(1-\lambda)\|x_A-x_B\|^2=\|\overrightarrow{AB}\|^2-\|\overrightarrow{BC}\|^2=\|\overrightarrow{AC}\|^2.
\end{equation}
Similarly, the third equation in this lemma is easy to obtain.
\end{proof}
\begin{lemma}\label{lemma16}
The distance of a point $\bar{x}\in \mathbb{R}^n$ to a hyperplane
$\mathcal{S}=\{x\in\mathbb{R}^n\,|\,a^Tx=\alpha\}$ is
\begin{equation}
\mathrm{dist}(\bar{x},\mathcal{S})=\frac{|a^T\bar{x}-\alpha|}{\|a\|}.
\end{equation}
\end{lemma}
\begin{theorem}
Assume the ellipsoid is $\mathcal{E}=\{x\in \mathbb{R}^n\,|\,
(x-c)^T(x-c)\leq 1\}$ with $\|c\|>1.$ Then the Lorenz cone $\mathcal{C}=\{x\,|\,x^TQx\leq0\}$ tangent to $\mathcal{E}$ with apex at the origin is given by
\begin{equation}
Q=I-\frac{1}{\|c\|^2-1}cc^T.
\end{equation}
\end{theorem}
\begin{proof}
Since the ellipsoid $\mathcal{E}$ is an $n$-dimensional sphere, the hyperplane
through the tangency set of the ellipsoid and the Lorenz cone (note that
this tangency set is an $(n-2)$-dimensional sphere) is $\{x\in
\mathbb{R}^n\,|\,c^Tx=\alpha\}$. We now compute the value of $\alpha.$
According to Lemma \ref{lemma15}, we obtain that the distance from the
origin to this hyperplane is $\|c\|-1/\|c\|.$ Also, according to
Lemma \ref{lemma16}, we can compute $\alpha=\|c\|^2-1.$ Thus, the
hyperplane through $\mathcal{E}\cap \mathcal{C}$ is
\begin{equation}
\mathcal{S}=\{x\in \mathbb{R}^n~|~ c^Tx=\|c\|^2-1\}.
\end{equation}
According to \cite{julio1}, there exist parameters $z\in
\mathbb{R}^{n-1}$ and $\lambda, \mu\in\mathbb{R}$, such that
\begin{equation}\label{eq28}
Q=I+\lambda cc^T+cz^TH^T+Hzc^T, ~~0=-c-\mu c-(\|c\|^2-1)Hz,
\end{equation}
\begin{equation}
0=\|c\|^2-1-\lambda(\|c\|^2-1)^2+2\mu(\|c\|^2-1).
\end{equation}
From the second and third equations in (\ref{eq28}), we have
\begin{equation}\label{eq29}
Hz=\frac{-1}{\|c\|^2-1}(1+\mu)c, ~~~ \lambda(\|c\|^2-1)-2\mu =1.
\end{equation}
Substituting (\ref{eq29}) into the first equation in (\ref{eq28}), the
theorem is immediate.
\end{proof}
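As a concrete illustration (ours), take $n=2$ and $c=(0,2)^T$, so $\|c\|^2-1=3$ and
\begin{equation*}
Q=I-\frac{1}{3}cc^T=\left[
\begin{array}{rr}
1 & 0 \\
0 & -\frac{1}{3} \\
\end{array}
\right].
\end{equation*}
The cone $\{x\,|\,x_1^2\leq x_2^2/3\}$ is bounded by the lines $x_2=\pm\sqrt{3}x_1$, and the distance from the center $(0,2)^T$ to the line $\sqrt{3}x_1-x_2=0$ is $|\sqrt{3}\cdot0-2|/2=1$, the radius of $\mathcal{E}$, confirming tangency.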
\section{Invariance of Lorenz Cone}
A tangent cone of a convex set $\mathcal{C}\subset \mathbb{R}^n$ at point $x\in
\mathcal{C}$ is defined as
\begin{equation}
T_{\mathcal{C}}(x)=\Bigl\{z\in \mathbb{R}^n\,\Big|\,\liminf_{h\rightarrow0^+}\frac{\mathrm{dist}(x+hz,\mathcal{C})}{h}=0\Bigr\},
\end{equation}
where $\mathrm{dist}(x,\mathcal{C})=\inf_{y\in \mathcal{C}}\|x-y\|$. It is easy to see that
$T_{\mathcal{C}}(x)=\mathbb{R}^n$ if $x$ is in the interior of $\mathcal{C}$. If $x$ is on the
boundary of $\mathcal{C}$ and the boundary is smooth in a neighborhood of $x$, then the tangent
cone is the half-space bounded by the hyperplane tangent to $\mathcal{C}$ at $x$,
translated to pass through the origin.
\begin{lemma}\cite{song1, nagu}
\textbf{[Nagumo]} Let $\mathcal{C}$ be a closed convex set in $\mathbb{R}^n$. Then
$\mathcal{C}$ is invariant with respect to the dynamical system
(\ref{eqn:dy1}) if and only if $Ax\in T_{\mathcal{C}}(x)$ for every $x\in \partial \mathcal{C}$,
where $T_{\mathcal{C}}(x)$ is the tangent cone of $\mathcal{C}$ at $x$.
\end{lemma}
This is an elegant and intuitive result. We can understand this lemma
in the following way: the right-hand side $Ax$ of the dynamical system
is the tangent vector of the trajectory at the point $x$,
since the left-hand side is the derivative of the trajectory. If the
trajectory starts from a point in $\mathcal{C}$, then the only way
to leave this set is through some point on the
boundary. The lemma states that if this tangent vector lies in the tangent cone at
every boundary point $x$, the trajectory is forced to stay in $\mathcal{C}$.
Based on the Nagumo lemma, it is easy to derive necessary and
sufficient conditions for an ellipsoid or a cone to be positively
invariant with respect to the dynamical system (\ref{eqn:dy1}). For simplicity, we drop the subscript $c$ in $A_c$ in (\ref{eqn:dy1}).
\begin{lemma}
Let an ellipsoid be defined as $\mathcal{E}=\{x\in \mathbb{R}^n\,|\,x^TPx\leq1\}$ and a cone
be defined as $\mathcal{C}=\{x\in \mathbb{R}^n\,|\,x^TQx\leq0\}$. Then $\mathcal{E}$ (resp. $\mathcal{C}$) is
a positively invariant set with respect to the dynamical system if and
only if $\langle Ax,Px\rangle\leq0$ (resp. $\langle Ax,Qx\rangle\leq0$)
for all $x$ on the boundary of the set, where $\langle a, b\rangle$ is
the inner product of $a$ and $b$.
\end{lemma}
\begin{proof} We prove only the ellipsoid case; the proof for the cone
is almost the same. It is easy to see that the outer normal at $x\in
\partial \mathcal{E}$ is $2Px$, and since the boundary of an ellipsoid is
smooth, the tangent cone at $x$ is
\begin{equation}
T_{\mathcal{E}}(x)=\{y|\langle y, Px\rangle\leq0\}.
\end{equation}
Then, by the Nagumo lemma, the dynamical system (\ref{eqn:dy1}) is positively
invariant on $\mathcal{E}$ if and only if $Ax\in T_{\mathcal{E}}(x)$ for any $x$ on the boundary of
$\mathcal{E}$. This proves the lemma.
\end{proof}
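For example (an illustration of ours), for the unit disk $\mathcal{E}=\{x\in\mathbb{R}^2\,|\,x^Tx\leq1\}$, i.e., $P=I$, and the rotation field
\begin{equation*}
A=\left[
\begin{array}{rr}
0 & 1 \\
-1 & 0 \\
\end{array}
\right],
\end{equation*}
we have $\langle Ax,Px\rangle=x_1x_2-x_2x_1=0\leq0$ for every boundary point, so the disk is positively invariant; indeed, the trajectories are circles centered at the origin.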
It was shown in \cite{stern} that the dynamical system
(\ref{eqn:dy1}) is positively invariant on the ice cream cone
$\mathcal{C}_0=\{x\in \mathbb{R}^n\,|\,x_1^2+...+x_{n-1}^2\leq x_n^2\}$ if and only if
there exists $a\in \mathbb{R}$ such that
\begin{equation}
Q_nA+A^TQ_n+aQ_n\leq0,
\end{equation}
where $Q_n=\mathrm{diag}(1,...,1,-1)$, and the inequality means negative
semidefinite.
\begin{lemma}
Let a cone be defined as $\mathcal{C}=\{x\in \mathbb{R}^n\,|\,x^TQx\leq0\}$. Then the dynamical
system is positively invariant on $\mathcal{C}$ if and only if there exists
$a\in \mathbb{R}$ such that
\begin{equation}\label{eqn:a}
QA+A^TQ+aQ\leq0,
\end{equation}
where the inequality means negative semidefinite.
\end{lemma}
\begin{proof}
There exists a nonsingular transformation $P$ such that $\mathcal{C}=P\mathcal{C}_0$. Thus, for every $x\in \mathcal{C}_0$
there exists $x^*\in \mathcal{C}$ such that $x^*=Px$. Since $x^*$ satisfies the
dynamical system equation, we have
\begin{equation}
(x^*)^{'}=Ax^*\Leftrightarrow (Px)^{'}=APx\Leftrightarrow
x^{'}=P^{-1}APx.
\end{equation}
Thus, the dynamical system (\ref{eqn:dy1}) being positively invariant on
$\mathcal{C}$ is equivalent to the transformed dynamical system being positively
invariant on $\mathcal{C}_0$. By the previous lemma and $P^{T}QP=Q_n$, there
exists $a\in \mathbb{R}$ such that
\begin{equation}
\begin{split}
Q_nP^{-1}AP+(P^{-1}AP)^TQ_n+aQ_n&\leq0\\
P^{T}QPP^{-1}AP+P^TA^TPP^{-1}QP+aP^{T}QP&\leq0\\
P^T(QA+A^TQ+aQ)P&\leq0
\end{split}
\end{equation}
In fact, the last ``inequality" is equivalent to (\ref{eqn:a}). If
(\ref{eqn:a}) holds, then for any $x$,
\begin{equation}
x^TP^T(QA+A^TQ+aQ)Px=(Px)^T(QA+A^TQ+aQ)(Px)\leq0.
\end{equation}
On the other hand, for any $x$, since $P$ is nonsingular, there exists
a $y$ such that $P^{-1}x=y$, i.e., $x=Py$. Then
\begin{equation}
x^T(QA+A^TQ+aQ)x=y^TP^T(QA+A^TQ+aQ)Py\leq0.
\end{equation}
The proof is complete.
\end{proof}
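As a simple sanity check (ours), take $A=-I$, so that every trajectory $x(t)=e^{-t}x(0)$ stays on its initial ray and any cone is positively invariant. Then
\begin{equation*}
QA+A^TQ+aQ=(a-2)Q,
\end{equation*}
and since $Q$ is indefinite, (\ref{eqn:a}) holds exactly for $a=2$, which gives the zero matrix; the condition is satisfied, as expected.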
Figure \ref{fig511} shows the shapes of the Lorenz cones constructed from the Dikin ellipsoid and two different hyperplanes.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{./figure/NovelCone1.eps}
\includegraphics[width=0.47\textwidth]{./figure/NovelCone2.eps}
\caption{Lorenz cones constructed from the Dikin ellipsoid and two different hyperplanes.}
\label{fig511}
\end{figure}
\section{Conclusion}
In this paper, we study cone-based invariant sets for a given dynamical system. We construct special Lorenz cones using the Dikin ellipsoid and certain hyperplanes. The Dikin ellipsoid was originally introduced in mathematical optimization to design polynomial-time algorithms for linear optimization. We use this tool in the construction to ensure that the constructed Lorenz cone lies in the positive orthant, which is a common requirement in real-world applications. We also study the structure of the constructed cones, especially the eigenvalue structure of the matrices appearing in the formula of the ellipsoid. The novelty of this paper is in building a link between optimization and invariant sets. It also provides more flexibility for other researchers, in theory or in practice, to choose Lorenz cones for analysis.
\bibliographystyle{plain}
|
2,869,038,155,036 | arxiv | \section{Introduction}
Based upon models of planet formation, we understand the variety of elemental compositions of planetesimals as a familiar three step process \citep{McSweenHuss2010}. (i) Under nebular condensation, incorporation of an element into the planetesimal is a function of local temperature and pressure. (ii) Differentiation often occurs, redistributing all elements within the planetesimal; lithophile elements (Al, Ca, Ti) are enhanced in the crust while siderophile elements (Fe, Mn, Cr, Ni) settle into the core. (iii) Collisions lead to stripping and blending of cores and crust, redistributing elements within the entire planetary system. In the solar system, chondrites are a direct consequence of nebular condensation, i.e. step (i); achondrites and primitive achondrites have experienced post-nebular processing, i.e. steps (ii) and (iii) \citep{ONeillPlame2008}. The study of externally-polluted white dwarfs provides invaluable information about the elemental compositions of extrasolar rocky planetesimals, directly testing these models and contrasting with solar system objects \citep{Jura2013a,JuraYoung2014}.
The current picture is that beyond a few AUs, a large fraction of extrasolar planetesimals can survive to the white dwarf phase \citep{Jura2008}. From dynamical rearrangement during the post-AGB phase, some planetesimals can be perturbed into the tidal radius of the white dwarf and subsequently ``pollute" its pure hydrogen or helium atmosphere \citep{DebesSigurdsson2002, Jura2003, Bonsor2011, Debes2012a,Veras2013}. Calcium, the most easily detected element from optical surveys, has been identified in over 200 white dwarfs \citep{Zuckerman2003, Zuckerman2010, Koester2005b, Dufour2007, Koester2011}. So far, 30 heavily polluted white dwarfs have been found to show excess infrared radiation coming from the debris of these pulverized planetesimals [e.g. \citet{Mullally2007, Farihi2009, XuJura2012}, and references therein]. These stars always show 10 $\mu$m circumstellar silicate emission features when observed spectroscopically \citep{Reach2005b, Reach2009, Jura2009a}. Orbiting gaseous material has been detected in 9 polluted white dwarfs \citep{Gaensicke2006, Gaensicke2007, Gaensicke2008, Gaensicke2011, Gaensicke2012, Melis2010, Melis2012, Farihi2012a, Debes2012b}. With high-resolution spectroscopic observations, 19 heavy elements have been detected in white dwarf atmospheres, including C, O, Na, Mg, Al, Si, P, S, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu and Sr \citep{Zuckerman2007, Klein2010, Klein2011, Dufour2010, Dufour2012, Farihi2010b, Melis2010, Vennes2010, Vennes2011a, Zuckerman2011,Jura2012, Gaensicke2012, Xu2013a}.
Theoretical calculations show that extrasolar planetesimals with internal water can survive the red giant stage of their parent star \citep{JuraXu2010}. We can constrain the water mass fraction in extrasolar planetesimals by determining the abundance of accreted hydrogen and/or oxygen in polluted white dwarfs \citep{Klein2010}. By analyzing the hydrogen abundance in an ensemble of helium dominated white dwarfs, \citet{JuraXu2012} found that water is less than 1\% of the total accreted material. Recently, \citet{Farihi2013} identified a white dwarf which has accreted a large amount of oxygen, in excess of what can be combined into MgO, SiO$_2$, FeO, CaO and Al$_2$O$_3$; they concluded that the accreted asteroid is at least 26\% water by mass. In addition, if there is enough water in the disk, molecular water emission lines might be detectable in the near-infrared, similar to those around T Tauri stars \citep{CarrNajita2008}.
In this paper, we focus on two pulsating ZZ Ceti hydrogen white dwarfs, G29-38 (WD 2326+049) and GD 133 (WD 1116+026). G29-38 is a fascinating white dwarf and a record holder. It is the first and also the closest white dwarf identified with an infrared excess \citep{ZuckermanBecklin1987} and a 10 $\mu$m circumstellar silicate emission feature \citep{Reach2005b,Reach2009}. It is among the very first few hydrogen white dwarfs that were found to be polluted \citep{Koester1997}, which led to the identification of a white dwarf subclass ``DAZ" \citep{Zuckerman2003}. Very recently, G29-38, together with GD 133 and GD 31 are the first white dwarfs with photospheric detections of molecular hydrogen \citep{Xu2013b}. The atmospheric pollution in GD 133 was first reported in the SPY survey \citep{Koester2005b, KoesterWilken2006}. It also has an orbiting dust disk as well as a 10 $\mu$m silicate feature \citep{Jura2007b,Jura2009a}.
The paper is organized as follows. In Section 2, we report data acquisition and reduction methods. In Section 3, we determine stellar parameters for G29-38 and GD 133. The heavy element abundances are reported in Section 4. In Section 5, we compare the composition of the parent body accreted onto G29-38 and GD 133 with solar system objects. In Section 6, we put our results in perspective and conclusions are given in Section 7.
\section{Observations}
We performed spectroscopic studies of G29-38 and GD 133 with the High Resolution Echelle Spectrometer (HIRES) on the Keck telescope and the Cosmic Origins Spectrograph (COS) on the Hubble Space Telescope. G29-38 was also observed with the NIRSPEC on the Keck Telescope. The observation logs are presented in Table \ref{Tab: Log} and described in detail below.
\begin{table}[hp]
\begin{center}
\caption{Observation Logs}
\begin{tabular}{lllcccccc}
\\
\hline \hline
Star & UT Date & Instrument & $\lambda$ range & Exposure \\
& & & ({\AA}) & (sec) \\
\hline
G29-38 & 2006 Jun 11 & HIRES/red & 5690-10160 & 7,200\\
& 2008 Aug 7 & HIRES/blue & 3115-5950 & 2,000\\
& 2010 Oct 17 & COS & 1145-1445 & 1,999\\
& 2011 Jan 19 & COS & 1145-1445 & 7,035\\
& 2011 Aug 15 & NIRSPEC & 27,500-36,000 & 2,160 \\
GD 133 & 2008 Feb 13 & HIRES/blue & 3135-5965 & 2,700\\
& 2008 Feb 26 & HIRES/red & 4600-8995 & 2,400\\
& 2011 May 28 & COS & 1145-1445 & 13,460\\
\hline
\label{Tab: Log}
\end{tabular}
\end{center}
\end{table}
\subsection{{\it Keck}/HIRES Optical Spectroscopy}
The optical data were acquired with HIRES \citep{Vogt1994} on the Keck I telescope at Mauna Kea Observatory under good weather conditions except for the night of Aug 7, 2008, where high cirrus clouds were present that caused 2-3 magnitudes of extinction. The C5 slit with a width of 1$\farcs$148 was used for all observations. The spectral resolution is $\sim$ 40,000 as measured from the Th-Ar lamps.
The MAKEE software\footnote{MAKEE Keck Observatory HIRES Data Reduction Software, http://www2.keck.hawaii.edu/inst/common/makeewww/} was used to extract the spectra from the flat-fielded two-dimensional image of each exposure with the trace of a bright calibration star. Wavelength calibration was performed using the standard Th-Ar lamps. Following \citet{Klein2010, Klein2011}, we used IRAF to normalize the spectra and combine echelle orders. When multiple exposures were present, each exposure was processed separately based on steps outlined above and combined afterwards, weighted by their count rate. For GD 133, there was second order contamination in 8200-9000 {\AA} region and we followed \citet{Klein2010} to calibrate and extract that part of the spectrum. For both stars, the final spectra were continuum-normalized but not flux calibrated. The signal-to-noise ratio (S/N) for G29-38 is 50-90 shortward of 3850 {\AA} and 90-210 for longer wavelengths. For GD 133, the S/N is 30-60 shortward of 3900 {\AA} and longward of 6000 {\AA} and 60-110 for the rest.
\subsection{{\it HST}/COS Ultraviolet Spectroscopy}
G29-38 and GD 133 were observed as part of the HST cycle 18 program 12290, ``Do Rocky Extrasolar Minor Planets Have a Composition Similar to Bulk Earth?". G29-38 was observed at two different times due to the malfunction of a gyro during part of the first observation. Instrument configuration and data reduction procedures were described in \citet{Xu2013b}. Following \citet{Jura2012}, we extracted night-time portions of the data to remove geocoronal O I emission lines near 1304 {\AA}. For GD 133, there were 4711 sec of useful night time data and the S/N of the unsmoothed spectrum is 8 around O I 1304 {\AA}. For G29-38, there are only 400 sec of effective night time exposure and the data were not used for the analysis.
\subsection{{\it Keck}/NIRSPEC Infrared Spectroscopy}
G29-38 was observed with the NIRSPEC \citep{McLean1998, McLean2000} on the Keck II telescope in low resolution R $\sim$ 2000 spectroscopy mode and a central wavelength of 3 $\mu$m. The slit size was chosen to be 42 $\times$ 0{\farcs}57, matching the average seeing of the night around 0{\farcs}6. Exposures were 60 sec each; 60 co-added images with a 1 sec frame time. The target was observed at two nod positions; a complete set includes an ABBA nod pattern and has a total on target time of 4 minutes. After 3-5 sets of observations on G29-38, an equal number of sets were taken on the calibration star HD 222749 (B9V) to remove telluric features and instrument transmission features.
All spectroscopic reductions were made using the REDSPEC software\footnote{NIRSPEC Data Reduction with REDSPEC, http://www2.keck.hawaii.edu/inst/nirspec/redspec.html} following procedures outlined in \citet{McLean2003}, which includes corrections of nonlinearity in the spatial and spectral dimensions, wavelength calibrations with the Ne and Ar lamps, extraction of the spectra and removal of telluric features and the instrument response function. To restore the spectral slope, the spectrum is multiplied by a black body curve of 9150 K, the temperature of the calibration star. The last step is to flux calibrate the spectrum to the IRAC 3.6 $\mu$m flux \citep{Farihi2008a}. The final spectrum is shown in Figure \ref{Fig: NIRSPEC}; it has a higher spectral resolution and S/N than previous near-infrared data from the IRTF \citep{Tokunaga1990} and Gemini \citep{Farihi2008a}.
\begin{figure}[hp]
\plotone{fig1.pdf}
\caption{Keck/NIRSPEC KL band spectrum for G29-38 and flux calibrated to IRAC 3.6 $\mu$m photometry. The spectral resolution is 2000 and the data are neither binned nor smoothed. The noisy region shortward of 2.9 $\mu$m is due to bright sky background and the emission around 3.3 $\mu$m is from an instrumental artifact. The spectrum is generally featureless with a gentle increase towards longer wavelength, consistent with the dust disk model. }
\label{Fig: NIRSPEC}
\end{figure}
\section{Model Atmosphere and Stellar Parameters}
Synthetic white dwarf model atmospheres were computed with basic input parameters including effective temperature, T$_{eff}$, surface gravity, g, and atmospheric abundances of heavy elements. The computed model spectra presented here are a new grid with two major changes compared to previous work in \citet{Koester2009a, Koester2010}. (i) The mixing-length parameter ML2/$\alpha$ is taken as 0.8, which is now the preferred value of the Montreal group \citep{Tremblay2010}. (ii) New Stark broadening data are used \citep{TremblayBergeron2009}. The adopted T$_{eff}$ and g are shown in Table \ref{Tab: Parameters} and elemental abundances in Table \ref{Tab: Abundances}. Below we describe the fitting process in detail. Fortunately, precise stellar parameters are not essential for our analysis because we are most interested in the relative abundance ratios, which are fairly insensitive to particular models \citep{Klein2011}.
\begin{table}[htpb]
\begin{center}
\caption{\bf Adopted Stellar Parameters}
\begin{tabular}{llccccccc}
\\
\hline \hline
Star & M$_*$ & T$_{eff}$ & log g & D & log M$_{cvz}$/M$_*$$^a$ \\
& (M$_{\odot}$)& (K) & (cm s$^{-2}$) & (pc) \\
\hline
G29-38 & 0.85 & 11820 $\pm$ 100 & 8.40 $\pm$ 0.10 & 13.6 $\pm$ 0.8 & -13.9\\
GD 133 & 0.66 & 12600 $\pm$ 200 & 8.10 $\pm$ 0.10 & 36.6 $\pm$ 3.2 & -16.2 \\
\hline
\label{Tab: Parameters}
\end{tabular}
\end{center}
{\bf Notes.} $^a$ M$_{cvz}$ is the mass of the convection zone. The convection zone of GD 133 is within a Rosseland mean opacity $\sim$ 8, considerably shallower than that of G29-38.
\end{table}
\subsection{G29-38}
G29-38 has a parallax $\pi$= 0.0734 $\pm$ 0.0040 arcsec \citep{vanAltena2001} as well as UBVRI photometry \citep{Holberg2008a}. Its infrared photometry was not used for the fitting due to contamination from the dust disk \citep{ZuckermanBecklin1987}. Additional data were also used for the analysis, including the HST/FOS spectra, two optical spectra from the 2.2m Calar Alto telescope \citep{Koester1997} and two spectra from the VLT/UVES \citep{Koester2005b, Koester2009a}.
\begin{table}[htpb]
\begin{center}
\caption{Final Atmospheric Abundances}
\begin{tabular}{lcclccllll}
\\
\hline \hline
& \multicolumn{3}{c}{G29-38} & \multicolumn{3}{c}{GD 133} \\
Z & [Z/H]$^a$ & t$_{set}$ & $\dot{M}$(Z$_i$)$^b$ & [Z/H]$^a$ & t$_{set}$ & $\dot{M}$(Z$_i$)$^b$\\
& & (10$^{-1}$ yr) & (g s$^{-1}$) & & (10$^{-3}$ yr) & (g s$^{-1}$)\\
\hline
C & -6.90 $\pm$ 0.12 & 7.8 & 1.2 $\times$ 10$^6$ & $<$ -7.9 & 5.3 & $<$ 7.6 $\times$ 10$^4$\\
N & $<$ -5.7 & 6.4 & $<$ 2.6 $\times$ 10$^7$ & $<$ -5.8 & 3.4 & $<$ 1.7 $\times$ 10$^7$\\
O & -5.00 $\pm$ 0.12 & 4.5 & 2.2 $\times$ 10$^8$ & -6.00 $\pm$ 0.11 & 2.4 & 1.8 $\times$ 10$^7$\\
Na & $<$ -6.7 & 2.1 & $<$ 1.3 $\times$ 10$^7$ & $<$ -6.3 & 3.7 & $<$ 8.2 $\times$ 10$^6$\\
Mg & -5.77 $\pm$ 0.13 & 2.5 & 9.8 $\times$ 10$^7$ & -6.5: & 9.2 & 2.2 $\times$ 10$^6$:\\
Al & $<$ -6.1 & 3.4 & $<$ 3.8 $\times$ 10$^7$ & $<$ -5.7 & 6.4 & $<$ 2.3 $\times$ 10$^7$\\
Si & -5.60 $\pm$ 0.17 & 4.6 & 9.4 $\times$ 10$^7$ & -6.60 $\pm$ 0.13 & 5.5 & 3.4 $\times$ 10$^6$\\
S & $<$ -7.0 & 4.1 & $<$ 4.8 $\times$ 10$^6$ & $<$ -7.0 & 2.9 & $<$ 3.0 $\times$ 10$^6$\\
Ca & -6.58 $\pm$ 0.12 & 2.0 & 3.1 $\times$ 10$^7$ & -7.21 $\pm$ 0.13 & 6.2 & 1.1 $\times$ 10$^6$\\
Ti & -7.90 $\pm$ 0.16 & 2.7 & 1.4 $\times$ 10$^6$ & $<$ -8.0 & 5.3 & $<$ 2.4 $\times$ 10$^5$\\
Cr & -7.51 $\pm$ 0.12 & 2.4 & 4.0 $\times$ 10$^6$ & $<$ -6.8 & 4.3 & $<$ 5.1 $\times$ 10$^6$ \\
Mn & $<$ -7.2 & 2.2 & $<$ 9.5 $\times$ 10$^6$ & $<$ -7.0 & 4.2 & $<$ 3.5 $\times$ 10$^6$ \\
Fe & -5.90 $\pm$ 0.10 & 2.1 & 2.0 $\times$ 10$^8$ & $<$ -5.9 & 3.6 & $<$ 5.2 $\times$ 10$^7$\\
Ni & $<$ -7.3 & 1.9 & $<$ 9.6 $\times$ 10$^6$ & $<$ -7.0 & 3.1 & $<$ 5.1 $\times$ 10$^6$\\
\\
Total$^c$ & & & 6.5 $\times$ 10$^8$ & & & 2.4 $\times$ 10$^7$\\
\hline
\label{Tab: Abundances}
\end{tabular}
\end{center}
{\bf Notes.} See Table \ref{Tab: Lines} for details. \\
$^a$ [X/Y] = log n(X)/n(Y), the logarithmic number ratio of the abundance of element X relative to the abundance of Y.\\
$^b$ The instantaneous mass accretion rate of an element into the white dwarf's atmosphere (see section 5). This is calculated by dividing the mass of an element currently in the convection zone with its settling time.\\
$^c$ The total accretion rate including all elements with positive detections.
\end{table}
The surface gravity of G29-38 can be tightly constrained from the parallax. For any reasonable effective temperature within the instability strip the gravity has to be in the interval 8.30-8.50 with the most consistent solution of 8.40. Varying the parallax within the quoted error of 0.004 arcsec shifts the optimum log g value by 0.05 dex. Holding gravity as a fixed value, we are able to derive T$_{eff}$; with all available observing data, the best solution is listed in Table \ref{Tab: Parameters}. The fits to Balmer lines from H$\alpha$ to H$\eta$ are shown in Figure \ref{Fig: BalmerLines}. The higher order Balmer lines in the model are not as deep as observed. G29-38 is a pulsating ZZ Ceti white dwarf and the velocity fields tend to cause line profiles to be broader and shallower \citep{KoesterKompa2007}. But this effect is only relevant for the innermost cores within 1 {\AA} and does not influence the parameter determinations. The problem can be solved by adopting a lower surface gravity but this contradicts the parallax measurement. Assuming the parallax is correct, the disagreement could indicate a problem with our implementation of the Balmer line broadening theory \citep{TremblayBergeron2009} and/or the calculation of occupation probabilities based on the prescription of \citet{HummerMihalas1988}\footnote{ The real issue is that G29-38 is in the parameter range where the Balmer line strengths reach their maximum and are rather insensitive to changes in stellar parameters. As a result, we use all available data to derive the stellar parameters. In addition, relative abundance ratios are not strongly dependent on stellar parameters. As illustrated in \citet{Klein2011}, for PG 1225-079, simultaneous changes of 1500 K in temperature and 0.6 dex in log g lead to a maximum change of 0.1 dex for relative abundances. }. Our newly derived stellar mass is 0.85 M$_{\odot}$, significantly higher than all previous analyses \citep{Koester1997, GIammichele2012} but close to the value of 0.79 M$_{\odot}$ derived from asteroseismology \citep{ChenLi2013}.
\begin{figure}[hp]
\plotone{fig2.pdf}
\caption{Model fits (red dashed lines) to Balmer lines, including H$\alpha$ to H$\eta$ from bottom to top with T$_{eff}$ = 11,820 K, log g=8.40 for G29-38. Each line is offset by 0.3 in relative intensity for clarity. The underlying spectrum in black is from the SPY survey \citep{Koester2005b, Koester2009a}. Ca II K-line is seen at the left wing of H$\epsilon$ with $\Delta$$\lambda$ of -37 {\AA}. }
\label{Fig: BalmerLines}
\end{figure}
\subsection{GD 133}
There is no known parallax for GD 133 and we rely completely on spectroscopic method to derive its stellar parameters. Refitting the SPY spectra \citep{Koester2005b} with our latest model grid gives T$_{eff}$ = 12,729 K and log g = 8.02. GD 133 was also studied by \citet{Gianninas2011}; with a different set of data and model, they derived T$_{eff}$ = 12,600 K and log g = 8.17. The average of the two log g from optical studies is 8.10 and we can derive T$_{eff}$ from fitting the Lyman $\alpha$ profile in the COS spectrum. The formal error of this fitting is extremely small ($\sim$ 10 K) and the quoted error is dominated by the error in log g. The errors listed in Table \ref{Tab: Parameters} for log g and T$_{eff}$ include systematic and statistical errors. The final parameters are close enough to previous values that the new fits are not shown here.
\section{Atmospheric Abundance Determinations}
To reflect instrumental broadening, the computed model spectra are convolved with the Line Spread Function of COS \citep{Kriss2011} or a Gaussian profile for the HIRES data. Then the abundances of individual elements are determined by comparing the equivalent width (EW) of each spectral line with that from the model spectra \citep{Klein2010, Klein2011}. Compared to helium atmosphere white dwarfs with the same amount of pollution [e.g. \citet{Dufour2012}], the analysis in hydrogen atmosphere white dwarfs is less affected by blending of different absorption lines due to the high continuum opacity of hydrogen atoms. However, molecular hydrogen lines are pervasive in the COS data for both stars. The analysis for GD 133 is less affected because the number density of molecular hydrogen in GD 133 is about 0.4 dex smaller than that in G29-38 \citep{Xu2013b}. For both stars, we present model spectra with and without the contribution from molecular hydrogen.
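Schematically (a standard curve-of-growth relation quoted here for orientation, not the full model calculation), in the optically thin limit the EW of a line grows linearly with the column density $N_l$ of absorbers in the lower level of the transition,
\begin{equation*}
W_\lambda \simeq \frac{\pi e^2}{m_e c^2}\, f\,\lambda^2\, N_l,
\end{equation*}
so for weak lines the derived abundance scales roughly linearly with the measured EW; for stronger, saturated lines the full model grid must be used.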
Following \citet{Xu2013a}, upper limits to the abundances of elements were estimated by varying the input abundance of an element and comparing the model spectra with data. The presence of numerous molecular hydrogen lines in the COS data complicates this process and the model is not ideal for computing the line strength of individual H$_2$ lines due to the lack of accurate broadening parameters \citep{Xu2013b}. To be conservative, all upper limits obtained from the ultraviolet data were determined by using the model spectra without contributions from H$_2$; the numbers can be lower if molecular hydrogen contributes to a significant portion of the total EW.
All detailed measurements are presented in Table \ref{Tab: Lines} and the final abundances in Table \ref{Tab: Abundances}. For G29-38, there are 27 optical spectral lines identified from 5 different elements and 8 ions, including Mg I, Mg II, Ca I, Ca II, Cr II, Fe I, Fe II and Ti II. The average velocity of all absorption lines, including Doppler shift and gravitational redshift, is 36 $\pm$ 2 km s$^{-1}$. The COS data reveal photospheric detection of C I, O I and Si II with an average velocity of 40 $\pm$ 4 km s$^{-1}$, which agrees with the optical value. We have also detected interstellar lines of Si II at 1260.4 {\AA} and C II at 1334.5 {\AA} with an average velocity of 11 km s$^{-1}$. This is very close to the radial velocity of 9.5 km s$^{-1}$ measured in the Hyades cloud\footnote{According to the dynamical model of the local interstellar medium: http://lism.wesleyan.edu/LISMdynamics.html} \citep{RedfieldLinsky2008}, which lies within 15 pc of the Sun. For GD 133, there are 5 optical spectral lines identified from Ca II and the marginal detection of Mg II as well as 4 ultraviolet lines from Si II and O I. The average velocity shift is 49 $\pm$ 2 km s$^{-1}$ and 58 $\pm$ 4 km s$^{-1}$ for the optical and ultraviolet data, respectively. The difference between these two velocities is most likely due to the absolute wavelength uncertainty of 15 km s$^{-1}$ for the medium resolution grating G130M on {\it COS} (COS Instrument Handbook). Interstellar lines at Si II 1260.4 {\AA}, O I 1302.2 {\AA} and C II 1334.5 {\AA} are also detected with an average velocity of 15 km s$^{-1}$, close to the radial velocities of several nearby clouds, including the Gem Cloud and NGP Cloud \citep{RedfieldLinsky2008}. In both stars, upper limits were derived for a few elements.
\begin{center}
\begin{longtable}{lllllccccc}
\caption{Measured Equivalent Widths of Photospheric Lines and Abundance Determinations} \\
\hline \hline
& & & & \multicolumn{2}{c}{G29-38} & \multicolumn{2}{c}{GD 133} \\
Ion & $\lambda$$^a$ & E$_{low}$ & log $gf$ & EW & [Z/H] & EW & [Z/H]\\
& ({\AA}) & (eV) & & (m{\AA}) & & (m{\AA})\\
\hline
\endfirsthead
\multicolumn{6}{c}{Table \ref{Tab: Lines} --- \emph{Continued}} \\
\hline
\hline
& & & & \multicolumn{2}{c}{G29-38} & \multicolumn{2}{c}{GD 133} \\
Ion & $\lambda$$^a$ & E$_{low}$ & log $gf$ & EW & [Z/H] & EW & [Z/H]\\
& ({\AA}) & (eV) & & (m{\AA}) & & (m{\AA})\\
\hline
\endhead
\endfoot
\hline
\endlastfoot
C I & 1277.6 & 0.01 & -0.40 & 52 $\pm$ 9$^{b-H_2}$ & -6.96 $\pm$ 0.08 & ... & ... \\
C I & 1329.6 & 0.01 & -0.62 & 42 $\pm$ 8$^{b-H_2}$ & -6.85 $\pm$ 0.08 & ... & ...\\
C II & 1335.7 & 0.01 & -0.34 & ... & ... & $<$ 20$^{b-H_2}$ & $<$ -7.9 \\
C & & & & & -6.90 $\pm$ 0.12 & & $<$ -7.9 \\
\\
N I & 1411.9 & 3.58 & -1.3 & $<$ 27 & $<$ -5.7 & $<$ 25 & $<$ -5.8 \\
\\
O I & 1152.2 & 1.97 & -0.27 & 81 $\pm$ 22$^{b-H_2}$ & -5.00 $\pm$ 0.12 & ... & ...\\
O I & 1304.9 & 0.02 & -0.84 & ... & ... & 63 $\pm$ 16 & -6.00 $\pm$ 0.11 \\
O & & & & & -5.00 $\pm$ 0.12 & & -6.00 $\pm$ 0.11 \\
\\
Na I & 5891.6 & 0 & 0.12 & $<$ 24 & $<$ -6.7 & $<$ 30 & $<$ -6.3\\
\\
Mg I & 3830.4 & 2.71 & -0.23 & 14 $\pm$ 3 & -5.62 $\pm$ 0.09 & ... & ...\\
Mg I & 3833 & 2.71 & 0.12, -0.36 & 21 $\pm$ 5 & -5.89 $\pm$ 0.10 & ... & ...\\
Mg I & 3839.4 & 2.72 & 0.39 & 35 $\pm$ 4 & -5.86 $\pm$ 0.05 & ... & ...\\
Mg I & 5174.1 & 2.71 & -0.39 & 31 $\pm$ 4 & -5.65 $\pm$ 0.06 & ... & ...\\
Mg I & 5185.0 & 2.71 & -0.17 & 42 $\pm$ 4 & -5.72 $\pm$ 0.04 & ... & ... \\
Mg II & 4482 & 8.86 & 0.74, 0.59 & 41 $\pm$ 7 & -5.91 $\pm$ 0.08 & 14 $\pm$ 4 & -6.5: \\
Mg & & & & & -5.77 $\pm$ 0.13 & ... & -6.5: \\
\\
Al I & 3945.1 & 0 & -0.62 & $<$ 16 & $<$ -6.1 & $<$ 19 & $<$ -5.7\\
\\
Si II & 1260.4 & 0 & 0.387 & ... & ... & 93 $\pm$ 26 & -6.67 $\pm$ 0.11\\
Si II & 1264.7 & 0.04 & 0.64 & 320 $\pm$ 64$^{b-Si II}$ & -5.76 $\pm$ 0.09 & 115 $\pm$ 20 & -6.70 $\pm$ 0.07\\
Si II & 1265.0 & 0.04 & -0.33 & 320 $\pm$ 64$^{b-Si II}$ & -5.76 $\pm$ 0.09 & 51 $\pm$ 12 & -6.60 $\pm$ 0.13\\
Si II & 1309.3 & 0.04 & -0.43 & 106 $\pm$ 11$^{b-H_2}$ & -5.48 $\pm$ 0.05 &61 $\pm$ 12$^{b-H_2}$ & -6.42 $\pm$ 0.05\\
Si & & & & & -5.60 $\pm$ 0.17 & & -6.60 $\pm$ 0.13\\
\\
S I & 1425.0 & 0 & -0.12 & $<$ 55$^{b-H_2}$ & $<$ -7.0 & $<$ 38$^{b-H_2}$ & $<$ -7.0\\
\\
Ca I & 4227.9 & 0 & 0.27 & 17 $\pm$ 3 & -6.67 $\pm$ 0.07 & ... &... \\
Ca II & 3159.8 & 3.12 & 0.24 & 67 $\pm$ 5 & -6.69 $\pm$ 0.03 & ... &... \\
Ca II & 3180.3 & 3.15 & 0.50 & 109 $\pm$ 5 & -6.61 $\pm$ 0.02 & 37 $\pm$ 4 & -7.23 $\pm$ 0.05 \\
Ca II & 3182.2 & 3.15 & -0.46 & 40 $\pm$ 4 & -6.46 $\pm$ 0.04 & ... &... \\
Ca II & 3707.1 & 3.12 & -0.48 & 24 $\pm$ 4 & -6.65 $\pm$ 0.07 & ... &... \\
Ca II & 3934.8 & 0 & 0.11 & 294 $\pm$ 6 & -6.65 $\pm$ 0.01 & 154 $\pm$ 13 & -7.08 $\pm$ 0.04 \\
Ca II & 3969.6 & 0 & -0.2 & 78 $\pm$ 7 & -6.58 $\pm$ 0.04 & 27 $\pm$ 6 & -7.07 $\pm$ 0.09 \\
Ca II & 8500.4 & 1.69 & -1.4 & 84 $\pm$ 6 & -6.34 $\pm$ 0.03 & ... & ... \\
Ca II & 8544.4 & 1.69 & -0.46 & 185 $\pm$ 16 & -6.54 $\pm$ 0.04 & 42 $\pm$ 7 & -7.33 $\pm$ 0.07 \\
Ca II & 8664.5 & 1.69 & -0.72 & No Data & No Data & 28 $\pm$ 4 & -7.34 $\pm$ 0.07 \\
Ca & & & & & -6.58 $\pm$ 0.12 & & -7.21 $\pm$ 0.13\\
\\
Ti II & 3235.4 & 0.05 & 0.43 & 13 $\pm$ 2 & -8.06 $\pm$ 0.06 & ... & ...\\
Ti II & 3237.5 & 0.03 & 0.24 & 16 $\pm$ 6 & -7.88 $\pm$ 0.15 & ... & ...\\
Ti II & 3350 & 0.61,0.05 & 0.43,0.53 & 31 $\pm$ 4 & -7.95 $\pm$ 0.05 & ... & ...\\
Ti II & 3362.2 & 0.03 & 0.43 & 11 $\pm$ 2 & -8.12 $\pm$ 0.08 & $<$ 12 & $<$ -8.0\\
Ti II & 3373.8 & 0.01 & 0.28 & 19 $\pm$ 3 & -7.79 $\pm$ 0.05 & ... & ...\\
Ti II & 3384.7 & 0 & 0.16 & 19 $\pm$ 4 & -7.70 $\pm$ 0.09 & ... & ...\\
Ti II & 3686.3 & 0.61 & 0.13 & 11 $\pm$ 2 & -7.87 $\pm$ 0.08 & ... & ...\\
Ti & & & & & -7.90 $\pm$ 0.16 & ... & $<$ -8.0 \\
\\
Cr II & 3125.9 & 2.46 & 0.30 & 13 $\pm$ 4 & -7.42 $\pm$ 0.11 & No Data & No Data\\
Cr II & 3133.0 & 2.48 & 0.42 & 10 $\pm$ 4 & -7.61 $\pm$ 0.17 & No Data & No Data\\
Cr II & 3369.0 & 2.48 & -0.09 & ... & ... & $<$ 14 & $<$ -6.8 \\
Cr & & & & & -7.51 $\pm$ 0.12 & & $<$ -6.8\\
\\
Mn II & 3443.0 & 1.78 & -0.36 & $<$ 14 & $<$ -7.2 & $<$ 15 & $<$ -7.0\\
\\
Fe I & 3582.2 & 0.86 & 0.41 & 12 $\pm$ 2 & -5.97 $\pm$ 0.06 & ... & ... \\
Fe I & 3735.9 & 0.86 & 0.32 & 13 $\pm$ 3 & -5.82 $\pm$ 0.09 & ... & ...\\
Fe II & 1361.4 & 1.67 & -0.52 & ... & ... & $<$ 17 & $<$ -5.9 \\
Fe II & 3228.7 & 1.67 & -1.18 & 12 $\pm$ 3 & -5.79 $\pm$ 0.09 & ... & ...\\
Fe & & & & & -5.90 $\pm$ 0.10 & & $<$ -5.9 \\
\\
Ni II & 1335.2 & 0.19 & -0.19 & $<$ 20$^{b-H_2}$ &$<$ -7.3 & $<$ 24$^{b-H_2}$ & $<$ -7.0\\
\label{Tab: Lines}
\end{longtable}
\end{center}
{\bf Notes.} The adopted abundances are also presented in Table \ref{Tab: Abundances}. \\
$^a$ Wavelengths are in vacuum. Atomic data for absorption lines are taken from the Vienna Atomic Line Database \citep{Kupka1999}. \\
$^b$ Blended line. The contributing element is noted in the superscript and the EW is the sum of the blend.
\subsection{Carbon}
The carbon lines in the observed wavelength interval can be contaminated by interstellar absorption because they arise from low lying energy levels. In a stellar environment, the C II 1335.7 line is always stronger than the C II 1334.5 line due to its larger statistical weight. However, as shown in Figure \ref{Fig: C_1}, the observed C II 1335.7 line is relatively weaker in both stars; we conclude that these C II lines are mostly interstellar. In addition, the measured velocity for the C II 1334.5 {\AA} line is offset from all other photospheric lines in G29-38 and GD 133. This line also suffers from additional contamination from H$_2$.
There are C I lines at 1277.6 {\AA} and 1329.6 {\AA} detected in G29-38, as shown in Figures \ref{Fig: C_1} and \ref{Fig: C_G29-38}. The detection is nominally at least 5 $\sigma$ and, in the absence of H$_2$, the derived abundance [C/H] would be -6.90 $\pm$ 0.08. However, due to their proximity to H$_2$ lines, the EW of the C I lines are less certain and we assign a conservative final abundance of -6.90 $\pm$ 0.12. For GD 133, the C II line at 1335.7 {\AA} was used to place the upper limit, which is consistent with the upper bound from the C I 1329.6 {\AA} line in Figure \ref{Fig: C_1}.
\begin{figure}[hp]
\plotone{fig3.pdf}
\caption{Carbon and nickel spectral line region for G29-38 and GD 133. The black line represents HST/COS data smoothed with a 3 pixel boxcar. The red and blue lines represent the model spectra without and with contributions from molecular hydrogen respectively, and with abundances from Table \ref{Tab: Abundances}. Red labels represent lines that are used for abundance determinations in Table \ref{Tab: Lines}. Wavelengths are given in vacuum and the heliocentric reference frame.}
\label{Fig: C_1}
\end{figure}
\begin{figure}[hp]
\plotone{fig4.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for C I lines in G29-38. }
\label{Fig: C_G29-38}
\end{figure}
\clearpage
\subsection{Oxygen}
As presented in Figure \ref{Fig: O_G29-38}, we have detected O I 1152.2 {\AA} in G29-38, which can be reproduced by a model with an oxygen abundance of -5.00. The upper bound from O I triplet lines around 7775 {\AA} also agrees with this result.
In the wavelength coverage of COS, there are several O I lines around 1300 {\AA}, which can be contaminated from geocoronal emissions. When extracting the night time portions of the data for GD 133, we detected O I lines at 1302.1 {\AA} and 1304.9 {\AA}, as shown in Figure \ref{Fig: O_GD133}. The O I 1302.1 {\AA} line comes from the ground state and there were at least two components, which agree with the interstellar and photospheric contributions, respectively. We use the O I 1304.9 {\AA} line and a model with [O/H] = -6.00 can successfully reproduce the data. This derived abundance is consistent with the strength of the O I 1302.1 {\AA} and O I 1306.0 {\AA} lines in Figure \ref{Fig: O_GD133}.
\begin{figure}[hp]
\plotone{fig5.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for the detection of O I in G29-38. The right panel shows Keck/HIRES data smoothed with a 3 pixel boxcar and the spectrum is not flux calibrated. Only one model is plotted for the optical data, which are free from H$_2$ contamination.}
\label{Fig: O_G29-38}
\end{figure}
\begin{figure}[htpb]
\plotone{fig6.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for the night time portions of COS data of GD 133. The spectrum was smoothed with a 3 pixel boxcar. }
\label{Fig: O_GD133}
\end{figure}
\subsection{Magnesium}
Magnesium is detected only in the optical data, as shown in Figure \ref{Fig: Mg}. In G29-38, the magnesium abundance was derived from a total of 7 spectral lines from both Mg I and Mg II. In GD 133, only the Mg II 4482 {\AA} doublet is marginally seen and we were able to derive a tentative magnesium abundance.
\begin{figure}[hp]
\plotone{fig7.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for magnesium lines in the Keck spectra for G29-38 and GD 133. The spectrum in the top panel was smoothed with a 7 pixel boxcar while the ones in the lower panels are averaged by a 3 pixel boxcar. The fit to H$\eta$ in G29-38 is not ideal (see discussion in section 3.1) but we were still able to directly compare the EWs of Mg I lines in the data with the model. Detection of Mg in GD 133 is marginal.}
\label{Fig: Mg}
\end{figure}
\subsection{Silicon}
Several ultraviolet silicon lines are used for the analysis, as shown in Figure \ref{Fig: Si_1}. Si II 1260.4 {\AA} was not used for G29-38 because it is blended with an interstellar line. For G29-38, because Si II 1264.7 {\AA} and 1265.0 {\AA} lines are blended with each other, we report the total EW of this feature in Table \ref{Tab: Lines}. Si II 1309.3 {\AA} can be blended with adjacent H$_2$ lines but the silicon line was readily detected and used in the analysis. For GD 133, the photospheric lines are weaker and easier to deblend. We were able to measure the EW of Si II lines at 1260.4 {\AA}, 1264.7 {\AA} and 1265.0 {\AA} individually to derive the final abundance.
The Si II line at 1260.4 {\AA} in GD 133 is the only readily detected interstellar line that is well separated from the photospheric line. The EW is 48 $\pm$ 21 m{\AA}, corresponding to a Si II column density of 3.8 $\times$ 10$^{12}$ cm$^{-2}$ in that direction, which is comparable to typical ISM values \citep{Lehner2003}.
\begin{figure}[hp]
\plotone{fig8.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for silicon lines. The Si II line at 1260.4 {\AA} comes from the ground state and can be only used for the analysis of GD 133 because it is well separated from the interstellar absorption line.}
\label{Fig: Si_1}
\end{figure}
\clearpage
\subsection{Calcium}
\citet{VonHippelThompson2007} found the EW of the Ca II K-line in G29-38 increased by 70\% in 3 months, which they attributed to a variable mass accretion rate. However, a follow-up study of monitoring the Ca II H \& K, Ca I 4227.9 {\AA} and Mg II 4482 {\AA} lines found no significant variability \citep{DebesMorales2008}. Subsequently, \citet{Thompson2010} found that the EW of the Ca II K-line might be variable by a few percent and concluded that the pulsation model favors polar rather than equatorial accretion. In the present study, we are not concerned about EW uncertainty on a few percent level. There are quite a few calcium lines present in the HIRES spectra in both stars, as listed in Table \ref{Tab: Lines} and shown in Figure \ref{Fig: Ca}. Our measured EWs of the Ca II H \& K and Ca I 4227.9 lines agree within the uncertainties reported in \citet{DebesMorales2008} and \citet{Thompson2010}. In GD 133, there are five Ca II lines detected and [Ca/H] = -7.21. We measured an EW of 154 $\pm$ 13 m{\AA} for the Ca II K-line, which lies within the uncertainty of the EW of 135 m{\AA} from VLT/UVES data but the derived abundance is different due to the updated model atmosphere calculations \citep{Koester2005b, KoesterWilken2006}.
\begin{figure}[hp]
\plotone{fig9.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for calcium lines in the Keck/HIRES data. All spectra were smoothed by a five-point boxcar average.}
\label{Fig: Ca}
\end{figure}
\subsection{Titanium}
Seven titanium lines were detected in the Keck/HIRES spectrum of G29-38 and three are displayed in Figure \ref{Fig: TiCr}. Although most titanium lines have EWs smaller than 20 m{\AA}, they are readily detected due to the high S/N of the data. In GD 133, the Ti II 3362.2 {\AA} line was used to derive the upper limit.
\begin{figure}[hp]
\plotone{fig10.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for titanium and chromium lines. Both spectra were smoothed by a seven-point boxcar average.}
\label{Fig: TiCr}
\end{figure}
\subsection{Chromium}
In G29-38, there are two chromium lines at 3125.9 {\AA} and 3133.0 {\AA} in the observed wavelength interval as shown in Figure \ref{Fig: TiCr}. The detection of each individual line is marginal but the presence of lines at two correct wavelengths makes the identification more convincing. In GD 133, these two chromium lines fall off the edge of echelle orders and the next strongest chromium line at 3369.0 {\AA} was used to derive the upper limit.
\subsection{Iron}
Three optical iron lines are detected in G29-38 and two of them are presented in Figure \ref{Fig: Fe_G29-38}. Iron is not detected in GD 133. Since molecular hydrogen does not significantly interfere near the Fe II 1361.4 {\AA} region in GD 133, we use the absence of this strong Fe line to derive an upper limit to the abundance.
\begin{figure}[hp]
\plotone{fig11.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for iron lines in G29-38. Both spectra were smoothed by a five-point boxcar average. The absorption feature around 3738 {\AA} is a blend of Ca II and Fe I lines, which is dominated by Ca II.}
\label{Fig: Fe_G29-38}
\end{figure}
\clearpage
\subsection{Additional Upper Limits}
{\it Nitrogen, Sulfur, Nickel}: The nitrogen triplet around 1200 {\AA} is in the wing of Lyman $\alpha$ and the S/N of the spectrum is very low, making N I 1411.9 {\AA} the best nitrogen line in our data set, as shown in Figure \ref{Fig: N}. To derive the sulfur upper limit, S I 1425.03 {\AA} was used and the fit is presented in Figure \ref{Fig: S}. Due to the presence of adjacent molecular hydrogen lines, the sulfur upper limit is less constraining in G29-38 than in GD 133. We also used Ni II 1335.2 {\AA} to determine the upper limit as shown in Figure \ref{Fig: C_1}.
{\it Sodium, Aluminum, Manganese}: Optical spectral lines were used to determine the upper limits for these elements. The details are listed in Table \ref{Tab: Lines} and their spectra are not presented.
\begin{figure}[hp]
\plotone{fig12.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for N I 1411.9 {\AA}, which is used to derive nitrogen upper limits. Only a model without molecular hydrogen is shown and used for the analysis. All the unlabeled features in the data are from molecular hydrogen.}
\label{Fig: N}
\end{figure}
\begin{figure}[hp]
\plotone{fig13.pdf}
\caption{Similar to Figure \ref{Fig: C_1} except for S I 1425.0 {\AA}, which is used to derive sulfur upper limits.}
\label{Fig: S}
\end{figure}
\section{Discussion}
We have determined the abundances of 8 elements in G29-38, namely C, O, Mg, Si, Ca, Ti, Cr and Fe, and placed stringent upper limits on S and Ni. There are only three hydrogen dominated white dwarfs with more than 8 elements determined, i.e., 11 for WD 1929+012, 10 for WD 0843+516 \citep{Gaensicke2012} and 9 for NLTT 43806 \citep{Zuckerman2011}. In GD 133, we have detected O, Si and Ca, marginally detected Mg, and placed a meaningful upper limit on the carbon abundance.
Both G29-38 and GD 133 show excess infrared radiation coming from an orbiting dust disk \citep{ZuckermanBecklin1987, Reach2005b, Reach2009, Jura2007b}. The accreted material in both G29-38 and GD 133 is likely to come from one large parent body rather than a blend of several small ones, mainly because collisions among different objects would evaporate all the dust particles and destroy the dust disk \citep{Jura2008}. Because the settling times of heavy elements in these white dwarfs are less than a year \citep{Koester2009a}, much shorter than the disk lifetime of $\sim$ 10$^5$ yr \citep{Farihi2012b}, the accretion is likely to be in a steady state, wherein the rate of material falling onto the white dwarf atmosphere is balanced by the settling out of the convection zone \citep{Koester2009a}. We can infer the composition of the accreted extrasolar planetesimals from the accretion flux, which depends on the settling time of each element.
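In this steady-state picture, the accretion flux of an element Z follows schematically from its photospheric mass fraction $X_Z$, the mass of the convection zone $M_{cvz}$ and its settling time $\tau_Z$ (a minimal rendering of the framework of \citet{Koester2009a}, in our notation),
\begin{displaymath}
\dot{M}_Z = \frac{X_Z\,M_{cvz}}{\tau_Z},
\end{displaymath}
so that abundance ratios in the parent body are obtained from photospheric ratios by weighting with the inverse ratio of the settling times.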
\subsection{Calculation of Settling Times}
Very recently, \citet{Deal2013} have drawn attention to the fingering or thermohaline instability; they find that this effect is important in hydrogen dominated white dwarfs and can change the derived accretion rates by orders of magnitude. Unfortunately they do not publish enough details for us to draw conclusions about the relevance of the effect for the white dwarfs studied here. Some problems we see in \citet{Deal2013} are:
\begin{itemize}
\item \citet{Deal2013} use a prescription for the thermohaline diffusion coefficient from \citet{VauclairTheado2012}, which is
claimed to be physically more realistic than previous methods. However, their formula leads to an infinite value for the coefficient at the bottom of the convection zone, which is unrealistic.
\item In our model for G29-38 the bottom of the convection zone is at log $(M_{cvz}/M_{star})$ = -13.9 (see Table \ref{Tab: Parameters}) and the layers above this will be homogeneously mixed in a matter of seconds. In \citet{Deal2013}, model 3, presented in both Figures 1 and 2, has no convection at all. Model 2, whose parameters are closest to those of G29-38, has a convection zone a factor of 100 smaller than that of G29-38. None of these models can represent our current objects.
\item The H/He interface at log $(M_{cvz}/M_{star})$ = -5 is not a sharp boundary, but a transition zone. The helium will mix upward into the hydrogen and a significant helium fraction will still be present several pressure scale heights above this layer. This will lead to an increasing molecular weight, an effect apparently not considered in \citet{Deal2013}.
\item Their main argument is the fact that in previous determinations the accretion rates for helium white dwarfs seemed to be systematically higher than for hydrogen white dwarfs. In our opinion, a much more likely reason is the uncertainty in the depth of the convection zones in helium white dwarfs, which is reflected in the corrected diffusion times\footnote{see www.astrophysik.uni-kiel.de/$\sim$koester}, as well as possibly non-steady state accretion (see section 5.3 for more discussion).
\end{itemize}
For the time being, we do not believe that the \citet{Deal2013} calculations are applicable to G29-38 or GD 133; but we will reconsider our conclusions, once more details become available.
\subsection{G29-38}
The abundances in the parent body that accreted onto G29-38 are calculated by correcting for the settling effect of each element in the atmosphere, as shown in Table \ref{Tab: Abundances}. In addition, the abundances can also be derived by fitting the infrared spectrum of the dust disk, which is composed of pulverized extrasolar planetesimals prior to accretion onto the white dwarf. G29-38 has the brightest known dust disk due to its proximity to the Earth and \citet{Reach2005b, Reach2009} found that the dominant minerals are amorphous carbon (C), amorphous and crystalline silicates (MgFeSiO$_4$, Fe$_2$SiO$_4$, Fe$_2$Si$_2$O$_6$, CaMgSi$_2$O$_6$, Mg$_2$Si$_2$O$_6$), water ice (H$_2$O) and metal sulfides (Mg$_{0.1}$Fe$_{0.9}$S). A comparison is shown in Figure \ref{Fig: comp} where we see that the derived number ratios of n(O)/n(Mg), n(Si)/n(Mg), n(Ca)/n(Mg) and n(Fe)/n(Mg) are in rough agreement in the atmosphere and in the surrounding dust disk. The biggest discrepancy is the abundances of carbon and sulfur.
While \citet{Reach2005b, Reach2009} simulated the disk as an optically thin dust torus, we argue that it is more likely to be mostly opaque, as described in \citet{Jura2003, Jura2009a}, for the following reasons. (i) In the optically thin disk model around G29-38, \citet{Reach2009} derived a total disk mass of 2 $\times$ 10$^{19}$ g, which is three orders of magnitude smaller than the lower limit on the mass of heavy elements in the atmospheres of dusty helium dominated white dwarfs [see, for example, \citet{Jura2012}]. Assuming there is no difference between dusty hydrogen and helium white dwarfs, the disk must be much more massive and therefore optically thick. (ii) For dusty white dwarfs hotter than 20,000 K, the entire optically thin disk would be located outside of the tidal radius \citep{XuJura2012}. There is no viable mechanism to explain the presence of so much hot dust outside the tidal radius of the white dwarf. In the opaque disk model, the radiation from the disk is the main contributor to the continuum, rather than featureless emission from minerals. This explains the discrepancy in carbon abundance, because the emissivity spectrum of amorphous carbon is featureless \citep{Reach2009}. To account for the strong 10 $\mu$m silicate emission feature, \citet{Jura2009a} proposed the presence of an outer optically thin region or emission from a hot region on top of the opaque disk. Thus, the optically thin model from \citet{Reach2009} for the 10 $\mu$m emission features should still apply, and most of their derived abundances agree with the values in the photosphere. However, the inclusion of niningerite (Mg$_{0.1}$Fe$_{0.9}$S) as the metal sulfide led \citet{Reach2009} to derive a high sulfur abundance; this is inconclusive due to the low S/N of the infrared spectrum (see Figure 5 in \citet{Reach2009}). This is the first direct comparison of elemental abundances derived from fitting the infrared spectrum of a dust disk with those from the spectroscopic analysis of a white dwarf atmosphere; the overall agreement is respectable.
\begin{figure}[hp]
\plotone{fig14.pdf}
\caption{A comparison between the composition of the extrasolar planetesimal accreted onto G29-38 derived from atmospheric analysis (from Table \ref{Tab: Abundances}, this work) and fitting the infrared spectrum of the dust disk \citep{Reach2005b, Reach2009}. The atoms are arranged with increasing atomic weight. The ordinate represents the logarithmic value of the number ratios between an element and magnesium, one of the dominant elements. One sigma error bars are plotted. The overall agreement is respectable except for the carbon and sulfur abundances; see section 5.2 for details.}
\label{Fig: comp}
\end{figure}
To find the best solar system analog to the composition of the parent body accreted onto G29-38, we follow \citet{Xu2013a} and perform a reduced chi-squared analysis with different types of meteorites. We consider 10 elements in total, including C, O, Mg, Si, S, Ca, Ti, Cr, Fe and Ni. We assigned the uncertainty in number ratios for S and Ni to be 0.17 dex, the biggest uncertainty among the elements with a detection in G29-38. Therefore, these two elements contribute to the reduced chi-squared value, but with relatively low weight. As shown in Figure \ref{Fig: Chi}, several single meteorites fall within the 95\% confidence level, including bulk Earth, CR chondrites, primitive achondrites and mesosiderites, a type of stony-iron meteorites. Mesosiderites are a less promising candidate because the mass fraction of Mg, one of the major elements, is 0.3 dex lower in mesosiderites than in G29-38.
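For concreteness, the statistic we compute is of the standard form (a schematic rendering in our notation; the implementation follows \citet{Xu2013a}),
\begin{displaymath}
\chi^2_\nu \propto \sum_{i=1}^{N}\frac{\left(\log f_i^{WD} - \log f_i^{met}\right)^2}{\sigma_i^2},
\end{displaymath}
where $f_i^{WD}$ and $f_i^{met}$ are the mass fractions of element $i$ relative to the summed mass of O, Mg, Si and Fe in the accreted parent body and in a given meteorite class, respectively, $\sigma_i$ are the uncertainties in dex, and the proportionality constant is the usual normalization by the number of degrees of freedom.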
\begin{figure}[hp]
\epsscale{1.1}
\plotone{fig15.pdf}
\caption{Computed reduced chi-squared values between the observed composition and meteorites. Different meteorite groups are offset along the ordinate for clarity. The upper panel is for G29-38 and the lower panel for GD 133. For G29-38, we compare the mass fractions of 10 elements (C, O, Mg, Si, S, Ca, Ti, Cr, Fe and Ni) relative to the summed mass of O, Mg, Si and Fe; for GD 133, 5 elements (C, O, Mg, Si and Ca) are considered relative to the summed mass of O, Mg and Si because only an upper limit is obtained for Fe. The dashed lines represent the 95\% confidence level. The black triangle represents one of the blends that best matches the abundances observed in G29-38; it consists of 60\% H chondrite and 40\% howardite. The meteorite database is described in \citet{Xu2013a} with most data from \citet{Nittler2004} and some Martian meteorite data from \citet{McSween1985}. The carbon abundance in CR chondrites and lodranite is from \citet{Alexander2007} and \citet{GradyWright2003}, respectively. The bulk composition of Earth is from \citet{Allegre2001}. Terrestrial rocks include data from the continental crust, upper mantle and lower mantle \citep{Anderson2007}.}
\label{Fig: Chi}
\end{figure}
A detailed comparison of all the elements scaled to CI chondrites, the most primitive material in the solar system, is shown in Figure \ref{Fig: Abundance_G29-38}. We see that the mass fractions of volatile elements, such as carbon and sulfur, are depleted by at least a factor of 9, while the refractory elements, including calcium and titanium, are enhanced. The composition of the best-match solar system object -- bulk Earth -- is also plotted for comparison. The largest discrepancies between the compositions of the object accreted onto G29-38 and bulk Earth are in Ca and Ti, two refractory elements.
When considering a blend of two meteorites, several combinations can all provide a good fit to the composition observed in G29-38. One example includes 60\% H chondrite and 40\% howardite, as shown in Figures \ref{Fig: Chi} and \ref{Fig: Abundance_G29-38}. In general, some refractory-enhanced achondrites, such as howardites or mesosiderites, are required to reproduce the refractory abundance and chondritic material is needed to adjust the overall abundance pattern. Though the exact mechanism is not known, it is clear that the parent body accreted onto G29-38 has experienced post-nebular processing, such as differentiation and collision, which is found to be common for extrasolar planetesimals \citep{Xu2013a}.
\begin{figure}[hp]
\epsscale{1}
\plotone{fig16.pdf}
\caption{For G29-38, mass fraction of each element with respect to the sum of oxygen, magnesium, silicon and iron, normalized to that of CI chondrites \citep{WassonKallemeyn1988}. Arrows denote upper limits. The elements are ordered by increasing condensation temperature. Also plotted are the best-match single solar system object, bulk Earth, as well as one of the best matches when considering a blend -- 60\% H chondrite and 40\% howardite. Abundances for G29-38 are taken from Table \ref{Tab: Abundances}.}
\label{Fig: Abundance_G29-38}
\end{figure}
G29-38 has accreted an extrasolar planetesimal which is enhanced in both calcium and titanium, which is less frequently seen than enhancement of calcium alone. In a sample of well studied polluted white dwarfs compiled in \citet{JuraYoung2014}, 6 out of 12 stars have [Ca/Mg] at least a factor of two higher than the value for CI chondrites; in comparison, only 3 out of 9 stars show a factor of two enhancement in both [Ca/Mg] and [Ti/Mg], including PG 1225-079, GD 362 and G29-38. These three stars are the best candidates for accretion of a ``normal" object and a refractory-dominated planetesimal \citep{JuraXu2013}. In addition, both G29-38 and GD 362 have well constrained stellar masses from parallax measurements \citep{vanAltena2001, Kilic2008b}. Using the initial mass to final mass relationship \citep{Williams2009}, we derive progenitor masses of 3.95 M$_\odot$ and 2.95 M$_{\odot}$ for G29-38 and GD 362, respectively, which are higher than the average white dwarf progenitor mass \citep{Kleinman2013}. Unfortunately, PG 1225-079 is in the stellar parameter region where temperature and surface gravity are coupled when using only the spectroscopic method \citep{Klein2011}. At least 2 of the 3 stars that have accreted refractory-dominated planetesimals have high progenitor masses; this is consistent with the model that refractory-dominated planetesimals are more likely to survive the red giant stage of stars with relatively high main-sequence masses \citep{JuraXu2013}.
As shown in Figure \ref{Fig: Abundance_G29-38}, the oxygen abundance in G29-38 is depleted by a factor of 1.5 compared to that in CI chondrites. The total amount of oxygen is barely enough to combine with all the heavy elements into the oxides; there might have been some metallic iron in the accreted planetesimal. No excess oxygen is left to be in the form of H$_2$O and water is very depleted in the planetesimal accreted onto G29-38.
Though \citet{Reach2009} derived a high water abundance, it is not supported by the NIRSPEC data shown in Figure \ref{Fig: NIRSPEC}. No emission lines from water are detected. We see a gentle slope towards longer wavelength, which is consistent with the dust disk model, peaking longward of 4 $\mu$m. The IRAC 3.6 $\mu$m flux of G29-38 is $\sim$ 10\% higher than predicted by the model and \citet{Farihi2008a} hypothesized contributions from some PAH features. A subsequent study by \citet{Reach2009} found that there is significant fluctuation at 3.6 $\mu$m with an amplitude of $\sim$ 5\%, which could account for this discrepancy. Our NIRSPEC data also exclude any circumstellar emissions from PAH features as observed in Herbig Ae/Be stars \citep{Meeus2001}.
\subsection{GD 133}
For GD 133, we also performed a reduced chi-squared analysis for 5 heavy elements, including C, O, Mg, Si and Ca, with an uncertainty of 0.2 dex for C and Mg. Their mass fractions are calculated with respect to the summed mass of O, Mg and Si, the three most abundant elements in GD 133. As shown in Figure \ref{Fig: Chi}, CR chondrites and mesosiderites match the abundance pattern observed in GD 133.
A detailed comparison of the composition of the parent body accreted onto GD 133 with solar system objects is shown in Figure \ref{Fig: Abundance_GD133}. Compared to CI chondrites, carbon is depleted by at least a factor of 20 in GD 133 while calcium is enhanced by a factor of 3. Magnesium is slightly depleted and [Mg/Ca] = 0.54, which is on the high side among all polluted white dwarfs \citep{JuraXu2013}. Mesosiderites give a better fit to the overall abundance pattern, with a similar Ca enhancement and Mg depletion. \citet{Xu2013a} also found that mesosiderites are the best match to the abundance pattern in GD 362 due to the enhancements of Ca, Ti and Al. With constraints from only 5 elements in GD 133, it is hard to derive additional information. There exist several strong Fe II and Mg II lines around 2800 {\AA}, which would provide much more information about the nature of the accreted material.
\begin{figure}[hp]
\plotone{fig17.pdf}
\caption{Similar to Figure \ref{Fig: Abundance_G29-38} except for GD 133 and the mass fraction of an element normalized to the sum of oxygen, magnesium and silicon. There is no error bar associated with magnesium because it is only marginally detected. The compositions of two best match meteorites from the reduced chi-squared analysis, mesosiderite and CR chondrite, are also plotted for comparison. }
\label{Fig: Abundance_GD133}
\end{figure}
To date, all known dusty white dwarfs are heavily polluted, with mass accretion rates of at least 1 $\times$ 10$^8$ g s$^{-1}$ \citep{Farihi2009, Brinkworth2012}. GD 133 has a dust disk, which reprocesses about 0.5\% of the incoming star light \citep{Jura2007b,Farihi2010b}. Assuming a chondritic iron to silicon ratio, the accretion rate of iron is 5.9 $\times$ 10$^6$ g s$^{-1}$ and the total accretion rate is 3.0 $\times$ 10$^7$ g s$^{-1}$, the lowest of all dusty white dwarfs. Even with an iron abundance of [Fe/H] = -5.90 (the upper limit listed in Table \ref{Tab: Abundances}), the total accretion rate can only add up to 7.6 $\times$ 10$^7$ g s$^{-1}$, marginally lower than for all other dusty white dwarfs. In a steady state model, Poynting-Robertson drag provides a lower bound on the accretion rate \citep{Rafikov2011a, XuJura2012}:
\begin{equation}
\dot{M}=\frac{16\phi_r}{3}\frac{r_*^3}{r_{in}}\frac{\sigma T_{eff}^4}{c^2}
\end{equation}
$\phi_r$ is an efficiency coefficient, taken as 1; $\sigma$ and $c$ are the Stefan-Boltzmann constant and the speed of light, respectively. For GD 133, the stellar temperature T$_{eff}$ is listed in Table \ref{Tab: Parameters} and the stellar radius is r$_*$=0.012r$_\odot$. Depending on the disk inclination, the inner radius of the disk can vary \citep{Jura2007b}. To derive a lower limit on the mass accretion rate, we take the largest possible inner radius r$_{in}$=23r$_*$ for a nearly face-on disk, and consequently $\dot{M}$ = 2.5 $\times$ 10$^8$ g s$^{-1}$, about an order of magnitude higher than the most likely inferred value. The exceptionally low accretion rate in GD 133 provides direct evidence that equation (1) does not always hold, possibly because the accretion process onto the star is not always in a steady state.
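For reference, the numbers entering this estimate can be checked directly: with r$_{in}$=23r$_*$, equation (1) reduces to $\dot{M}=(16/69)\,r_*^2\,\sigma T_{eff}^4/c^2$. Taking $r_*\simeq8.3\times10^{8}$ cm and, for illustration, $T_{eff}\simeq12{,}600$ K (an assumed round value; the adopted one is listed in Table \ref{Tab: Parameters}), we have $\sigma T_{eff}^4\simeq1.4\times10^{12}$ erg cm$^{-2}$ s$^{-1}$ and
\begin{displaymath}
\dot{M}\simeq\frac{16}{69}\times\frac{(8.3\times10^{8}\ {\rm cm})^2\times1.4\times10^{12}\ {\rm erg\ cm^{-2}\ s^{-1}}}{9.0\times10^{20}\ {\rm cm^{2}\ s^{-2}}}\simeq2.5\times10^{8}\ {\rm g\ s^{-1}}.
\end{displaymath}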
There exists additional evidence for time-varying non-steady state accretion. Based on the different accretion rates derived from hydrogen and helium atmosphere white dwarfs, \citet{Farihi2012b} postulated a non-steady state accretion model including a high accretion stage and a low accretion stage. Now with an updated helium model atmosphere and settling times \citep{Xu2013a}, the difference has narrowed but is still present.
\section{Perspective}
So far, there are 10 white dwarfs with detections of at least O, Mg, Si and Fe in the atmosphere. As presented in Figure \ref{Fig: Pie}, they sample a variety of stellar temperatures and surface gravities. Five of these stars have a hydrogen-dominated atmosphere and five have a helium-dominated atmosphere. Eight stars also have a dust disk. Yet, to zeroth order, the resemblance of their elemental compositions to bulk Earth is robust. No carbon rich extrasolar planetesimals, e.g., analogs of comet Halley with 28\% carbon \citep{Jessberger1988} or of interplanetary dust particles with $\sim$ 10\% carbon \citep{Thomas1993}, have been identified in the current sample. The only white dwarf that has accreted objects with a considerable amount of water is GD 61 (\# 6 in the plot), which contains 26\% water by mass \citep{Farihi2013}. The general properties of extrasolar planetesimals can be summarized as follows \citep{JuraYoung2014}: (i) oxygen, iron, silicon and magnesium always dominate and make up more than 85\% of the total mass; (ii) carbon is always depleted relative to the solar abundance; (iii) viewed as an ensemble, water is less than 1\% of the total accreted mass, but exceptions exist. As shown in Figure \ref{Fig: Pie}, there are variations among the abundances of different elements, but they are only a factor of 2-3, comparable to the errors. To move beyond a zeroth order result, one must determine abundances of as many trace elements as possible, as in the case of GD 362 \citep{Zuckerman2007, Xu2013a}. Alternatively, one can assess the abundances of a few elements in an ensemble of stars, e.g., \citet{JuraXu2012, JuraXu2013}.
\begin{figure}[hp]
\plotone{fig18.pdf}
\caption{A compilation of all polluted white dwarfs with detections of at least O, Mg, Si and Fe; 8 of these stars also have good constraints on the carbon abundance. The abscissa marks white dwarf effective temperatures, corresponding to a cooling age less than 500 Myr; the ordinate is surface gravity, which shows a main sequence mass between 1.8-4.0 M$_{\odot}$ for the current sample. For clarity, some objects are slightly offset in their plotted positions. The abundances have been corrected for the effect of settling and we only show the mass fraction of O, Mg, Si and Fe; the rest are left blank. The size of a pie correlates with the accretion rate (not to scale). We see that O, Mg, Si and Fe are always the dominant elements in a variety of extrasolar planetesimals, resembling bulk Earth. No carbon rich planetesimals similar to comet Halley have been identified so far. The white dwarfs are ordered with increasing stellar temperatures. {\bf References:} Hydrogen dominated white dwarfs: 1: G29-38 (this paper), 7: PG 1015+161, 8: WD 1226+110, 9: WD 1929+012, 10: WD 0843+516 \citep{Gaensicke2012}; helium dominated white dwarfs: 2: WD J0738+1835 \citep{Dufour2012}, 3: HS 2253+8023 \citep{Klein2011}, 4: G241-6, 5: GD 40 \citep{Jura2012}, 6: GD 61 \citep{Farihi2011a, Farihi2013}. All white dwarfs except \# 3 and 4 have a dust disk. Bulk Earth: \citet{Allegre2001}. Comet Halley: \citet{Jessberger1988}}
\label{Fig: Pie}
\end{figure}
\section{Conclusions}
In this paper, we report optical and ultraviolet spectroscopic studies of two externally-polluted hydrogen dominated white dwarfs, G29-38 and GD 133. For G29-38, with the exception of carbon and sulfur, the derived elemental abundances agree reasonably well with the values obtained from fitting the infrared spectrum of the dust disk. Both stars have accreted objects that show a pattern of volatile depletion and refractory enhancement. The parent body accreted onto G29-38 has experienced post-nebular processing and can be best explained by a blend of chondritic material and a refractory-enhanced object. The total mass accretion rate in GD 133 is significantly lower than for all other dusty white dwarfs, suggesting non-steady state accretion. In a sample of ten stars, we find that the elemental compositions of extrasolar planetesimals are similar to bulk Earth regardless of their evolutionary history.
We thank G. Mace for helping with the NIRSPEC observing run and useful discussions about data reduction procedures, C. Melis for helping with HIRES observing runs in 2008, and B. Holden for useful email exchanges regarding the MAKEE software. Support for program \#12290 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This work has been partly supported by NSF \& NASA grants to UCLA to study polluted white dwarfs.
\bibliographystyle{apj}
\section{Introduction}
The Harborth graph (see Figure 1) is the smallest known example of a 4-regular planar unit-distance graph, that is, a planar graph all of whose edges are of unit length, with exactly four edges meeting at each vertex. This graph was named after its discoverer H.\ Harborth, who first presented it to the general public as a research problem in \cite{HarbStreichh} and to a large international audience in a talk at the Eug\`ene Strens Memorial Conference on Recreational Mathematics and its History in 1986 \cite{HarbMatch,MathLand}. At both occasions he posed the question whether a smaller example of a 4-regular planar unit-distance graph could be found.
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{HPcomplXb1.eps}
\caption{The Harborth graph embedded in the Euclidean plane.}
\label{HarborthComplete}
\end{figure}
Curiously enough, up until now nearly all published pictures of the Harborth graph -- even the original ones in \cite{HarbStreichh,HarbMatch}, as well as those in textbooks \cite{HarbKem, Pearls} -- seem to be slightly vague and inaccurate, with the vertices always being depicted by large dots. Furthermore, no analytical description of the Harborth graph has been given yet. That is, if we consider the graph as embedded in the Euclidean plane with a given coordinate system, the coordinates of the vertices have never been calculated exactly. This has gone to the point that the Harborth graph was even thought by some to be nonrigid\footnote{
If we consider the Harborth graph as a (mechanical) framework consisting of rigid bars interconnected by rotatable joints, nonrigidity means that some vertices can be moved with respect to each other, that is, the whole framework admits motions different from congruences \cite{CombEEMech}.}, which, as the results of this paper imply, cannot be the case.
Thus the wish for an ``exact'' description remained. This wish has recently been expressed again on the world wide web \cite{MathPuzzle}, which prompted this paper.
With the advent of dynamic geometry systems, several authors were finally able to produce more precise pictures \cite{HarbGeoExp,MathPuzzle}, which led to further evidence that there is one unique realisation of the Harborth graph in the Euclidean plane. With this paper we go one step further: using one particular way of construction, we set up a set of quadratic equations which completely describes the coordinates of crucial vertices of the Harborth graph. Using this initial set of equations, we will show that all the coordinates are algebraic numbers, and we will calculate their minimal polynomials with the help of a computer algebra system. Even though, as we will see later on, it is impossible to solve the corresponding algebraic equations exactly (which in our understanding means in terms of radicals), we have nevertheless achieved an exact analytic description of the Harborth graph, since, together with easily calculated numerical approximations of the actual coordinates, these polynomials uniquely determine each coordinate.
The author is indebted to C.\ Adelmann and H.\ L\"owe from the Technical University Braunschweig, Germany, for a number of very valuable discussions on the subject. Furthermore he owes thanks to the Institute for Mathematical Physics of the TU Braunschweig, especially to the research group of R.F.\ Werner, for making its computing facilities available to him for this research.
\section{Using Dynamic Geometry Software and Numerical Analysis}
\subsection{Geometric construction}
Because of the obvious twofold symmetry of the Harborth graph, it is enough to analyse one of its quarters. Therefore, in a first step we construct one of these quarters (see Figure 2), using one of the existing (imprecise) first generation pictures
as a blueprint.
We start from an initial isosceles triangle $ABC$ of fixed but arbitrary height $T,$ with two sides being of unit length, and a neighbouring symmetric trapezoid $BCDE,$ which has the side $BC$ in common with the initial triangle. The parallel sides of the trapezoid are chosen to be of length $2$ and $3$ respectively. The remaining points are constructed from this initial configuration by compass and ruler techniques. In the following we list the necessary steps, where $\hbox{Circ}(P,r)$ denotes the circle with center $P$ and radius $r,$ and $\cap$ the operation of letting two geometric figures intersect. Thus we get
\begin{eqnarray}
F &:=& \hbox{Circ}(A,1) \cap \hbox{Circ}(E,1) \label{Fconst}\\
G &:=& \hbox{Circ}(F,1) \cap \hbox{Circ}(D,2) \label{Gconst}\\
H &:=& \hbox{Circ}(D,2) \cap \hbox{Circ}(G,2) \label{Hconst}\\
J &:=& \hbox{Circ}(F,1) \cap \hbox{Circ}(G,1) \label{Jconst}
\end{eqnarray}
Although the intersection of two circles usually consists of two points, we use this notation as if there were no ambiguity. We are allowed to do this because we choose the resulting points of intersection according to our blueprint. Thus, e.g., $F$ is chosen in such a way that the quadrangle $ABEF$ is convex. In the sequel we will call the configuration of the points $A$ to $J$ thus constructed the \textbf{Harborth configuration}. We will do so even if the parameter $T$ initially has not been chosen correctly, so that the configuration cannot be completed to the whole Harborth graph.
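For readers who wish to retrace the construction numerically, the basic building block is the intersection of two circles. A minimal Mathematica sketch (a helper of our own, with hypothetical names, not part of the original construction) is
\begin{verbatim}
circInt[p1_, r1_, p2_, r2_] := Module[{x, y},
  {x, y} /. NSolve[{(x - p1[[1]])^2 + (y - p1[[2]])^2 == r1^2,
                    (x - p2[[1]])^2 + (y - p2[[2]])^2 == r2^2},
                   {x, y}]]
\end{verbatim}
Applied with numerical centers and radii, e.g.\ to the two circles of (\ref{Fconst}), it returns both intersection points, of which the one matching the blueprint has to be kept.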
Clearly, proceeding as described, using an arbitrary nonnegative value for the height $T$ of the initial triangle, the line through the final crucial points $H$ and $J$ will only by (a very small) chance meet the line through $A$ and $C$ at an angle $\varphi$ of $90^\circ.$ But in order that the Harborth configuration can be completed to the whole Harborth graph, we have to make sure that this last condition is satisfied.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]
{Harborth_Publ2.eps}
\caption{A quarter of the Harborth graph, created with GeoGebra \protect\cite{geogebra} - the point $B$ (and thus the height $T$) chosen so that $\varphi=90^\circ,$ i.e., so that the configuration can be completed to the whole Harborth graph.}
\end{center}
\label{HarborthPic}
\end{figure}
Using the dynamic geometry software GeoGebra \cite{geogebra}, we are able to manipulate the Harborth configuration as a function of the parameter $T.$ Furthermore, with GeoGebra we are able to read off approximate values (up to five decimal places) for the resulting angles and coordinates. Already at this early stage of our investigation, some important observations can be made:
\begin{remark}
\label{1stresults}$\,$\hfill\par
\hangindent
\leftmargini
\textup{(1)}\hskip\labelsep
The point $F,$ and thus the whole Harborth configuration only exists for $T\in[0,b],$ where $b$ is approximately $0.13504.$ \par
\hangindent
\leftmargini
\textup{(2)}\hskip\labelsep
For $T\in [0,b],$ the angle $\varphi$ between $AC$ and $HJ$ lies approximately in the interval $[85.88496^\circ,\,94.59043^\circ].$\par
\hangindent
\leftmargini
\textup{(3)}\hskip\labelsep
For $T\simeq0.12073,$ the angle $\varphi$ approximates $90^\circ.$
\end{remark}
\smallskip
The upper bound $b$ in the above remark is determined by the fact that for $T>b$ the distance between $A$ and $E$ becomes greater than $2,$ and the circles with centers $A$ and $E$ of radius $1$ do not meet anymore.
As hinted at in the introduction, once we have calculated the exact values for the extremal angles - which we will do in one of the following sections - the observations collected in Remark \ref{1stresults} allow us to give the first analytic proof\footnote{The basic idea of this proof was first communicated to the author by H.\ L\"owe in 2003. Here it is presented with his kind permission.} of the planarity of the Harborth graph which does not resort to pictures or models only.
\begin{theorem}
\label{TExistenz}
There exists $T\in[0,b]$ such that $\varphi=90^\circ$ exactly, and the Harborth configuration can be completed to the Harborth graph.
\end{theorem}
\begin{proof}{\cite{LoeweHarborth}}
Since the geometric operations we used in our construction (i.e., drawing circles and letting them intersect) depend continuously on their parameters (i.e., centers and radii), and since the composition of continuous functions is again continuous, the angle $\varphi$ depends continuously on the height $T$ as long as the Harborth configuration exists. By the Intermediate Value Theorem the existence of at least one $T$ in the above interval $[0,b]$ is assured such that $\varphi$ is precisely $90^\circ.$
\end{proof}
\begin{corollary}
The Harborth graph indeed is a 4-regular unit-distance graph which is planar.
\end{corollary}
From now on, our main goal will be to determine a precise description of this particular $T,$ the existence of which we have shown above, without resorting to trial-and-error.
\subsection{Setting up algebraic equations}
\label{settingupCs}
To describe the points of the Harborth graph more precisely we need to introduce coordinates. For the time being, we choose the point $A$ as the center of our coordinate system, and the ray which extends the base side $AC$ of the triangle $ABC$ as the positive $x$-axis. When considering the Harborth graph as a whole, this is by far not the most obvious choice. In fact, in a later section we will use its center of symmetry as the origin of a more natural coordinate system, and then will have to ''translate'' our intermediate results. As we proceed, we will see that in some places this will prove to be quite cumbersome, which will retroactively justify our initial choice of coordinates.
Now let $(t,T)$ be the coordinates of the point $B$. Since $A,$ $B,$ and $C$ form an isosceles triangle, the coordinates of the point $C$ are given by $(2t,0).$
Denoting the coordinates of any point $P$ of the graph different from $B$ with $(x_P,y_P),$ we have:
\begin{eqnarray}
t^2 + T^2 - 1 & = & 0\label{IniTri}\\
-t\cdot(x_D - 2\cdot t) + T\cdot y_D - \frac{3}{2}& = & 0\label{TrapezD1}\\
(x_D - 2\cdot t)^2 + y_D^2 - 9 & = & 0\label{TrapezD2}\\
t\cdot(x_E - t) - T\cdot(y_E - T) + 1 & = & 0\label{TrapezE1}\\
(x_E - t)^2 + (y_E - T)^2 - 4 & = & 0\label{TrapezE2}\\
x_F^2 + y_F^2 - 1 & = & 0\label{KoordF1}\\
(x_E - x_F)^2 + (y_E - y_F)^2 - 1 & = & 0\label{KoordF2}\\
(x_F - x_G)^2 + (y_F - y_G)^2 - 1 & = & 0\label{KoordG1}\\
(x_D - x_G)^2 + (y_D - y_G)^2 - 4 & = & 0\label{KoordG2}\\
(x_D - x_H)^2 + (y_D - y_H)^2 - 4 & = & 0\label{KoordH1}\\
(x_G - x_H)^2 + (y_G - y_H)^2 - 4 & = & 0\label{KoordH2}\\
(x_F - x_J)^2 + (y_F - y_J)^2 - 1 & = & 0\label{KoordJ1}\\
(x_G - x_J)^2 + (y_G - y_J)^2 - 1 & = & 0\label{KoordJ2}\\
x_H - x_J & = & 0\label{Ortho}
\end{eqnarray}
Let us shortly comment on the meaning of these equations:
Equations (\ref{TrapezD1}) and (\ref{TrapezD2}), respectively (\ref{TrapezE1}) and (\ref{TrapezE2}), define the vertices $D$ and $E$ of the trapezoid which do not belong to the initial triangle $ABC.$ Equations (\ref{TrapezD1}) and (\ref{TrapezE1}) stem from the fact that the line $BC$ meets $CD$ at an angle of $60^\circ$ and the line $EB$ meets $BC$ at $120^\circ.$ Each further pair of Equations (\ref{KoordF1})-(\ref{KoordJ2}) is chosen in accordance with the geometric constructions described in (\ref{Fconst}) - (\ref{Jconst}), each pair defining one of the points $F$ - $J.$ Finally, Equation (\ref{Ortho}) has to be satisfied in order that the lines given by $AC$ and $HJ$ meet at an angle of $90^\circ.$
Using this set of equations, and approximations for the coordinates, which we read off from Figure 2 with the help of GeoGebra, we are already able to calculate arbitrarily precise approximations of all coordinates by using standard numerical algorithms. E.g., the results below show the coordinates to $15$ digits; they were calculated with Mathematica, version 4.0.1.0.
\begin{eqnarray}
B = (t,T) &\simeq& (0.992685948824186,\ 0.120725337054926)\nonumber\\
D &\simeq& (0.809996600722107,\ 2.760161754567202)\nonumber\\
E &\simeq& (0.209102417540010,\ 1.960833173433061)\nonumber\\
F &\simeq& (-0.061398137844065,\ 0.998113354619244)\label{numerics}\\
G &\simeq& (-0.838419516770942,\ 1.627587561152422)\nonumber\\
H &\simeq& (-0.995049481192288,\ 3.621444891616507)\nonumber\\
J &\simeq& (-0.995049481192288,\ 0.639930204451542)\nonumber
\end{eqnarray}
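The following sketch indicates how such approximations can be obtained; it feeds Equations (\ref{IniTri})--(\ref{Ortho}) to Mathematica's \texttt{FindRoot}, with rational starting values read off from Figure 2 (the symbol names are ours):
\begin{verbatim}
eqs = {t^2 + T^2 == 1, -t (xD - 2 t) + T yD == 3/2,
   (xD - 2 t)^2 + yD^2 == 9, t (xE - t) - T (yE - T) == -1,
   (xE - t)^2 + (yE - T)^2 == 4, xF^2 + yF^2 == 1,
   (xE - xF)^2 + (yE - yF)^2 == 1, (xF - xG)^2 + (yF - yG)^2 == 1,
   (xD - xG)^2 + (yD - yG)^2 == 4, (xD - xH)^2 + (yD - yH)^2 == 4,
   (xG - xH)^2 + (yG - yH)^2 == 4, (xF - xJ)^2 + (yF - yJ)^2 == 1,
   (xG - xJ)^2 + (yG - yJ)^2 == 1, xH == xJ};
start = {{t, 99/100}, {T, 12/100}, {xD, 81/100}, {yD, 276/100},
   {xE, 21/100}, {yE, 196/100}, {xF, -6/100}, {yF, 1},
   {xG, -84/100}, {yG, 163/100}, {xH, -1}, {yH, 362/100},
   {xJ, -1}, {yJ, 64/100}};
FindRoot[eqs, start, WorkingPrecision -> 30]
\end{verbatim}
Increasing \texttt{WorkingPrecision} accordingly yields the coordinates to any desired number of digits.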
\section{Deducing an equation for the $y$-coordinate of the point $B$}
In this section we will deduce the minimal polynomial for the coordinate $y_B=T,$ i.e., the unique primitive integer polynomial $P_T$ of smallest degree such that $P_T(T)=0$ holds \cite[Section 4.1.1]{Cohen}.
Let us shortly describe the main approach which we will repeatedly take: given two polynomial equations which are satisfied by certain coordinates, we will calculate the resultant of the corresponding polynomials with respect to one of the appearing variables, thus eliminating this particular variable. Sometimes, if the polynomials are not too complicated, we will use Groebner basis techniques to treat more than two polynomial equations simultaneously. Both procedures lead to new polynomials or polynomial equations, which are consequences of the original ones\footnote{For a more precise description see, e.g., \cite{Cox} or \cite{Loos}.}, but contain fewer variables. To keep expressions from becoming too complicated, and running times from becoming too long, we will try to factor the resulting polynomials. Many times it will prove advantageous to allow the factorization to be done over the ring extension $\mathbb Z[\sqrt{3}].$ If a particular polynomial is reducible, we will continue our deliberations with the factor that corresponds to the actual values of the coordinates. To check this, we use numerical approximations analogous to those given in Section \ref{settingupCs}, but precise to an error of $\epsilon=10^{-100}$. Most calculations, especially those of resultants, factorizations, and numerical evaluations, were done with Mathematica, version 4.0.1.0.
The succession of eliminations will be determined by the order in which the corresponding points were constructed. E.g., in the initial isosceles triangle $ABC$ due to the choice of the coordinate system all point coordinates are directly expressible in terms of $T.$ Next we will determine polynomials which describe the connection between the coordinates of $D$ and $E,$ respectively, and the parameter $T$. After that the polynomials for the coordinates of $F$ are calculated by using those for the coordinates of $D$ and $E$ and eliminating the variables in between. In principle, continuing this procedure would lead to polynomials in $x_H$ and $T,$ respectively $x_J$ and $T.$ Using the final equation $x_H=x_J,$ one should be able to deduce one polynomial in the variable $T$ alone. Unfortunately, due to the increasing complexity of expressions we were not able to continue this line of thought to its conclusion, but had to resort to an alternative way. Nevertheless we will try to push as far as possible with this approach, and come up with an alternative, when it proves to be necessary.
In the sequel, we will switch between listing the polynomials and the corresponding polynomial equations at will. When only a polynomial $P$ is given it should be understood that the coordinates appearing in $P$ satisfy the corresponding polynomial equation $P=0.$
\subsection{From $A$ to $F$}
Using the procedure \texttt{GroebnerBasis} with Equations
(\ref{IniTri}) - (\ref{TrapezD2}) as input, like
\begin{verbatim}
GroebnerBasis[{t^2+T^2-1,-t*(xD-2*t)+T*yD-3/2,(xD-2*t)^2+yD^2-9},
{t,xD,yD,T}]
(* the default lexicographic order with t and xD listed first
   eliminates these variables from part of the basis *)
\end{verbatim}
\noindent
one of the polynomials we get is
\begin{equation}
P_{y_D,T}:= 27 - 36 T^2 + 12 T\cdot y_D - 4 y_D^2.
\label{PyDT}
\end{equation}
Analogously, by changing the order of variables,
$$
0=1 - 56 T^2 + 784 T^4 - 8 x_D^2 - 208 T^2\cdot x_D^2 + 16 x_D^4
$$
can be deduced. This last equation is irreducible over $\mathbb{Z},$ but factors over $\mathbb{Z}[\sqrt{3}]$ into polynomials which are quadratic in $x_D:$
\begin{equation}
(-1 + 28 T^2 - 12\sqrt{3}T x_D + 4 x_D^2)\cdot
(-1 + 28 T^2 + 12\sqrt{3}T x_D + 4 x_D^2).
\label{xD_T_factored}
\end{equation}
Using numerical results for $T$ and $x_D$ in analogy to (\ref{numerics}), we see that only the first of these polynomials
\begin{equation}
P_{x_D,T}:= -1 + 28 T^2 - 12\sqrt{3}T x_D + 4 x_D^2
\label{PxDT_redux}
\end{equation}
leads to the correct result. Solving (\ref{PxDT_redux}) and (\ref{PyDT}) for $x_D$ and $y_D,$ respectively, and again discarding those solutions which do not describe the correct coordinates, we get explicit descriptions for the coordinates of $D$ in terms of the parameter $T$:
\begin{eqnarray}
x_D & = & \frac{1}{2}\left(3\sqrt{3}T+\sqrt{1-T^2}\right),\label{xDT}\\
y_D & = & \frac{3}{2}\left(T+\sqrt{3}\sqrt{1-T^2}\right).\label{yDT}
\end{eqnarray}
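That these expressions indeed satisfy the defining polynomial equations can be checked mechanically; e.g., for (\ref{PxDT_redux}) the call
\begin{verbatim}
Simplify[-1 + 28 T^2 - 12 Sqrt[3] T xD + 4 xD^2 /.
  xD -> (3 Sqrt[3] T + Sqrt[1 - T^2])/2]
\end{verbatim}
returns $0.$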
Starting with Equations
(\ref{IniTri}),(\ref{TrapezE1}) and (\ref{TrapezE2})
and proceeding in the same manner as above, we are led to those polynomials which describe the dependence of the coordinates of the point $E$ on the parameter $T$:
\begin{equation}
3 T^2 - x_E^2\label{PxET}
\end{equation}
and
\begin{equation}
P_{y_E,T}:= -3 + 7T^2-4Ty_E+y_E^2.\label{PyET}
\end{equation}
Again, the corresponding equations can be explicitly solved:
\begin{eqnarray}
x_E &=& \sqrt{3} T,\label{xET}\\
y_E & = & 2T + \sqrt{3}\sqrt{1-T^2}.\label{yET}
\end{eqnarray}
Next we continue by calculating the coordinates for the point $F$, once more using Mathematica's \texttt{GroebnerBasis} function. This time we start with the newly found set of Equations (\ref{PxET}) and (\ref{PyET}), together with the Equations (\ref{KoordF1}), (\ref{KoordF2}) defining $F.$ From this we get the following polynomials which describe the dependence of the coordinates $x_F$ and $y_F$ of $F$ on the parameter $T:$
\begin{equation}
\label{PxFTvl}
\begin{minipage}{0.88\linewidth}
{\small
\noindent
$
- 81 + 10800 T^2 - 422496 T^4 + 4272384 T^6 - 19194112 T^8 + 45801472 T^{10} -\\
\phantom{-} 63111168 T^{12}
+ 48234496 T^{14} - 16777216 T^{16} +\\
\left(1296 - 92448 T^2 + 1645056 T^4 - 9573888 T^6
+ 30072832 T^8 - 57655296 T^{10} +\right.\\
\left.\phantom{(} 66060288 T^{12} - 29360128 T^{14}\right) x_F^2 + \left(- 7776 + 228096 T^2
- 1555200 T^4 +\right.\\
\left.\phantom{(} 5271552 T^6 - 12189696 T^8 + 15728640 T^{10}- 9437184 T^{12}\right) x_F^4 + \\
\phantom{} \left(20736 - 152064 T^2 + 331776 T^4 - 196608 T^6 - 1310720 T^8 + 2097152 T^{10}\right) x_F^6 +\\
\phantom{} \left(- 20736 + 110592 T^2 - 442368 T^4 + 786432 T^6 - 1048576 T^8\right)x_F^8,
$
}
\end{minipage}
\end{equation}
\begin{equation}
\label{PyFT}
\begin{minipage}{0.8\linewidth}
\noindent
$P_{y_F,T}:=$
{\small
$
81 - 648 T^2 + 144 T^4 - 2304 T^6 + 4096 T^8 + \\
\phantom{P_{y_F,T}+} \left(432 T - 864 T^3 + 6528 T^5 - 10240 T^7\right) y_F + \\
\phantom{P_{y_F,T}+} \left(- 216 + 1584 T^2 - 5376 T^4 + 9216 T^6\right) y_F^2 + \\
\phantom{P_{y_F,T}+} \left( - 576 T + 1536 T^3 - 4096 T^5\right)y_F^3
+ \left(144 - 384 T^2 + 1024 T^4\right) y_F^4.
$
}
\end{minipage}
\end{equation}
\noindent
Once again, the first of these polynomials factors over ${\mathbb Z}[\sqrt{3}]$ into two polynomials of total degree $8$ and degree $4$ in $x_F$. Using the numerical values for $T$ and $x_F,$ we can deduce that only one of these describes the connection between the variables $T$ and $x_F.$ It is
\begin{equation}
\label{PxFT}
\begin{minipage}{0.89\linewidth}
\noindent
$
P_{x_F,T}:=
$
{\small
$
\phantom{(}-9 + 600 T^2 - 3472 T^4 + 5888 T^6 - 4096 T^8 -\\
\phantom{(-} 8\sqrt{3}\left(9 T + 96 T^3 - 112 T^5 + 256 T^7\right) x_F
+ 8\left(9 - 150 T^2 + 256 T^4 - 640 T^6\right) x_F^2 +\\
\phantom{(+} 8\sqrt{3}\left(36 T - 96 T^3 + 256 T^5\right) x_F^3 +16\left(- 9 + 24 T^2 - 64 T^4\right) x_F^4.
$
}
\end{minipage}
\end{equation}
\noindent
As said before, trying to continue like this to calculate polynomials for the remaining points $G, H$ and $J$ will lead to a dead end, because the resulting equations become too unwieldy to handle, and take too much time to calculate, even with the help of Mathematica. Still, our main goal remains to find one single equation describing the parameter $T$ alone. Consequently we have to take a step back, and use a slightly more indirect approach, which we will describe in the section following the next one.
\subsection{Interlude: Calculating the extremal values for which the Harborth configuration exists}
\label{Interlude}
With Equations (\ref{xET}) and (\ref{yET}) thus available, we are able to calculate the exact maximal value for $T,$ hinted at in Remark \ref{1stresults}. To this end, we first observe that for maximal $T$ the line segments $AF$ and $FE$ together again form a straight line segment of twice the original length. Therefore, for $T$ maximal, the coordinates of the point $E$ satisfy $x_E^2+y_E^2-4=0.$ This, together with (\ref{xET}) and (\ref{yET}), after some small calculation leads to
\begin{equation}
64T^4-56T^2+1=0.
\end{equation}
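Viewed as a quadratic equation in $T^2,$ this yields
\begin{displaymath}
T^2=\frac{56\pm\sqrt{56^2-4\cdot 64}}{2\cdot 64}=\frac{7\pm3\sqrt{5}}{16},
\end{displaymath}
and only the smaller of the two values leads to a $T$ inside the admissible interval $[0,b].$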
Solving for $T,$ and comparing with the numerical values presented in Remark \ref{1stresults}, gives
\begin{lemma}
The minimal and maximal value for $T$ such that the Harborth configuration exists are $T=0,$ and
\begin{equation}
T=\frac{1}{4}\sqrt{7-3\sqrt{5}},
\end{equation}
respectively.
\end{lemma}
Using basic trigonometry, from this we are able to further deduce exact values for the crucial angle $\varphi$ for extremal $T$:
In case of $T=0,$ the points $A,$ $B,$ and $C$ lie on one line. So do the points $D,$ $E,$ and $F.$ Moreover the intersection point $Z$ of these two lines, together with $C$ and $D,$ forms an equilateral triangle, the sides of which have length $3.$ The angle $\beta$ (see Figure 2) becomes one of the angles of this triangle, and thus is equal to $60^\circ.$ Furthermore the points $D,$ $F,$ and $G$ form an isosceles triangle, with the length of the base side $FG$ being one, and the other length being two. An analysis of the triangle formed by the line $DH$ and the prolongations of $DF$ and $HJ,$ which contains the triangle $DFG$ completely, allows us to calculate $\alpha.$ Since $\varphi=\alpha+\beta,$ some further calculations show
\begin{corollary}
For $T=0,$ the angle $\varphi$ in the Harborth configuration is the unique solution of
\begin{equation}
\sin(\varphi)=\frac{1}{4}(7+3\sqrt{5})\sqrt{\frac{3}{22+6\sqrt{5}}}
\end{equation}
in the interval $[0,90^\circ],$ which up to an error of $10^{-15}$ is $85.884964999269942^\circ.$
\end{corollary}
As we have already observed, when $T$ attains its maximal value, the points $A,$ $E,$ and $F$ lie on one line, and form the side of the isosceles triangle $ABE.$ Leaving the details to the reader, again only using basic trigonometry - and Mathematica for the calculation of trigonometric expressions - we are able to show
\begin{corollary}
For $T=\frac{1}{4}\sqrt{7-3\sqrt{5}},$ the angles $\alpha$ and $\beta$ in the Harborth configuration (see Figure 2) are the unique solutions of
\begin{align}
\cos(\beta) & = \frac{\sqrt{3}}{8}\left(\sqrt{3+\sqrt{5}}-\sqrt{7-\sqrt{5}}\right),\nonumber\\
\intertext{and}
\cos(\alpha) & = \frac{68+3\sqrt{230+34\sqrt{5}}+9\sqrt{5}\left(8+\sqrt{230+34\sqrt{5}}\right)}{2\left(23+3\sqrt{5}\right)\sqrt{97-3\sqrt{5}+3\sqrt{230+34\sqrt{5}}}}\nonumber
\end{align}
in the interval $[0,90^\circ].$
Since $\varphi=\alpha+\beta$, this leads to $\varphi \simeq 94.590425288952345^\circ$ up to an error of $10^{-15}.$
\end{corollary}
\subsection{From the points $D$ and $F$ to the points $H$ and $J$}
\label{NeuKoord}
Now we continue with our task of determining the minimal polynomial for that particular $T$ for which the Harborth graph exists, i.e., for which $\varphi=90^\circ$ holds. Our trick is not to calculate the coordinates of the points $H$ and $J$ directly in terms of the second coordinate $T$ of the point $B,$ but to introduce further variables $X$ and $Y$ which will lead to simpler equations. These new variables themselves will depend on the points $D$ and $F.$
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]
{Harborth_Publ2oben.eps}
\caption{Upper part of the Harborth configuration.}
\end{center}
\label{HarborthPic2}
\end{figure}
For this, let us consider $F$ as the origin of a new coordinate system, and the line $FD$ as the new $x$-axis (see Figure 3). Let $(s,S)$ be the coordinates of the point $E$ with respect to this new coordinate system. Clearly, since $DEF$ forms an isosceles triangle, $D$ is described by the coordinates $(2s,0).$ Proceeding as above by using Equations (\ref{KoordG1})-(\ref{KoordJ2}) in this new context, we successively get\footnote{We advise the reader to keep in mind that, although we use the same notation as in Section \ref{settingupCs}, now the coordinates have to be interpreted within the new coordinate frame.}:
\begin{eqnarray}
0&=&3 - 4 s^2 + 4 s x_G,\label{PxGs}\\
0&=&9 - 40 s^2 + 16 s^4 + 16 s^2 y_G^2,\label{PyGs}\\
0&=&9 - 48 s^2 + 48 s^4 + \left(12 s - 48 s^3\right) x_H + 16 s^2 x_H^2,\label{PxHs}\\
0&=&-81 - 144 s^2 - 352 s^4 - 256 s^6 - 256 s^8 +\nonumber\\
&& \left(144 s^2 + 896 s^4 + 256 s^6\right) y_H^2 - 256 s^4 y_H^4,\label{PyHs}\\
0&=&9 - 36 s^2 + 16 s^4 + \left(12 s - 16 s^3\right) x_J + 16 s^2 x_J^2,\label{PxJs}\\
0&=&81 - 504 s^2 + 1072 s^4 - 896 s^6 + 256 s^8 +\nonumber\\
&& \left(- 144 s^2 + 256 s^4 - 256 s^6\right) y_J^2 + 256 s^4 y_J^4.\label{PyJs}
\end{eqnarray}
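These relations can be reproduced mechanically; e.g., for the point $G$ (a sketch, with $F$ placed at the origin and $D=(2s,0)$), the call
\begin{verbatim}
Solve[{xG^2 + yG^2 == 1, (xG - 2 s)^2 + yG^2 == 4}, {xG, yG}]
\end{verbatim}
returns both candidate points, from which (\ref{PxGs}) and (\ref{PyGs}) follow.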
All these equations can be easily solved for the respective coordinates. In each case only one of the solutions is in accordance with our geometric construction. Below we present explicit formulas only for the coordinates of the points $H$ and $J$:
\begin{eqnarray}
x_H&=&\frac{-3 + 12 s^2 + \sqrt{3} \sqrt{-9 + 40s^2 - 16 s^4}}{8 s},\\
y_H &=&\frac{3\sqrt{3} + 4\sqrt{3}s^2 + \sqrt{-9 + 40s^2 - 16s^4}}{8 s},\\
x_J&=&\frac{-3 + 4 s^2 - \sqrt{3} \sqrt{-9 + 40 s^2 - 16 s^4}}{8 s},\\
y_J &=&\frac{-3\sqrt{3} + 4\sqrt{3}s^2 + \sqrt{-9 + 40s^2 - 16s^4}}{8s}.
\end{eqnarray}
Therefore the slope of the line $HJ$ with regard to $DF$ as $x$-axis is
\begin{equation}
\label{firstSteigung}
m_\alpha:=\frac{y_J-y_H}{x_J-x_H}=
\frac{3\sqrt{3}}{4 s^2 + \sqrt{3}\sqrt{-9 + 40 s^2 - 16 s^4}}.
\end{equation}
Now, again we consider the whole Harborth configuration: let new variables $X$ and $Y$ be defined by $X:= x_D-x_F$ and $Y:= y_D-y_F,$ where now $x_D,$ $y_D,$ and $x_F,$ $y_F$ are interpreted as the coordinates of the points $D$ and $F$ with regard to the initial coordinate system. Then the squared length of the line segment $DF$ is given by $X^2+Y^2.$ It follows that
\begin{equation}
\label{s}
4 s^2=X^2+Y^2,
\end{equation}
and Equation (\ref{firstSteigung}) becomes
\begin{equation}
m_\alpha = \frac{3\sqrt{3}}{X^2+Y^2+\sqrt{-9 + 10\left(X^2+Y^2\right) - \left(X^2+Y^2\right)^2}}.
\label{malpha}
\end{equation}
In order to be able to complete the Harborth configuration to the whole Harborth graph, the angle $\varphi$ between the lines $HJ$ and $AC$ must be a right angle. On the other hand, we have $\varphi = \alpha + \beta,$ where $\alpha$ is the angle between $HJ$ and $DF,$ and $\beta$ denotes the angle between $DF$ and $AC,$ as shown in Figure 2. Thus the equality $\alpha = 90^\circ-\beta$ must hold.
Since $0^\circ<\alpha,\beta<90^\circ$, we have $\tan{\alpha}= \tan(90^\circ-\beta)=\frac{1}{\tan(\beta)}.$ Thus the respective slopes satisfy $m_\alpha = 1/m_{\beta}.$ The slope $m_\beta$ of $\beta$ is given within the first coordinate system by $m_\beta= Y/X.$ This, together with (\ref{malpha}), implies
\begin{equation}
\frac{3\sqrt{3}}{X^2+Y^2+\sqrt{-9 + 10\left(X^2+Y^2\right) - \left(X^2+Y^2\right)^2}}=\frac{X}{Y},
\end{equation}
\noindent
which some calculations show to be equivalent to
\begin{equation}
0 = \left(X^2 + Y^2\right)\left(-27 + 30 X^2 - 4 X^4 + 6 \sqrt{3} X Y - 4 X^2 Y^2\right).
\end{equation}
This implies
\begin{equation}
0 = 27 - 30 X^2 + 4 X^4 - 6 \sqrt{3} X Y + 4 X^2 Y^2.
\end{equation}
We set $F(X,Y) := 27 - 30 X^2 + 4 X^4 - 6 \sqrt{3} X Y + 4 X^2 Y^2.$
\noindent
Next we produce polynomials $P_{X,T}$ and $P_{Y,T}$ which describe the connection between $T$ and the new parameters $X,$ and $Y,$ respectively. To do that, this time we use Mathematica's resultant and factorization facilities, as described above. Starting with the polynomial $X-x_D+x_F,$ and the polynomials $P_{x_D,T}$ and $P_{x_F,T}$ given by (\ref{PxDT_redux}) and (\ref{PxFT}), we are thus able to successively eliminate the variables $x_D$ and $x_F$, and get
\begin{equation}
\label{PXT}
\begin{minipage}{0.89\linewidth}
$P_{X,T} :=\\$
{\small
\phantom{+\left(\right.}108 T^2 - 684 T^4 + 1344 T^6 - 1344 T^8 +\sqrt{3}\left(- 36 T +
300 T^3 - 616 T^5 + 1024 T^7\right) X \\
+\left(9 - 213 T^2 + 496 T^4 - 1216 T^6\right) X^2
+\sqrt{3}\left(36 T - 96 T^3 + 256 T^5\right) X^3\\
+\left(- 9 + 24 T^2 - 64 T^4\right) X^4\in \mathbb Z[\sqrt{3}][X,T].
$
}
\end{minipage}
\end{equation}
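In Mathematica terms, the two eliminations just described read (a sketch; \texttt{pxDT} and \texttt{pxFT} stand for the polynomials (\ref{PxDT_redux}) and (\ref{PxFT}) entered as expressions):
\begin{verbatim}
r1 = Resultant[X - xD + xF, pxFT, xF];
r2 = Resultant[r1, pxDT, xD];
Factor[r2, Extension -> Sqrt[3]]
\end{verbatim}
Among the resulting factors, the one vanishing on the numerical values of $X$ and $T$ is selected.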
\noindent
In an analogous manner we deduce a polynomial $P_{Y,T}$ in $\mathbb Z[Y,T],$ which for the Harborth configuration describes the connection between these two variables:
\begin{equation}
\label{PYT}
\begin{minipage}{0.88\linewidth}
$P_{Y,T}:=\\$
{\small
\phantom{+\left(\right.}
81 - 405 T^2 + 900 T^4 - 1008 T^6 + 448 T^8
+\left( 54 T - 54 T^3 + 456 T^5 - 512 T^7\right) Y +\\
\phantom{+}\left( - 54 + 207 T^2 - 624 T^4 + 576 T^6\right) Y^2
+\left(- 18 T + 48 T^3 - 128 T^5\right) Y^3 +\\
\phantom{+}\left( 9 - 24 T^2 + 64 T^4\right) Y^4.
$
}
\end{minipage}
\end{equation}
Finally we repeat this procedure with the polynomials $F(X,Y),$ $P_{X,T},$ and $P_{Y,T}$ to eliminate the variables $X$ and $Y.$ I.e., first we let Mathematica calculate the resultant of $F$ and $P_{X,T}$ with regard to $X.$ We will not present the result here; let it be enough to state that the result is a polynomial of degree $8$ in $Y$, degree $32$ in $T$ and total degree $32,$ which, but for a constant factor, cannot be factored further by Mathematica, even when considered over ${\mathbb Z}[\sqrt{3}].$ With the help of Mathematica, we are able to calculate the resultant of this polynomial and $P_{Y,T}$ with respect to $Y.$ This leaves us with a polynomial in the single variable $T$ of degree $156.$ Strangely enough, this final polynomial is reducible over ${\mathbb Z}[\sqrt{3}],$ its factors (up to an integer constant) being $(2T+\sqrt{3}),$ $(2T-\sqrt{3}),$ $(64T^4-24T^2+9)^6,$ and three other integer polynomials of degree $22,$ $28$ and $80,$ respectively. Here, we need only list the polynomial of degree $22,$ since this is the one which has the $y$-coordinate $T$ of the point $B$ of the Harborth graph as one of its real roots\footnote{In fact it is the positive root of smallest modulus.}:
\begin{theorem}
The minimal polynomial for the $y$-coordinate $T$ of the vertex $B$ of the Harborth graph is
\begin{equation}\nonumber
\label{PT}
\begin{minipage}{.97\linewidth}
\noindent
$P_T:=$
{\small
$
-492075 + 52356780 T^2 - 1441635408 T^4 + 12222052416 T^6
- 60567699456 T^8 +\\
\phantom{P_T=\vert +} 189747007488 T^{10}
- 417660420096 T^{12}
+ 607025037312 T^{14} - 655053815808 T^{16} +\\
\phantom{P_T=\vert +} 446118756352 T^{18} - 422064422912 T^{20} + 437348466688 T^{22}.
$
}
\end{minipage}
\end{equation}
\end{theorem}
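For readers who wish to retrace this final elimination, it can be sketched in a few lines of Mathematica. This is only a sketch: the symbols \texttt{F}, \texttt{PXT}, and \texttt{PYT} are assumed to have been assigned the polynomials $F(X,Y),$ $P_{X,T},$ and $P_{Y,T}$ from above.
\begin{verbatim}
(* Sketch of the final elimination; F, PXT, PYT are assumed to hold
   F(X,Y), P_{X,T}, and P_{Y,T} as given above. *)
r1 = Resultant[F, PXT, X];              (* degree 8 in Y, degree 32 in T *)
r2 = Resultant[r1, PYT, Y];             (* univariate in T, degree 156 *)
FactorList[r2, Extension -> {Sqrt[3]}]  (* P_T appears among the factors *)
\end{verbatim}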
Since this polynomial is primitive, and irreducible over $\mathbb Z,$ it is the minimal polynomial of $y_B=T.$ Thus we have achieved our desired first main result. The rest of this paper is concerned with the determination of the minimal polynomials of the other coordinates, and some of their properties.
\section{Minimal polynomials for the $y$-coordinates}
\subsection{The points $D,E,$ and $F$}
The method to calculate the minimal polynomial for each of the $y$-co\-ordi\-nates of the points $D,E,F$ is very similar to what we have done in the last subsection. Let $P$ denote one of the points $D,E,$ or $F.$ Using Formulas (\ref{PyDT}), (\ref{PyET}), and (\ref{PyFT}), respectively, which describe the connection between the $y$-coordinate $y_P$ and the parameter $T$ by way of irreducible polynomials $P_{y_P,T}\in{\mathbb Z}[y_P,T]$ of absolute degrees $2,$ $2,$ and $4,$ we let Mathematica calculate the resultant of $P_{y_P,T}$ and $P_T$ from above with respect to $T$. The result is a polynomial in ${\mathbb Z}[y_P].$ It is of degree $44$ in the case of $y_D$ and $y_E$ and of degree $88$ for $y_F$. The first two of these polynomials each split over $\mathbb Z$ into two irreducible factors of degree $22.$ The polynomial for $y_F$ splits into one component of degree $44$ and -- again -- into two polynomials of degree $22.$ In each case one of these factors corresponds to the actual $y$-coordinate.
\begin{theorem}
The minimal polynomials for $y_D,y_E$ and $y_F$ are
\bigskip
\noindent
$
P_{y_D}:=
$
{\small
$
-2470693585135788 + 1679453964496051893 y_D^2 -
2462573171102886288 y_D^4 +\\
\phantom{P_{y_D}=\vert+} 1847147913929328048 y_D^6 -
888334179987132288 y_D^8 + 302241307009227264 y_D^{10} -\\
\phantom{P_{y_D}=\vert-} 74768143621533696 y_D^{12} + 13516084620361728 y_D^{14} -
1721332250836992 y_D^{16} +\\
\phantom{P_{y_D}=\vert+} 139442448236544 y_D^{18} -
6126808596480 y_D^{20} + 109337116672 y_D^{22},
$
}
\bigskip
\noindent
$
P_{y_E}:=
$
{\small
$
-387038865725307 + 255845547796716 y_E^2 - 1080696123714384 y_E^4 +\\
\phantom{P_{y_E}=\vert +} 985178573370432 y_E^6 + 290816529555456 y_E^8 +
1229422640467968 y_E^{10} -\\
\phantom{P_{y_E}=\vert -} 399291497201664 y_E^{12} -
226953868935168 y_E^{14} - 145914316455936 y_E^{16} +\\
\phantom{P_{y_E}=\vert +} 84049703993344 y_E^{18} - 9462031056896 y_E^{20} + 437348466688 y_E^{22},
$
}
\bigskip
\noindent
$
P_{y_F}:=
$
{\small
$
-6156736033068 + 4132620043369020 y_F^2 - 28069535202466347 y_F^4+\\
\phantom{P_{y_F}=\vert+} 54174190167055116 y_F^6 - 44321252355544320 y_F^8 +
16893977313239424 y_F^{10}-\\
\phantom{P_{y_F}=\vert-} 3430375146685440 y_F^{12} +
781964817629184 y_F^{14} - 165954075623424 y_F^{16}+\\
\phantom{P_{y_F}=\vert+} 16400930701312 y_F^{18} - 579898179584 y_F^{20} + 27334279168 y_F^{22}.
$
}
\end{theorem}
\subsection{The points $G,H,$ and $J$}
\label{yKoordGHJ}
When determining the minimal polynomials of the $y$-coordinates of the points $G,$ $H,$ and $J,$ we have to keep in mind that in Section \ref{NeuKoord} we used a different coordinate system for determining their coordinates than for those of the points $A$ to $F.$ Thus first we will have to transform the former into $y$-coordinates within our original system, which was centered in $A$. Since in this section it is paramount not to confuse these systems, in the sequel we will denote coordinates with regard to the coordinate system centered in $F$ with capital letters, and those with regard to the original one centered in $A$ with small letters. Thus, e.g., Equation (\ref{PyHs}) now becomes
\begin{equation}
\label{PyHsnew}
\nonumber
0=-81 - 144 s^2 - 352 s^4 - 256 s^6 - 256 s^8
+ \left(144 s^2 + 896 s^4 + 256 s^6\right) Y_H^2 - 256 s^4 Y_H^4.
\end{equation}
We will have to treat Equations (\ref{PxGs})--(\ref{PyJs}) accordingly. With this notation, our goal has become to calculate the characteristic polynomials for $y_G,$ $y_H$ and $y_J.$ Elementary analytic geometry tells us that the connection between ``old'' coordinates $(x_P,y_P)$ and ``new'' coordinates $(X_P,Y_P)$ can be described by
\begin{equation}
\label{KoordTrafo}
\begin{pmatrix}
x_P\\
y_P
\end{pmatrix}
=
\begin{pmatrix}
x_F\\
y_F
\end{pmatrix}
+\frac{X_P}{2s}
\begin{pmatrix}
X\\
Y
\end{pmatrix}
+\frac{Y_P}{2s}
\begin{pmatrix}
-Y\\
X
\end{pmatrix}
\end{equation}
for $P\in\{G,H,J\},$ where $X=x_D-x_F$ and $Y=y_D-y_F,$ as defined in Section \ref{NeuKoord}. Thus,
\begin{equation}
2s\left(y_P-y_F\right) = X_P\cdot Y + Y_P\cdot X.
\end{equation}
Setting $z_P:=y_P-y_F$ for the moment, and successively using Equations (\ref{PxGs})--(\ref{PyJs}), which describe the connection between the new coordinates of the points $G,$ $H,$ $J$ and the parameter $s,$ together with the equality $4s^2=X^2+Y^2,$ after some calculations in the usual manner we are able to deduce irreducible polynomials in $\mathbb Z[\sqrt{3}][z_P,X,Y]$ for $P\in\{G,H,J\}.$ Using Equations (\ref{PXT}) and (\ref{PYT}) we are further able to eliminate $X$ and $Y,$ and deduce polynomials in $\mathbb Z[z_P,T]$ of total degree $188$ for each of the points. Each of these splits again, leaving us with irreducible polynomials which are of degree $20$ in $T$ and total degree $20$ for the points $G,J$ and of degree $24$ in $T$ and total degree $24$ for the point $H.$ Resubstituting $y_P-y_F$ for $z_P$ and using (\ref{PyFT}) to eliminate $y_F,$ we are left with integer polynomials in the variables $y_P$ and $T$ of degree $112$ for the points $G,J$ and one of degree $128$ for $H$, which this time split off irreducible polynomials of total degree $20$ (for $G$ and $H$) and $24$ (for $J$), respectively. Calculating the resultant of these polynomials and $P_T,$ thereby eliminating $T,$ once more in each case we get polynomials of degree $176$ for $y_G,$ $y_H,$ and $y_J,$ respectively. Each contains among others an irreducible factor of degree $22$ -- the minimal polynomial we were looking for. Therefore we have:
\begin{theorem}
The minimal polynomials for the $y$-coordinates of the vertices $G,H, $ and $J$ of the Harborth graph are
\bigskip
\noindent
$
P_{y_G}:=
$
{\small
$
-912811377667500 + 16117998953248125 y_G^2 -
36709013218422600 y_G^4 +\\
\phantom{P_{y_G}=\vert+} 37940201286814800 y_G^6 -
23463887481854208 y_G^8 + 10021184125203456 y_G^{10}-\\
\phantom{P_{y_G}=\vert-} 3290335763447808 y_G^{12} + 888521341648896 y_G^{14} -
192809455583232 y_G^{16}+\\
\phantom{P_{y_G}=\vert+} 29839017902080 y_G^{18} -
2742026240000 y_G^{20} + 109337116672 y_G^{22},
$
}
\bigskip
\noindent
$
P_{y_H}:=
$
{\small
$
-12148787578527675 - 123412000423046805 y_H^2 -
441020584930952232 y_H^4+\\
\phantom{P_{y_H}=\vert+} 273168911377174014 y_H^6 -
27343071784237320 y_H^8 - 3667116898760364 y_H^{10}+\\
\phantom{P_{y_H}=\vert+} 823044986987616 y_H^{12} - 32095868573376 y_H^{14} -
4779985142784 y_H^{16}+\\
\phantom{P_{y_H}=\vert+} 615643279360 y_H^{18} - 27098808320 y_H^{20} + 427098112 y_H^{22},
$
}
\bigskip\noindent
and
\bigskip
\noindent
$
P_{y_J}:=
$
{\small
$
-9964518750000 + 570277711828125 y_J^2 - 1780552966387500 y_J^4+\\
\phantom{P_{y_J}=\vert+} 849106838377800 y_J^6 + 644904447905880 y_J^8 -
102048280254828 y_J^{10}- \\
\phantom{P_{y_J}=\vert-} 56106534718368 y_J^{12}+
9027433758528 y_J^{14} + 605520976896 y_J^{16}- 103349145600 y_J^{18} -\\
\phantom{P_{y_J}=\vert+}
2815229952 y_J^{20}+ 427098112 y_J^{22}.
$
}
\end{theorem}
\section{Minimal polynomials for the $x$-coordinates}
As we initially announced, we want to give minimal polynomials for all the coordinates of the most important vertices of the Harborth graph, where the origin is taken to be the center of symmetry of the whole graph, i.e., the point $K$ in Figure 2, and the axes are the axes of symmetry of the Harborth graph. This means that, when we shift our origin from $A$ to $K,$ the $y$-coordinates remain the same, while the $x$-coordinates (with respect to the coordinate system centered in $A$) have to be shifted by $-x_J,$ i.e., we have to set
\begin{equation}
\label{Trafo}
x_P^{\hbox{\scriptsize new}}=x_P - x_J
\end{equation}
for all points $P,$ where the coordinates on the right hand side denote coordinates with respect to the origin $A.$
As a further difficulty, we again have to bear in mind that the $x$-coordinates for the points $G,$ $H$ and $J$ were given above with respect to yet another, third coordinate system, which had $F$ as its origin and was rotated when considered within the other two coordinate frames.
In the sequel, to avoid misunderstandings, we will switch notation, and denote all $x$-coordinates with respect to the system centered in $A$ by $x_P^{\hbox{\scriptsize old}},$ those with respect to the system centered in $K$ will now become $x_P.$
\subsection{Coordinate transformations for the points $A$ to $F$}
Proceeding step by step as in Section \ref{yKoordGHJ}, but starting from the other equation resulting from (\ref{KoordTrafo}), i.e.,
\begin{equation}
2s (x_J^{\hbox{\scriptsize old}}-x_F^{\hbox{\scriptsize old}}) = X_J\cdot X - Y_J\cdot Y,
\end{equation}
we are able to deduce an irreducible polynomial $P_{{x_J^{\hbox{\scriptsize old}}},T}$ in $\mathbb Z[\sqrt{3}][x_J^{\hbox{\scriptsize old}},T]$ of total degree $24,$ and finally the characteristic polynomial of $x_J^{\hbox{\scriptsize old}}$ of degree $22.$ Since it is an even polynomial\footnote{We call a polynomial \textsl{even} if it only contains monomials of even degree.}, it describes the coordinate $x_A=-x_J^{\hbox{\scriptsize old}}$ as well. Thus,
\begin{theorem}
The minimal polynomial for the $x$-coordinate of the vertex $A$ of the Harborth graph, where the coordinate system is the one given in Figure 1, is
\medskip
\noindent
$
P_{x_A}:=
$
{\small
$
-830376562500 + 1358127000000 x_A^2 - 34144387143750 x_A^4 +
96857243056800 x_A^6 -\\
\phantom{P_{x_A}=\vert-} 68697978132015 x_A^8 - 189712941147 x_A^{10} +
6188723588664 x_A^{12} - 704220643376 x_A^{14} -\\
\phantom{P_{x_A}=\vert-} 52577813248 x_A^{16} +
27196394496 x_A^{18} - 2918612992 x_A^{20} + 106774528 x_A^{22}.
$
}
\end{theorem}
With the help of the polynomial $P_{x_J^{\hbox{\scriptsize old}},T}$ from above, we can produce a polynomial $P_{\hbox{\scriptsize trafo}}\in\mathbb Z[x_P,x_P^{\hbox{\scriptsize old}},T]$ that describes the coordinate transformation (\ref{Trafo}), by calculating the resultant of $P_{x_J^{\hbox{\scriptsize old}},T}$ and the polynomial $x_P-x_P^{\hbox{\scriptsize old}}+x_J^{\hbox{\scriptsize old}}$ with respect to $x_J^{\hbox{\scriptsize old}}.$ This is of total degree $24,$ of degree $24$ in $T,$ and of degree $8$ in both variables $x_P$ and $x_P^{\hbox{\scriptsize old}}.$
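In Mathematica, this construction amounts to a single resultant; the following sketch assumes that \texttt{PxJoldT} holds $P_{x_J^{\hbox{\scriptsize old}},T}$ and that \texttt{xJold}, \texttt{xPold}, and \texttt{xP} are the corresponding symbols.
\begin{verbatim}
(* Sketch of the transformation polynomial. *)
Ptrafo = Resultant[PxJoldT, xP - xPold + xJold, xJold];
{Exponent[Ptrafo, T], Exponent[Ptrafo, xP], Exponent[Ptrafo, xPold]}
(* -> {24, 8, 8}, in accordance with the degrees stated above *)
\end{verbatim}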
Applying the same method as above, i.e., first calculating the resultant of this ``transformation polynomial'' and the respective polynomials $P_{x_P,T},$ which we now have to interpret as polynomials in the variables $x_P^{\hbox{\scriptsize old}}$ and $T,$ followed by a factorization over $\mathbb Z[\sqrt{3}],$ and finally repeating this process with the resulting polynomial and $P_T$, we are able to deduce the characteristic polynomials of the $x$-coordinates $x_B$ to $x_F.$ Since this procedure should be standard to the reader by now, we will not go into further detail, but will only present the results. Once again we stress the fact that these polynomials are for $x$-coordinates with respect to $K$ as origin:
\begin{theorem}
The minimal polynomials for the $x$-coordinates of the vertices of the Harborth graph, where the coordinate system is the one given in Figure 1, are
\medskip
\noindent
$
P_{x_B}:=
$
{\small
$
-17372788157292129 + 85946816541669534 x_B^2 -
172967171143553289 x_B^4+\\
\phantom{P_{x_B}=\vert +} 125428630440736260 x_B^6 -
35361034276033728 x_B^8 + 4402034757921792 x_B^{10}-\\
\phantom{P_{x_B}=\vert -} 436015591392256 x_B^{12} + 77220067192832 x_B^{14} -
11054716223488 x_B^{16}+\\
\phantom{P_{x_B}=\vert +} 874491412480 x_B^{18} - 34734080000 x_B^{20} +
557842432 x_B^{22},
$
}
\medskip
\noindent
$
P_{x_C}:=
$
{\small
$
-55268097000787592100 + 83653148035178006805 x_C^2 -\\
\phantom{P_{x_C}=\vert -}
49933201015710366166 x_C^4+ 15170804748275250138 x_C^6 -\\
\phantom{P_{x_C}=\vert -}
2623723693990622868 x_C^8 + 292733387369474292 x_C^{10}- 24051159678783648 x_C^{12} +\\
\phantom{P_{x_C}=\vert +} 1563610131071808 x_C^{14} -
77064294460416 x_C^{16}+ 2572257472512 x_C^{18} -\\
\phantom{P_{x_C}=\vert -} 50083921920 x_C^{20} + 427098112 x_C^{22},
$
}
\medskip
\noindent
$
P_{x_D}:=
$
{\small
$
-15937557042969 + 69169635141939 x_D^2 - 133600085051911 x_D^4 +\\
\phantom{P_{x_D}=\vert +}
150590940104181 x_D^6 - 109441808559384 x_D^8 +
53597367271968 x_D^{10} - \\
\phantom{P_{x_C}=\vert -} 17996039805696 x_D^{12} +
4144963934208 x_D^{14} - 647005151232 x_D^{16} +\\
\phantom{P_{x_C}=\vert +} 66726690816 x_D^{18} -
4293132288 x_D^{20} + 139460608 x_D^{22},
$
}
\medskip
\noindent
$
P_{x_E}:=
$
{\small
$
-30534686672400 + 184473995962680 x_E^2 - 493600710483009 x_E^4 +\\
\phantom{P_{x_E}=\vert +}
800738068318020 x_E^6 - 883225203916608 x_E^8 +
687262746783744 x_E^{10} - \\
\phantom{P_{x_E}=\vert -} 378024688788480 x_E^{12} + 145061641105408 x_E^{14} - 37695035736064 x_E^{16} + \\
\phantom{P_{x_C}=\vert +}
6218402758656 x_E^{18} - 582162055168 x_E^{20} + 27334279168 x_E^{22},
$
}
\medskip
\noindent
$
P_{x_F}:=
$
{\small
$
-622521 + 20028276 x_F^2 - 150285424 x_F^4 -
349270464 x_F^6 + 7694997504 x_F^8 -\\
\phantom{P_{x_F}=\vert -} 5213620224 x_F^{10} -
109200064512 x_F^{12} + 709185896448 x_F^{14} -
1112735219712 x_F^{16}+\\
\phantom{P_{x_F}=\vert +} 387346071552 x_F^{18} +
124822487040 x_F^{20} + 8925478912 x_F^{22}.
$
}
\end{theorem}
\subsection{The ``Coup de gr\^ace'' -- the minimal polynomial for $x_G$}
Due to the complexity of the expressions we were not able to use the above procedure to calculate the characteristic polynomial for $x_G$ -- even with the help of Mathematica. Thus we have to resort to one final trick -- yet another coordinate system. For this, first we mirror the Harborth configuration. After that we choose the point $J$ as the new origin, and the ray $JH$ as the positive part of the new $x$-axis (see Figure 4).
Consequently, the new $y$-coordinates are the $x$-coordinates from above.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]
{HarborthCdCG.eps}
\caption{Part of the mirrored Harborth graph for the determination of $x_G.$ Note that now $H$ and $J$ lie on the $x$-axis.}
\label{Harborth_fuerG}
\end{center}
\end{figure}
Let $(u,U)$ be the coordinates of the point $G$ with respect to this coordinate system. Using Equations (\ref{KoordF2}) and (\ref{KoordG1}), adapted to the new coordinate frame, we get
\begin{equation}
\label{PyFU}
-3+4U^2-4U y_F+4y_F^2=0,
\end{equation}
where $y_F$ now denotes the $y$-coordinate of $F$ with regard to this new system. Since $y_F$ is equal to the old $x_F,$ we have thus calculated a polynomial $P_{x_F,U}.$ Calculating the resultant of this polynomial and $P_{x_F}$ with respect to $x_F$ leads to a polynomial in $\mathbb Z[U]$ of degree $44.$ Factoring this, we obtain an irreducible polynomial of degree $22,$ which is the minimal polynomial of $U$ and thus of $x_G.$
\begin{theorem}
The minimal polynomial of the $x$-coordinate of the vertex $G$ of the Harborth graph is
\medskip
\noindent
\begin{center}
\begin{minipage}{.9\linewidth}
$
P_{x_G}:=
$
{\small
$
-106929 + 9380331 x_G^2 - 257190919 x_G^4 + 2410771629 x_G^6 -
11872837680 x_G^8+\\
\phantom{P_{x_G}=\vert +} 35430882432 x_G^{10} - 66974055936 x_G^{12} +
79549160448 x_G^{14} - 56180293632 x_G^{16}+\\
\phantom{P_{x_G}=\vert +} 20514865152 x_G^{18} -
2573205504 x_G^{20} + 139460608 x_G^{22}.
$
}
\end{minipage}
\end{center}
\end{theorem}
Thus we have deduced minimal polynomials for all coordinates of the initial Harborth configuration, and consequently, because of the twofold symmetry and our particular, final choice of coordinates, for nearly all the vertices of the Harborth graph. This finishes our initial task.
\section{Coda}
With all the minimal polynomials at our disposal, we can finish this paper with some observations about their algebraic structure, and consequences for the Harborth graph.
First of all, starting with the minimal polynomials, using a computer algebra system we can once again show the existence of the Harborth graph -- this time by algebraic means only:
\begin{theorem}
For $P\in\{A,\dots,J\}$ let the coordinates $(\pm x_P,\pm y_P)$ of points in the Euclidean plane be suitably chosen roots of the irreducible polynomials $P_{x_P}, P_{y_P}$ which were detailed in the previous sections.
Then these coordinates satisfy the defining equations of coordinates of vertices of the Harborth graph.
\end{theorem}
\begin{proof}
Since the actual calculations have to be done by the computer algebra system and cannot be presented here, the proof will just consist of a series of comments: First, with $(\pm x_P,\pm y_P)$ we denote all four possible combinations of signs of the coordinates, taking into account the twofold symmetry of the Harborth graph, as well as the fact that the polynomials $P_{x_P}$ and $P_{y_P}$ are even.
Second: Clearly Equations (\ref{IniTri})--(\ref{Ortho}) have to be restated in accordance with our finally chosen coordinate system, which had the point $K$ as its center. Thus, e.g., instead of (\ref{IniTri}) one has to show that the coordinates of the vertices $A$ and $B$ in the first quadrant satisfy $(x_B-x_A)^2+y_B^2=1,$ i.e., the points are at distance $1$ from each other. Numeric approximations which determine our choice of roots have to be adapted as well.
Moreover, calculations with the chosen coordinates are done in the sense of \cite{Loos}, whereby the computer algebra system has to take the full brunt of the work. That is: since there are no explicit formulas for the particular roots of the polynomials $P_{x_P}, P_{y_P},$ which we use as coordinates, we have to understand these roots as completely defined by their representing polynomials, together with an isolating interval for each. Instead of an isolating interval one can equivalently use a numeric approximation of sufficient precision. In fact, this seems to be what the computer algebra system Mathematica does: it renders possible computations with algebraic numbers by way of \texttt{root} objects \cite{MathematicaAlgNumb}. There, choosing a particular root actually means choosing one of the \texttt{root} objects produced by the \texttt{Solve} routine. This we did in accordance with the previously attained numerical results. The ``proof'' of the above theorem was then carried out using Mathematica's built-in \texttt{RootReduce} routine, which had to be applied to the defining equations of the Harborth configuration. To ensure that the calculations finish within an acceptable time, the equations have to be expanded and written as sums of products. More concretely, in the example above, after having assigned particular \texttt{root} objects to the variables \texttt{xA}, \texttt{xB}, and \texttt{yB} with the help of the respective polynomials, one successively has to calculate \verb/RootReduce[xA^2]/, \verb/RootReduce[-2*xA*xB]/, etc.\ until one can put everything together and calculate \verb/RootReduce[xA^2-2*xA*xB+xB^2+yB^2]/, which indeed results in $1$.
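For concreteness, the computation just described takes roughly the following form; this is only a sketch, in which \texttt{pxA}, \texttt{pxB}, and \texttt{pyB} are assumed to hold the minimal polynomials $P_{x_A},$ $P_{x_B},$ and $P_T$ from above as one-variable functions, and the root indices \texttt{k1}, \texttt{k2}, \texttt{k3} are hypothetical placeholders that have to be chosen in accordance with the numerical approximations.
\begin{verbatim}
(* pxA, pxB, pyB: the minimal polynomials of xA, xB, yB, entered as
   one-variable functions, e.g. pxA[t_] := ... + 106774528 t^22;
   k1, k2, k3: placeholder root indices, to be chosen so that the
   selected roots match the numerical approximations. *)
xA = Root[pxA, k1];
xB = Root[pxB, k2];
yB = Root[pyB, k3];
RootReduce[xA^2 - 2 xA xB + xB^2 + yB^2]  (* evaluates to 1 *)
\end{verbatim}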
\end{proof}
Even though we have concentrated on one particular set of coordinates, our calculations have shown that, e.g., the $y$-coordinate $y_B$ of the vertex $B$ in any case must satisfy a polynomial equation. Thus there are only finitely many possibilities for coordinates of $B.$ Since by the geometric construction, which we used, all other vertices can be shown to depend uniquely on this one initial vertex and its embedding in the Euclidean plane, different embeddings of the Harborth graph in the plane -- if they were to exist at all -- cannot be transformed into each other in a continuous way. In other words:
\begin{theorem}
The Harborth graph is rigid.
\end{theorem}
Finally, closer scrutiny of the minimal polynomials leads to
\begin{lemma}
Let $z_P$ be one of the coordinates of a vertex $P$ of the Harborth graph different from zero. Then its minimal polynomial $P_{z_P}\in \mathbb{Z}[X]$ is an even polynomial of degree $22$ and signature $(6,8).$ That is, it has $6$ real zeros and $8$ distinct pairs of conjugate complex zeros.
Consequently we have,
\begin{equation}
\label{radicals}
P_{z_P} = F_{z_P}\circ G
\end{equation}
where $G = X^2\in \mathbb{Z}[X]$, $\circ$ denotes composition and $F_{z_P}$ is a uniquely determined irreducible integer polynomial of degree $11$ and signature $(3,4)$.
\end{lemma}
A precise proof of this lemma can be given by using any one of the existing algorithms for root isolation and some basic calculus.
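In Mathematica, for instance, the signature claim can be checked by direct root counting; a sketch for one coordinate, assuming \texttt{pxA} holds the minimal polynomial $P_{x_A}$ from above in the variable \texttt{x}:
\begin{verbatim}
(* Sketch of the signature check for x_A. *)
{CountRoots[pxA, x], Exponent[pxA, x]}
(* -> {6, 22}: six real zeros, hence (22 - 6)/2 = 8 pairs of
   conjugate complex zeros *)
\end{verbatim}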
\begin{theorem}
The coordinates of vertices of the Harborth graph which are different from zero cannot be expressed in terms of radicals. Furthermore the Harborth graph as a whole cannot be constructed by compass and ruler alone.
\end{theorem}
\begin{proof}
A well known corollary in Galois theory\footnote{See the corollary to Theorem 46 in \cite{Artin}.} tells us that the zeros of a polynomial of odd prime degree, which is irreducible over a real number field, are expressible in terms of radicals if and only if either the polynomial has only one real zero, or all of its zeros are real. We have already ascertained that the polynomials $F_{z_P}$ appearing in Equation (\ref{radicals}) each have $3$ distinct real roots. Thus the equations $F_{z_P}=0,$ and consequently $P_{z_P}=0,$ are not soluble by radicals. This proves the first assertion.
We prove the second assertion indirectly: suppose that the Harborth graph were constructible by compass and ruler. This would imply that the vertex $A$, and thus its $x$-coordinate $x_A$ and furthermore $x_A^2,$ would be constructible by compass and ruler. As a consequence of this, the order of the Galois group of the corresponding minimal polynomial, which is $F_{x_A}$, would be a power of $2$, and the Galois group would be soluble\footnote{See Theorem 47 in \cite{Artin}.}. This, as we have just seen above, is absurd, completing the proof of the theorem.
\end{proof}
\bibliographystyle{amsplain}
|
2,869,038,155,038 | arxiv | \section{Introduction}
It is known that if the Riemann hypothesis is true, then there is a positive semi--definite
matrix $H(x)=\begin{pmatrix}h_{1}(x)&h_2(x)\\h_2(x)&h_3(x)\end{pmatrix}$ such that the spectrum of:
\begin{align*}
\begin{pmatrix}0&-1\\1&0\end{pmatrix}\begin{pmatrix}y_1'(x)\\y_2'(x)\end{pmatrix}
=zH(x)\begin{pmatrix}y_1(x)\\y_2(x)\end{pmatrix}
\end{align*}
(with self--adjoint boundary conditions on an interval $[a,b]$; here $b\leq\infty$) is the same as the imaginary parts of the
zeros of the Riemann $\zeta$ function on the line $\frac{1}{2}+it$. If the coefficients are smooth enough, then
this can be transformed to a Schr\"odinger operator on $[a,b]$ that squares the eigenvalues (see for example
\cite{Lag2006,Lag2009}).
The purpose of this paper is to indicate some properties that this potential -- if it exists -- must have. It
is well--known that if $Z(T)$ represents the number of zeros of magnitude less than $\abs{T}$ of the $\zeta$ function,
then:
\begin{align*}
Z(T) = \frac{1}{\pi}T\log{T} + \frac{1}{\pi}(-2\log{2\pi} - 1)T + O(\log{T}).
\end{align*}
If a potential, $V$, exists whose eigenvalues are the squares of the imaginary parts of the zeros of $\zeta$, and if
$N(T,L_V)$ represents the number of eigenvalues less than $T$, then the Weyl--asymptotics would satisfy:
\begin{align*}
N(T,L_V) = Z(\sqrt{T}) = \frac{1}{\pi}\sqrt{T}\log{\sqrt{T}} + \frac{1}{\pi}(-2\log{2\pi} - 1)\sqrt{T} + O(\log{\sqrt{T}})
\end{align*}
We use this to give some properties that such a potential must have to match the asymptotics in this way.
Much of the work in this paper builds off of work by Lagarias in papers such as \cites{Lag2006,Lag2009}.
In \cite{Lag2009}, it is shown that if $V(t)=Q_k(t):=\frac{1}{4}e^{2t} + ke^{t}$, if $L_V:=L_{V,x_0,\alpha}$ is
the associated Schr\"odinger operator $L_Vf = -f'' +Vf$ on the interval $[x_0,\infty)$, and if $N(T;V,x_0,\alpha)$ is the number of
eigenvalues of $L_V$ less than $\abs{T}$, then
\begin{thm}\label{T:lag}
For $L_{Q_{k},x_0,\alpha}$ the Weyl Asymptotics satisfy:
\begin{align}\label{E:lag_asym}
N(T;V,x_0) = \frac{1}{\pi}\sqrt{T}\log{\sqrt{T}} + \frac{1}{\pi}(2\log2 - 1 - x_0)\sqrt{T} + O(1),
\end{align}
as $T\to\infty$. The constant in the $O(1)$ depends on $k$ and $x_0$.
\end{thm}
There is an $O(1)$ error term here, whereas the Weyl--asymptotics for a potential that encodes the zeros
of $\zeta$ should include an $O(\log\sqrt{T})$ term. We show in this paper that if $V$ is ``close'' to the
potential $\frac{1}{4}e^{2t}+ke^{t}$, then there is no $O(\log\sqrt{T})$ term in the Weyl--asymptotics. On the other hand,
we show that if $V$ is ``too far'' from this potential, then there is an error term that is bigger than $O(\log\sqrt{T})$. These
vague categories of ``too close'' and ``too far'' (which are defined below) do not form a dichotomy, and so a
potential -- if it exists -- that matches the asymptotics appropriately must be neither ``too close'' nor ``too far''.
Our first theorem shows that any perturbation by a function
$\varepsilon(t)$ that satisfies $\abs{\varepsilon(t)}\lesssim e^{t}$ will not really change the asymptotics (below,
$Q_0(t)=\frac{1}{4}e^{2t}$):
\begin{thm}\label{T:converse}
Let $\varepsilon(t)$ be a function that satisfies $\abs{\varepsilon(t)}\lesssim e^{t}$. Then there holds:
\begin{align*}
N(T; Q_0 + \varepsilon, x_0, \alpha)
= \frac{1}{\pi}\sqrt{T}\log{\sqrt{T}} + \frac{1}{\pi}(2\log2 - 1 - x_0)\sqrt{T} + O(1),
\end{align*}
where the constant in the $O(1)$ depends on $x_0$ and $\varepsilon$. In particular:
\begin{align*}
\abs{N(T; Q_0,x_0, \alpha) - N(T; Q_0 + \varepsilon, x_0, \alpha)} = O(1),
\end{align*}
and so no $\log{\sqrt{T}}$ is introduced.
\end{thm}
On the other hand, we prove that perturbations
of $Q_0$ by functions that are bigger (resp. smaller) than $e^{(1+\varepsilon)t}$ (resp. $-e^{(1+\varepsilon)t}$) for some
$\varepsilon>0$ introduce a term that is on the order of (at least) $T^{\frac{\varepsilon}{4}}\sqrt{\log T}$:
\begin{thm}\label{T:mt}
Let $V$ be a function that satisfies $V(t) \geq \frac{1}{4}e^{2t} + \frac{1}{4}e^{(1+\varepsilon)t}$
(or $V(t) \leq \frac{1}{4}e^{2t} - \frac{1}{4}e^{(1+\varepsilon)t}$) for some $\varepsilon>0$. Then we have:
\begin{align*}
\abs{N(T; V, x_0, \alpha) - N(T; Q_{0}, x_0, \alpha)} \gtrsim T^{\frac{\varepsilon}{4}}\sqrt{\log T}.
\end{align*}
\end{thm}
Finally, we prove that if $V(t)$ is sub--exponential on a fixed percentage of every interval $[x_0, R]$, then
$N(T;V,x_0)$ does not even match the asymptotics to first order:
\begin{thm}\label{T:bigoh}
Let $V$ be a real potential and suppose there is a sub--exponential function $W(t)$ (by
sub--exponential we mean $\frac{\log W(t)}{t}\to 0$ as $t\to\infty$) such
that there is a positive number $\delta$ such that for all
sufficiently large $R$ there holds $\abs{\{t: V(t) < W(t)\}
\cap[x_0,R]}> \delta R$. Then:
\begin{align*}
\frac{N(T;V,x_0)}{\sqrt{T}\log T}\to\infty
\textnormal{ as }
T\to\infty.
\end{align*}
\end{thm}
Thus, if our goal is to find a potential $V$ such that
$N(T;V,x_0)= \frac{1}{\pi}\sqrt{T}\log{\sqrt{T}} + \frac{1}{\pi}(2\log2 - 1 - x_0)\sqrt{T} + O(\log{T})$, we
can summarize our findings as follows:
\begin{itemize}
\item [(a)] Theorem \ref{T:bigoh} gives the heuristic that if $N(T;V,x_0) = O(\sqrt{T}\log \sqrt{T})$,
then $V(t)$ must be at least on the order of $e^{at}$ for some $a>0$. (More precisely, it can't be
dominated by a sub--exponential function on sets of fixed percentages of intervals $[0,R]$).
\item [(b)] Theorem \ref{T:mt} says that for potentials of the form $\frac{1}{4}e^{2t} + \varepsilon(t)$ where
$\varepsilon(t) > e^{(1+\varepsilon)t}$ (or $\varepsilon(t)< -e^{(1+\varepsilon)t}$), $N(T;V,x_0)$ will not have the desired
asymptotics. In particular, if $V(t)\simeq e^{at}$ and $a\neq 2$, then $N(T;V,x_0)$ does not have
the desired asymptotics. Thus, the heuristic is that if $V$ is a potential with the desired spectral
asymptotics, then $V(t)$ must be a small perturbation of $\frac{1}{4}e^{2t}$.
\item [(c)] Finally, Theorem \ref{T:converse} shows that small perturbations of $\frac{1}{4}e^{2t}$ will not
produce the desired asymptotics.
\end{itemize}
The theme of this paper is then that a potential that gives the desired Weyl asymptotics is not just a
``small perturbation'' of a well--established potential (like the Morse potential). It seems that
any potential that gives the desired Weyl asymptotics is going to have to oscillate wildly
between sub and super exponential functions, will have singularities, and will probably not be a function
(that is, it is a distribution).
Additionally, to make much more progress in this area, a version of Theorem \ref{T:speca}
that puts fewer restrictions on $V$ is needed. Theorem \ref{T:converse} is a step in this direction
and hopefully can be extended to other non--exponential potentials.
For the rest of the paper, we set $\alpha=0$ and we don't write the ``$\alpha$''
in $N(T;V, x_0, \alpha)$. The proofs of Theorems \ref{T:converse}, \ref{T:bigoh}, and
\ref{T:mt} are in the following sections. Section 2 also contains some material that is
used throughout the paper.
\section*{Acknowledgment}
Alex Poltoratski introduced me to this area and told me about several lines of investigation
and ideas to pursue in this area. I would like to thank him for this and for discussing
some of the thoughts herein.
\section{Proof of Theorem \ref{T:converse}}
The proof of Theorem \ref{T:converse} is based on a very simple observation along with the
integral estimate in \cite{Lag2009}. We give some background information on which the simple
observation is based. This observation is also used in other parts of this paper.
We have this basic theorem due to Sturm; see for example \cites{Titchmarsh1946, LevitanSargsjan1970}
\begin{thm}\label{T:st}
Let $u$ be a (any) solution to:
\begin{align*}
u'' + (\lambda - g)u = 0
\end{align*}
and let $v$ be a (any) solution to:
\begin{align*}
v'' + (\lambda - h)v = 0,
\end{align*}
where $h(x) < g(x)$ (so that $\lambda - g(x) < \lambda - h(x)$). Between any two zeros of $u$, there is a
zero of $v$. In particular:
\begin{align*}
\#\{\textnormal{zeros of } v\} \geq \#\{\textnormal{zeros of } u\} - 1
\end{align*}
\end{thm}
We have the following corollary (see also, for example, \cite{Simon2005}):
\begin{cor}\label{C:evalests}
Let $g,h$ be as above. Consider the same equations as above but consider only solutions that satisfy a
boundary condition at $0$ and are in $L^2$ (that is, we have an eigenvalue problem; we consider only the
``limit point'' case). Then for every $a>0$ we have:
\begin{align*}
\#\{\textnormal{eigenvalues of second problem in } [0,a]\}
\geq \#\{\textnormal{eigenvalues of first problem in } [0,a]\} + O(1).
\end{align*}
\end{cor}
\begin{proof}
First, consider the two differential equations:
\begin{align}\label{E:bp}
y'' + (\lambda - g)y = 0
\end{align}
and
\begin{align}\label{E:sp}
y'' + (\lambda - h)y = 0.
\end{align}
We want to consider solutions to these equations that satisfy $y(0)=0$ and $y'(0)=1$. Let $y(x,\lambda; g)$
and $y(x, \lambda; h)$ denote solutions to the respective problems. We consider the limit point case and
we have that $y(x,\lambda; g)\in L^2$ if and only if $\lambda$ is an eigenvalue of \eqref{E:bp}; and similarly
for $y(x,\lambda; h)$ and \eqref{E:sp}.
For a fixed $\lambda$, we know from Theorem \ref{T:st} that
$y(x,\lambda; h)$ has at least as many zeros as $y(x,\lambda; g)$ (up to an $O(1)$ error). Furthermore -- also
by Theorem \ref{T:st} -- if
$\lambda_{k}(g)$ and $\lambda_{k+1}(g)$ are the $k^{\textnormal{th}}$ and $(k+1)^{\textnormal{th}}$ eigenvalues
for \eqref{E:bp}, then $y(x, \lambda; g)$ has $k$ zeros for $\lambda_{k}(g) < \lambda < \lambda_{k+1}(g)$; that
is, $y(x, \lambda; g)$ gains a zero only when $\lambda$ is an eigenvalue (at which point
it gains exactly one zero). Of course, similar statements are true for $y(x, \lambda; h)$.
Now, to prove Corollary \ref{C:evalests} we reason as follows. Starting with $\lambda = 0$, increase
$\lambda$ in a continuous manner. If $\lambda$ passes through two eigenvalues of problem \eqref{E:bp} without
passing through an eigenvalue of \eqref{E:sp}, then we have a contradiction to Theorem \ref{T:st}.
Indeed, if $\lambda = \lambda_2(g)$, and $\lambda_{0}(h) > \lambda_{2}(g)$ then we have that
$y(x, \lambda; g)$ has $2$ zeros while $y(x, \lambda; h)$ has no zeros (Theorem \ref{T:st} says that
$y(x, \lambda; h)$ should have at least one zero). Continue increasing $\lambda$ this way, noting that
whenever it passes through an eigenvalue of problem \eqref{E:bp}, it must pass through at least one
eigenvalue of problem \eqref{E:sp}.
\end{proof}
We now prove Theorem \ref{T:converse}.
\begin{proof}[Proof of Theorem \ref{T:converse}]
By assumption, there is a $C>0$ such that $-Ce^{t} < \varepsilon(t) < Ce^{t}$, and hence $\frac{1}{4}e^{2t} - Ce^{t} < Q_0(t) + \varepsilon(t) < \frac{1}{4}e^{2t} + Ce^{t}$.
By the theorem of Lagarias $N(T; \frac{1}{4}e^{2t} \pm C e^{t}, x_0)
= \frac{1}{\pi}\sqrt{T}\log{\sqrt{T}} + \frac{1}{\pi}(2\log2 - 1 - x_0)\sqrt{T} + O(1)$. By Corollary \ref{C:evalests}:
\begin{align*}
N(T; \frac{1}{4}e^{2t} + Ce^{t}, x_0)
\leq N(T; Q_0 + \varepsilon, x_0)
\leq N(T; \frac{1}{4}e^{2t} - Ce^{t}, x_0).
\end{align*}
Since the outer two terms are equal, up to an $O(1)$ error, this implies that $N(T; Q_0 + \varepsilon, x_0)
= \frac{1}{\pi}\sqrt{T}\log{\sqrt{T}} + \frac{1}{\pi}(2\log2 - 1 - x_0)\sqrt{T} + O(1)$.
\end{proof}
\section{Proof of Theorem \ref{T:mt}}
To prove Theorem \ref{T:mt}, we use Weyl's law. This is a well--known theorem with several
variants. We quote the one from \cite{Lag2009}:
\begin{thm}\label{T:speca}
Let $V(t) = Q_k(t)$ or $\frac{1}{4}e^{2t} + \frac{1}{4}e^{(1+\varepsilon)t}$. Then there holds:
\begin{align}\label{E:weyl_law_2}
N(T; V, x_0)
= \frac{1}{\pi}\int_{x_0}^{V^{-1}(T)}\sqrt{T - V(t)}dt + O(1).
\end{align}
\end{thm}
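Before turning to the proofs, we record a quick numerical illustration of \eqref{E:weyl_law_2}: for the pure Morse term $Q_0(t)=\frac{1}{4}e^{2t}$ the Weyl integral can be compared directly with the asymptotic formula of Theorem \ref{T:lag}. The following Mathematica sketch uses the hypothetical test values $x_0=0$ and $T=10^6$.
\begin{verbatim}
(* Weyl integral for V = Exp[2t]/4 on [x0, V^{-1}(T)] versus the
   asymptotic formula; x0 = 0 and T = 10^6 are test values only. *)
x0 = 0; T = 10^6;
weyl = NIntegrate[Sqrt[T - Exp[2 t]/4], {t, x0, Log[4 T]/2}]/Pi;
asym = (Sqrt[T] Log[Sqrt[T]] + (2 Log[2] - 1 - x0) Sqrt[T])/Pi;
{weyl, asym}  (* the two values agree up to the O(1) error term *)
\end{verbatim}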
To prove Theorem \ref{T:mt} we use Weyl's law above and the following two lemmas.
\begin{lm}\label{L:ub}
Let $V$ be a potential such that there is an $\varepsilon>0$ such that:
\begin{align*}
V(t) > \frac{1}{4}e^{2t} + \frac{1}{4}e^{(1+\varepsilon)t}.
\end{align*}
Then
\begin{align*}
\abs{N(T; Q_0, x_0) - N(T; V, x_0)} \gtrsim T^{\frac{\varepsilon}{4}}\sqrt{\log T}.
\end{align*}
\end{lm}
\begin{proof}
First, by Corollary \ref{C:evalests}, $N(T; Q_0, x_0) > N(T; V, x_0)$ (since $Q_0 < V$). Also,
since $V(t) > \frac{1}{4}e^{2t} + \frac{1}{4}e^{(1+\varepsilon)t}$, by Corollary \ref{C:evalests}, it follows that
$N(T; \frac{1}{4}e^{2t} + \frac{1}{4}e^{(1+\varepsilon)t}, x_0) > N(T; V, x_0)$ and so:
\begin{align*}
\abs{N(T; Q_0, x_0) - N(T; V, x_0)}
= N(T; Q_0, x_0) - N(T; V, x_0)
> N(T; Q_0, x_0) - N(T; \frac{1}{4}e^{2t} + \frac{1}{4}e^{(1+\varepsilon)t}, x_0),
\end{align*}
and so we get a lower bound on this last term. Letting $W(t) = \frac{1}{4}(e^{2t}+e^{(1+\varepsilon)t})$,
by \eqref{E:weyl_law_2}, this is equal to:
\begin{align*}
\int_{t=0}^{Q_0^{-1}(T)}\sqrt{T-Q_0(t)}dt
-\int_{t=0}^{W^{-1}(T)}\sqrt{T-W(t)}dt.
\end{align*}
For the rest of the proof, let $T=\frac{1}{4}e^{2u}$ so that $Q_{0}^{-1}(T) = u$; also let $p=W^{-1}(T)$. Thus,
we can write the integral above as:
\begin{align}\label{E:split}
\frac{1}{2}\int_{t=0}^{p}\left(\sqrt{e^{2u}-e^{2t}} - \sqrt{e^{2u}-e^{2t}-e^{(1+\varepsilon)t}}\right) dt
+ \frac{1}{2}\int_{t=p}^{u}\sqrt{e^{2u}-e^{2t}}dt.
\end{align}
We estimate the first integral in \eqref{E:split}. (As a side note, we mention the
second integral is ``small'' and does not contribute much). We first estimate $u-p$. Since $p$ satisfies
$e^{2p} + e^{(1+\varepsilon)p} = e^{2u}$, by taking $\log$ on both sides, we find:
\begin{align*}
2p + \left(\log{(e^{2p} + e^{(1+\varepsilon)p})}-\log{e^{2p}}\right) = 2u,
\end{align*}
so that $2(u-p) = \left(\log{(e^{2p} + e^{(1+\varepsilon)p})}-\log{e^{2p}}\right)$. The quantity in parentheses
is estimated as:
\begin{align*}
\log{(e^{2p} + e^{(1+\varepsilon)p})}-\log{e^{2p}}
=\log(1+e^{(\varepsilon-1)p})
\simeq e^{(\varepsilon - 1)p}.
\end{align*}
Thus, $u-p \simeq e^{(\varepsilon-1)p}$. Let
$p_{\delta}=p-\delta p$, where $\delta>0$ is a (small) number, to be chosen later, that depends only
on $\varepsilon$. We will show that:
\begin{align}\label{E:toshow}
\int_{p_{\delta}}^{p}\sqrt{e^{2u} - e^{2t}} -
\sqrt{e^{2u}-e^{2t}-e^{(1+\varepsilon)t}}dt
\gtrsim \sqrt{p}e^{\frac{\varepsilon}{2} p}.
\end{align}
Note that $u \simeq \log{T}$ and $e^{\frac{\varepsilon}{2}u}\simeq T^{\frac{\varepsilon}{4}}$. Thus, this estimate
implies the claim since $\sqrt{p}e^{\frac{\varepsilon}{2}p} - \sqrt{u}e^{\frac{\varepsilon}{2}u}\to 0$ as $T\to\infty$.
The integrand is estimated as:
\begin{align*}
\sqrt{e^{2u} - e^{2t}} - \sqrt{e^{2u}-e^{2t}-e^{(1+\varepsilon)t}}
&=\frac{e^{(1+\varepsilon)t}}{\sqrt{e^{2u} - e^{2t}} + \sqrt{e^{2u}-e^{2t}-e^{(1+\varepsilon)t}}}
\\& \geq \frac{e^{(1+\varepsilon)t}}{2(\sqrt{e^{2u} - e^{2p_\delta}})}.
\end{align*}
Additionally, we have:
\begin{align*}
\int_{p_\delta}^{p}e^{(1+\varepsilon)t}dt
= \frac{1}{1+\varepsilon}\left(e^{(1+\varepsilon)p} - e^{(1+\varepsilon)p_{\delta}}\right)
\geq \frac{1}{1+\varepsilon}e^{(1+\varepsilon)p_\delta}(p-p_\delta)
= \frac{\delta}{1+\varepsilon}e^{(1+\varepsilon)(1-\delta)p}p.
\end{align*}
The ``$\geq$'' above follows from the mean value theorem.
Furthermore using the mean value theorem again, we have
\begin{align*}
e^{2u} - e^{2p_\delta}
\lesssim e^{2u}(u-p_\delta)
= e^{2u}((u-p) + \delta p)
= e^{2u}(u - p(1-\delta)).
\end{align*}
Putting together these estimates, we find:
\begin{align*}
\int_{p_{\delta}}^{p}\sqrt{e^{2u} - e^{2t}} -
\sqrt{e^{2u}-e^{2t}-e^{(1+\varepsilon)t}}dt
&\gtrsim \frac{\delta}{1+\varepsilon}
\frac{e^{(1+\varepsilon)(1-\delta)p}\,p}{\sqrt{e^{2u}(u - p(1-\delta))}}
\\&\gtrsim \frac{\sqrt{\delta p}}{1+\varepsilon}
e^{p-u}e^{\varepsilon p - (1+\varepsilon)\delta p}.
\end{align*}
Now, choose $\delta$ so small that $(1+\varepsilon)\delta < \varepsilon/2$, so that the last quantity above is
bigger than $\frac{\sqrt{\delta p}}{1+\varepsilon}e^{p-u}e^{\frac{\varepsilon}{2} p}$ (since $p-u\to 0$ so $e^{p-u}\to 1$).
This completes the proof.
\end{proof}
\begin{rem}
Note that if $\varepsilon =0$, the estimate above is:
\begin{align*}
\frac{\sqrt{\delta p}}{1+\varepsilon}
e^{p-u}e^{\varepsilon p - (1+\varepsilon)\delta p}
=\frac{\sqrt{\delta p}}{1+\varepsilon}
e^{p-u}e^{-\delta p}
\to 0.
\end{align*}
\end{rem}
\begin{lm}\label{L:lb}
Let $V$ be a potential such that there is an $\varepsilon>0$ such that:
\begin{align*}
V(t) < \frac{1}{4}e^{2t} - \frac{1}{4}e^{(1+\varepsilon)t}.
\end{align*}
Then
\begin{align*}
\abs{N(T; V, x_0)-N(T; Q_0, x_0)} \gtrsim T^{\frac{\varepsilon}{4}}\sqrt{\log T}.
\end{align*}
\end{lm}
\begin{proof}
Reasoning as above leads us to find a lower bound on:
\begin{align*}
\int_{t=u_\delta}^{u}\sqrt{e^{2u} - (e^{2t} - e^{(1+\varepsilon)t})}-\sqrt{e^{2u} - e^{2t}}dt,
\end{align*}
where $u_\delta = u -\delta u$ and $\delta$ depends on $\varepsilon$ and will be chosen later. We find a lower bound
on the integrand as:
\begin{align*}
\sqrt{e^{2u} - (e^{2t} - e^{(1+\varepsilon)t})}-\sqrt{e^{2u} - e^{2t}}
\gtrsim \frac{e^{(1+\varepsilon)t}}{\sqrt{e^{2u}-e^{2t} + e^{(1+\varepsilon)t}}}
\geq \frac{e^{(1+\varepsilon)t}}{\sqrt{e^{2u} + e^{(1+\varepsilon)u}}}
\simeq \frac{e^{(1+\varepsilon)t}}{e^{u}}.
\end{align*}
And so:
\begin{align*}
\int_{t=u_\delta}^{u}\sqrt{e^{2u} - (e^{2t} - e^{(1+\varepsilon)t})}-\sqrt{e^{2u} - e^{2t}}dt
&\gtrsim e^{-u}\int_{t=u_\delta}^{u}e^{(1+\varepsilon)t}dt
\geq\frac{\delta u}{1+\varepsilon} e^{\varepsilon u - (1+\varepsilon)\delta u}.
\end{align*}
As above, choose $\delta$ small enough so that $(1+\varepsilon)\delta < \frac{1}{2}\varepsilon$.
\end{proof}
\section{Proof of Theorem \ref{T:bigoh}}
To prove Theorem \ref{T:bigoh} we use the following
well--known theorem:
\begin{thm}\label{T:weyl_law}
Let $V$ be a positive potential. Then there holds:
\begin{align*}
N(T; V,x_0) \simeq \abs{\{(t,\xi): \abs{\xi}^2 + V(t) < T\}\cap\{(t,\xi):t\geq x_0\}}.
\end{align*}
\end{thm}
We also make the following observation. If $W(t)$ is sub--exponential (by which we mean
that $\frac{\log W(t)}{t}\to 0$ as $t\to\infty$) then there is a function $\varepsilon(t)$ with
$\varepsilon(t)\to 0$, $t\varepsilon(t)\to \infty$ and $W(t) = e^{t\varepsilon(t)}$. Indeed, we easily compute
$\varepsilon(t)$ by noting that $W(t) = e^{t\frac{\log W(t)}{t}}$ and so $\varepsilon(t)=\frac{\log W(t)}{t}$.
Since $W(t)\to\infty$ we observe that $t\varepsilon(t)=\log W(t)\to\infty$. Furthermore,
since $W(t)$ is sub--exponential, we conclude that $\frac{\log W(t)}{t}\to 0$ as $t\to\infty$.
The proof of Theorem \ref{T:bigoh} will follow from this observation and the following lemma.
\begin{lm}
Let $V$ and $W$ be as in Theorem \ref{T:bigoh}. Let $\varepsilon(t)$ be a function that satisfies
$\varepsilon(t)\to 0$ and $t\varepsilon(t)\to\infty$ as $t\to\infty$. Further assume that there is a
positive number $\delta < 1$ such that for all sufficiently large $R$ there holds
$\abs{\{t: V(t) < Ce^{t\varepsilon(t)}\}\cap[x_0,R]}> \delta R$.
Then:
\begin{align*}
\frac{N(T;V,x_0)}{\sqrt{T}\log T}\to\infty
\textnormal{ as }
T\to\infty.
\end{align*}
\end{lm}
\begin{proof}
Let $\psi(t):=t\varepsilon(t)$ and note that by the properties of $\psi(t)$ we have that
$\frac{\psi^{-1}(t)}{t}\to\infty$ as $t\to\infty$.
By Theorem \ref{T:weyl_law} we have:
\begin{align*}
N(T;V,x_0)
\simeq \abs{\{(t,\xi):\xi^2 + V(t) < T\}}
&> \abs{\{(t,\xi):\xi^2 + V(t) < \frac{T}{2}\} \cap \{(t,\xi): V(t)<Ce^{t\varepsilon(t)}\}}
\\& > \abs{\{(t,\xi):\xi^2 + Ce^{t\varepsilon(t)} < \frac{T}{2}\} \cap \{(t,\xi): V(t)<Ce^{t\varepsilon(t)}\}}
\\&=\int_{t=x_0}^{\psi^{-1}\left(\log\frac{T}{2C}\right)}
\left(T-Ce^{t\varepsilon(t)}\right)^{\frac{1}{2}}1\!\!1_{\{t:V(t)<Ce^{t\varepsilon(t)}\}}(t)dt.
\end{align*}
Now, on the set over which the integral above is taken, we have that $Ce^{t\varepsilon(t)}<\frac{T}{2}$ and
so $T-Ce^{t\varepsilon(t)}\simeq T$. Furthermore, the measure of the set over which the integral is being
taken is at least $\delta \psi^{-1}\left(\log\frac{T}{2C}\right)$. Thus we have that:
\begin{align}\label{E:bo}
N(T;V,x_0)
\gtrsim \delta \psi^{-1}\left(\log\frac{T}{2C}\right) \sqrt{T}.
\end{align}
To prove the desired claim, we need to show that $\frac{\psi^{-1}(\log\frac{T}{2C})}{\log T} \to\infty$.
This is equivalent to showing that $\frac{\psi^{-1}(T)}{T}\to\infty$ as $T\to\infty$. But this is
true because $\frac{\psi(T)}{T}\to 0$ as $T\to\infty$. Thus, this completes the proof.
\end{proof}
\section{Remarks and Complements}
In this section, we make some concluding remarks and extend some of the results above.
\begin{prop}
If $V(t)$ is ``exponential order'' (by which we mean there are positive constants $C,a,b$ such that
$\frac{1}{C}e^{at}<V(t)<C e^{bt}$) then $N(T;V)\simeq \sqrt{T}\log{T}$.
\end{prop}
\begin{proof}
First, we note that by Corollary \ref{C:evalests}, we only need to show that $N(T;e^{kt})\simeq \sqrt{T}\log{T}$
for all $k>0$. To do this, we use Theorem \ref{T:speca}. Thus:
\begin{align*}
N(T;e^{kt})
\simeq \int_{x_0}^{\frac{1}{k}\log T}\left(T-e^{kt}\right)^{\frac{1}{2}}dt
\gtrsim \int_{x_0}^{\frac{1}{2k}\log T} \sqrt{T} dt
\simeq \sqrt{T}\log T.
\end{align*}
It is even easier to show that $N(T;e^{kt})\lesssim \sqrt{T}\log T$.
\end{proof}
Here is a proposition that says that if $V$ is super--exponential, then the Weyl Asymptotics are too small:
\begin{prop}
Let $V(t)$ be a super--exponential potential (by which we mean $\frac{\log V(t)}{t}\to\infty$ as $t\to\infty$)
then:
\begin{align*}
\frac{N(T;V)}{\sqrt{T}\log T}\to 0
\textnormal{ as }
T\to\infty.
\end{align*}
\end{prop}
\begin{proof}
Similar to above, we assume that $V(t)>e^{t\varepsilon(t)}$ where $\varepsilon(t)\to\infty$; let $\psi(t)=t\varepsilon(t)$. By Corollary
\ref{C:evalests}, we get an upper bound on $N(T; e^{t\varepsilon(t)})$:
\begin{align*}
\int_{x_0}^{\psi^{-1}(\log T)}\left(T-e^{t\varepsilon(t)}\right)^{\frac{1}{2}}dt
\leq \sqrt{T}\psi^{-1}(\log T).
\end{align*}
Now, $\frac{\psi^{-1}(t)}{t}\to 0$ as $t\to\infty$ since $\frac{\psi(t)}{t}\to\infty$ as $t\to\infty$. Thus,
\begin{align*}
\frac{\sqrt{T}\psi^{-1}(\log T)}{\sqrt{T}\log T}=\frac{\psi^{-1}(\log T)}{\log T}\to 0.
\end{align*}
\end{proof}
Finally, we briefly discuss an extension to Theorem \ref{T:bigoh}. Recall that in the proof of
Theorem \ref{T:bigoh} we had estimate \eqref{E:bo}:
\begin{align*}
N(T;V,x_0)
\gtrsim \delta \psi^{-1}\left(\log\frac{T}{2C}\right) \sqrt{T}.
\end{align*}
Now let $\delta$ be a function that depends on $R$. That is, we know that $V$ is sub--exponential
on sets of size $\delta(R) R$ on the intervals $[0,R]$. Then the above estimate is:
\begin{align*}
N(T;V,x_0)
\gtrsim \delta(\log\frac{T}{2C}) \psi^{-1}\left(\log\frac{T}{2C}\right) \sqrt{T}.
\end{align*}
Thus, $\delta$ can be a decreasing function, so long as:
\begin{align*}
\frac{\delta(\log\frac{T}{2C}) \psi^{-1}\left(\log\frac{T}{2C}\right)}{\log T}\to\infty.
\end{align*}
So, for example, if $\varepsilon(t)=t^{-\varepsilon}$, then $\psi(t)=t^{1-\varepsilon}$ and $\psi^{-1}(t)=t^{1+\gamma}$ for
some $\gamma>0$. Then the estimate above is:
\begin{align*}
\frac{\delta(\log\frac{T}{2C}) \psi^{-1}\left(\log\frac{T}{2C}\right)}{\log T}
=\frac{\delta(\log\frac{T}{2C}) \left(\log\frac{T}{2C}\right)^{1+\gamma}}{\log T}
=\delta(\log\frac{T}{2C}) \left(\log\frac{T}{2C}\right)^{\gamma}.
\end{align*}
So, this still goes to $\infty$ if, for example, $\delta(t)>t^{-\frac{\gamma}{2}}$.
\begin{bibdiv}
\begin{biblist}
\bib{CodLev1955}{book}{
author={Coddington, Earl A.},
author={Levinson, Norman},
title={Theory of ordinary differential equations},
publisher={McGraw-Hill Book Company, Inc., New York-Toronto-London},
date={1955}
}
\bib{Lag2006}{article}{
author={Lagarias, Jeffrey C.},
title={Hilbert spaces of entire functions and Dirichlet $L$-functions},
conference={
title={Frontiers in number theory, physics, and geometry. I},
},
book={
publisher={Springer, Berlin},
},
date={2006},
pages={365--377}
}
\bib{Lag2009}{article}{
author={Lagarias, Jeffrey C.},
title={The Schr\"odinger operator with Morse potential on the right
half-line},
journal={Commun. Number Theory Phys.},
volume={3},
date={2009},
number={2},
pages={323--361}
}
\bib{LevitanSargsjan1970}{book}{
author={Levitan, B. M.},
author={Sargsjan, I. S.},
title={Introduction to spectral theory: selfadjoint ordinary differential
operators},
note={Translated from the Russian by Amiel Feinstein;
Translations of Mathematical Monographs, Vol. 39},
publisher={American Mathematical Society, Providence, R.I.},
date={1975},
pages={xi+525}
}
\bib{Simon2005}{article}{
author={Simon, Barry},
title={Sturm oscillation and comparison theorems},
conference={
title={Sturm-Liouville theory},
},
book={
publisher={Birkh\"auser, Basel},
},
date={2005},
pages={29--43}
}
\bib{Titchmarsh1946}{book}{
author={Titchmarsh, E. C.},
title={The Theory of the Riemann Zeta-Function},
publisher={Oxford, at the Clarendon Press},
date={1951},
pages={vi+346}
}
\end{biblist}
\end{bibdiv}
\end{document}
|
2,869,038,155,039 | arxiv | \section{Introduction}\label{Section:Introduction}
As all astronomical instrumentation should be driven by the scientific demand, such is the case for the HIgh-Resolution Mid-infrarEd Spectrometer (HIRMES), planned for commissioning on the Stratospheric Observatory For Infrared Astronomy \citep[SOFIA; ][]{2014ApJS..212...24T} in 2019 as a facility-class instrument. HIRMES is currently undergoing the Integration and Testing (I\&T) phase of its development at the NASA Goddard Space Flight Center (GSFC) in partnership with Cornell University, with Samuel~H.~(Harvey)~Moseley as the Principal Investigator. In September 2016, HIRMES was selected to be SOFIA's 3$^{rd}$ Generation Instrument. Since then, it has been following an intensive build schedule, working closely with the NASA/SOFIA Science Instrument Development Team at the NASA Ames Research Center (ARC).
\vspace{0.5cm}
Following the goal of understanding how the Earth obtained its water, HIRMES will provide answers to fundamental questions currently being asked in proto-planetary science. Among these questions, HIRMES will tackle: (a) How does the mass of the disk evolve during planetary formation? (b) What is the distribution of oxygen, water ice, and water vapor in different phases of planet formation? (c) What are the kinematics of water vapor and oxygen in proto-planetary disks? In answering these questions, HIRMES will discover where, and in what form, the raw materials for life reside, and how planetary systems like our own evolve.
HIRMES will quantitatively answer these questions by providing low (R$\sim$600) to very high (R$\sim$100,000) spectral resolving power over the critical spectral range of 25 -- 122 $\mu$m in the mid- to far-infrared waveband. HIRMES combines grating dispersive spectroscopy and Fabry-Perot tunable narrow-band filters with high efficiency background-limited direct detectors. The instrument spectral resolution is designed to match the width of the spectral lines, significantly reducing the background noise, to achieve the maximum possible sensitivity for mid-far infrared spectroscopy with SOFIA. Providing this combination of sensitivity and spectral resolution, HIRMES will open up a unique and useful window on the evolution of planetary systems to the astronomical community. HIRMES' order-of-magnitude sensitivity improvement over SOFIA's current capabilities is crucial for this science program, increasing the number of observable Solar mass proto-planetary systems from a couple to hundreds. Furthermore, the instrument has utility far beyond the aforementioned investigations, providing tools for a range of Galactic studies, such as stellar outflows and their impact on the interstellar medium, and extragalactic studies, such as the strength and shape of important diagnostic emission lines.
The instrument's grating mode spectroscopy is a powerful tool for the study of water ice emission in a wide range of objects, an important capability largely absent for more than 20 years since the Kuiper Airborne Observatory (KAO) and Infrared Space Observatory (ISO).
\subsection{Science drivers}\label{Subsection:Science_Drivers}
\begin{figure}
\centering
\centerline{\includegraphics[width=\linewidth]{_proto-planetary_disk_science}}
\caption{An artistic representation of a proto-planetary disk, sliced edge-on, with a size scale similar to that of our Solar System [Mercury $\simeq$ 0.4 AU, Earth = 1 AU, Saturn $\simeq$ 9.5 AU]. The temperature decreases with distance from the star and with thermal shielding, so any water present transitions through its various states accordingly. Analyzing the three key spectral features shown here at high resolution (H$_2$O, [OI], and HD) allows HIRMES to probe different regions of disks and their properties.}
\label{fig:PPD}
\end{figure}
The past decade has seen considerable advancements in our understanding of exoplanets: their diverse properties (e.g. masses, size, orbits) and fundamental trends (e.g. with metallicity or host mass). These observed properties place important boundary conditions on the key processes that govern the formation and evolution of planetary systems. Analogous studies of the proto-planetary disks orbiting young stars provide the vital initial conditions for making planetary systems, as well as the early coevolution and interaction of planets and their birth environments. The observed properties of both disks and exoplanets are needed to inform, test, and refine models of the planet formation process \citep[][and references therein]{2015PASP..127..961A}. Studying the dynamics and chemistry of molecular and atomic gas in the inner regions of the disk (Radius $<$10 AU) provides key information on the reservoir available for the formation of gas giants, and the generation and eventual delivery of such chemicals to terrestrial planets \citep{2013ChRv..113.9016H,2006PNAS..10312249V}. ISO, Spitzer, and Herschel pioneered the studies of proto-planetary disks at mid-far infrared to sub-millimeter wavelengths. SOFIA, in synergy with other ground and space based observatories, will revolutionize this field over the next decade, and thus not only provide the astronomical community with unique data, but also drive the science requirements and design of future missions like the Origins Space Telescope \citep[OST;][]{2018NatAs...2..596B}.
Development of ideas about the role of water in disk surfaces is currently based on limited data, taken at low spectral resolution, or based upon upper limits. The critical transition region from the “wet” inner disk to the “dry” outer disk at a few to 10 AU is beyond our current observational reach. Wavelengths shorter than $\sim$30 $\mu$m (accessible with JWST) trace the innermost disk at $\leq$1 AU, while Herschel traced the cooler outer disk at distances $>$1 AU. However, neither of these observatories spectrally resolves the molecular lines, nor do they provide information about the location of the emission. While HIRMES cannot observe the water lines tracing the dilute water vapor beyond the snow line at temperature $<$150 K, observing from the stratosphere on-board SOFIA does allow HIRMES to observe water lines tracing gas at 200 -- 300 K. By spectrally resolving the lines, HIRMES can map the surface abundance of water inside of the transition region. This is achieved by targeting three key spectral lines, specifically: H$_2$O : 34.9 $\mu$m, [OI] : 63.1 $\mu$m, and HD : 112 $\mu$m (see Figure \ref{fig:PPD}). However, there are hundreds of potentially bright water lines accessible to HIRMES over its bandwidth. These three key lines are selected as they probe different regions and properties of the proto-planetary disk, and when combined, they break the degeneracy between mass and abundance that is present in other observational techniques, enabling determination of the timescales over which water is implanted into icy bodies to seed habitable worlds. With this, the proto-planetary disk gas mass, seen as the most fundamental quantity that determines whether planets can form, is now within the grasp of observational astrophysics. Furthermore, HD has been proposed as the best tracer of the total H$_2$ gas mass in proto-planetary disks \citep{2013Natur.493..644B,2016ApJ...831..167M}. HIRMES provides unique access to this mass tracer, which can in turn be used to derive precise molecular abundances in disks, including for observations obtained with ALMA \citep{2017ASSL..445....1B,2018ApJ...865..155C}.
\begin{figure}
\centering
\centerline{\includegraphics[width=10cm]{_water-ice_spectrum}}
\caption{An adaptation of \citet{2015ApJ...799..162M}'s figure, showing the emission/absorption coefficients of the 43, 47 and 63 $\mu$m water-ice features that are used to observationally infer the thermal history of grain mantles.}
\label{fig:w-i}
\end{figure}
The ice/rock ratio in the solar nebula is thought to have been significantly larger than unity beyond the snow line. That is, the solid mass reservoir was dominated by ice by factors of 2 or more. Consequently, most core-accretion models form giant planets beyond the snow line \citep[e.g.,][]{2008ApJ...685..584I, 2008ApJ...688L..99D}. Compared to silicates and other refractory materials, water ice also substantially increases the sticking probability in collisions between dust grains, catalyzing the first stage of planet formation by growing micron-sized dust grains to centimeter and meter-sized icy bodies \citep{2008ARA&A..46...21B}. Solar system comets are likely primordial tracers of the original ice/rock ratios, and indeed suggest that ice was a dominant solid mass reservoir in the Solar System. While ices are often observed via their mid-infrared bands in the 3 -- 20 $\mu$m range, these features cannot generally be used to measure bulk ice in disks, as dust hot enough to emit at these wavelengths no longer retains ice. The longer-wavelength phonon modes (43 -- 63 $\mu$m), on the other hand, are expected to be seen in emission in typical proto-planetary disks. The strongest feature of crystalline ice, at 43 $\mu$m, has not been accessible since ISO \citep[see Figure \ref{fig:w-i};][]{1998A&A...332L..25M}.
Beyond spectroscopy of proto-planetary disks, HIRMES will be able to access a diverse set of fine structure lines, including [FeII] 25.99 $\mu$m, 35.35 $\mu$m, [SI] 25.25 $\mu$m, [SIII] 33.48 $\mu$m, [SiII] 34.81 $\mu$m, [NeIII] 36.0 $\mu$m, [NIII] 57.30 $\mu$m, [NII] 121.90 $\mu$m, [OI] 63.18 $\mu$m, and [OIII] 51.81 $\mu$m, 88.35 $\mu$m. These, in particular, yield line ratios that probe the abundances, ionization state, and density in shocked gas. This will permit shock models to be tested, yielding quantitative mass flow rate measurements, as well as generating velocity-resolved line profiles that will elucidate the kinematics of shocked gas. Spitzer and Herschel have observed and mapped several such transitions in proto-stellar outflows and supernova remnants, allowing the spatial distributions of the various shock tracers to be compared. However, because their line profiles were unresolved, these observations did not supply kinematic information. HIRMES has the ability to provide the first complete data cubes in this field, allowing supersonic motions of the shock-heated gas to be measured as a key test of shock models.
Designed to be as versatile as possible, HIRMES will be a major enhancement to SOFIA's suite of instruments, supplying the astronomical community with data not yet seen. We describe the instrument design in Section \ref{Section:Instrument}, and the various observing modes \& techniques and data reduction in Section \ref{Section:Observing}. We must stress that the design and sensitivities presented here are still preliminary (correct at the time of writing). As the instrument proceeds through Integration and Testing (I\&T), Commissioning, and Acceptance, it is likely that some parameters presented here will be updated. As such, we recommend that the reader consult the SOFIA - HIRMES \href{https://www.sofia.usra.edu/science/instruments/hirmes}{webpage}\footnote{\href{https://www.sofia.usra.edu/science/instruments/hirmes}{https://www.sofia.usra.edu/science/instruments/hirmes}} for the most up-to-date information.
\section{HIRMES Instrument Design}\label{Section:Instrument}
\subsection{Overview}
The HIRMES instrument is a vacuum cryostat using a TransMIT pulse-tube cryocooler with a Commercial Off-The-Shelf (COTS) $^4$He refrigerator and an Adiabatic Demagnetization Refrigerator (ADR) to cool the Transition Edge Sensor (TES) bolometers to 70 mK. The instrument block diagram (see Figure \ref{fig:inst}; top) identifies the subsystems required to achieve the aforementioned science, including the cryostat, Fabry-Perot Interferometers (FPIs), optics, mechanisms, detectors, mechanical structures, and instrument control and data handling electronics. The following sections detail the primary subsystems. A cut-section of the latest HIRMES CAD model is given in Figure \ref{fig:cut}. HIRMES is $\sim$1~m in diameter and $\sim$2~m in length.
\subsection{Optical Design}
\subsubsection{Window}
HIRMES uses a 101.6 mm (4-inch) diameter (3-inch clear aperture) Topas\footnote{\href{https://topas.com/products/topas-coc-polymers}{https://topas.com/products/topas-coc-polymers}} (COC, Cyclic Olefin Copolymer) window as the transmissive vacuum boundary between the instrument vacuum space and the Telescope Assembly (TA). Topas has low absorption over mid- to far-infrared wavebands and is also transparent in the visible, facilitating alignment and inspection. The average transmission is 93\%, with a reflection of 5\% and an absorption of 2\%. A series of vacuum, stress, thermal, and environmental tests was performed on a range of Topas window thicknesses and curvatures. The selected window has a thickness of 400 $\mu$m and a radius of curvature of 66.5 mm. This radius of curvature is a formed curvature, concave to the instrument, which does not appreciably change under vacuum load.
\subsubsection{Optics}
The optical system provides efficient coupling to the telescope, controls stray light, provides filtering, and images the spectrum/image onto the detector arrays (see Figure \ref{fig:inst}; middle). The fixed optical components are manufactured with a standard diamond-turning process, and all of the components are fabricated from aluminum, so the optical bench is isothermal by design and contracts uniformly. This allows the alignment to be completed warm, with assurance that it will still be within tolerance when the instrument is cooled. The instrument opto-mechanical implementation is given in Figure \ref{fig:inst} (bottom).
\newpage
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.81\linewidth]{_cad_cut_section}}
\bigskip
\caption{A cut-section of the HIRMES CAD model. Some components have been removed or phantomized to aid clarity. The light from the Telescope Assembly (TA) enters through the window on the right. \vspace{1cm}}
\label{fig:cut}
\end{figure}
\smallskip
\noindent {\emph{Stage 1}:}
\begin{addmargin}[1em]{0em}
The SOFIA telescope delivers a diffraction-limited f/19.5 beam over the HIRMES bandpass. HIRMES' slits are 113 arcsec long, with a selectable slit width from 3.0 to 8.7 arcsec to match the diffraction image size at the selected wavelength range. The slit length and the image field of view are well within SOFIA's 8~arcmin diameter field. The telescope's image rotation, generated by a moving observatory and the telescope's 3-axis spherical bearing mount, is handled operationally by adjusting the telescope's fine-drive to produce a fixed sky rotation for a period of time, typically 10 -- 30~mins depending on observational geometry \citep[for more details see][]{2014ApJS..212...24T}. As such there is no hardware image de-rotator. A system of two folding mirrors redirects the beam onto a collimator that produces a parallel beam at Stage 1. This Off-Axis-Paraboloid (OAP) collimator also forms an image of the telescope pupil of 20 mm diameter onto a cold stop. The filter wheel, accommodating 12 positions for the instrument filters, and the Low-Resolution FPI wheel, are placed at the approximate location of this pupil image. Following the Low-Resolution FPI wheel, there is the slit wheel with four slits and the open aperture for the Spectral Imaging mode. After the slit wheel, two folding mirrors redirect the beam onto the Stage 2 collimator.
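As a rough consistency check (our estimate, assuming SOFIA's $\sim$2.5~m effective aperture), the diffraction-limited beam diameter $\theta \approx 1.22\,\lambda/D$ evaluates to
\begin{equation*}
\theta \approx 2.5'' \;\;(\lambda = 25~\mu\mathrm{m}) \quad \mathrm{to} \quad \theta \approx 12.3'' \;\;(\lambda = 122~\mu\mathrm{m}),
\end{equation*}
consistent with the 3.0 -- 8.7 arcsec range of selectable slit widths over most of the HIRMES band.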
\end{addmargin}
\newpage
\begin{figure}[h!]
\centering
\centerline{\includegraphics[width=0.9\linewidth]{_instrument_flow_chart}}
\bigskip
\bigskip
\centerline{\includegraphics[width=0.9\linewidth]{_optical_path}}
\bigskip
\bigskip
\centerline{\includegraphics[width=0.9\linewidth]{_optical_bench}}
\caption{The HIRMES Sub-System Block Diagram shows the instrument’s internal and external components, the optical path with elements (middle, not to scale), and a labeled CAD model of the optical bench with opposing vantage points (bottom).}
\label{fig:inst}
\end{figure}
\newpage
\smallskip
\noindent {\emph{Stage 2}:}
\begin{addmargin}[1em]{0em}
An OAP collimator produces parallel beams that pass through the Mid-Resolution and High-Resolution FPI wheels. It also produces an 80 mm diameter pupil image onto the Mid-Resolution FPI. The Mid-Resolution FPI wheel is tilted by $\sim$0.1~degrees, and the High-Resolution FPIs are placed 50~mm from the Mid-Resolution FPIs, to eliminate parasitic ringing between the Mid-Resolution and High-Resolution FPI mirrors. Another OAP relays the telescope image onto another intermediate focal plane with an aperture to prevent scattered light from propagating through the system. Due to the walk-off of the beam through the High-Resolution FPIs (primarily at the longer wavelengths), the outgoing beam is expanded.
\end{addmargin}
\noindent {\emph{Stage 3}:}
\begin{addmargin}[1em]{0em}
Another OAP collimates the beam after the intermediate focus and forms a 40 mm pupil for the grating wheel, and the camera OAP relays the spectrally-dispersed light to the detectors. The flat mirrors in Stage 3 were added to package the optics into the available cold space. Final order-sorting is performed by a set of three reflective diffraction gratings. The linear dispersion at the detector array is set to spread the Free Spectral Range (FSR) of the Mid-Resolution FPIs over at least one pixel at short wavelengths, and to spread the FSR wider than the point spread function (PSF) at long wavelengths. A rotary stage mechanism is used to select the grating and to set its angle over a range of $\pm$8.1\,degrees to choose the spectral line of interest. Each grating is blazed to optimize the efficiency over one of three sub-bands of the wavelength range. A mirror used for the spectral imaging mode occupies the fourth position on the stage. The spectral image (R$\sim$2000) is projected onto one side of the Low-Resolution detector array.
\end{addmargin}
\medskip
\noindent The optical imaging system is diffraction limited (wavefront error $\ll \lambda/14$ at 24 $\mu$m) over the whole spectral range of the instrument. In fact, the quality of the reflective optical elements is such that they could be used at optical wavelengths. The instrument's internal stray light baffles are blackened with a $\sim$500 $\mu$m thick layer of an absorptive mixture comprised of 65\% Epotek 377, 30\% fumed (pyrogenic) silica powder, and 5\% graphene by volume (alternatively, 50.7\% Epotek 377, 42.8\% fumed silica, and 6.5\% graphene by weight) \citep{2017RScI...88j4501C}. A monolayer of K1 borosilicate glass microspheres sieved to diameter $<$100 $\mu$m can be incorporated on this lossy dielectric surface and overpainted with $\sim$50 $\mu$m of Aeroglaze Z306\footnote{Lord Corporation Chemical Products, ``Aeroglaze Z306 Flat Black Absorptive Polyurethane Low-Outgassing Paint,'' 2000 West Grandview Blvd., P.O. Box 10038, Erie, PA 16514-0038}, where dilution through scattering is desirable to control the optical response. The resulting coating is CTE-matched (Coefficient of Thermal Expansion) for use on metallic substrates, non-magnetic, and robust under thermal cycling. Any rejected light is expected to exit the optical system or be absorbed by the partitioning blackened baffles in the front end of the instrument.
\vspace{-0.2cm}
\subsubsection{Filters}
The filters use a combination of technologies to cover the full 25 -- 122 $\mu$m spectral range: a dielectric multilayer filter to provide the 25 $\mu$m cut-on at short wavelengths, and crystal filters at the longer wavelengths. Table \ref{tab:filters} details the specifications of each available filter mounted on the Stage 1 filter wheel (see Figure \ref{fig:inst}). The five ``First order FPI + long pass'' filters are used in the Spectral Imaging mode as order-sorting filters; each consists of a coupled fixed-width FPI and long-pass filter. Either plastic film or vapor-deposited parylene is used for the anti-reflection coatings.
\begin{table}
\caption {Properties of the filters used in the Stage 1 filter wheel. \vspace{-0.25cm}} \label{tab:filters}
\begin{center}
\resizebox{\textwidth}{!}{\begin{tabular}{ |c|c|c|c| }
\hline
Filter & Design & Details & Mean Transmission \\
\hline
23-32 $\mu$m; Bandpass & Multi-layer dielectric & On CdTe substrate & 70\% \\
26 $\mu$m; Long Pass & Al$_2$O$_3$ with near IR blockers & Parylene AR (P.AR) coating & 75\% \\
40 $\mu$m; Long Pass & Al$_2$O$_3$ + CaF$_2$ stack & P.AR coating on outer layers & 75\% \\
65 $\mu$m; Long Pass & Al$_2$O$_3$ + CaF$_2$ + BaF$_2$ stack & P.AR coating on outer layers & 75\% \\
51.8 $\mu$m; [OIII] & First order FPI + long pass & Metal mesh & 50\% \\
57.3 $\mu$m; [NIII] & First order FPI + long pass & Metal mesh & 50\% \\
63.2 $\mu$m; [OI] & First order FPI + long pass & Metal mesh & 50\% \\
88.4 $\mu$m; [OIII] & First order FPI + long pass & Metal mesh & 50\% \\
121.9 $\mu$m; [NII] & First order FPI + long pass & Metal mesh & 50\% \\
\hline
\end{tabular}}
\end{center} \vspace{-3.5mm}
\end{table}
\newpage
\begin{figure}[h!]
\centering
\begin{minipage}[]{0.48\linewidth}
\centering
\vspace{-0.08cm}
\includegraphics[width=0.9\linewidth]{_FPI_combination}
\caption{Top to bottom: transmission of high-res FPI, mid-res FPI, grating, and their product. The product is very spectrally pure. These curves are schematic illustrations and are not based on measurements.}
\label{fig:fpi_comb}
\end{minipage}
\hfill
\begin{minipage}[]{0.49 \linewidth}
\centering
\includegraphics[width=0.9\linewidth]{_FPI_offset_efficiency}
\caption{Top to bottom: transmission profile at 112~$\mu$m \& 63~$\mu$m (High-Resolution FPI) and 122~$\mu$m \& 52~$\mu$m (Imaging, Low-Resolution FPI) for beams on and off the optical axis. 1~beam~=~1~diffraction limited PSF at the detector.}
\label{fig:fpi_off}
\end{minipage}
\vspace{1cm}
\end{figure}
\newpage
\vspace{-0.2cm}
\subsubsection{Fabry-Perot Interferometers}
HIRMES uses a suite of Fabry-Perot Interferometers (FPIs) for its various observing modes. The high spectral resolution mode of $R =$ 100,000 is achieved with FPIs, and FPIs with low spectral resolution ($R\sim$ 2000) will be used for Spectral Imaging.
An FPI consists of two highly reflective plane-parallel mirrors that form a resonant cavity. The resonance condition is $2 \cdot d = n \cdot \lambda$, where $d$ is the cavity spacing, $\lambda$ is the wavelength, and $n$ is an integer order. The spectral resolution of an FPI is the product of the finesse, $F$ (set by the reflectivity and absorption of the mirrors, and proportional to the mean number of reflections in the cavity), and the order, $n$. To create a spectrum, the cavity spacing $d$ is adjusted, changing the resonant wavelength. Since any wavelength that fulfills the resonance condition will be transmitted by the FPI, it is necessary to employ additional filters to sort out the unwanted wavelengths. HIRMES uses additional FPIs with a resolution of about 12,000 (mid-resolution FPIs) and the reflective grating to select the desired wavelength (see Figure \ref{fig:fpi_comb}).
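For illustration (our numbers, not instrument specifications): with $R = F \cdot n$, a finesse $F \approx 50$ and a target resolution $R = 100{,}000$ imply an operating order $n \approx 2000$. At $\lambda = 63$ $\mu$m the resonance condition then requires a cavity spacing
\begin{equation*}
d = \frac{n\lambda}{2} \approx \frac{2000 \times 63~\mu\mathrm{m}}{2} \approx 63~\mathrm{mm},
\end{equation*}
and the free spectral range between adjacent orders is only $\Delta\lambda_{\mathrm{FSR}} \simeq \lambda/n \approx 0.03$ $\mu$m ($\sim$150 km/s), which illustrates why order-sorting by the mid-resolution FPIs and the grating is essential.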
Rays that pass through the FPI at an angle with respect to the normal axis, whether in the axial beam or in off-axis beams, resonate at a slightly shorter wavelength. This angle is minimized when the FPIs are located at pupil positions and have a large aperture, as is the case for the HIRMES FPIs. In addition, a high spectral resolution requires operating the FPI at a high order, and hence a large cavity spacing. These two conditions determine the physical size of the FPIs. We exploit the shorter-wavelength resonance of off-axis beams as an advantage: by slewing the telescope during the observation, and thus moving the science target from axial to off-axis pixels, we also spectrally sweep over a part of the nearby spectrum. Obtaining the full desired spectrum therefore requires fewer steps of changing the cavity spacing of the FPIs (see Figure \ref{fig:fpi_off}).
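Quantitatively, for an ideal plane-parallel cavity the resonance condition at incidence angle $\theta$ generalises to $2d\cos\theta = n\lambda$, giving a fractional wavelength shift of
\begin{equation*}
\frac{\Delta\lambda}{\lambda} = \cos\theta - 1 \approx -\frac{\theta^{2}}{2}
\end{equation*}
for small angles. This is the origin of both the slight blueshift of off-axis beams and the radially varying wavelength across the field that is exploited here (see Figure \ref{fig:fpi_off}).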
The FPIs in HIRMES use free-standing nickel metal meshes with a gold-flash as their highly reflective mirrors \citep{1963ITMTT..11..363U}. The finesse obtained with the metal meshes is a function of wavelength. The optimal finesses of FPIs made with these meshes range between 30 and 60 (a finesse that is too high would reduce the overall transmission). Thus, for each of the Mid- and High-resolution modes, HIRMES has three separate FPIs to cover the full wavelength range between 25 and 122 $\mu$m. The low-, mid-, and high-resolution FPIs have a scanning mechanism using piezo elements to select the desired cavity spacing, and step over a wavelength range to create spectra. In addition to the tunable FPIs, HIRMES will employ FPIs with a fixed cavity spacing that are tuned to specific fine-structure lines. The overall transmission that each of the FPIs will achieve is over 70\%. A full description of the FPIs used in HIRMES can be found in \citet{2018spie-arXiv180805218D} \& \citet{2018spie-arXiv180706019C}.
\begin{figure}
\centering
\centerline{\includegraphics[width=0.52\linewidth]{_grating_efficiency}}
\caption{The three selectable diffraction gratings provide $>$90\% average efficiency of both s- and p-polarizations over the HIRMES spectral region.}
\label{fig:grating}
\end{figure}
\subsubsection{Slits}
The slit wheel holds four slits, each 113 arcsec long, with widths of 8.7, 6.1, 4.2 and 3.0 arcsec, selected based on the desired central wavelength and resolution. In addition to the slits, there is also a 2D image-stop of dimensions 113.0$\times$106.8 arcsec used in the Spectral Imaging mode, which is projected onto 16$\times$16 pixels of the Low-Resolution detector array.
\subsubsection{Gratings}
In the HIRMES wavelength range, echelette (blazed) gratings have near-ideal performance, so their efficiency can be calculated accurately using diffraction models and groove geometry. These gratings, with an average efficiency of $\sim$0.9 (see Figure \ref{fig:grating}), were chosen to optimize performance at the most important spectral lines.
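For reference, these are standard grating relations rather than HIRMES-specific values: in the Littrow configuration the blaze condition is
\begin{equation*}
m\,\lambda_{\mathrm{blaze}} = 2\,\sigma\sin\theta_{B},
\end{equation*}
where $m$ is the diffraction order, $\sigma$ the groove spacing and $\theta_{B}$ the blaze angle; efficiency peaks near $\lambda_{\mathrm{blaze}}$ and falls off towards the edges of each sub-band, as seen in Figure \ref{fig:grating}.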
\subsection{Detectors}
The HIRMES detector layout consists of two Transition Edge Sensor (TES) bolometric detector arrays (see Figure \ref{fig:detectors}) that equip HIRMES with eight subarrays of 1$\times$16 pixels for the High-Resolution mode (low saturation power), and a 64$\times$16 array for the Mid- \& Low-Resolution and Spectral Imaging modes (high saturation power). A full description of the HIRMES detectors can be found in \citet{2018JLTP..tmp...89B} \& \citet{2018JLTP..tmp..146B}.
The ``Low-Resolution'' 64$\times$16 array is comprised of 1 mm $\times$ 1 mm square pixels with a 50\% absorptive frequency-independent coating. The thermal isolation design consists of eight single-crystal silicon legs that are 1.4 $\mu$m thick, 50 $\mu$m wide, and 30 $\mu$m long, and will provide an expected Noise Equivalent Power (NEP) of $\sim 2\times10^{-17}~\mathrm{W}/\sqrt{\mathrm{Hz}}$ and a saturation power of $\sim$25 pW, inclusive of the 50\% absorption efficiency.
The ``High-Resolution'' array is comprised of eight 1$\times$16 pixel subarrays, and is operated in a manner in which only one detector subarray is used at a given time. Each subarray is separately optimized for operation at a specific wavelength, by matching the pixel size to the Full-Width at Half-Maximum (FWHM) of the beam at that wavelength, and by providing efficient absorption using a quarter-wave backshort \citep{2018JLTP..tmp..116M}. The central wavelength for each subarray is: 30, 36, 43, 51, 61, 73, 88 \& 105~$\mu$m, and their respective physical pixel-widths range from 0.4 -- 1.4~mm. This system, with background-limited detectors, is near optimal for the High-Resolution spectroscopy mode throughout the entire spectral range. The detectors are Mo/Au TES bilayers deposited on photolithographically-defined leg-isolated 0.45 $\mu$m thick, 5 $\mu$m wide, and 30 $\mu$m long single-crystal silicon membranes. These detectors have an expected NEP of $\sim 3\times10^{-18}~\mathrm{W}/\sqrt{\mathrm{Hz}}$ and a saturation power of $\sim$0.13 pW.
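As a rough guide to what these NEPs imply (a textbook estimate that ignores optical, atmospheric and background losses), the $1\sigma$ uncertainty on a detected power after an integration time $t$ is
\begin{equation*}
\sigma_{P} \simeq \frac{\mathrm{NEP}}{\sqrt{2t}} \approx 3.5\times10^{-20}~\mathrm{W} \quad \left(\mathrm{NEP} = 3\times10^{-18}~\mathrm{W}/\sqrt{\mathrm{Hz}},\; t = 3600~\mathrm{s}\right),
\end{equation*}
so the system-level sensitivities quoted in Section \ref{Section:Observing} are set primarily by the telescope area, instrument throughput, and sky background rather than by intrinsic detector noise.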
The detectors are read out using NIST 1$\times$11 multiplexers operating at the detector temperature of 70 mK. The signal is amplified by a SQUID series array operating at $\sim$4 K, and controlled by a room temperature Multi-Channel Electronics (MCE) controller \citep{2013Hasselfield,2016SPIE.9914E..1GH}. The Low-Resolution and High-Resolution detectors are packaged in a single, common Focal Plane Assembly (FPA), and kinematically mounted using a Kevlar and \mbox{magic-Titanium [15-3-3-3]} support system to provide thermal and mechanical isolation.
\begin{figure}
\centering
\centerline{\includegraphics[width=0.96\linewidth]{_LowRes_Chip}}
\vspace{0.1cm}
\centerline{\includegraphics[height=6.26cm]{_LowRes_Chip_zoom}
\includegraphics[height=6.26cm]{_HiRes_Chip}}
\centerline{\includegraphics[width=0.75\linewidth]{_Detectors}}
\caption{(Top) One half of the 64$\times$16 pixel Low-Resolution detector. (Middle-Left) A zoomed view of the four lower-right pixels of the top image. (Middle-Right) The eight 1$\times$18 pixel subarrays of the High-Resolution detector, in which the pixels located at the edges of each subarray are not read out, resulting in 1$\times$16 active-pixel subarrays. (Bottom) A schematic of the layout of both detectors on the Focal Plane.}
\label{fig:detectors}
\end{figure}
\subsection{Cryostat \& ADR}
HIRMES implements a tri-layer, all-aluminum (6061-T6) cryostat design (see Figures \ref{fig:cut} \& \ref{fig:inst}), with the load path extending from the flange ring (where the instrument mounts to the Telescope Assembly) through the central `bellyband' support region of the cryostat. The external hemispherical end caps, as well as the 65 K and 4 K heat shields, are supported from this bellyband, along with the monolithic 4 K optical bench (from a three-point mount). This bench supports all working instrument components and optics. The heat shields and optical bench are supported by aluminum rings suspended from the bellyband by 12 titanium struts with titanium end caps. The optical bench, bulkheads, and structural components are made of the same forged 6061-T6 aluminum as the cryostat, providing high strength and thermal expansion coefficients that match the mirrors and mirror mounts. All aluminum mirrors and the optical bench are machined, annealed, and assembled to ensure dimensional stability at cryogenic operating temperatures, preventing hysteresis from repeated cool-downs.
The HIRMES thermal design uses a two-stage Pulse Tube Cryocooler (PTC) to cool the instrument and optical bench. A $^4$He sorption fridge coupled to an Adiabatic Demagnetization Refrigerator (ADR; salt-pill) is subsequently used to cool the detectors located in the FPA to 70 mK, with an expected operational hold-time of $>$12 hours. A TransMIT PTD406C pulse-tube head is mated to one of the two SOFIA on-board Cryomech CP2870 He compressors. The PTC first stage cools the outer radiation shield below 65 K. This lowers the radiation heat load from the cabin-temperature cryostat shell to a level that allows the second stage to cool the optical bench and inner radiation shield to 4 K.
\newpage
A $^4$He sorption fridge then lowers the ADR heat sink temperature to 1.3 K, allowing a quicker recycle time and enhanced cooling power. From this low starting temperature, the ADR is able to cool the detector to 70 mK. Additionally, a separate $^3$He sorption fridge acts as a 0.3 K thermal intercept to the ADR, reducing its thermal load stress.
\subsection{Instrument Calibration}
HIRMES calibration captures the wavelength settings for observations, the instrumental spectral profile, and the radiometric calibration that allows the line intensity to be determined. The spectral calibration uses gas cells for absolute wavelength calibration on the ground, and two Quantum Cascade Lasers (QCLs) for in-flight verification ($\sim$ 63 \& 83 $\mu$m, R $\sim10^6$). The QCLs are mounted to the 65 K stage, and are injected into the instrument by reflection off a mirror on the back of the slit wheel. The QCLs provide good short-term wavelength stability; absolute knowledge is provided by a fixed etalon used with the QCLs. With knowledge of the spectral setting and instrumental profile, radiometric calibration is achieved by observing known continuum sources, including rocky moons and asteroids. During operation, the FPI spacing is measured and monitored with FPI capacitive sensors. Parallelism is established at room temperature and requires only a minor adjustment, by a known fixed amount, when cold, using low-voltage PICMA tilt piezo actuators from Physik Instrumente.
\section{Observing with HIRMES}\label{Section:Observing}
\begin{table}
\renewcommand{\arraystretch}{1.25}
\caption {Summary of the HIRMES observing modes and their properties.} \label{tab:modes}
\begin{center}
\resizebox{\textwidth}{!}
{\begin{tabular}{ | l | c | c | c | c | }
\hline
\multicolumn{1}{|c|}{Parameters} & High-Res & Mid-Res & Low-Res & Imaging \\
\hline
\hline
Sensitivity (5$\sigma$, 1hr) & $\lesssim 1 \times 10^{-17} W/m^2$ & \multicolumn{3}{c|}{$\sim1 \times 10^{-16} W/m^2$} \\ \hline
Resolving Power (R = $\lambda/\delta\lambda$) & 50,000 -- 100,000 & $\sim$12,000 & 325 -- 635 & $\sim$2,000 \\ \hline
Angular Resolution & \multicolumn{4}{c|}{Diffraction Limited} \\ \hline
Slit Size / FOV (arcsec) & \multicolumn{3}{c|}{Length: 113''; Width: 8.7'', 6.1'', 4.2'' \& 3.0''} & 113.0'' $\times$ 106.8'' \\ \hline
Spectral Range & \multicolumn{3}{c|}{25 -- 122 $\mu$m} & Selected lines$^A$ \\ \hline
Simultaneous Spectral Coverage ($\delta\lambda/\lambda$) & $\lambda$/R & \multicolumn{2}{c|}{0.1$\lambda$} & 0.001$\lambda$ \\ \hline
Detector Format & 8$\times$16 pix$^B$ & \multicolumn{3}{c|}{64$\times$16 array$^C$} \\ \hline
Detector Type & \multicolumn{4}{c|}{Transition Edge Sensor (TES)} \\
\hline
\end{tabular}}
\end{center}
\footnotesize
$^A$Single wavelength setting for selected filters (63.2 $\mu$m [OI]; 51.8 $\mu$m, 88.4 $\mu$m [OIII]; 57.3 $\mu$m [NIII]; 121.9 $\mu$m [NII]). \\
$^B$High resolution detector consists of eight, 1$\times$16 pixel linear subarrays; whose pixel size increases per subarray. Shorter \\
$^{\ \ }$wavelength light is positioned onto the smaller pixel subarrays, longer wavelength light onto the larger pixel subarrays. \\
$^C$Spectral Imaging uses only a 16$\times$16 pixel section of the 2D array.
\end{table}
\subsection{Observing Modes}
By combining the direct-detection arrays (TES bolometers), grating-dispersive spectroscopy, and a host of Fabry-Perot tunable narrow-band filters, HIRMES provides four primary observing modes: High-Resolution (R$\sim$100,000), Mid-Resolution (R$\sim$12,000) and Low-Resolution (R$\sim$600) spectroscopy, and Spectral Imaging (R$\sim$2000). Figure \ref{fig:elements} shows how the various optical and spectral elements combine to support each primary observing mode. Figures \ref{fig:fpi_comb} \& \ref{fig:fpi_off} are useful references when reading through the following mode descriptions. HIRMES is a complex instrument with many configurable elements; however, the selection of element combinations for the various modes will be operationally automatic, based on the desired science. One only needs to select the central wavelength, the wavelength interval, the number of steps, and one of the four modes. If the default instrumental resolution is desired, one only needs to specify the wavelength range and the mode. That said, any combination of optical and spectral elements is technically feasible, allowing additional modes to be created. A summary table of the observing modes and their respective properties is given in Table \ref{tab:modes}.
\newpage
\begin{figure}[h!]
\centering
\centerline{
\includegraphics[width=0.495\linewidth]{_elements_LRS}
\hfill
\includegraphics[width=0.495\linewidth]{_elements_MRS}
}
\bigskip
\centerline{
\includegraphics[width=0.495\linewidth]{_elements_HRS}
\hfill
\includegraphics[width=0.495\linewidth]{_elements_SIL}
}
\bigskip
\centerline{
\includegraphics[width=0.495\linewidth]{_elements_ALL}
\hspace{0.6cm}
\includegraphics[width=0.45\linewidth]{_LowRes_ranges}
}
\caption{The four primary observing modes and their combination of optical and spectral elements. The lower-left plot shows all optical \& spectral elements. The lower-right plot shows the resolution as a function of wavelength for the Low-Resolution grating mode. Note that it would take nine different wavelength settings to obtain the full 25 -- 122 $\mu$m spectral range in this mode (three for each grating). }
\label{fig:elements}
\end{figure}
\newpage
\subsubsection{Low-Resolution Spectroscopy}
The light first passes through a bandpass filter, then a slit, then a reflective grating. The spectrum is under-sampled by the 64$\times$16 pixel detector, producing a resolution of R $\sim$ 320 -- 635 (see Figure \ref{fig:elements}, lower-right), and instantaneous bandwidths of 5 -- 15 $\mu$m, depending on the wavelength. It would take nine different wavelength settings to obtain the full 25 -- 122 $\mu$m spectral range in this mode.
\subsubsection{Mid-Resolution Spectroscopy}
This mode keeps the configuration of the Low-Resolution grating mode, but inserts a Mid-Resolution FPI into the optical path. The effect of this is a narrow, sharp transmission peak that is greatly under-sampled, falling within a single pixel of the 64$\times$16 pixel array. The FPI is stepped through one free spectral range (in roughly 10 -- 50 steps, depending on wavelength and desired spectral sampling) to produce a spectrum over the full instantaneous spectral coverage of the Low-Resolution grating mode.
\subsubsection{High-Resolution Spectroscopy}
Going one step further than the Mid-Resolution mode, an additional High-Resolution FPI is inserted into the optical path, and the central wavelength is centered onto the appropriate column of the eight 1$\times$16~pixel linear subarrays of the High-Resolution detector (the choice of subarray depends on wavelength, as the subarray pixel size is proportional to wavelength). This means there is only a single pixel in the spectral dimension, which is closely matched to the PSF. However, due to the radially dispersive nature of FPIs, stepping spatially up and down the slit also results in a slight wavelength shift ($\sim$ 1 -- 5 km/s depending on wavelength, see Figure \ref{fig:fpi_off}). Combining this feature with stepping the FPIs in discrete increments results in the spectral sampling of a desired wavelength range (typically a single narrow spectral line). Figure \ref{fig:fpi_comb} visualizes the product of the FPIs working together with the grating to produce a High-Resolution line spectrum.
\subsubsection{Spectral Imaging}
This mode changes the configuration completely by switching out the initial bandpass filter for a narrow-band filter on the same filter wheel. This narrow-band filter is actually a combination of a fixed-width FPI and its own bandpass filter, both of which are tailored for specific spectral lines (see Table \ref{tab:filters}). The other configurable elements used in the optical path are a Low-Resolution FPI, a square image-stop instead of a slit, and a mirror instead of the grating. The 2D image is then placed on one side of the 64$\times$16 pixel array, to produce a 16$\times$16 pixel spectral image (113.0$\times$106.8 arcsec), whose wavelength also varies over the image, due to the radially dispersive nature of the FPI (see Figure \ref{fig:fpi_off}).
\subsection{Sensitivity Limits}
As the instrument is still being built, full characterization and calibration of the various components has yet to be completed. The instrument sensitivities and capabilities presented here are the best current estimates based on analysis of the design. In particular, Figure \ref{fig:sensitivity} visualizes the estimates of Minimum Detectable Line Fluxes (MDLFs) for the various observing modes. One can extrapolate an estimate of the total time on-source for a given flux from Figure \ref{fig:sensitivity}, and then apply atmospheric transmission factors. Observational overheads are not taken into account, and will be fully characterized once cold functional checks and commissioning are underway.
\subsection{Observing Techniques }
Each detector read-out has associated astrometric and timing data, enabling HIRMES to support any TA Observing mode (e.g. Lissajous, raster-scan, slit-scan, mapping, chop \& nod, etc.). The typical TA Observing mode when in either of the Low-, Mid-, or High-Resolution modes would be slit-scan (scanning up and down the length of the slit), and Lissajous for Spectral Imaging. Both of these modes would be performed without chopping.
The intent of selecting slit-scan and Lissajous for typical TA Observing modes is two-fold: 1) to maximize the spectral bandwidth by moving spatially across the FPIs, and 2) to break the degeneracy of having the same sky and/or source flux on the same pixels, thereby increasing the ability to characterize the detector for data reduction. This is also achieved by a new TA Observing mode in development that allows the sky to rotate whilst tracking. Atmosphere-less moons and asteroids will be regularly observed, in addition to blank-sky / sky-dips, for flux calibration and telluric correction.
\newpage
\begin{figure}
\centering
\centerline{\includegraphics[width=0.8\linewidth]{_sensitivity}}
\caption{Visual representation of the Minimum Detectable Line Fluxes (MDLFs) for the different modes, assuming no atmosphere. Note that the Spectral Imaging mode cannot cover the entire wavelength range, only the discrete wavelengths, shown by square line-markers. The wavelengths of key spectral line features are marked by vertical blue dotted lines.}
\label{fig:sensitivity}
\end{figure}
\subsection{Control and Data Reduction Software}
The HIRMES software system design includes: 1) a Graphical User Interface (GUI) for instrument control and monitoring; 2) detector data acquisition; 3) detector and readout tuning and control; 4) calibration of mechanisms and spectroscopic elements; 5) interfaces with ancillary devices; 6) interfacing with the telescope and executing observations via the SOFIA Command Language (SCL); 7) power and thermal management; 8) health and safety monitoring, providing timely feedback to the instrument operator; 9) data reduction tools and pipeline Level 1 -- 4 science product generation; 10) data archival following all SOFIA Data Cycle System (DCS) requirements; and 11) user documentation. The HIRMES software architecture consists of two sub-systems: the HIRMES Command and Data Handling (CDH) software system, and the Data Analysis and Products (DAP) software.
The HIRMES CDH software is based on the Aurora application framework (Steve~Maher, {\it private communication}), which provides flexible, platform-independent, Java-based utilities for scientific data systems and devices. Science data is archived by converting the detector data, telescope settings and astrometry, and instrument configuration into Flexible Image Transport System (FITS) format files. The detector data and telescope astrometry are tightly synchronized using the SOFIA Inter-Range Instrument Group (IRIG) timing framework. Indexes generated from the FITS header information permit fast retrieval and reporting of science/engineering data. The time-series FITS output of Aurora constitutes Level-1 data, and is archived in its entirety through the DCS.
The Level-1 FITS data is then fed into a modified version of the Comprehensive Reduction Utility for SHARC-2 \citep[CRUSH\footnote{CRUSH: \href{https://github.com/attipaci/crush}{https://github.com/attipaci/crush}};][]{2008SPIE.7020E..1SK}, which produces Level-2 data by performing the following steps: resampling the data evenly in the spectral and spatial dimensions (FITS data cube: RA, DEC, wavelength); re-orienting to North-up, East-left; applying instrument calibration (removal of correlated noise, pixel masking, biasing, etc.); and ensuring WCS \& DCS compliance. The DAP pipeline can then take this Level-2 data and produce Level-3 data after flux calibration and telluric correction. Finally, DAP can produce Level-4 data by combining data from multiple observations, stitching map-pointings together, extracting 2D or 1D spectra, etc.
\subsection{Data Formats \& Access}
All of the Level-1 to Level-4 data will be provided in multi-extension FITS data cubes of flux, variance, and instrument \& observation parameters. All data are fully FITS \& DCS compliant. Additionally, the data will be compliant with the NASA/IPAC InfraRed Science Archive (IRSA\footnote{IRSA: \href{https://irsa.ipac.caltech.edu/}{https://irsa.ipac.caltech.edu/}}), to support plans for the DCS archive to be ingested therein within the next year or so. Viewing and exploration of these FITS data cubes will be possible via a future modified version of SpexTool\footnote{SpexTool: \href{http://irtfweb.ifa.hawaii.edu/~spex/observer/}{http://irtfweb.ifa.hawaii.edu/$\sim$spex/observer/}}, in addition to standard data-cube viewers such as QFitsView\footnote{QFitsView: \href{http://www.mpe.mpg.de/~ott/QFitsView/}{http://www.mpe.mpg.de/$\sim$ott/QFitsView/}}.
\section*{Acknowledgments}
The development of HIRMES is funded by the NASA / SOFIA 3$^{rd}$ Generation Instrument solicitation to the NASA Goddard Space Flight Center and partnering institutions. We thank the USRA/SOFIA Science Operation Center staff and NASA Armstrong B703 staff for their on-going support during the development of HIRMES and its upcoming commissioning.
This research was conducted [in part] at the SOFIA Science Center, which is operated by the Universities Space Research Association under contract NNA17BF53C with the National Aeronautics and Space Administration.
We extend our thanks to David Franz, Kevin Denis, Manuel Balvin, George Manos, and Elissa Williams for useful discussions and for their fabrication support on the detector arrays.
\section{\label{Intro}Introduction}
Shear-thickening (ST), which refers to the enhancement of the apparent viscosity of a suspension as an externally imposed shear stress or shear rate is progressively increased~\cite{shearthickening,shearthickening2}, is observed in materials such as aqueous suspensions~\cite{Silica,doi:10.1073/pnas.2203795119} and granular mixtures~\cite{Fall2010shear,Granular2}. Dense granular suspensions of polydisperse irregular-shaped cornstarch particles in water were reported to exhibit shear-thinning and shear-thickening properties depending on the applied shear stress~\cite{peters2016direct} and particle volume fraction~\cite{wagner2009shear}. An aqueous cornstarch suspension subjected to low shear rates exhibits shear-thinning behaviour due to several factors, such as the organisation of suspended particles along the flow, a constant hydrodynamic viscosity contribution due to viscous stresses~\cite{wagner2009shear} and an entropic contribution from random particle collisions~\cite{shearthinning,CSconcentration}. As the externally imposed shear rate is increased, a dense cornstarch suspension shows a continuous shear-thickening (CST) regime which is followed by discontinuous shear-thickening (DST) behaviour~\cite{Fall2010shear}. At even higher imposed shear stresses, dense cornstarch suspensions display a shear jamming (SJ) state in which the suspension no longer flows but instead behaves like a solid~\cite{peters2016direct}. The increase in the bulk viscosity of the suspension in the ST regime is attributed to hydrodynamic~\cite{hydrodynamics,wagner2009shear} and frictional~\cite{frictional,WyartPhys} interactions between the constituent particles. The CST regime is characterised by well-defined highly stressed dynamic regions which propagate in the shearing direction and span an increasingly larger fraction of the suspension as the applied stress is increased~\cite{CST}. Recent simulations of dense suspensions have shown that inter-particle connectivity induced by the formation of force networks determines the rheological response of the suspension~\cite{Jamali}. It is reported that the DST regime has more constrained force networks connected via multiple particle contacts when compared to the CST regime which is characterised by less constrained force networks having single particle-particle contacts. The number density of these frictional contacts increases with increasing shear rates until the SJ state is reached~\cite{peters2016direct}. Measurements of the first normal stress difference $N_1$~\cite{macosko1994rheology} can indirectly shed light on the underlying inter-particle interactions contributing to the generation of large stresses in concentrated suspensions~\cite{normal,normal1}. While negative values of $N_1$ suggest hydrodynamic or lubrication effects in the suspension, positive $N_1$ values indicate the presence of inter-particle friction~\cite{normal}.
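For reference, with $x$, $y$ and $z$ denoting the flow, gradient and vorticity directions of a simple shear flow, the first normal stress difference is defined as
\begin{equation*}
N_1 = \sigma_{xx} - \sigma_{yy},
\end{equation*}
so a positive $N_1$ corresponds to an excess tensile stress along the flow direction relative to the gradient direction~\cite{macosko1994rheology}.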
Material transport in industrial processing may cause interfacial instabilities, which can affect production efficiency. One such interfacial instability is the Saffman-Taylor instability, which involves the development of an intricate interface when a less viscous fluid displaces a more viscous one~\cite{saffman1958penetration,Homsy,pinilla2021}. The viscous fingering (VF) instability was initially identified in oil recovery fields when water was injected under high pressures into a porous medium~\cite{orr1984use,oilrecovery}. Since then, many studies have been performed to understand this phenomenon using a Hele-Shaw (HS) geometry~\cite{pinilla2021}, which comprises two glass plates separated by a narrow gap and is understood to be equivalent to flow in a porous medium~\cite{saffman1958penetration}. Several factors affect the growth of these instabilities, for example, the wettability of the fluid pair~\cite{ESLAMI201925}, the gap of the HS cell~\cite{Van_gap} and fluid rheology~\cite{Linder2000Viscous,Associating1993zhao,buka1986transitions,ozturk2020flow}. Interfacial instabilities have been systematically investigated in non-Newtonian fluids with exotic nonlinear rheological responses such as shear-thinning, shear-thickening and non-zero yield stresses. VF in non-Newtonian fluids, for example in liquid crystals~\cite{buka1986transitions,Zhang2021Structures}, polymers~\cite{Linder2000Viscous,Associating1993zhao}, colloidal suspensions~\cite{criterion2020divoux,Lemaire1991,PhysRevFluids.3.110502,Kawaguchi,Kawaguchi2,PALAK2022100047}, emulsions~\cite{Viscous2004kawaguchi} and granular materials~\cite{ozturk2020flow,cheng2008towards,PALAK2021127405,sandnes2011patterns}, has been studied both experimentally and numerically during the past few decades. Experiments involving the displacement of shear-thickening propylene glycol (PPG)-silica suspensions by air~\cite{Kawaguchi} demonstrated that the finger velocity deviates from the prediction of the modified Darcy's law~\cite{modDarcy} as the injection pressure is increased. When a silica suspension was displaced by air at a shear rate exceeding the critical value necessary to initiate suspension shear thickening, a transition from a stable pattern to the VF instability was observed~\cite{Kawaguchi2}. Another report on the displacement of shear-thickening cornstarch suspensions by air found an excellent correlation between suspension rheology and the observed interfacial pattern morphologies~\cite{ozturk2020flow}. This work demonstrated that interfacial pattern morphologies change with increasing injection pressures as the cornstarch suspension transitions from one flow regime to another. While earlier research work focussed on the miscible displacements of shear-thinning cornstarch suspensions~\cite{PALAK2021127405} and the immiscible displacements of shear-thickening suspensions~\cite{Kawaguchi,Kawaguchi2,sandnes2011patterns,ozturk2020flow}, the miscible displacement of shear-thickening suspensions has, to the best of our knowledge, never been investigated experimentally.
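For a Newtonian fluid, the depth-averaged flow between the plates of an HS cell follows Darcy's law,
\begin{equation*}
\mathbf{u} = -\frac{b^{2}}{12\,\eta}\,\nabla p,
\end{equation*}
where $b$ is the plate gap, $\eta$ the fluid viscosity and $p$ the pressure. In the absence of stabilising interfacial tension, a flat interface is linearly unstable whenever the mobility $b^{2}/12\eta$ of the displacing fluid exceeds that of the displaced fluid, i.e. when a less viscous fluid pushes a more viscous one~\cite{saffman1958penetration}. The modified Darcy's law mentioned above generalises this description to non-Newtonian fluids by replacing $\eta$ with an effective shear-rate-dependent viscosity.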
\begin{figure}[!t]
\includegraphics[width= 3.0in]{Fig1.jpg}
\centering
\caption{\label{fig:hssem}{\bf The experimental setup, binarization procedure, and rheological measurements.} \textbf{(a)} Schematic illustration of a radial Hele-Shaw (HS) cell. \textbf{(b)} Binarization steps for detecting the inner interface between water (inner fluid) and cornstarch (CS) suspension (displaced fluid), and the outer interface between cornstarch suspension and air (outermost fluid). \textbf{(c)} Viscosity $\eta_{out}$ vs. shear rate $\dot{\gamma}$ for aqueous cornstarch suspensions of different concentrations at a plate separation of 300 $\mathrm{\mu}$m. The region highlighted in blue indicates the shear-thickening flow regime of a 40 wt.\% cornstarch suspension. Inset shows the variation in critical shear rate $\dot{\gamma}_c$ for the onset of shear-thickening as a function of concentration of the CS suspension.}
\end{figure}
In our previous report on the miscible displacement of cornstarch suspensions in the shear-thinning regime, we showed that increasing the elasticity of the suspension and viscosity ratio of the fluid pair resulted in the suppression of interfacial instability~\cite{PALAK2021127405}. In the present work, we explore the influence of the shear-thickening rheology of a dense cornstarch suspension (displaced fluid) on the onset and growth of interfacial instabilities during its radial displacement by water (inner fluid) in a quasi-two-dimensional Hele-Shaw (HS) cell. While the existing literature on the study of instabilities focusses exclusively on the propagation of the inner fluid front at the interface between the inner and displaced fluids, our present work observes two growing interfaces simultaneously in a single experiment, $viz.$, the inner interface between water and displaced cornstarch suspension, and the outer interface between cornstarch suspension and air (outermost fluid). We observe a transient withdrawal of the cornstarch suspension and the evolution of reverse fingers at the outer interface during displacement of the suspension by water at large injection flow rates. We attribute this phenomenon to the build-up of normal stresses in the highly sheared dense cornstarch suspension. We demonstrate that the generation of reverse fingers depends sensitively on injection flow rate, concentration of the cornstarch suspension and gap of the HS cell. We quantify the formation of reverse fingers by estimating the perimeter of the pattern at the outer interface ($\Delta P_{out}$), the number of reverse fingers ($N_{rf}$) and the average spacing between these fingers ($\lambda$). Furthermore, we report a clear correlation in the interfacial dynamics at the inner and outer interfaces. Finally, we show that the emergence of reverse fingers at the outer interface reduces the efficiency of displacement of the cornstarch suspension.
\section{\label{em}Materials and Methods}
A radial Hele-Shaw (HS) cell setup~(Fig.~\ref{fig:hssem}(a)), consisting of two circular glass plates, each of radius 30~cm and thickness 10~mm, is used to study the displacement of a dense cornstarch suspension by water. Teflon spacers of thicknesses 170 $\mathrm{\mu}$m, 300 $\mathrm{\mu}$m, 500 $\mathrm{\mu}$m and 800 $\mathrm{\mu}$m are used to maintain a constant gap between the glass plates. The fluids are injected with a syringe pump (NE-8000, New Era Pump Systems, USA) through a 3 mm hole drilled at the centre of the top plate. In our experiments, density matched cornstarch suspensions ($\rho$ = 1.59 $\mathrm{g/cm^3}$) are prepared by homogeneously mixing cornstarch powder (Sigma-Aldrich) in a 55 wt.\% aqueous solution of cesium chloride CsCl (ReagentPlus\textsuperscript{\textregistered}, Sigma-Aldrich)~\cite{Browndynamic,merktpersistent} using a magnetic stirrer (1 MLH, Remi Equipments Ltd., Mumbai), followed by ultrasonication (USC 400, ANM Industries Pvt. Ltd.). The sample is left undisturbed for 24 hours to ensure uniform hydration of the cornstarch particles~\cite{CSconcentration}.
To perform displacement experiments, the homogeneous cornstarch suspension is first loaded in the radial HS cell until its boundary reaches approximately 10 cm from the injection point. After loading the cornstarch suspension, Milli-Q water (Millipore Corp., resistivity 18.2 M$\Omega$.cm), dyed with rhodamine B (Sigma-Aldrich) to enhance the contrast at the interface, is injected into the HS cell as the inner displacing fluid at a controlled injection flow rate $q$. The growth of interfacial patterns is recorded using a DSLR camera (D5200, Nikon, Japan) with a spatial resolution of 1920$\times$1080 pixels (one-pixel area = $2.2 \times 10^{-3}$ cm$^2$) and a frame rate of 30~fps. The obtained stack of images is converted to grayscale format and analysed using the MATLAB 2021 image processing toolbox. The procedure for binarization of raw images is shown in Fig.~\ref{fig:hssem}(b). Snapshots of raw images corresponding to the temporal evolution of interfacial patterns are shown in Supplementary Fig.~S1. A stress-controlled rheometer (Anton Paar, MCR 702) is used to perform rheological measurements in a parallel plate geometry (PP50) at different plate separations.
Figure~\ref{fig:hssem}(c) shows the plots of the measured viscosities $\eta_{out}$ versus applied rotational shear rates $\dot{\gamma}$ for dense cornstarch suspensions of different concentrations at a plate separation of 300 $\mathrm{\mu}$m. Shear-thickening, an increase in the viscosity of the fluid with increasing $\dot{\gamma}$ above a critical shear rate $\dot{\gamma}_c$~\cite{doi:10.1122/1.3696875}, is prominent in the CS suspensions prepared at high concentrations. We observe from the inset of Fig.~\ref{fig:hssem}(c) that $\dot{\gamma}_c$ decreases with increase in concentration of the CS suspension~\cite{doi:10.1122/1.3696875}. We note that the observed decrease in the viscosities of 40 wt.\% and 42 wt.\% CS suspensions at very high shear rates arises due to the slippage of the dense CS suspensions at the stainless steel rheometer plates. All the displacement experiments and rheological measurements are performed at room temperature (25$^{\circ}$C).
\subsection{Calculations}
We estimate the shear rate $\dot\gamma$ imposed by water (the inner fluid) on the cornstarch (CS) suspension (displaced fluid) during displacement of the latter in the Hele-Shaw cell. The shear rate imposed by a propagating finger-tip is computed using $\dot\gamma = 2U/b$~\cite{Nagastu}, where $U$ is the characteristic radial propagation velocity of the interfacial finger-tips and $b$ is the gap of the Hele-Shaw cell. The finger-tip velocity, $U$, is estimated by tracking the temporal propagation of finger-tips at the inner interface using video imaging. Since each finger-tip in a pattern experiences a different local shear rate, an average over multiple finger-tips is calculated to estimate $\dot\gamma$ for each injection flow rate (Supplementary Fig.~S2). The estimated values of $\dot\gamma$ vary from $\num{6.01} \pm 0.95~\mathrm{s^{-1}}$ to $\num{230.35}\pm 52.72~\mathrm{s^{-1}}$ for the displacement of a dense 40 wt.\% CS suspension and lie above the critical shear rate $\dot{\gamma}_c = 1.33~\mathrm{s^{-1}}$ required for the onset of shear-thickening behaviour (shaded region in Fig.~\ref{fig:hssem}(c)).
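As an independent order-of-magnitude check (our estimate, with an assumed representative interface radius $r \approx 5$ cm), mass conservation for radial flow gives a gap-averaged front velocity $U \simeq q/(2\pi r b)$, so that
\begin{equation*}
\dot{\gamma} = \frac{2U}{b} \simeq \frac{q}{\pi r b^{2}} \approx \frac{0.83~\mathrm{cm^{3}\,s^{-1}}}{\pi \times 5~\mathrm{cm} \times (0.017~\mathrm{cm})^{2}} \approx 180~\mathrm{s^{-1}}
\end{equation*}
for $q$ = 50 ml/min and $b$ = 170 $\mathrm{\mu}$m, consistent with the value of $230.35 \pm 52.72~\mathrm{s^{-1}}$ obtained above from direct finger-tip tracking.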
\begin{figure}[!t]
\includegraphics[width= 5.7in ]{Fig2.jpg}
\centering
\caption{\label{patterns}{\bf Temporal evolution of interfacial patterns at two different injection flow rates.} \textbf{(a-c)} Patterns in grayscale formed during the displacement of a 40 wt.\% cornstarch suspension (displaced fluid) by water (inner fluid) at injection flow rate $q$ = 1 ml/min at different times of their growth. \textbf{(d-f)} Zoomed images of the patterns within the red-coloured boxes in (a-c) are displayed. The inner interface between water and cornstarch suspension, and the outer interface between cornstarch suspension and air (outermost fluid) are indicated by numbers 1 and 2 respectively in (f). \textbf{(g-i)} Patterns in grayscale for injection flow rate $q=$ 50 ml/min. \textbf{(j-l)} Zoomed images of the patterns within the blue-coloured boxes in (g-i) are displayed. The scale bar is 3 cm. The HS cell gap is 170 $\mathrm{\mu}$m.}
\end{figure}
\section{Results \& Discussions}
Figures~\ref{patterns}(a-c) display grayscale images showing the temporal evolution of interfacial patterns during the displacement of an aqueous 40 wt.\% cornstarch (CS) suspension (displaced fluid) by water (inner fluid) at a low injection flow rate $q$ = 1 ml/min (Supplementary Video 1) for an HS cell gap $b$ = 170 $\mathrm{\mu}$m. Magnified images of the interfacial regions enclosed in red boxes in Figs.~\ref{patterns}(a-c) are shown in Figs.~\ref{patterns}(d-f). In this work, we simultaneously explore the growth of two interfaces: the inner interface between water and the CS suspension (labelled 1 in Fig.~\ref{patterns}(f)) and the outer interface between the CS suspension and air (labelled 2 in Fig.~\ref{patterns}(f)). The growth of the inner interface due to the outward displacement of the CS suspension involves the appearance of fingers undergoing multiple tip-splitting events, as seen in Figs.~\ref{patterns}(b-c) and Supplementary Video 2. Figures~\ref{patterns}(g-i) show grayscale images of the temporal evolution of interfacial patterns during the displacement of a 40 wt.\% CS suspension by water at a very high injection flow rate $q$ = 50 ml/min. Interestingly, we note the transient withdrawal of the CS suspension during its displacement at high injection flow rates (Fig.~\ref{patterns}(h), Supplementary Video 3). This withdrawal process results in the invasion of air (outermost fluid) into the cornstarch suspension and the development of reverse fingers at the outer interface between the CS suspension and air (Fig.~\ref{patterns}(h)). Magnified images of the regions enclosed in blue boxes in Figs.~\ref{patterns}(g-i) are displayed in Figs.~\ref{patterns}(j-l). The reverse fingers appear for a very short time, and the outer interface eventually becomes smooth at later stages regardless of the applied injection flow rate (Fig.~\ref{patterns}(l)).
\begin{figure}[!t]
\includegraphics[width= 6.5in ]{Fig3.jpg}
\centering
\caption{\label{perimeter}{\bf Characterisation of the inner interface between water and cornstarch suspension and the outer interface between cornstarch suspension and air with injection flow rate $q$ as a control parameter.} \textbf{(a)} Perimeters of inner interfaces $P_{in}$ (solid lines) and outer interfaces $\Delta P_{out} = P_{out}(t) - P_{out}(0)$ (filled symbols connected by solid lines) vs. time $t$ at various injection flow rates, where $P_{out}(t)$ and $P_{out}(0)$ are perimeters at times $t$ and $t$ = 0 s respectively, with $t$ = 0 s corresponding to the time of injection of the inner fluid (water). \textbf{(b)} Number of reverse fingers $N_{rf}$ (\textcolor{black}{\small$\blacksquare$}) and average reverse finger spacing $\lambda$ (\textcolor{red(ryb)}{\large$\bullet$}) as a function of injection flow rate $q$. \textbf{(c)} Sweep efficiency $SE$ vs. time $t$ for different injection flow rates of the inner fluid. A purple ellipse highlights the sharp decrease in $SE$ due to the generation of reverse fingers. Inset shows $dSE/dt$ vs. time. \textbf{(d)} First normal stress difference $N_1$ vs. applied shear rate for cornstarch suspensions of different concentrations measured in a stress-controlled rheometer at a plate separation of 300 $\mathrm{\mu}$m in a parallel plate experimental geometry. The shear rates imposed by the inner fluid in the HS cell experiments at various injection flow rates, estimated as described in section 2.1, lie in the region highlighted in green.}
\end{figure}
We next quantify the morphologies and growth of the inner and outer interfaces (Fig.~\ref{perimeter}). It is seen from Fig.~\ref{perimeter}(a) that the perimeter of the inner interface, $P_{in}$, increases monotonically with time, showing very rapid initial growth followed by a significant slowing down at later times. The perimeter of the outer interface is defined as $\Delta P_{out} = P_{out}(t) - P_{out}(0)$, where $P_{out}(t)$ and $P_{out}(0)$ are the perimeters of the outer interface at times $t$ and $t$ = 0 s respectively, with $t$ = 0 s corresponding to the time of injection of water (inner fluid). While $\Delta P_{out}$ does not change appreciably for low injection flow rates of water, we note that it shows a non-monotonic variation with time for high injection flow rates. The initial rapid increase in $\Delta P_{out}$ for high injection flow rates is due to the formation of reverse fingers at the outer interface between the cornstarch suspension and air. After reaching a maximum, the subsequent decrease in $\Delta P_{out}$ is attributed to the fading of reverse fingers at later times. We note from Fig.~\ref{perimeter}(a) that changes in the slopes of $P_{in}$ and $\Delta P_{out}$ occur at almost the same time, thereby indicating a strong correlation in the growth kinetics of the inner and outer interfaces. The variations in the time derivatives of $P_{in}$ and $\Delta P_{out}$, $i.e.$ in the growth rates of the inner and outer interfaces, further confirm this correlation (Supplementary Fig.~S3). Such close correlation between the dynamics of interfaces 1 and 2 (Fig.~\ref{patterns}(f)) suggests that the stresses generated within the dense cornstarch suspension during its displacement by the inner fluid are also transmitted to the outer interface between the CS suspension and air.
We note that reverse fingers only appear for injection flow rates $q$ larger than 1 ml/min. The global features of the outer interface are next quantified by computing the number of reverse fingers, $N_{rf}$, and the average spacing between reverse fingers, $\lambda$, for different injection flow rates of water (15 ml/min $\leq q \leq$ 50 ml/min, Fig.~\ref{perimeter}(b)). The number of reverse fingers is estimated by identifying all the tips of the reverse fingering pattern at the outer interface when the transient reverse fingers reach their maximum lengths. Since the occurrence of reverse fingers becomes more pronounced at high injection flow rates, $N_{rf}$ increases with increasing injection flow rate (Fig.~\ref{perimeter}(b)). The average spacing between the reverse fingers is estimated as the mean Euclidean distance between adjacent finger-tips, $\lambda = {<\sqrt{(r_1)^2 + (r_2)^2 - 2 r_1 r_2 \cos(\theta_1 - \theta_2)}>}_{N_{rf}-1}$, where $r$ and $\theta$ are the polar coordinates of the tips of the reverse fingers and $<...>_{N_{rf}-1}$ denotes an average over the estimated spacings between all adjacent reverse finger-tips (Fig.~\ref{perimeter}(b)). The average reverse finger spacing $\lambda$ does not show any significant dependence on the injection flow rates explored in our experiments. Further details about the calculations of $N_{rf}$ and $\lambda$ are provided in Supplementary Fig.~S4.
Sweep efficiency ($SE$) is a non-dimensional parameter often used to determine how effectively one fluid displaces another~\cite{PALAK2021127405,shokri}. Following the protocols adopted in our previous report~\cite{PALAK2021127405}, we estimate the $SE$ of the displaced cornstarch suspension by computing the ratio of the area contacted by the inner fluid (water; black area in Fig.~\ref{fig:hssem}(a)) to the total area occupied by the inner and displaced fluids (sum of the black and white areas in Fig.~\ref{fig:hssem}(a)). Figure~\ref{perimeter}(c) shows the variation in sweep efficiency with time for all the injection flow rates $q$ used in our experiments. Before the onset of pattern growth at the inner interface at the earliest times, we observe that $SE$ is close to unity for all $q$ values. This is followed by a decrease in $SE$ at the higher flow rates as the interfacial patterns evolve at later times. The initial sharp drops in $SE$ occur at the same times at which reverse fingers are observed (highlighted by a purple ellipse in Fig.~\ref{perimeter}(c)). This is also evident from the slopes $dSE/dt$ of the $SE$ vs. $t$ curves, shown in the inset of Fig.~\ref{perimeter}(c), thereby confirming that the presence of reverse fingers significantly affects the sweep efficiency during displacement of the highly sheared cornstarch suspension.
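In symbols, the sweep efficiency defined above is
\begin{equation*}
SE(t) = \frac{A_{\mathrm{inner}}(t)}{A_{\mathrm{inner}}(t) + A_{\mathrm{displaced}}(t)},
\end{equation*}
where $A_{\mathrm{inner}}$ is the area contacted by the injected water and $A_{\mathrm{displaced}}$ is the area still occupied by the cornstarch suspension within the outer interface.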
\begin{figure}[!b]
\includegraphics[width= 6.0in ]{Fig4.jpg}
\centering
\caption{\label{asymmetric}{\bf Asymmetry in the generation of reverse fingers at the outer interface between cornstarch suspension and air.} \textbf{(a)} Interfacial pattern obtained during the radial displacement of a cornstarch suspension (40 wt.\%) by water at $q$ = 40 ml/min for HS cell gap $b$ = 170 $\mathrm{\mu}$m. The scale bar is 5 cm. Formation of reverse fingers is not axisymmetric in the different interfacial sections. Yellow lines divide the pattern into octants, which are labelled with numbers 1 to 8. $R$ is the length of the longest finger of the inner pattern between the cornstarch suspension and water and $n_{rf}$ is the number of reverse fingers in an octant. \textbf{(b)} The longest finger length $R$ and the number of reverse fingers $n_{rf}$ in each octant of the pattern displayed in (a) vs. octant number.}
\end{figure}
It has been predicted in non-linear simulations that non-zero normal stresses in a viscoelastic fluid lead to significantly higher stress asymmetries along the flow direction when compared to the normal direction~\cite{shokri}. These stress asymmetries result in an effective drag force along the flow direction. Figure~\ref{perimeter}(d) displays the rapid increase in the first normal stress difference $N_1$ with shear rate for the higher cornstarch suspension concentrations. The observed decrease in $N_1$ at very high shear rates can be attributed to slip~\cite{wallslip2,wallslip,macosko1994rheology} between the extremely dense CS suspension and the rheometer plates due to the imposition of large tangential strains. It is important to note that such slippage-induced decreases in $N_1$ (Fig.~\ref{perimeter}(d)) and $\eta_{out}$ (Fig.~\ref{fig:hssem}(c)) are observed at comparable shear rates. However, as confirmed from the temporal variations of the perimeters ($P_{in}$ and $\Delta P_{out}$) of the interfaces (Fig.~\ref{perimeter}(a)), we do not observe any intermittent changes in the growth profiles of the inner and outer interfaces in the HS cell. We therefore expect minimal or no slippage between the suspension and the HS glass plates in our displacement experiments. It was reported earlier that the sign of $N_1$ depends on the details of the particle-particle interactions in shear-thickened suspensions of colloidal silica~\cite{normal} and granular cornstarch~\cite{normal1,PhysRevResearch.4.033062}. While negative values of $N_1$ represent hydrodynamic or lubrication effects, positive values of $N_1$ reflect the dominant influence of inter-particle friction~\cite{normal}. In our experiments, the measured values of $N_1$ (Fig.~\ref{perimeter}(d)) at the imposed shear rates are always positive, indicating that inter-particle friction at the microscopic scale determines the rheology of granular cornstarch suspensions. We therefore believe that stress anisotropies in our system arise due to the formation of anisotropic force chains~\cite{Majmudar2005,doi:10.1073/pnas.2203795119,CST} supported by inter-particle friction in the sheared granular cornstarch suspension, which results in large positive values of $N_1$~\cite{PhysRevResearch.4.033062,normal1}. Therefore, the large buildup of normal stresses in CS suspensions under high shear rates causes an effective drag in the flow direction and results in the formation of reverse fingers in our displacement experiments. A recent study reported a characteristic time of a few seconds for the applied stresses to propagate across highly sheared cornstarch suspensions undergoing discontinuous shear-thickening~\cite{maharjan}. These stresses are anisotropically transmitted via particle-particle contacts in sheared force networks that form and break under large shear rates. We therefore note that the observed maximum growth of transient reverse fingers at timescales $\approx$ a few seconds for appropriately high injection flow rates (as seen from the peak positions of $\Delta P_{out}$ in Fig.~\ref{perimeter}(a)) coincides approximately with the expected time interval for the propagation of stresses through force networks in the suspension. The subsequent rearrangement of these force networks at later times results in the disappearance of the observed reverse fingers.
As seen in Fig.~\ref{asymmetric}(a), the occurrence of reverse fingers is not always axisymmetric at the outer interface between the cornstarch suspension and air. We further analyse the reverse fingering patterns for high injection flow rates by dividing each image into eight octants, with each octant labelled by a unique number between 1 and 8 as shown in Fig.~\ref{asymmetric}(a). For each octant, the number of reverse fingers, $n_{rf}$, at the outer interface and the longest finger length, $R$, of the inner pattern are estimated. The longest finger length in the inner pattern, $R$, varies appreciably in the different octants and is approximately inversely proportional to $n_{rf}$ as seen in Fig.~\ref{asymmetric}(b). This indicates that growth of the inner pattern is comparatively slower in the interfacial sections having more reverse fingers $n_{rf}$. While we may expect a decrease in the finger spacing $\lambda$ with an increasing number of equally distributed reverse fingers, we note that the constant value of $\lambda$ with changing $q$ as seen in Fig.~\ref{perimeter}(b) also indirectly indicates that the reverse fingers are not necessarily axisymmetric (Fig.~\ref{asymmetric}(a)). It seems reasonable to conclude that the anisotropic build-up of normal stresses~\cite{Majmudar2005,doi:10.1073/pnas.2203795119,CST} in the displaced cornstarch suspension causes unequal drag forces in the sample. These drag forces lead to the observed slower growth of the propagating finger-tips of the inner pattern in certain sections, an effective withdrawal of the cornstarch suspension, and the generation of pronounced reverse fingers at the outer interface.
\begin{figure}[!t]
\includegraphics[width= 6.0in ]{Fig5.jpg}
\centering
\caption{\label{gap}{\bf Characterisation of the inner interface between water and cornstarch suspension, and the outer interface between cornstarch suspension and air while increasing gap $b$ of the Hele-Shaw cell.} Interfacial patterns obtained during the displacement of a 40 wt.\% cornstarch suspension by water at $q$ = 50 ml/min for different gaps $b$ of the Hele-Shaw (HS) cell: \textbf{(a)} $b$ = 170 $\mathrm{\mu m}$, \textbf{(b)} $b$ = 300 $\mathrm{\mu m}$ and \textbf{(c)} $b$ = 500 $\mathrm{\mu m}$. The scale bar is 5 cm. \textbf{(d)} Perimeters of inner interfaces $P_{in}$ (solid lines) and outer interfaces $\Delta P_{out}$ (filled symbols connected by solid lines) for different gaps $b$. \textbf{(e)} Number of reverse fingers $N_{rf}$ (\textcolor{black}{\small$\blacksquare$}) and average reverse finger spacing $\lambda$ (\textcolor{red(ryb)}{\large$\bullet$}) at the outer interface as a function of $b$.}
\end{figure}
We next investigate the effect of confinement on the reverse fingering patterns when a dense cornstarch suspension (40 wt.\%) is displaced by water at a fixed injection flow rate $q$ = 50 ml/min in a Hele-Shaw cell with gaps $b$ varying from 170 $\mathrm{\mu}$m to 500 $\mathrm{\mu}$m (Figs.~\ref{gap}(a-c)). We see the formation of reverse fingers for the lower HS cell gaps ($b$ = 170 and 300 $\mathrm{\mu}$m, Figs.~\ref{gap}(a-b)), but observe only a slight withdrawal of the outer interface and the absence of reverse finger formation at $b$ = 500 $\mathrm{\mu}$m (Fig.~\ref{gap}(c)). When $b$ is increased to 800 $\mathrm{\mu}$m, we note that water, the inner fluid, spreads over the cornstarch suspension rather than displacing it (Supplementary Video 4). We next quantify the effects of confinement on the onset and growth of the patterns at the inner and outer interfaces by estimating the pattern perimeters $P_{in}$ and $\Delta P_{out}$, the number of reverse fingers $N_{rf}$ and the average reverse finger spacing $\lambda$ for different values of $b$. As reported above for pattern formation at a fixed HS cell gap and different injection flow rates (Fig.~\ref{perimeter}(a)), $P_{in}$ increases monotonically with time and shows a change in slope at intermediate times, while $\Delta P_{out}$ shows non-monotonic time-dependence (Fig.~\ref{gap}(d)) for the smaller HS cell gaps. The decrease in the peak values of $\Delta P_{out}$ with increasing $b$ (Fig.~\ref{gap}(d)) is a consequence of a decrease in the number of reverse fingers $N_{rf}$ (Fig.~\ref{gap}(e)) with the removal of confinement. Simultaneously, $\lambda$ is seen to increase with $b$ (Fig.~\ref{gap}(e)). As demonstrated earlier, the formation of reverse fingers depends sensitively on the first normal stress difference $N_1$. Our rheometric measurements reveal that $N_1$ decreases steadily with increasing gap thickness of the rheometer plates (Supplementary Fig.~S5). This is consistent with previous work that highlighted the increasingly strong shear-thickening rheology of cornstarch suspensions with decreasing plate separations~\cite{doi:10.1122/1.3696875}. The large values of $N_1$ at low rheometer plate separations indicate that confined geometries are necessary for the generation of reverse fingers.
\begin{figure}[!t]
\includegraphics[width= 5.0in ]{Fig6.jpg}
\centering
\caption{\label{Varconc}{\bf Characterisation of the inner interface between water and cornstarch (CS) suspension, and the outer interface between cornstarch suspension and air while increasing the concentration of the CS suspension.} Interfacial patterns formed by the displacement of cornstarch suspensions of various concentrations: (a) 35 wt.\% (b) 40 wt.\% (c) 42 wt.\% at time $t$ = 2 s after injection of water at $q$ = 50 ml/min in the HS cell of gap $b$ = 170 $\mathrm{\mu}$m. The scale bar is 5 cm. (d) Perimeters of the inner interfaces $P_{in}$ (solid lines) and outer interfaces $\Delta P_{out}$ (filled symbols connected by solid lines) vs. time $t$ for the above cornstarch suspensions.}
\end{figure}
We therefore conclude that the formation of reverse fingers at the outer interface requires the build-up of large normal stresses in the cornstarch suspension and can be achieved either by increasing the injection flow rate of the inner fluid or by decreasing the gap of the HS cell. The important role of the first normal stress difference in the generation of reverse fingers is further confirmed by increasing the concentration and therefore the elasticity~\cite{PALAK2021127405} of the displaced cornstarch suspension. Figures~\ref{Varconc}(a-c) display grayscale images of the interfacial patterns formed when water injected at an injection flow rate $q$ = 50 ml/min displaces cornstarch suspensions of different concentrations in a Hele-Shaw cell of gap $b$ = 170 $\mathrm{\mu}$m. While reverse fingering is observed when cornstarch suspensions of higher concentrations are displaced by water, it is absent when a cornstarch suspension of a lower concentration (35 wt.\%) is displaced at the same injection flow rate. The higher peak value of the outer perimeter $\Delta P_{out}$ and its non-monotonic evolution with time at the highest concentration (42 wt.\%; Fig.~\ref{Varconc}(d)) indicate the enhanced formation of reverse fingers at the outer interface. The variations in the time derivatives of $P_{in}$ and $\Delta P_{out}$ (Supplementary Fig.~S6(a-b)), $i.e.$ in the growth rates of the inner and outer interfaces, indicate a correlation in the growth kinetics of the inner and outer interfaces. We conclude by noting that the time-evolutions of $P_{in}$ and $\Delta P_{out}$ qualitatively follow the same trend regardless of whether cornstarch suspensions of increasing concentrations are displaced at a fixed injection flow rate (Fig.~\ref{Varconc}(d)) or a cornstarch suspension of fixed concentration is displaced at increasing injection flow rates (Fig.~\ref{perimeter}(a)).
\section{Conclusions}
Dense aqueous granular cornstarch (CS) suspensions display shear-thinning and shear-thickening flows when externally applied shear stresses and particle concentrations are varied appropriately~\cite{peters2016direct}. It is now well-known that hydrodynamic lubrication forces and the formation of anisotropic force chain networks govern the unique rheology of cornstarch suspensions~\cite{wagner2009shear,peters2016direct,doi:10.1073/pnas.2203795119,CST,Jamali}. The spatially anisotropic stresses arising from the formation of force chain networks~\cite{Majmudar2005, doi:10.1073/pnas.2203795119,Jamali} at large shear rates should influence the displacement efficiency~\cite{shokri} of a dense shear-thickening cornstarch suspension in a Hele-Shaw (HS) cell. In this work, we investigate miscible displacements of confined shear-thickening cornstarch (CS) suspensions (displaced fluid) when the shear rates imposed on the suspension are greater than the critical shear rate $\dot\gamma_c$ required for the onset of shear-thickening. Displacement of dense CS suspensions at large shear rates is achieved by systematically varying the injection flow rate of the displacing inner fluid (water) and the gap of the HS cell. We note that previous literature focussed exclusively on the propagation of the inner fluid front at the inner fluid-fluid interface~\cite{saffman1958penetration,Homsy,pinilla2021,ozturk2020flow,PALAK2021127405,PALAK2022100047,Linder2000Viscous,Zhang2021Structures}. Besides monitoring the generation of viscous fingers at the inner interface between water and the CS suspension, we report here an unexpected growth of transient reverse fingers at the outer interface between the CS suspension and air (outermost fluid) at sufficiently high injection flow rates. Our observation of an inverse relation between the number of reverse fingers at the outer interface and the rate of growth of the inner pattern establishes the presence of a strong correlation between the growth kinetics of the two interfaces. Our rheometric measurements of large positive values of the first normal stress difference in dense cornstarch suspensions at large applied shear rates ($\dot{\gamma} \geq \dot{\gamma}_c$) indicate inter-particle frictional interactions~\cite{normal,frictional} and the generation of shear-induced anisotropic force chain networks~\cite{Majmudar2005,doi:10.1073/pnas.2203795119,Jamali}. We believe that the anisotropic stress profiles that are generated in the highly-sheared CS suspension are responsible for the observed emergence of reverse fingers at the outer interface. We also note the enhanced formation of reverse fingers for low gap widths of the HS cell and high concentrations of the CS suspensions. Since our rheology experiments clearly demonstrate that normal stresses in the CS suspension increase with decreasing gap of the HS cell~\cite{doi:10.1122/1.3696875} and increasing suspension concentration~\cite{normal}, our results verify the important contribution of normal stresses in pattern formation during the displacement of viscoelastic suspensions in confined geometries. The magnitude of the normal stresses generated in viscoelastic fluids such as emulsions~\cite{foamsn} and polymeric solutions~\cite{polym} is strongly dependent on their individual internal microstructures.
In order to thoroughly investigate the relation between normal stresses, sample microstructures and morphologies of interfacial displacement patterns, it would be interesting to systematically perform displacement experiments with different materials and externally imposed shear profiles.
In a significant advance to our previous work~\cite{PALAK2021127405} where we proposed different experimental protocols for controlling interfacial instabilities during the miscible displacement of shear-thinning cornstarch suspensions, we report here the first experimental observation of reverse fingers at the outer interface between a highly sheared CS suspension and air during miscible displacement of the suspension by water. Since displacement efficiency~\cite{shokri} depends on the morphologies of the inner and outer interfaces, we observe a sharp decrease in sweep efficiency due to the generation of reverse fingers at the outer interface. The role of shear-dependent rheology in determining the morphologies of interfacial patterns formed during the displacement of a more viscous fluid by a less viscous one is of fundamental and practical interest. Besides being fascinating from a fluid mechanics point of view, the understanding of interfacial instabilities can be useful in many areas such as hydrology~\cite{Hydrology}, oil recovery by water flooding~\cite{oilrecovery}, in enhancing the mixing of fluids~\cite{PhysRevLett.106.194502,Soltanian2017}, while fabricating structured soft materials~\cite{Marthelot2018} and in the control of dendritic growth morphologies in rechargeable batteries~\cite{dendrite}. The present work can also have useful implications in cementing processes involving the removal of drilling mud and its substitution with cement slurries~\cite{cement}.
\section*{Declaration of Competing Interest}
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section*{Data Availability}
Source data are available for this paper from the corresponding author upon reasonable request.
\section*{Acknowledgments}
We thank the Raman Research Institute (RRI, India) for funding our research and the Department of Science and Technology Science and Engineering Research Board (DST SERB, India) grant EMR/2016/006757 for partial support.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:intro}
In the past two decades, wide-field optical survey telescopes have discovered
hundreds of extreme optical transients which can be split into two
classes: superluminous supernovae (SLSNe) with peak luminosities
at least 10 times higher than those of ordinary SNe (see
\citealt{Gal2012,Gal2018} for reviews), and rapidly evolving optical
transients (REOTs, e.g., \citealt{Drout2014,Arc2016,Tanaka2016,Pur2018})
whose rise time is $\sim 2-12$ days.
Although the nature of the majority of SLSNe and REOTs is elusive, a major
fraction of SLSNe are believed to be SNe Ic or IIn powered by a magnetar
or ejecta-circumstellar medium (CSM) interaction, while some REOTs have
been confirmed to be type Ibn and Ic SNe \citep{Ho2021}.
iPTF~16asu, discovered by the intermediate Palomar Transient Factory
(iPTF, \citealt{Cao2016}) on 2016 May 11.26 UT, is one such rapidly evolving
SN Ic at redshift $z$ = 0.187.
The follow-up observations and analysis show that
the $g-$band light curve of iPTF~16asu peaked at absolute magnitude of $-$20.4 mag
\citep{White2017} (W17 hereafter) which is comparable to $g-$band peaks of SLSNe
PTF10hgi ($M_{g,{\rm peak}} = -20.42$ mag)
and PTF11rks ($M_{g,{\rm peak}} = -20.76$ mag) \citep{Inse2013}.
W17 find that late-time spectra of iPTF16asu show the features of the
spectra of broad-lined Type Ic SNe (SNe Ic-BL).
W17 use several models to fit the pseudo-bolometric light curve they constructed.
They find that the full pseudo-bolometric light curve can be fitted by neither the
$^{56}$Ni model nor the shock-breakout model, but can be fitted by the magnetar model.
W17 suggest, however, that the derived ejecta mass (0.086\,M$_\odot$) is
too small and conclude that the value is unreasonable.
As pointed out by W17, the bolometric luminosity of iPTF~16asu might be
underestimated since they constructed the post-peak pseudo-bolometric light
curve by accounting for the observed flux. Using this method, the
peak of the pseudo-bolometric light curve is $(3.4\pm 0.3)\times 10^{43}$ erg s$^{-1}$,
which is lower than the blackbody luminosity at day $+3.47$
($(6.4\pm 1.6)\times 10^{43}$ erg s$^{-1}$) they derived by using the blackbody
model. \footnote{\cite{Wang2019} construct the bolometric light curve of iPTF~16asu
and suggest that its peak luminosity is $3.8\times 10^{43}$ erg s$^{-1}$, which is
slightly higher than that given by W17.}
The pre-peak pseudo-bolometric light curve might also be imprecise, since
it was obtained by assuming that the ratio of the $g-$band flux to the total flux is constant,
while this ratio might vary.
Due to the fact that rapidly evolving (super-)luminous SNe Ic-BL are extremely
rare, the real bolometric light curve and the energy sources of
iPTF~16asu deserve further study. In this paper, we re-investigate these two issues.
In Section \ref{sec:SED}, we re-investigate the SEDs
of iPTF~16asu and derive the bolometric luminosity at some epochs.
In Section \ref{sec:modeling}, we model the multi-band light curves
of iPTF~16asu using different models. We discuss our results in Section
\ref{sec:discussion} and draw some conclusions in Section \ref{sec:Con}.
\section{The Blackbody Fits for SEDs of iPTF~16asu}
\label{sec:SED}
W17 have used the blackbody model to fit the SEDs at the epochs when the
observations in at least three bands are available simultaneously
and the first two spectra of iPTF~16asu, reporting the evolution of temperature
and the radius of the photosphere of iPTF~16asu. However,
the derived blackbody luminosity of iPTF~16asu at most epochs has not been
presented. Here, we re-fit the SEDs at all epochs at which photometry in at least three bands is available.
To combine the $gri$ photometry with that of $Swift$-UVOT, the $gri$ data at day $+3.51$ have
been interpolated to day $+3.47$.
We fit the unique UV--optical--IR SED (at day $+3.47$) by using the UV-absorbed blackbody model,
in which the optical--IR part of the SED is fitted by the equation
$F_{\nu,{\rm ph}}(\lambda > \lambda_{\rm cut}) =F_{\nu,{\rm bb}} = (2 {\pi} h{\nu}^3/c^2)
(e^{\frac{h{\nu}}{k_{\rm b}T_{\rm ph}}}-1)^{-1}\frac{R_{\rm ph}^2}{D_L^2}$,
while the UV part of SED is fitted by $F_{\nu,{\rm ph}}(\lambda \leq \lambda_{\rm cut}) =
\big(\frac{\lambda}{\lambda_{\rm cut}}\big)^{\beta}F_{\nu,{\rm bb}}$
(see, e.g., \citealt{Pra2017,Nich2017b,Yan2020}); here, $T_{\rm ph}$ is the temperature of the SN photosphere,
$R_{\rm ph}$ is the radius of the SN photosphere, $D_L$ is the luminosity distance of the SN,
$\lambda_{\rm cut}$ is the cutoff wavelength, $\beta$ is a power-law index \citep{Yan2020}.
\footnote{\cite{Pra2017} and \cite{Nich2017b} suppose that $\lambda_{\rm cut} =3000$ \AA, $\beta=1$.}
Although the SEDs at all other epochs lack UV photometry, we suggest that it is more
reasonable to assume that they are also UV-absorbed and to invoke the UV-absorbed model to
fit them. The values of $\lambda_{\rm cut}$ and $\beta$ are taken to be the best-fitting
ones of the fit for the SED at day $+3.47$, because there are no UV data to constrain the
two parameters for the other epochs. For comparison, however, the standard blackbody model is also invoked to fit the same SEDs.
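For concreteness, a minimal numerical sketch of the UV-absorbed blackbody flux defined above, together with the bolometric luminosity obtained by integrating it over frequency, is given below (Python, cgs units; the frequency grid is our choice, and the explicit $4\pi D_L^2$ factor converts the flux density at the observer into a luminosity):
\begin{verbatim}
# Hedged sketch (Python, cgs): UV-absorbed blackbody SED and its
# frequency integral. Grid limits and names are illustrative.
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16

def F_nu_ph(nu, T_ph, R_ph, D_L, lam_cut, beta):
    F_bb = (2*np.pi*h*nu**3/c**2) / np.expm1(h*nu/(k_B*T_ph)) \
           * R_ph**2 / D_L**2
    lam = c / nu
    # suppress the flux bluewards of the cutoff wavelength
    return np.where(lam <= lam_cut, (lam/lam_cut)**beta, 1.0) * F_bb

def L_ph(T_ph, R_ph, D_L, lam_cut, beta, n=4000):
    nu = np.logspace(13.0, 16.7, n)      # Hz, spanning IR to far-UV
    return 4*np.pi*D_L**2 * np.trapz(F_nu_ph(nu, T_ph, R_ph, D_L,
                                             lam_cut, beta), nu)
\end{verbatim}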
The Markov Chain Monte Carlo (MCMC) method using the \texttt{emcee} Python package \citep{Foreman-Mackey2013}
is used to obtain the best-fitting parameters and the 1-$\sigma$ range of the parameters.
We present the medians and 1-$\sigma$ bounds of the temperature and the radii
of the SN photosphere, as well as the bolometric luminosity derived by using
$L_{\rm ph} = \int_0^{\infty} F_{\nu,{\rm ph}}{\rm d}\nu$ at all epochs
in Table \ref{table:SED_L}, and plot all fits reproduced by
the UV-absorbed blackbody model (the solid lines) as well as the standard
blackbody model (the dashed lines) in Figure \ref{fig:SED}.
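A minimal version of the fitting step itself, assuming a Gaussian likelihood and flat priors within the quoted bounds (the data file, prior limits, and sampler settings below are placeholders rather than values from this work, and the model reuses the \texttt{F\_nu\_ph} helper sketched above), might look like:
\begin{verbatim}
# Hedged sketch (Python): SED fit with emcee. Data file, priors, and
# sampler settings are placeholders.
import numpy as np
import emcee

nu_obs, F_obs, F_err = np.loadtxt("sed_day3p47.txt", unpack=True)
D_L = 2.9e27     # cm; approximate luminosity distance at z = 0.187

def log_prob(p):
    T_ph, R_ph, lam_cut, beta = p
    if not (3e3 < T_ph < 3e4 and 1e14 < R_ph < 1e16
            and 1e-5 < lam_cut < 1e-4 and 0.0 < beta < 5.0):
        return -np.inf                                 # flat priors
    model = F_nu_ph(nu_obs, T_ph, R_ph, D_L, lam_cut, beta)
    return -0.5 * np.sum(((F_obs - model) / F_err)**2)

p0 = np.array([1.2e4, 2e15, 2.8e-5, 1.5])
start = p0 * (1.0 + 1e-3 * np.random.randn(32, 4))
sampler = emcee.EnsembleSampler(32, 4, log_prob)
sampler.run_mcmc(start, 5000)
chain = sampler.get_chain(discard=1000, flat=True)  # medians, 1-sigma
\end{verbatim}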
We find that the UV-absorbed blackbody model is better than the standard blackbody model,
because the former can fit all the data at day $+3.47$, whereas
the latter cannot fit the $u$-band photometry (see Figure \ref{fig:SED}; see also Figure 6 of W17).
The value of the reduced $\chi^2$ ($\chi^2$/dof = 0.20, where dof denotes the degrees of freedom) of the former
is also smaller than that ($\chi^2$/dof = 0.79) of the latter. The derived value of $\lambda_{\rm cut}$ is
$2757.81^{+429.64}_{-365.93}$ \AA, which is comparable to that of some type I SLSNe
($\sim$3000 \AA, e.g., \citealt{Chom2011}, \citealt{Pra2017}, \citealt{Nich2017a}, and \citealt{Nich2017b}).
The photosphere temperature at day $+3.47$ we derive
($12,004^{+743}_{-632}$ K, see Table \ref{table:SED_L}) is higher than that derived by W17 ($10,800\pm 250$ K);
in contrast, the photosphere radius at the same epoch we derive ($(2.18\pm 0.17)\times 10^{15}$ cm)
is smaller than that derived by W17 ($(2.6\pm 0.2)\times 10^{15}$ cm). Adopting
$L_{\rm ph} = \int_0^{\infty} F_{\nu,{\rm ph}}{\rm d}\nu$ (rather than
$L_{\rm ph} = 4 \pi \sigma T_{\rm ph}^4R_{\rm ph}^2$), we find that the bolometric luminosity
at day $+3.47$ is $6.20^{+0.19}_{-0.18}\times 10^{43}$ erg s$^{-1}$, which is slightly lower than
that derived by W17 ($(6.4\pm 1.6)\times 10^{43}$ erg s$^{-1}$),
but still about 2 times the peak ($(3.4\pm 0.3)\times 10^{43}$ erg s$^{-1}$)
of the pseudo-bolometric light curve constructed by W17.
We compare the post-peak bolometric light curve we derive from the SEDs and the
pseudo-bolometric light curve constructed by W17 by plotting them in Figure \ref{fig:SED-L}.
It can be found that the bolometric light curve we derive is significantly more luminous
than the pseudo-bolometric light curve constructed by W17. In particular, the extrapolated
peak of the bolometric light curve of iPTF~16asu can exceed $\sim 10^{44}$ erg s$^{-1}$,
which is about 3 times the peak of the pseudo-bolometric light curve
($(3.4\pm 0.3)\times 10^{43}$ erg s$^{-1}$) and brighter than the threshold for SLSNe ($7\times 10^{43}$ erg s$^{-1}$,
\citealt{Gal2012}). This indicates that iPTF~16asu might be a genuine SLSN.
\section{Modeling the Multi-band Light Curves of iPTF~16asu}
\label{sec:modeling}
The fact that the bolometric luminosity inferred from the UV-absorbed blackbody
fits of the SEDs is higher than the pseudo-bolometric light curve constructed by W17
indicates that the energy source powering the luminosity evolution must be
re-investigated.
We first use the $^{56}$Ni model to fit the multi-band light curves. The details
of the $^{56}$Ni model reproducing the bolometric light curves of SNe can be found in
\citet{Wang2015b} and references therein. \footnote{Note that the
factor $(1-e^{-\tau_{\gamma}(t)})$ in Equation (1) of \citet{Wang2015b} and other
literature (e.g., \citealt{Cha2012}, \citealt{Cha2013}, \citealt{Wang2015a},
\citealt{Nich2017b}) that presents the $\gamma$-ray trapping factor ought to be
inside, rather than outside, the integral.}
Throughout this paper, the optical opacity of the ejecta $\kappa$ is
taken to be 0.10 cm$^2$~g$^{-1}$.
To fit the multi-band light curves, the photosphere evolution
modules must be incorporated into the model, as done by \citet{Nich2017b} for the
fits of the multi-band light curves of SLSNe, see their Equations (8) and (9).
The equations determining SEDs are presented in Section \ref{sec:SED}.
The definitions, the units, and the priors of the parameters of
the $^{56}$Ni model are listed in Table \ref{tab:Nimag-parameters}.
\footnote{The values of $\lambda_{\rm cut}$ and $\beta$ are taken
to be those derived by the fit for the first SED, see Section \ref{sec:SED}.}
The MCMC method using the \texttt{emcee} Python package
\citep{Foreman-Mackey2013} is also used here.
The $^{56}$Ni model cannot match the pre-peak $g-$band light curve,
see Figure \ref{fig:multibandfits}. Moreover, the derived $^{56}$Ni mass is
$1.98_{-0.03}^{+0.01}$~M$_\odot$, which is $\sim 3.6$ times that
derived by W17 ($0.55$~M$_\odot$), and significantly higher than
the ejecta mass ($0.30_{-0.07}^{+0.08}$~M$_\odot$, see Table \ref{tab:Nimag-parameters},
or Figure \ref{fig:corner_Ni}). This supports W17's conclusion that the $^{56}$Ni model
is disfavored.
The magnetar model is one of the most prevailing models used
to account for the SNe that cannot be explained by the $^{56}$Ni model.
For completeness, however, the contribution of $^{56}$Ni is also
included, and the ratio of the $^{56}$Ni mass
to the ejecta mass is assumed to be $\leq$ 0.2 \citep{Ume2008}.
The details of the magnetar plus $^{56}$Ni model we adopt can be found in
\citet{Wang2015b} and references therein.
The photosphere evolution modules are also from Equations (8) and (9) of \citet{Nich2017b}.
We suppose that the mass and radius of the magnetar are
respectively $1.4$~M$_\odot$ and 10 km and list the free parameters of the magnetar plus $^{56}$Ni
model, their definitions, as well as the priors in Table \ref{tab:Nimag-parameters}.
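The magnetar power entering such models is the dipole spin-down law; a sketch in one common convention (prefactors differ by factors of order unity between papers, and the code below is our own illustration, not the exact implementation of \citealt{Wang2015b}) is:
\begin{verbatim}
# Hedged sketch (Python, cgs): dipole spin-down input power
# L(t) = L0 / (1 + t/tau)^2 for a 1.4 Msun, 10 km magnetar.
import numpy as np

I_NS, R_NS, c = 1.0e45, 1.0e6, 2.998e10

def magnetar_luminosity(t, P0_ms, B14):
    omega0 = 2.0 * np.pi / (P0_ms * 1e-3)
    E_rot = 0.5 * I_NS * omega0**2         # initial rotational energy
    L0 = (B14*1e14)**2 * R_NS**6 * omega0**4 / (6.0 * c**3)
    tau = E_rot / L0                       # spin-down timescale, s
    return L0 / (1.0 + t / tau)**2

# e.g. the best-fitting Table 2 values: P0 ~ 9.8 ms, B ~ 9.6e14 G
L_mag = magnetar_luminosity(np.logspace(3, 7, 200), 9.81, 9.57)
\end{verbatim}
With these parameters the sketch gives $\tau$ of order $10^5$ s, i.e. a few days, qualitatively consistent with the rapid light-curve evolution discussed below.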
The fit produced by the magnetar plus $^{56}$Ni model is also shown
in Figure \ref{fig:multibandfits}.
The parameters and the corresponding corner plot are presented in
Table \ref{tab:Nimag-parameters} and Figure \ref{fig:corner_magni}, respectively.
The magnetar plus $^{56}$Ni model can better fit the pre-peak $g-$band light curve,
and the value of $\chi^2$/dof is 4.31, which is smaller than that (6.61) of the
$^{56}$Ni model.
\section{Discussion}
\label{sec:discussion}
\subsection{Analysis of the Parameters of the Magnetar plus $^{56}$Ni Model}
The ejecta mass derived by the magnetar model is $0.21_{-0.06}^{+0.08}$~M$_\odot$, which
is $\sim 2-3$ times that derived by W17. Although it is still smaller than those of
other SLSNe, we suggest that this value is not problematic since it is comparable with
those of ultra-stripped SNe \citep{Tauris2013,Tauris2015}.
\footnote{For comparison, \cite{De2018} show that the ejecta mass of iPTF~14gqr which
is a fast-evolving SN Ic is $\sim 0.2$~M$_\odot$; \cite{Yao2020} study a rapidly evolving
SN~Ib SN~2019dge, and find that its ejecta mass is $\sim\,0.3$\,M$_\odot$; \cite{Prit2021}
study SN~2018gep which is another luminous ($M_{\rm r,peak}=-19.49\pm0.23$ mag) fast-evolving
($t_{\rm rise,V}\lesssim 6.2\pm 0.8$ days) SN Ic-BL with spectra resembling those of iPTF~16asu,
and show that its ejecta mass might be $\sim 0.26$~M$_\odot$ for the magnetar+$^{56}$Ni model
or $\sim 0.49$~M$_\odot$ for the ejecta--circum-stellar medium (CSM) interaction plus $^{56}$Ni
model.} Furthermore, the inferred ejecta mass would be larger if the value of $\kappa$ were
0.05$-$0.07 cm$^2$ g$^{-1}$ instead of the value we adopt.
Therefore, we suggest that iPTF~16asu might be an ultra-stripped SLSN Ic whose
luminosity was boosted by a nascent magnetar.
The derived ``floor temperature'' ($6410.26^{+114.72}_{-108.19}$ K)
is consistent with that derived by W17 ($\sim 6000$ K,
see their Figure 7) from the fits for the SEDs of iPTF~16asu
as well as many other SLSNe I (see Table 3 of \citealt{Nich2017b}).
On the other hand, the temperature flattens to the ``floor temperature''
$\sim$ 18 days after the peak, which
is significantly earlier than for other SLSNe ($\sim$ 50 days, see
\citealt{Nich2017b}). The reason is that the ejecta mass is very small
and the photosphere shrank more quickly than those of other SLSNe.
The early-time photosphere velocity inferred is $(2.91\pm 0.16)\times 10^9$ cm s$^{-1}$,
slightly lower than the value ($(3.45\pm 0.54)\times 10^9$ cm s$^{-1}$) given by W17.
\footnote{In principle, the velocity of the photosphere is lower than the
velocity inferred from absorption lines of the spectra, because the latter are formed
in the SN atmospheres which are above the photospheres and have larger expansion
velocity, see the discussion in \cite{Wang2022}.}
The derived rise time is $5.97_{-0.42}^{+0.34}$ days, slightly
larger than that derived using the second-order polynomial fit
($3.97\pm 0.19$ days, W17).
The small ejecta mass and the high velocity result in a short diffusion time and
rise time, so the required input power is lower than those of ``normal'' SLSNe
(according to the ``Arnett law'', \citealt{Arn1982}). Therefore, the best-fitting
$P_0$ (9.79 ms) is significantly larger than those of most SLSNe ($\sim 1-5$ ms).
\subsection{The Theoretical Bolometric Light Curve, the Temperature Evolution, and the Radius Evolution of iPTF~16asu}
In section \ref{sec:SED}, we get the post-peak bolometric light curve of iPTF~16asu by
fitting the SEDs at seven epochs and using $L_{\rm ph} = \int_0^{\infty} F_{\nu,{\rm ph}}{\rm d}\nu$
to calculate the luminosity at the epochs. To obtain the full bolometric light curve, we use
the theoretical multi-band light curves produced by the best-fitting parameters to construct its
bolometric light curve by integrating the theoretical SEDs at all epochs, see the top-left panel of
Figure \ref{fig:bolo}. We also plot the temperature evolution and the radius evolution
reproduced by the magnetar plus $^{56}$Ni model in Figure \ref{fig:bolo}
(see the top-right and the bottom panels, respectively).
For comparison, bolometric luminosity, the temperature evolution and the radius evolution
derived by the photometry (see Table \ref{table:SED_L}) are also plotted in the same figure.
The peak luminosity of the bolometric light curve we construct is $\sim 1.06\times 10^{44}$
erg s$^{-1}$, which is $\sim 3$ times the peak luminosity derived by W17 ($(3.4\pm 0.3)\times10^{43}$ erg s$^{-1}$),
indicating that iPTF~16asu is NOT a luminous SN between SLSNe and normal SNe, but a genuine SLSN
even if the most stringent threshold ($>7\times 10^{43}$ erg s$^{-1}$, \citealt{Gal2012}) is adopted.
Using trapezoidal integration over the period from the explosion to 100 days after it,
we calculate a total radiated energy of $\sim 1.28\times 10^{50}$ erg, which is
also about 3$-$4 times that calculated by W17 ($(4.0\pm 0.6)\times 10^{49}$ erg).
As pointed out by \cite{Inse2013}, using $L_{\rm ph} = 4 \pi \sigma T_{\rm ph}^4R_{\rm ph}^2$
would overestimate the bolometric luminosity.
However, our method of deriving the theoretical bolometric luminosity at all epochs
(including the epochs without any observations) can avoid overestimating the
bolometric luminosity, since we adopt the UV-absorbed blackbody model
\footnote{Although only one SED has UV photometry, our UV-absorbed
fit for the multi-band light curves of iPTF~16asu was applied for all epochs.}
and calculate the luminosity via the equation
$L_{\rm ph} = \int_0^{\infty} F_{\nu,{\rm ph}}{\rm d}\nu$,
rather than $L_{\rm ph} = 4 \pi \sigma T_{\rm ph}^4R_{\rm ph}^2$.
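Schematically, this construction amounts to evaluating the model SED on a time grid and integrating; in the sketch below (Python), \texttt{T\_model} and \texttt{R\_model} are hypothetical stand-ins for the photosphere evolution of the best-fitting model, and \texttt{L\_ph}, \texttt{D\_L} reuse the helpers sketched in Section \ref{sec:SED}:
\begin{verbatim}
# Hedged sketch (Python): bolometric light curve and total radiated
# energy from the theoretical SEDs. T_model/R_model are hypothetical
# stand-ins for the best-fitting photosphere evolution.
import numpy as np

t_days = np.linspace(0.1, 100.0, 400)
L_bol = np.array([L_ph(T_model(t), R_model(t), D_L,
                       lam_cut_best, beta_best) for t in t_days])
E_rad = np.trapz(L_bol, t_days * 86400.0)   # erg, trapezoidal rule
\end{verbatim}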
\section{Conclusions}
\label{sec:Con}
iPTF~16asu has so far been classified as a luminous, rapidly evolving SN Ic-BL whose
multi-band light curves resemble those of luminous REOTs. In this paper, we re-analyze the SEDs of iPTF~16asu and re-construct
its post-peak bolometric light curve. We find that the bolometric luminosity at $+3.47$ days
after the peak is $6.20^{+0.19}_{-0.18}\times 10^{43}$ erg s$^{-1}$. Although this value is
slightly lower than the value ($(6.4\pm1.6)\times 10^{43}$ erg s$^{-1}$) derived by W17,
it is still significantly brighter than the peak of the pseudo-bolometric light curve constructed by W17
($(3.4\pm0.3)\times10^{43}$ erg s$^{-1}$).
Extrapolating the post-peak bolometric light curve we derive would result in a peak
luminosity of $\sim 10^{44}$ erg s$^{-1}$, which exceeds the threshold of SLSNe
($>7\times10^{43}$ erg s$^{-1}$, \citealt{Gal2012}). This fact indicates that the luminosity of the
bolometric light curve of iPTF~16asu might be about 3 times that of the pseudo-bolometric
light curve constructed by W17, and iPTF~16asu might be a SLSN.
We use the $^{56}$Ni model and the magnetar plus $^{56}$Ni model to
fit the multi-band light curves, and construct the theoretical bolometric light curves by
integrating the theoretical SEDs constructed by the theoretical multi-band light curves.
Our modeling disfavors the $^{56}$Ni model and favors the magnetar plus $^{56}$Ni model.
Although the ejecta mass we derived is $0.21_{-0.06}^{+0.08}$~M$_\odot$ which is very low,
we suggest that this value is not problematic, since it is comparable with the ejecta masses
of ultra-stripped SNe, and a smaller value of $\kappa$ would give a larger ejecta
mass. It is reasonable to expect that a magnetar boosted the luminosity of iPTF~16asu and made it
an ultra-stripped, rapidly evolving SLSN Ic-BL.
The bolometric light curve derived by the theoretical SEDs extracted from the theoretical
multi-band light curves reproduced by the best-fitting parameters of the magnetar plus
$^{56}$Ni model shows that its peak is $\sim 1.06\times 10^{44}$ erg s$^{-1}$,
indicating that iPTF~16asu is indeed a rapidly evolving
SLSN Ic-BL, since the peak luminosity is higher than the threshold of SLSNe. This conclusion
is robust, since we use the UV-absorbed blackbody model to obtain the theoretical SEDs at all
epochs, and avoid overestimating the bolometric luminosity.
Our work further highlights the importance and robustness of the method directly fitting
the multi-band light curves of SNe and other optical transients. Especially, it is very
difficult to construct the early-time (pseudo-)bolometric light curves for a fraction of
SNe and other optical transients discovered by various wide-field sky-survey telescopes,
since only one or two band observations for them are available at the early epochs.
For the SNe and other optical transients having sparse early-time data,
modeling the multi-band light curves can yield more reliable results.
\acknowledgments
We thank the anonymous referee for helpful comments and
suggestions that have allowed us to improve this manuscript.
This work is supported by National Natural Science Foundation of China
(grant 11963001).
\clearpage
\begin{table*}
\centering
\tabletypesize{\scriptsize}
\caption{\label{table:SED_L}The medians and 1-$\sigma$ bounds of the parameters of the UV-absorbed Blackbody model for SEDs of iPTF16asu.}
\begin{tabular}{c c c c c c c}
\hline\hline
\colhead{Phase} & \colhead{$T_{\rm ph}$} & \colhead{$R_{\rm ph}$} & \colhead{$\lambda_{\rm cut}$} & \colhead{$\beta$} & \colhead{$L_{\rm bolo}$} & \colhead{$\chi^{\rm 2}$/dof} \\
(days) & ($\rm 10^{3}$ K) & ($\rm 10^{15}$ cm) & ($\rm \mathring{A}$) & -- & ($\rm 10^{43}\ erg\ s^{-1}$ ) & -- \\
\hline
3.47 & $12.01^{+0.73}_{-0.63}$ & $2.18^{+0.17}_{-0.17}$ & $2758.56^{+432.87}_{-366.65}$ & $1.5^{+0.79}_{-0.62}$ & $6.20^{+0.19}_{-0.18}$ & 0.2\\
10.24 & $7.01^{+0.43}_{-0.39}$ & $4.06^{+0.49}_{-0.44}$ & -- & -- & $2.82^{+0.10}_{-0.10}$ & 5.89 \\
12.77 & $6.43^{+0.21}_{-0.2}$ & $4.39^{+0.34}_{-0.31}$ & -- & -- & $2.33^{+0.08}_{-0.07}$ & 0.024 \\
14.41 & $6.56^{+0.27}_{-0.25}$ & $3.99^{+0.36}_{-0.33}$ & -- & -- & $2.09^{+0.07}_{-0.06}$ & 4.54 \\
17.72 & $6.19^{+0.16}_{-0.16}$ & $4.19^{+0.24}_{-0.23}$ & -- & -- & $1.83^{+0.03}_{-0.03}$ & 13.26 \\
18.59 & $6.21^{+0.28}_{-0.26}$ & $3.99^{+0.42}_{-0.39}$ & -- & -- & $1.67^{+0.08}_{-0.07}$ & 1.63 \\
19.47 & $5.95^{+0.31}_{-0.28}$ & $4.16^{+0.51}_{-0.45}$ & -- & -- & $1.54^{+0.08}_{-0.07}$ & 14.96 \\
\hline
\noalign{\smallskip}
\end{tabular}
\end{table*}
\clearpage
\begin{table*}
\tabletypesize{\scriptsize}
\caption{The Definitions, the units, the prior, the medians, 1-$\sigma$ bounds, and the best-fitting values for the parameters of the $^{56}$Ni and the magnetar plus $^{56}$Ni models. The values of $\chi^{\rm 2}$/dof (reduced $\chi^{\rm 2}$) are also presented.}
\label{tab:Nimag-parameters}
\hspace{-30pt}
\begin{tabular}{c c c c c c}
\hline\hline
& \colhead{Definition} & \colhead{Unit} & \colhead{Prior} & \colhead{Best fit} & \colhead{Median}\\
\hline
{\bf $^{56}$Ni}\\
\hline
$M_{\rm ej}$ & the ejecta mass & M$_\odot$ & $[0.1, 50]$ & 0.35 & $0.3^{+0.08}_{-0.07}$ \\
$v$ & the ejecta velocity & $10^9$ cm s$^{-1}$ & $[1.0, 5.0]$ & 2.48 & $2.53^{+0.12}_{-0.12}$ \\
$M_{\rm Ni}$ & the $^{56}$Ni mass & M$_\odot$ & $[0.0, 2.0]$ & 1.99 & $1.98^{+0.01}_{-0.03}$ \\
$\log \kappa_{\rm \gamma, Ni}$ & gamma-ray opacity of $^{56}$Ni-cascade-decay photons & cm$^2$g$^{-1}$ & $[-1.57, 4] $ & -0.78 & $-0.7^{+0.13}_{-0.11}$ \\
$T_{\rm f}$ & the temperature floor of the photosphere & K & $[3000, 10^4] $ & 6398.78 & $6397.14^{+104.1}_{-105.89}$ \\
$t_{\rm shift}$ & the explosion time relative to the first data & days & $[-20, 0]$ & -8.35 & $-8.1^{+0.4}_{-0.42}$ \\
$A_{\text{host,V}}$ & Extinction in the host galaxy & mag & $[0, 1]$ & 0.001 & $0.0037^{+0.01}_{-0.0}$ \\
$\chi^{\rm 2}$/dof & & & & 6.6 & 6.64 \\
\hline\hline
{\bf Magnetar + $^{56}$Ni}\\
\hline
$M_{\rm ej}$ & the ejecta mass & M$_\odot$ & $[0.1, 50]$ & 0.19 & $0.21^{+0.08}_{-0.06}$ \\
$v$ & the ejecta velocity & $10^9$ cm s$^{-1}$ & $[1.0, 5.0]$ & 2.95 & $2.91^{+0.16}_{-0.16}$ \\
$M_{\rm Ni}$ & the $^{56}$Ni mass & M$_\odot$ & [0.0, 0.2\,$M_{\rm ej}$] & 0.029 & $0.018^{+0.02}_{-0.01}$ \\
$P_0$ & the initial period of the magnetar & ms & $[0.8, 50]$ & 9.81 & $9.7^{+0.27}_{-0.37}$ \\
$B$ & the magnetic field strength of the magnetar & $10^{14}$ G & $[0.1, 100]$ & 9.57 & $9.36^{+0.22}_{-0.22}$ \\
$\log \kappa_{\rm \gamma, Ni}$ & gamma-ray opacity of $^{56}$Ni-cascade-decay photons & cm$^2$g$^{-1}$ & $[-1.57, 4] $ & 0.41 & $0.34^{+0.18}_{-0.18}$ \\
$\log \kappa_{\rm \gamma, mag}$ & gamma-ray opacity of magnetar photons & cm$^2$g$^{-1}$ & $[-2, 4] $ & 0.28 & $0.82^{+2.16}_{-1.52}$ \\
$T_{\rm f}$ & the temperature floor of the photosphere & K & $[3000, 10^4] $ & 6362.82 & $6408.44^{+114.73}_{-108.44}$ \\
$t_{\rm shift}$ & the explosion time relative to the first data & days & $[-20, 0]$ & -5.84 & $-5.97^{+0.34}_{-0.42}$ \\
$A_{\text{host,V}}$ & Extinction in the host galaxy & mag & $[0, 1]$ & 0.0071 & $0.021^{+0.03}_{-0.02}$ \\
$\chi^{\rm 2}$/dof & & & & 4.3 & 4.36 \\
\hline\hline
\noalign{\smallskip}
\end{tabular}
\end{table*}
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=0.4\textwidth,angle=0]{fitting3-47.pdf}
\includegraphics[width=0.4\textwidth,angle=0]{fitting10-24.pdf}
\includegraphics[width=0.4\textwidth,angle=0]{fitting12-77.pdf}
\includegraphics[width=0.4\textwidth,angle=0]{fitting14-41.pdf}
\includegraphics[width=0.4\textwidth,angle=0]{fitting17-72.pdf}
\includegraphics[width=0.4\textwidth,angle=0]{fitting18-59.pdf}
\includegraphics[width=0.4\textwidth,angle=0]{fitting19-47.pdf}
\end{center}
\caption{The best fits of the SEDs of iPTF~16asu at all epochs.
The solid lines represent the theoretical SEDs produced by the UV-absorbed blackbody model.
For comparison, the fits using the standard blackbody model are plotted by the dashed lines.
The data are from Table 1 of W17, the triangle in the first panel represents the $V-$band upper limit.}
\label{fig:SED}
\end{figure}
\clearpage
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=0.6\textwidth,angle=0]{SED-L.pdf}
\end{center}
\caption{The post-peak bolometric light curve derived from the SEDs
(the filled squares), the pseudo-bolometric light
curve constructed by W17 is represented by the filled circles.}
\label{fig:SED-L}
\end{figure}
\clearpage
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=0.70\textwidth,angle=0]{fitting_Ni.pdf}
\includegraphics[width=0.70\textwidth,angle=0]{fitting_MagNi.pdf}
\end{center}
\caption{The best fits (the solid curves) of the multi-band light curves
of iPTF~16asu using the $^{56}$Ni model (the top panel) and the magnetar plus $^{56}$Ni model
(the bottom panel), respectively. The shaded regions indicate 1-$\sigma$ bounds
of the parameters. The dotted lines and dashed lines in the bottom panel
are the light curves powered by the $^{56}$Ni and the magnetar, respectively.
The data are from Table 1 of W17, triangles represent upper limits.}
\label{fig:multibandfits}
\end{figure}
\clearpage
\begin{figure}[tbph]
\begin{center}
\includegraphics[width=0.48\textwidth,angle=0]{L.pdf}
\includegraphics[width=0.48\textwidth,angle=0]{T.pdf}
\includegraphics[width=0.48\textwidth,angle=0]{R.pdf}
\end{center}
\caption{The bolometric light curve, the temperature evolution, and the radius evolution reproduced by
the magnetar plus $^{56}$Ni model. The shaded regions indicate 1-$\sigma$ errors of the parameters.
Also plotted are the corresponding values (see Table \ref{table:SED_L}) derived from the observations.}
\label{fig:bolo}
\end{figure}
\section{Introduction}
The so-called ``Steep--Flat--Steep'' behavior \cite{gt05, nousek05}
of the early (up to $\sim$a day) X--ray afterglow was unpredicted
before we could observe it with {\it Swift}.
It has been interpreted in several ways
(for reviews, see e.g. \cite{zhang07})
none of which seems conclusive.
The spectral slope does not change across the temporal
break from the shallow to the normal decay phase,
ruling out a changing spectral break as a viable explanation.
A hydrodynamical or geometrical nature of the break is instead preferred.
Furthermore, the X--ray and optical lightcurves often do not
track one another (e.g. \cite{pana06, pana07})
suggesting a possible different origin.
To solve these difficulties Uhm \& Beloborodov \cite{uhm07}
and Genet, Daigne \& Mochkovitch \cite{genet07} suggested that
the X--ray plateau emission is not due to the
forward, but to the reverse shock running into ejecta of
relatively small (and decreasing) Lorentz factors.
This however requires an appropriate $\Gamma$--distribution of the
ejecta, and also the suppression of the X--ray flux produced by the
forward shock.
We (\cite{gg07}) instead suggested
that the plateau phase of the X--ray emission (and sometimes even of
the optical) is due to a prolonged activity of the central engine (see also
\cite{lp07}),
responsible for a ``late--prompt'' phase:
after the early ``standard'' prompt the central engine continues to
produce for a long time (i.e. days) shells of progressively lower
power and bulk Lorentz factor.
The dissipation process during this and the early phases
occurs at similar radii (namely close to the transparency radius).
The reason for the shallow decay phase, and for the break ending it,
is that the $\Gamma$--factors of the late shells are
monotonically decreasing, allowing the observer to see an
increasing portion of the emitting surface, until all of it is visible.
Then the break occurs when $\Gamma=1/\theta_j$.
\begin{figure}
\includegraphics[height=0.5\textheight]{ghisellini_f1.eps}
\caption{
Cartoon of the proposed model, and schematic illustration of the different
components contributing to the X--ray and optical light curves, as labelled.
Scales are arbitrary.
The early prompt phase is erratic, with shells of varying $\Gamma$ and power.
Then the central engine produces shells of progressively less power and
bulk Lorentz factors, producing a smoother light curve.
Since the average $\Gamma$--factor is decreasing, the observer sees
an increasing portion of the emitting area, until all of it
becomes visible when $\Gamma \sim 1/\theta_j$.
When this occurs there is a break in the light curve,
associated with the ending of the shallow phase.
The case illustrated here is only one (likely the most common)
possible case, when the X--ray flux is dominated by late
prompt emission (solid line, the dotted line corresponds to an
extrapolation at very late times), while the optical flux is dominated
by the real afterglow (dashed).
Adapted from \cite{gg07}.
}
\end{figure}
\section{The shallow X--ray afterglow phase}
\subsection{The time ending the shallow phase}
Willingale et al. \cite{willi07} have proposed to describe the
X--ray afterglow light curve with a rising exponential connecting to
a power law function.
The end of the shallow phase is the junction between the exponential and
the power law, and it is called $T_a$.
They showed that interpreting $T_a$ as a jet break time one
obtains, for the {\it Swift} bursts in their sample, a good correlation
between the peak energy of the prompt spectrum, $E_{\rm peak}$,
and the collimation corrected energetics $E_\gamma$, with a small
scatter and a slope identical to the so called Ghirlanda relation \cite{ggl04}
(which identifies as a jet break time the break in the optical light curve,
occurring usually much later), challenging the physical nature of
the Ghirlanda relation.
Nava et al. \cite{nava07} have then
investigated this issue with a larger sample, finding that the correlation
found by \cite{willi07} does not have the same slope as the
Ghirlanda one, and it is not as tight.
More importantly, they demonstrated that $T_a$ does not play any role
in the construction of the correlation found by \cite{willi07},
which is instead (entirely) a by--product of the $E_{\rm peak}$--$E_{\rm iso}$
correlation (the so-called ``Amati'' relation, \citealt{ama02}).
In fact there is no (anti)--correlation between $T_a$ and $E_{\rm iso}$
(``\`a la Frail'', \cite{frail01}) for GRBs of the same $E_{\rm peak}$
(see \cite{nava07} for more details and figures).
\subsection{Prolonged central engine activity}
The time $T_a$ is not a jet break time; still, it may be produced by a
mechanism very similar to the process responsible for the
jet break visible during the deceleration of the fireball.
Consider the accretion onto the newly formed
black hole, and suppose that it occurs in two phases.
The first is short, intense, erratic,
corresponding to the early prompt phase of GRBs.
The second is longer, smoother, with a rate decreasing in time,
corresponding to the late prompt emission.
The first accretion mode might correspond to the
accretion of the equatorial core material
which failed to form the black hole in the first place.
It can form a very dense accreting torus, which can sustain a strong magnetic
field, which in turn efficiently extracts the rotational energy of the
black hole.
After this phase, some fall--back material may also be accreted,
with a density smaller than in the early phases.
The magnetic field that this matter can sustain is weaker than
before, with a corresponding smaller power extracted from the black hole spin.
This may well correspond to the production of shells of smaller
$\Gamma$--factors.
These shells can dissipate part of their energy with the same mechanism
of the early ones.
Occasionally, in this late prompt phase,
the central engine may produce a faster than average shell,
giving rise to the late flares often observed in the Swift/XRT light curves.
In the scenario we have proposed, there is a simple relation between the
function describing how $\Gamma$ decreases in time and the
observed decay slopes before and after $T_a$.
Assume that the plateau phase is described by $L(t)\propto t^{-\alpha_2}$,
followed by a steeper decay $L(t)\propto t^{-\alpha_3}$.
Then, by geometry alone, one can derive that (\cite{gg07}):
\begin{equation}
\Gamma \, \propto t^{-(\alpha_3-\alpha_2)/2}
\end{equation}
We can also estimate how the baryon loading of the late shells
changes in time.
Assume $L(t) \propto \eta \Gamma\dot M c^2$, and consider
for simplicity $t>T_a$, when all the jet is visible.
Then, for constant $\eta$ we have:
\begin{equation}
\dot M \, \propto \, t^{-(\alpha_2+\alpha_3)/2}
\end{equation}
If we insert the average values of $\alpha_3$ and $\alpha_2$
($\sim 1.25\pm 0.25$ and $\sim 0.6\pm 0.3$, respectively, see \cite{pana06})
we approximately have $\dot M\propto t^{-1}$ and $\Gamma\propto t^{-1/3}$.
This means that the total energy (i.e. integrated over time,
$E =\int \Gamma \dot M c^2 dt$, beginning from the start of the plateau phase)
involved in the late phase is smaller than the energy spent during
the early prompt.
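These order-of-magnitude statements are easy to verify (a trivial sketch in Python; the slope values are the averages quoted above):
\begin{verbatim}
# Sanity check (Python) of the scalings above for the average slopes.
alpha2, alpha3 = 0.6, 1.25
gamma_index = -(alpha3 - alpha2) / 2.0  # Gamma ~ t^-0.325 ~ t^(-1/3)
mdot_index = -(alpha2 + alpha3) / 2.0   # Mdot  ~ t^-0.925 ~ t^(-1)
# Gamma*Mdot ~ t^(-1.25): the integral E = int Gamma Mdot c^2 dt
# converges and is dominated by the earliest part of the plateau,
# so the late-prompt energy budget stays modest.
print(gamma_index, mdot_index)          # -> -0.325 -0.925
\end{verbatim}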
\subsection{Observational tests}
If we allow for {\it two} origins for the emission during and
after the X--ray plateau phase (one due to the late prompt and
the other due to the conventional forward shock), we can account
for a variety of cases: both the optical and the X--rays
are late prompt emission or forward shock emission;
or X--rays and optical are ``decoupled'',
one due to late prompt and the other to the forward shock.
One obvious way to check these possibilities is through the simultaneous
spectral energy distribution (SED), which can confirm whether or not
the X--ray and the IR--optical fluxes belong to the same component.
If the emission in the two bands has a different origin,
the two components should not ``interfere'' with one another,
requiring that the X--ray spectrum breaks at low energies,
and the optical at high ($\sim$UV) energies.
The unknown extinction due to the host galaxy material
may be a complication, but infrared data can help.
The SED so obtained may clearly show if the IR--optical
and X--ray emission belong (or not) to two different components.
Since in our scenario the late central activity is not
energetically demanding, another test concerns the total kinetic energy
of the fireball after its radiative phase,
using the radio data, as done
e.g. for GRB 970508 \cite{frail00}.
Should the derived energetics be smaller than
what is required by, e.g., the refreshed shock scenario,
one could exclude this possibility, and instead
favor our scenario.
In cases in which the late prompt emission ends, the underlying
forward shock emission can be revealed.
In the light curve, this should appear as a steep--flat transition
at late times (not to be confused with the usual
steep--flat--steep X--ray decay).
This can also be confirmed by the corresponding SEDs.
\begin{theacknowledgments}
I gratefully thank all my collaborators: A. Celotti, C. Firmani, G. Ghirlanda,
M. Nardini, L. Nava and F. Tavecchio.
\end{theacknowledgments}
\bibliographystyle{aipprocl}
\section{Introduction}
Object recognition in humans is based primarily on shape, \citep{grill-spector_lateral_2001,biederman_recognition-by-components_1987,biederman_surface_1988, hoffman_visual_1998, kourtzi_representation_2001}. In contrast, deep networks (DNs) trained on conventional object and scene datasets such as ImageNet have only a weak grasp of true shape, instead classifying images mainly based on color, texture, local shape, and context cues \citep{baker_deep_2018, brendel_approximating_2019, geirhos_imagenet-trained_2019}. The lack of a "shape bias" in conventional DNs is a likely contributor to the various un-biological performance characteristics of DNs, including their susceptibility to adversarial inputs \citep{goodfellow_explaining_2015}; their propensity to confidently classify random noise patterns as specific objects; their poor generalization behavior; and their inability to recognize objects based on line drawings, though line drawings explicitly represent the key information needed for recognition in humans \citep{brendel_approximating_2019, geirhos_imagenet-trained_2019, russakovsky_imagenet_2015}.
A poor grasp of global object shape does not prevent conventional DNs from performing well on benchmark tasks, however: state-of-the-art top-5 performance on ImageNet is approaching 99\% \citep{pham_meta_2021}. It therefore seems that ImageNet does not test, and is evidently not well suited for training, the basic representational capability that underlies human object and scene vision.
We have developed a new benchmarking approach based on the idea – similar to that motivating some contrastive learning approaches \citep{chen_simple_2020} – that the core competence of a 3D recognition system is to produce similar internal visual codes when familiar objects or scenes are viewed from different perspectives, under different lighting conditions, and/or with different backgrounds. On the other hand, to state the obvious, the recognizing system should produce different codes for different objects, regardless of viewpoint, lighting conditions, etc.
The ability to perform well at this basic matching task is, in our view, a pre-requisite for performing well and generalizing well in real-world object recognition tasks. Conversely, a recognition system that performs poorly at this task cannot be said to understand shape.
In the following, we describe our image set, our measures of task performance, and the way we control task difficulty.
\begin{figure}
\centering
\includegraphics[width=12cm]{figure1.png}
\caption{Complete set of 3D object models (from shapenet.org) rendered using Blender at their "origin viewpoints". Objects were grouped into 20 categories, with 10 instances of each.}
\label{fig:all_objects}
\end{figure}
\section{The ShapeY Image Set}
Constructing a challenging 3D shape recognition benchmark based on nearest-neighbor view matching requires (1) the ability to densely sample 3D views for each represented object; (2) the ability to manipulate (or eliminate) all non-shape-related cues; and (3) the availability of a large and diverse set of 3D object models. \citet{borji_ilab-20m_2016} provide a comprehensive review of 3D view-based image databases; some meet one or two of these requirements, but none – or any other database that we are aware of – satisfies all three. We therefore set out to produce a new object view database using Blender and publicly available 3D models from Shapenet.org \citep{chang_shapenet_2015}.
Our image database currently contains $\sim$ 63,000 rendered images of 3D objects. Each 256x256 image depicts a single object, grey in color, against a black background. The database contains 20 basic level object categories (chair, airplane, plant, etc.), 10 instances of each category (Figure \ref{fig:all_objects}), and 321 3D views of each instance. Object views are grouped into "series" representing different combinations of viewpoint transformations (CVTs). Each series is centered on a common "origin view" of the object, with 5 viewpoint steps moving away from the origin in both directions for a total of 11 views per series. Five types of rigid transformation were used (x, y, pitch, roll and yaw; scale changes were excluded so as to preserve object detail that would be lost at smaller scales), leading to 31 possible CVTs (31 = 5 transformations chosen 1, 2, 3, 4, or 5 at a time). In each viewpoint step the object was transformed simultaneously along all dimensions in the CVT. For example, in a series combining "x" and "roll", each step in the series came with a horizontal shift of $\sim$3.3\% of the image width, combined with 9\degree of image plane rotation. In series containing "pitch" or "yaw", the object was rotated in depth around the horizontal or the vertical axis of the object, respectively. The step sizes meant that over the 10 steps from one end of a series to the other, the object could shift by $\sim$33\% of the image width and/or rotate by 90\degree, or both. Examples of all series that contained transformations in both pitch and yaw ('pw') are shown in Figure 2.
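The series bookkeeping above is simple to reproduce (a minimal sketch; the single-letter dimension labels follow the 'xyprw' naming used in the text):
\begin{verbatim}
# Sketch (Python): enumerate the 31 combined viewpoint transformations
# (CVTs) over the five rigid dimensions x, y, pitch, roll, yaw.
from itertools import combinations

dims = "xyprw"
cvts = [''.join(c) for k in range(1, 6) for c in combinations(dims, k)]
assert len(cvts) == 31                 # 5C1 + 5C2 + ... + 5C5
pw_series = [s for s in cvts if 'p' in s and 'w' in s]
assert len(pw_series) == 8             # series containing pitch & yaw
\end{verbatim}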
\begin{figure}
\centering
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=4.5cm}}]{figure}[\FBwidth]
{\caption{Positive match candidates (PMCs) for view \#8 (blue box) out of 11 in the series with CVT = 'pw'. Rows show all 8 series containing 'pw'. The difficulty of the matching task is controlled by excluding positive match candidates in the "vicinity" of the reference view in viewpoint space. The "exclusion zone" shown (red shading) is for an exclusion radius $r_e=2$.} \label{fig:transforms_and_exclusions}}
{\includegraphics[height=2.5in]{Figure2.png}}
\end{figure}
\section{Novel features of the ShapeY performance benchmark}
ShapeY has three notable features. First, it is designed to probe the micro-structure of the embedding space of a shape-representing network by directly asking "what looks most similar to what" in that space. Given an input image, a response is scored as "correct" if the closest match is to another view of the same object; "categorically correct" if the closest match is to a view of a different object within the same category; and "incorrect" if the closest match is to a "distractor" from a different object category. Thus, unlike most OR benchmarks in wide use, in which a response is considered correct if an object view is rated as \emph{generally} better matched to the correct class compared to other classes (e.g. by computing cosine distance to a class prototype in the final layer of a DN), our benchmark enforces the stronger condition that there should be \emph{no single view of any other object} that matches an input better than the best-matching same-object view. For example, given a particular view of a lamp, even if 99 out of 100 of the closest views in the database are images of the same lamp, if the single closest match is a view of a boat, the trial is scored as an error. Measuring performance in this "worst case" manner allows us to more sensitively detect "tangles" in the fine structure of the shape-space embedding \citep{dicarlo_untangling_2007}.
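The scoring rule can be summarised in a short Python sketch; the function and variable names below are illustrative and do not correspond to a released implementation.
\begin{verbatim}
import numpy as np

def score_trial(ref_obj, ref_cat, dists, objs, cats):
    # dists: distances from the reference embedding to all candidate
    # embeddings; objs/cats: object and category labels of candidates.
    best = int(np.argmin(dists))
    if objs[best] == ref_obj:
        return "correct"                # closest match: same object
    if cats[best] == ref_cat:
        return "categorically correct"  # same category, other object
    return "incorrect"                  # distractor, another category
\end{verbatim}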
Second, our benchmark allows task difficulty to be finely controlled through the use of "exclusions". In the case of a viewpoint exclusion, we choose an exclusion radius $r_e$, and then eliminate as positive match candidates (PMCs) all same-object views surrounding, and therefore most similar to, the input view, up to $r_e$ steps along a designated set of transformation dimensions. For example, if $r_e=2$, and 'pw' is the designated set of exclusion transformations, then PMCs must be at least 3 viewpoint steps away from the reference view in both pitch and yaw, and can therefore be drawn only from the 8 series whose CVTs contain both pitch and yaw ('pw', 'xpw', 'xypw', 'xprw', 'xyprw', 'ypw', 'prw', 'yprw') (Figure \ref{fig:transforms_and_exclusions}). This particular set of exclusion parameters guarantees that any successful match to a reference view must have bridged at least a 27\degree change in both pitch and yaw, and could also differ from the reference view by 3 viewpoint steps along one or more other viewpoint dimensions. Measuring the decline in matching performance as $r_e$ increases allows us to quantify the degree of 3D viewpoint variation that the shape-representing system can tolerate before false matches to similar-looking distractor objects begin to increase in frequency. Similarly, by varying the composition of the exclusion transformation set, we can test which dimensions of viewpoint variation, singly or in combination, are most disruptive to shape-matching performance.
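The viewpoint-exclusion test can be sketched as follows. The sketch represents each view by its signed step coordinates along the transformation dimensions (zero along dimensions its series does not transform), which is a simplification of the actual benchmark bookkeeping.
\begin{verbatim}
def eligible_pmc(ref_coords, cand_coords, excl_dims, r_e):
    # A same-object view is a positive match candidate only if it
    # lies more than r_e viewpoint steps from the reference along
    # every exclusion dimension.
    return all(abs(cand_coords.get(d, 0) - ref_coords.get(d, 0)) > r_e
               for d in excl_dims)

# Example: reference at pitch/yaw step +2, exclusion set 'pw', r_e = 2.
ref = {"p": 2, "w": 2}
print(eligible_pmc(ref, {"p": -1, "w": -1}, "pw", 2))  # True: 3 steps away
print(eligible_pmc(ref, {"p": 1, "w": 1}, "pw", 2))    # False: in the zone
\end{verbatim}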
In addition to ignoring modest changes in viewpoint, a shape-based recognition system must be capable of ignoring changes in non-shape cues, including the colors and textures of objects and backgrounds, changes in lighting conditions, etc. (this is the main idea underlying contrastive learning). We quantified the ability to cope with these types of changes through the use of "appearance exclusions". An example of an appearance exclusion would be a "contrast exclusion", in which object views rendered in the original format with black backgrounds (Figure \ref{fig:all_objects}) could only be matched to views of themselves with light backgrounds. That is, views containing the original black backgrounds were excluded from the set of PMCs. To put this into practice, we doubled the database to include a second, light background version of every object view. In the "hard" version of the contrast exclusion task, given a "reference" view, all same-object views with dark backgrounds were excluded as match candidates, forcing the system to recognize the same shape despite the change in background. All views of \emph{other} objects were not subject to the exclusion, however, and were available to match in the original black background only. In the "soft" version of the task, \emph{all} match candidates were subject to the exclusion, including all views of different objects and of the same object.
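The two variants of the contrast exclusion can be expressed as a candidate mask; again, this is a hedged sketch with names of our own choosing.
\begin{verbatim}
def candidate_allowed(same_object, dark_bg, mode):
    # 'hard': same-object views lose their dark-background versions,
    # while views of other objects remain dark-background only.
    # 'soft': every candidate must be a light-background view.
    if mode == "hard":
        return (not dark_bg) if same_object else dark_bg
    if mode == "soft":
        return not dark_bg
    raise ValueError(mode)
\end{verbatim}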
\begin{figure}
\centering
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=3.3cm}}]{figure}[\FBwidth]
{\caption{Nearest neighbor matching error, plotted against the exclusion radius $r_e$. Top row shows errors for object matching; bottom row shows errors for category matching. Three columns show results for exclusion transformation sets with 1, 2, and 3 viewpoint transformation dimensions, respectively.} \label{fig:nnerror_graphs}}
{\includegraphics[height=8.3cm]{Figure3_final.png}}
\end{figure}
\section{Results}
We tested the performance of a ResNet50 \citep{he_deep_2015}, pre-trained on ImageNet \citep{paszke_pytorch_2019}. Results of a basic matching test, with error rates averaged over all 63,000 views, are shown in Figure \ref{fig:nnerror_graphs} for all exclusion transformation sets involving either 1, 2, or 3 transformation dimensions (columns 1-3, respectively).
Error rates were surprisingly high.
For the single transformation dimension 'p', the error rate was already 45\% for $r_e = 2$, corresponding to an enforced 27\degree change in object pitch. When pitch and yaw were combined ('pw'), the error rate climbed to nearly 60\% at $r_e = 2$. Error rates were generally worse when more transformation dimensions were combined; worse for depth rotations than image plane rotations; and much worse for rotations than shifts. (The near-perfect shift invariance for $r_e=2$ was expected, given that the ResNet50 embedding was taken from the global average pooling layer, which explicitly pools across image shifts.) Category error rates were lower, but remained substantial. For example, a 27\degree change in object pitch and roll led to a 33\% error rate, meaning that 1 in every 3 views in the database was judged to be most similar to a view from an entirely different object category.
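For readers who wish to reproduce this kind of measurement, the sketch below shows one plausible way to extract global-average-pooling embeddings from a pre-trained ResNet50 with PyTorch and to match views by the correlation of their embedding vectors. It illustrates the procedure described above and is not our exact pipeline.
\begin{verbatim}
import torch
import torchvision.models as models

resnet = models.resnet50(pretrained=True).eval()
# Drop the final classification layer; keep everything up to avgpool.
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])

@torch.no_grad()
def embed(batch):                      # batch: (N, 3, 256, 256) tensor
    return backbone(batch).flatten(1)  # (N, 2048) GAP embeddings

def correlation_nn(ref, candidates):
    # Index of the candidate whose embedding correlates best with ref.
    r = ref - ref.mean()
    c = candidates - candidates.mean(dim=1, keepdim=True)
    corr = (c @ r) / (c.norm(dim=1) * r.norm() + 1e-12)
    return int(torch.argmax(corr))
\end{verbatim}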
The numerical results shown in Figure \ref{fig:nnerror_graphs} provide a quantitative summary of the shape representation capabilities of a vision system, and especially the ability to tolerate 3D viewpoint variation. Our approach to nearest-neighbor matching with exclusions can also provide a qualitative measure of the "tangledness" of the shape embedding by analyzing shape match failures.
Match failures are particularly informative regarding the quality of the shape embedding: when a view of a reference object is found to be very close to a view of even one other object of very different shape, it is likely that the reference view is close to a large number of other very different shapes as well, whose discovery depends mainly on having a sufficient number of distractors in the view database. Several examples of match failures are shown in Figure \ref{fig:error_examples}, along with same-object match candidates that were rejected by the DN.
\begin{figure}
\centering
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={left,bottom},capbesidewidth=5cm}}]{figure}[\FBwidth]
{\caption{Eight examples of nearest-neighbor matching errors by a pre-trained ResNet50. Within each group of 3 images, the best match to the reference view (according to ResNet50) is shown at left, and a more similar-appearing correct match rejected by the DN is shown at right. Values in orange show the correlation between that view's embedding vector and that of the reference view.\newline} \label{fig:error_examples}}
{\includegraphics[height=5cm]{Figure4.png}}
\end{figure}
We next tested matching performance of ResNet50 with the added challenge of a "contrast exclusion". Figure \ref{fig:contrast_reversal} shows results for the exclusion transformation 'pr' using both the object and category matching criteria. When a reference view could only be matched to contrast-reversed views (soft version), category error rates climbed from 33\% to 42\% at $r_e = 2$, and object matching errors climbed above 70\%. The difficulty encountered by ResNet50 in matching contrast-reversed views is striking: even when the PMCs for a reference view included \emph{the exact same object view} but for the change in background, the reference view was falsely matched more than 70\% of the time to an object from an entirely different category.
\begin{figure}[h]
\centering
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=6cm}}]{figure}[\FBwidth]
{\caption{Nearest-neighbor matching errors when a "contrast exclusion" was compounded with viewpoint exclusions. Results shown are for the exclusion transformation set 'pr'. The first point on the x-axis means that \emph{all} contrast-reversed views, including the reference view itself, were available as positive match candidates.} \label{fig:contrast_reversal}}
{\includegraphics[height=5cm]{Figure5.png}}
\end{figure}
\section{Discussion}
As an approach to OR performance testing, pairwise view matching has three particular merits. First, the ability to recognize familiar objects and scenes despite modest changes in viewing distance, angle, lighting, and background is an innate capability of biological vision systems, and is arguably a more fundamental visual capability than image classification. Thus, we can imagine a vision system that can reliably rate the similarity of two views of an object without knowing the object class, but it would seem paradoxical if a system could correctly classify objects while being unable to rate the similarity of two object views. In short, pairwise view matching is a good starting point for evaluating recognition competence. Second, our approach allows task difficulty to be controlled by parametrically varying the set of views that are qualified to be positive match candidates for any given reference view. Third, collecting and analyzing matching errors allows us to draw a rough equivalence between an amount of viewpoint change, which should minimally alter an object's shape code, and an amount of actual shape change. If we discover that a modest depth rotation of an object regularly alters the embedding vector as much as switching to a different object category, then the embedding is performing poorly. This applies to ResNet50 when pre-trained on ImageNet: a viewpoint change of 27\degree in pitch and yaw alters the embedding vector as much as switching from a birdhouse to a couch, or from a faucet to a beer mug (Figure \ref{fig:error_examples}). Likewise, if switching the background of an object from dark to light changes the embedding vector as much as a change of object category, we may conclude the embedding space is badly entangled. This can serve as a cautionary note when utilizing embeddings on downstream transfer learning tasks.
It is worth noting that, when views of objects of very different shape are found to lie near to each other in the embedding space, then it is likely that every object view is near to a panoply of different shapes. Therefore as the number of different objects in the database increases, and the embedding space becomes more densely populated with views, the rate of false matches is likely to approach 100\%. As a reference point, our database currently contains 20 basic level object categories; by comparison, the number of basic level categories that a human subject effortlessly commands is $\sim$100-fold larger (in the range of 1,000--3,000 \citep{biederman_recognition-by-components_1987}; see Footnote 10).
\section{Introduction}
Frequency modulation (FM) is a basic acoustic feature of animal vocalisation, human speech and music. In human speech, consonants preceding and following a vowel can be acoustically characterised by formant transitions: a series of simultaneous fast FM sinusoids of around 50\,ms duration that start or finish in the frequencies characterising the vowel \cite{Kent2008}. At all stages of the ascending auditory pathway, FM is encoded along the tonotopic axis in a \emph{spectral representation} that holds the instantaneous frequency of the stimuli \cite{Hu2003}. Individual neurons at higher levels of the processing hierarchy (inferior colliculus \cite{Geis2013, Li2010, Hage2003}, medial geniculate body \cite{Kuo2012, Lui2003}, and auditory cortex \cite{Issa2016, Trujillo2013, Ye2010, Zhang2003}) also encode FM direction and rate. We call this latter, more abstract representation the \emph{sweep representation}.
Despite the massive feedback projections that characterise the auditory pathway \cite{Schofield2011}, computational models to date use only feedforward mechanisms to explain FM encoding \cite{Skorheim2014}. Given the importance of high-order predictive elements in the optimisation of speech recognition abilities (e.g., \cite{Moore1995}), descending projections are likely to play an important role in how fast FM-sweeps, the basic building blocks of speech, are encoded in the auditory system. Feedback connections have complex repercussions on the way sounds are processed; for instance, they modulate the properties of the receptive fields \cite{Shamma2014, Suga2012}.
The sweep pitch shift is a classical behavioural effect from psychoacoustics first reported around 60 years ago \cite{Brady1961}. In the original experiment, participants listened to fast rising and falling FM-sweeps. The authors discovered that the participants judged up sweeps as eliciting a higher pitch than down sweeps with the same average fundamental frequency. These findings were later replicated \cite{Nabelek1970, Rossi1978}. To explain the effect, d'Alessandro and colleagues proposed a phenomenological model assuming that the pitch of a sweep is integrated from the instantaneous frequency of the stimulus using a fixed-size temporal window \cite{DAlessandro1994, DAlessandro1998}. Due to the leaking memory of the integration, this process naturally favours the final frequencies of the sweep, explaining the perceptual pitch shift. However, the authors found that different integration weights were necessary to explain different partitions of their data, indicating that the phenomenological model is not a parsimonious explanation of the sweep pitch shift. Whether classical mechanistic models of pitch processing (see~\cite{DeCheveigne2005} for a review) can explain the effect has not been considered before.
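To make the intuition behind this phenomenological account concrete, the sketch below computes an exponentially weighted average of the instantaneous frequency; the exponential window is our simplifying assumption, standing in for the fixed-size integration windows of the original model.
\begin{verbatim}
import numpy as np

def leaky_pitch(inst_freq, dt, tau):
    # Exponentially weighted average of the instantaneous frequency.
    # Later samples receive larger weights, biasing the integrated
    # pitch towards the final frequencies of the sweep.
    t = np.arange(len(inst_freq)) * dt
    w = np.exp((t - t[-1]) / tau)      # weight 1 at stimulus offset
    return np.sum(w * inst_freq) / np.sum(w)

# 40 ms linear sweep from 1050 to 1350 Hz (mean frequency 1200 Hz):
f = np.linspace(1050, 1350, 400)
print(leaky_pitch(f, dt=1e-4, tau=0.02))  # > 1200 Hz: shifted upwards
\end{verbatim}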
Here we first showed that the sweep pitch shift \cite{Brady1961, Nabelek1970, Rossi1978} cannot be explained using classical models of pitch processing (see~\cite{DeCheveigne2005} for a review). We argue that the inability of these models to reproduce the perceptual results stems from the fact that they consist of feed-forward elements only, so that the spectral representation cannot interact with information related to the FM rate or direction. Similarly, since previous models of FM encoding \cite{Skorheim2014} considered a static representation of spectral information, they predict that the sweep pitch shift would not occur. The aim of this study is to build a comprehensive model of FM encoding incorporating both, the sweep and the spectral representations, and describing the sweep-to-spectral feedback mechanisms active during the processing of FM sounds.
We approached this problem in three steps. First, we reexamined and quantified the sweep pitch shift in a behavioural experiment, and tested whether the experimental data could be explained by existing computational models of FM-encoding and pitch perception \cite{Zilany2014, Meddis1997, Meddis2006}. In the second step, we built a hierarchical model motivated by the hypothesis that the sweep pitch shift results from feedback modulation between the two representations. The feedforward components of the model were based on results of previous studies on FM direction selectivity and included frequency and FM sweep direction processing \cite{Skorheim2014, Ye2010, Razak2008}. The top-down architecture was grounded in the basis of generative hierarchical models and predictive coding \cite{Mumford1992, Friston2005} and informed by the human psychophysics results from the first part of the study. In the third and last step, we used a new set of stimuli termed \emph{sweep trains} to further validate the model. These stimuli, consisting of a concatenation of five sweeps, preserve the same acoustical features as the original sweeps but elicit different dynamics in the feedback system of the model than their single-sweep counterparts. The ability of the model to predict the pitch elicited by these novel stimuli illustrates the generalisation power of the computational mechanisms proposed in this work.
\section{Results}
\subsection{The sweep pitch shift revisited}
For the first behavioural study we used a total of $10 \times 3 = 30$ fast FM sweeps. The sweeps had 10 linearly distributed frequency spans $\Delta f \in [-600, 600]$\,Hz and 3 average frequencies $\bar{f} \in \{900, 1200, 1500\}$\,Hz. Each sweep had a duration of 40\,ms and was preceded and followed by 5\,ms segments of constant frequency. Eight participants matched the stimuli against probe pure tones of adjustable frequency. In one part of the experiment probe tones were presented before the sweep, in the other after the sweep. Each participant matched each stimulus four times. The sweep pitch shift was measured as the difference between the perceived pitch and the average frequency of the sweep: $\Delta p = f\subix{perceived} - \bar{f}$. Stimuli are available in the supporting information (\nameref{sounds}).
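A minimal sketch of the stimulus construction is given below; the sampling rate, the linear form of the frequency transition, and the omission of onset/offset ramps are simplifying assumptions.
\begin{verbatim}
import numpy as np

def make_sweep(f_mean, df, fs=44100, t_flat=0.005, t_sweep=0.040):
    # 5 ms at f_mean - df/2, a 40 ms linear transition, and 5 ms at
    # f_mean + df/2. The phase is obtained by integrating the
    # instantaneous frequency to avoid discontinuities.
    f0, f1 = f_mean - df / 2, f_mean + df / 2
    n_flat, n_sweep = int(t_flat * fs), int(t_sweep * fs)
    inst_f = np.concatenate([np.full(n_flat, f0),
                             np.linspace(f0, f1, n_sweep),
                             np.full(n_flat, f1)])
    phase = 2 * np.pi * np.cumsum(inst_f) / fs
    return np.sin(phase)

up = make_sweep(1200, 300)     # Delta f = +300 Hz, mean 1200 Hz
down = make_sweep(1200, -300)  # Delta f = -300 Hz
\end{verbatim}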
We found that the pitch shift $\Delta p$ depended on the sweep's span $\Delta f$ (Fig~\ref{fig:swPitch} and Tab~\ref{tab:stats}). The exact dependence was consistent across listeners for sweeps with $\Delta f \leq 333$\,Hz, lying in the vicinity of the linear fit $f_{\text{perceived}} \simeq \bar{f} + m\,\Delta f$ (with an average deviance from the fit of 46\,Hz). Sweeps with larger frequency spans resulted in wider distributions of $f_{\text{perceived}}$ due to higher inter- and intra-subject variabilities (Fig~\ref{fig:swVar}; see also Fig~\ref{fig:slopes}). Presenting the sweep before or after the probe tone did not systematically affect the perceived pitch (Fig~\ref{fig:lrDev}).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{newFig1.eps}
\caption{\textbf{Sweep pitch shift.} Kernel density estimations on the perceived pitch are plotted separately for each of the 30 sweeps used in the experiment. The $y$-axis of each plot shows the magnitude of the sweep pitch shift $\Delta p$. The $x$-axis lists the gap $\Delta f$ of each sweep. Red crosses show the mean and standard error of the data. Dark dashed lines show the group linear fit of the data.}
\label{fig:swPitch}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{newFig2.eps}
\caption{\textbf{Variance of the perceived pitch and up/down asymmetry.} Left: Kernel density estimations of the intra-subject standard deviation of the sweep pitch shift $\Delta p$, plotted separately for the different frequency spans $\Delta f$. Each sample in the distributions corresponds to the standard deviation of the perceived pitch of a sweep in one subject (i.e., in each distribution there are $8 \times 3 $ points, one for each subject and $\bar{f}$). The variance is monotonically correlated to the absolute gap $|\Delta f|$ ($r_s = 0.63$, $p < 10^{-27}$). Right: Kernel density estimations of the up/down asymmetry distributions as defined in Eq~\eqref{eq:udAsymm}. Each sample of the distributions corresponds to the difference of the average absolute deviation from centre frequency between up and down sweeps of the same $|\Delta f|$ for a given subject and centre frequency ($N = 8\times3 = 24$). Red crosses show the mean and the standard error of the data.}
\label{fig:swVar}
\end{figure*}
In their classical study, Brady and colleagues \cite{Brady1961} showed that the absolute value of the sweep pitch shift $|\Delta p|$ is larger for down than for up sweeps. In a later study, Nabelek and colleagues \cite{Nabelek1970} showed the reversed effect. To test if our data replicates any of these previous findings we drew, for each absolute frequency span $|\Delta f|$, the distribution of the differences between the pitch shift in up and down sweeps:
\begin{equation}
\text{asymm}^{\uparrow \downarrow}_{|\Delta f|} =
|\Delta p(\Delta f)| - |\Delta p(-\Delta f)|
\label{eq:udAsymm}
\end{equation}
Our results robustly replicated the observations from Nabelek and colleagues (Fig~\ref{fig:swVar}, right). The sweep pitch shift was significantly larger for up than down sweeps for \mbox{$|\Delta f| \geq 200\,$Hz} ($p < 2 \times 10^{-5}$) but not for \mbox{$|\Delta f| = 66\,$Hz} ($p = 0.77$), according to two-tailed rank-sum tests ($N = 96$). Up sweeps have been consistently found to be easier to discriminate from pure tones than down sweeps in a wide range of experimental conditions \cite{Luo2006b, Gordon2002, Madden1997, Collins1978}, probably because auditory nerve responses to up sweeps compensate for the low-frequency processing delay of the basilar membrane \cite{Uppenkamp2001}, provoking stronger neural responses than their down counterparts. The stronger pitch shift for up sweeps already suggests that the sweep representation plays an important role in the genesis of the sweep pitch shift.
Last, we tested whether the dependence of the sweep pitch shift on $\Delta f$ was robustly replicated across subjects. The slopes of the linear fits between $f_{\text{perceived}}$ and $\Delta f$, similar in magnitude across all participants, are plotted in Fig~\ref{fig:slopes}.
\subsection{Bottom-up models of pitch cannot explain the pitch shift}
Pitch is represented in two complementary codes within the auditory system: the spectral code, produced by the spectral decomposition of the stimuli performed by the basilar membrane; and the temporal code, comprised in the spike timings of the neurons across the auditory nerve that are phase-locked to the stimulus waveform (see~\cite{Oxenham2013} for a review). If the sweep pitch shift were a consequence of bottom-up pitch processing, we would expect the effect to be explainable by previous computational models that use either of the two representations to infer pitch. To test this we computed the pitch predicted by one representative model of each family; i.e., one model using the spectral code and one model using the temporal code.
In the spectral model, pitch can be directly inferred by computing the expected value of the activity across cochlear channels in the auditory nerve \cite{DeCheveigne2005, Zilany2014}. Unlike the empirical data, predictions of the spectral model show no systematic dependence of $f_{\text{perceived}}$ on $\Delta f$ (Fig~\ref{fig:spectralModel}). Note that, since the sinusoidal FM-sweeps used in the experiments evoke a single peak in the spectral distributions, more sophisticated spectral models designed to explain the pitch of harmonic complex tones would yield identical results \cite{DeCheveigne2005}.
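The spectral readout amounts to a centroid computation over the time-averaged activity across cochlear channels; a minimal sketch (with illustrative names) follows.
\begin{verbatim}
import numpy as np

def spectral_pitch(activity, channel_cf):
    # activity: (channels, time) simulated auditory-nerve rates;
    # channel_cf: characteristic frequency of each channel.
    p = activity.mean(axis=1)      # time-averaged spectral profile
    p = p / p.sum()
    return np.sum(p * channel_cf)  # expected characteristic frequency
\end{verbatim}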
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{newFig3.eps}
\caption{\textbf{Predictions of the spectral model of pitch.} Heatmaps show the mean activation across the duration of the stimuli at different cochlear channels, as simulated by a model of the auditory periphery in response to each sweep; units are arbitrary. Error bars point to the expected value and variance of the distribution across frequencies for each sweep. Each empty square denotes the expected channel elicited by a pure tone with the frequency of the average experimental data of the corresponding sweep.}
\label{fig:spectralModel}
\end{figure*}
The temporal model was based on the principles of the summary autocorrelation function (SACF), which estimates pitch from the phase-locked response in the auditory nerve \cite{Meddis1997, Balaguer2008}. We chose this model because it performs a relatively straightforward analysis of the phase-locked activity in the periphery. Predictions of the temporal model remained close to $f_{\text{perceived}} \simeq \bar{f}$ independently of $\Delta f$ (Fig~\ref{fig:sacfModel}). This is most likely a consequence of the SACF being unable to decode rapidly changing frequencies in such short stimuli.
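A minimal SACF sketch is shown below; it omits the running (leaky) temporal integration of the full model and simply sums the raw autocorrelation of the simulated auditory-nerve activity over channels.
\begin{verbatim}
import numpy as np

def sacf(an_rates, max_lag):
    # an_rates: (channels, time) simulated auditory-nerve activity.
    out = np.zeros(max_lag)
    for lag in range(1, max_lag + 1):
        out[lag - 1] = np.sum(an_rates[:, lag:] * an_rates[:, :-lag])
    return out

def temporal_pitch(an_rates, fs, max_lag):
    best_lag = int(np.argmax(sacf(an_rates, max_lag))) + 1
    return fs / best_lag           # pitch = 1 / best period
\end{verbatim}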
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{newFig4.eps}
\caption{\textbf{Predictions of the temporal model of pitch.} Heatmaps show the distribution across periods elicited in the summary autocorrelation factor (SACF) for each sweep. The value corresponding to each period was computed as the average activation of the SACF across four harmonics (see Methods); units are arbitrary. Error bars point to the expected value and variance of the distribution across periods for each sweep. Each empty square denotes the expected period elicited in the SACF by a pure tone with the frequency of the average experimental data of the corresponding sweep.}
\label{fig:sacfModel}
\end{figure*}
\subsection{The FM-feedback spectral model}
In this section we introduce a hierarchical model of FM-encoding, termed \emph{FM-feedback spectral model}, with two levels (Fig~\ref{fig:diagram}). In the first level, the \emph{spectral} layer holds a spectral representation of the sound. In the second level, the \emph{sweep} layer encodes FM-sweep direction. The spectral layer uses the spectral rather than the temporal code to represent the instantaneous frequency of the stimuli because, as we showed in the previous section, the phase-locked responses of the auditory peripheral model \cite{Zilany2014} cannot robustly track the fast frequency modulation of the sweeps. Moreover, the animal literature converges on the notion that sweep direction and rate are decoded from the spectral, and not the temporal, representation of the sounds \cite{Skorheim2014, Kuo2012, Pollak2011, Li2010, Zhang2003, Lui2003}.
The main hypothesis introduced in the FM-feedback spectral model is that, once the direction of the sweep is encoded in the sweep layer, a feedback mechanism modulates the effective time constant of the populations encoding the frequencies that are expected to be activated next in the spectral layer. We expect this parsimonious mechanism to qualitatively explain why the posterior parts of the sweep are given a higher weight during perceptual integration and to quantitatively reproduce the exact dependence of pitch on $\Delta f$ observed in our data. An implementation of the FM-feedback spectral model written in python is freely available at \url{https://github.com/} (the libraries will be available there upon publication; the code is currently attached to the manuscript for revision).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{newFig5.eps}
\caption{\textbf{Diagram of the FM-feedback spectral model.} The model consists of three layers: first, the \emph{peripheral system}, representing the activity at the beginning of the auditory nerve; second, the \emph{spectral layer}, with a network integrating the spectral information of the sound ($f$ network); and third, the \emph{sweep layer}, with one network specialised in detecting up sweeps ($\uparrow$ network) and another specialised in detecting down sweeps ($\downarrow$ network). The spectral layer integrates afferent inputs from the periphery and holds a representation of the stimulus that can be used to infer pitch. The sweep layer receives afferent inputs from the spectral layer that are used to decode the direction of the sweeps. Feedback connections from the sweep layer to the spectral layer modulate the time constants of the populations that are expected to be activated once the direction of the sweep has been decoded. The inhibitory ensembles in the up and down networks enforce competition between up and down ensembles in a winner-take-all fashion. Note that the diagram is schematic and shows only 5 of the $N = 100$ populations and a single example of the connections between the sweep and the spectral layers. The labels of the boxes of the peripheral system are also schematic: the spectral resolution of the peripheral system is much higher.}
\label{fig:diagram}
\end{figure}
Example responses of the excitatory populations of the model to up and down sweeps are shown in Fig~\ref{fig:popResponses}.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{newFig6.eps}
\caption{\textbf{Model responses to an up and a down sweep.} A-E show the responses to an up sweep, and F-J to a down sweep. From top to bottom: (A/F) the instantaneous firing rate of the up-selective excitatory populations in the sweep layer; (B/G) the instantaneous firing rate of the down-selective excitatory populations in the sweep layer; (C/H) the instantaneous firing rate of the populations in the spectral layer and, at the right of the panels, a schematic view of the probability distribution of pitch derived from this representation; (D/I) the output of the model of the auditory periphery; (E/J) the instantaneous frequency of the sweeps along time. In all panels except for E/J, the $y$-axis represents the cochlear channel $n$, ordered from bottom to top. The stimuli were the up and down sweeps with $\Delta f = \pm300\,\text{Hz}$ and $\bar{f} = 1200\,\text{Hz}$ used in the experiment.
\label{fig:popResponses}}
\end{figure*}
\subsubsection{Modelling FM direction selectivity \label{sec:mod:model:dsi}}
At least three mechanisms for FM direction selectivity have been identified in the animal literature: asymmetric sideband inhibition \cite{Geis2013, Williams2012, Fuzessery2011}, duration sensitivity \cite{Morrison2018, Kuo2012, Williams2012}, and delayed excitation \cite{Fuzessery2011, Ye2010, Razak2008}. In order to prevent an excessive inflation in the dimensionality of the model's parameter space, we focus here on delayed excitation, a straightforward mechanism where neurons with different best frequencies project to the direction-selective neuron with different delays; e.g., an up-selective neuron will receive delayed inputs from a neuron tuned to low frequencies and instantaneous inputs from a neuron tuned to high frequencies, so that an up sweep results in simultaneous excitation from both of them. Any related mechanism showing FM direction selectivity should yield similar overall results \cite{Skorheim2014}.
In the FM-feedback spectral model, delayed excitation is implemented by introducing consistent delays between the populations in the spectral and the sweep layers. A sweep population receiving direct input from the spectral population encoding $f_0$ and responding selectively to up sweeps will receive increasingly delayed inputs from the spectral populations centred at $f < f_0$ (Fig~\ref{fig:diagram}). The relative delay in the connection between a spectral population $m$ and a target sweep population $n$ depends linearly on the spectral distance between the two ensembles: $\delta t_{nm} = |n-m| \delta t_0$.
The sweep layer consists of two networks, each encoding one of the FM directions and responding selectively to \emph{up} ($\uparrow$) and \emph{down} ($\downarrow$) sweeps. Each of the networks consist of $N$ columns, each comprising an excitatory and an inhibitory population (Fig~\ref{fig:diagram}).
To quantify direction selectivity, we used the standard direction selectivity index (DSI; e.g.,~\cite{Zhang2003}), defined as the difference between the activity elicited in a network by an up sweep and the activity elicited in the same network by a down sweep of the same duration and frequency span, normalised by the sum of the two. An ideal network responding selectively to up sweeps will have a $\text{DSI} = +1$ and an ideal network responding selectively to down sweeps will have a $\text{DSI} = -1$. Similar DSI magnitudes were measured in the down and the up networks (Fig~\ref{fig:dsiFixed}); systematically increasing DSI magnitudes were elicited by increasing $\bar{f}$ and $|\Delta f|$. Network selectivity to FM direction was robust across reparametrisations of the model, although deactivation of the feedback connections resulted in an 8.7($\pm1.5$)\% average decrease in DSI$^{\uparrow}$ and in a 9.7($\pm1.4$)\% average increase in DSI$^{\downarrow}$, indicating that the feedback connections sharpen direction selectivity.
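The principle of delayed excitation and the DSI computation can be illustrated with the following sketch; the restriction to channels $m \leq n$ and the uniform delay increment are simplifying assumptions made for the illustration.
\begin{verbatim}
import numpy as np

def up_detector(spectral_act, dt0_steps):
    # The unit fed by channel n sums the channel-n input plus inputs
    # from lower channels m < n, each delayed by (n - m) * dt0_steps
    # samples, so that an up sweep makes all contributions coincide.
    n_ch, n_t = spectral_act.shape
    out = np.zeros((n_ch, n_t))
    for n in range(n_ch):
        for m in range(n + 1):
            d = (n - m) * dt0_steps
            if d >= n_t:
                continue
            out[n, d:] += spectral_act[m, :n_t - d]
    return out

def dsi(resp_up, resp_down):
    # Standard direction selectivity index, in [-1, 1].
    a, b = resp_up.sum(), resp_down.sum()
    return (a - b) / (a + b)
\end{verbatim}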
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{newFig7.eps}
\caption{\textbf{Direction selectivity indices for the sweeps of the experiment.} DSI$^{\uparrow}$ and DSI$^{\downarrow}$ for sweeps with different $\bar{f}$ and $|\Delta f|$. The DSI is defined as the normalised difference between the activity elicited by up and by down sweeps in a given network.}
\label{fig:dsiFixed}
\end{figure*}
\subsubsection{Predictive mechanisms \label{sec:mod:model:nmda}}
Once neurons in the sweep layer encode the sweep direction, feedback connections targeting the spectral layer facilitate the encoding of expected frequencies. Let $i$ be the population in the up-sweep network receiving inputs from a population in the spectral layer encoding a certain frequency $f_0$. Due to delayed excitation, the population $i$ becomes active when it detects an up sweep occurring in the neighbourhood of frequencies $f \leq f_0$. Although on some occasions the up sweep will culminate in $f_0$, in most cases $f_0$ will be only an intermediate step in the ascending succession of the sweep, and thus the activation of $i$ implies that populations in the spectral layer with best frequencies immediately higher than $f_0$ are likely to activate next. The top-down mechanism of the model, encoded in the feedback projections stemming from the sweep layer and targeting the spectral layer, reduces the time constant of these populations using low-current excitatory feedback signals. Similarly, feedback connections stemming from a population $j$ in the down network that receives timely inputs from a spectral population with best frequency $f_0$ will target populations in the spectral network with best frequencies immediately lower than $f_0$.
NMDA receptors are typically responsible for conveying feedback excitatory information in the cerebral cortex \cite{Friston2001, Salin1995}; specifically, NMDA-deactivation results in a reduced feedback control in the auditory pathway \cite{Rauschecker1998}. Thus, while bottom-up drive was modelled using AMPA dynamics, feedback connections were modelled according to NMDA-like synaptic gating dynamics with a finite rising time constant \cite{Brunel2001}. Feedback current intensity was kept low in comparison to the bottom-up driver by enforcing NMDA conductivity to be much smaller than the AMPA conductivity (i.e., $J\NMDA \ll J\AMPA$).
The low-current feedback signal modulates the population to elicit only a subtly higher firing rate than an unmodulated population. Due to network effects captured in the mean-field model \cite{Ostojic2011}, this subtle activation driven by a low-current input results in a significantly lower effective integration time constant at the neuronal population level (Fig~\ref{fig:tauTrajectories}), causing the population to react faster to changes in the bottom-up input. This increased readiness reduces the metabolic cost of encoding expected frequencies and, since the population spends more time in the high-firing-rate regime, indirectly results in a stronger contribution of the frequencies expressed in the last part of the sweep to the probability distribution of pitch.
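The mechanism can be illustrated with a toy rate unit whose effective time constant shrinks as the population becomes active. The specific form of $\tau(h)$ below is an assumption made only for this sketch, standing in for Eq~\eqref{eq:taupop}.
\begin{verbatim}
import numpy as np

def simulate(I_feedback, I_drive, t_on, tau_memb=0.02, dt=1e-4, T=0.1):
    phi = lambda I: 100.0 * max(I, 0.0)         # threshold-linear gain
    tau = lambda h: tau_memb / (1.0 + h / 5.0)  # assumed tau(h), in s
    h, trace = 0.0, []
    for i in range(int(T / dt)):
        I = I_feedback + (I_drive if i * dt >= t_on else 0.0)
        h += dt * (phi(I) - h) / tau(h)         # Euler integration
        trace.append(h)
    return np.array(trace)

slow = simulate(I_feedback=0.00, I_drive=0.5, t_on=0.05)
fast = simulate(I_feedback=0.02, I_drive=0.5, t_on=0.05)
# 'fast' reaches its equilibrium rate sooner: the weak feedback
# current barely raises the baseline but lowers the effective tau.
\end{verbatim}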
\begin{figure}
\includegraphics[width=\columnwidth]{newFig8.eps}
\caption{\textbf{Effect of the predictive feedback mechanism on the population time constants.} $\tau\supix{pop}(h, I)$ (green-blue solid lines) depends on the firing rate $h$ and the synaptic input $I$ (cf. Eq~\eqref{eq:taupop}). The figure portrays two different trajectories of the variable $\tau\supix{pop}(h, I)$ in the $(h, I)$ space, both starting at an initial state ($h \sim 0$ in the regime of spontaneous activity with no inputs) and finishing at an equilibrium state (with $h \sim 40$\,Hz). The dashed purple line shows a trajectory followed by the population when the forward synaptic input from the peripheral layer is plugged in without previous modulation. In this case, the population reacts slowly to the strong synaptic input, and eventually converges to equilibrium. The dotted lines (orange and red) show the trajectory of the same population in the presence of feedback modulation. The low-current modulatory inputs drive the population to a state with a low effective time constant without substantially increasing its firing rate (orange section of the trajectory). When the strong synaptic input from the auditory periphery is switched on (red section of the trajectory) the population reacts quickly to the synaptic input, reaching equilibrium much faster than in the non-modulated case.}
\label{fig:tauTrajectories}
\end{figure}
\subsubsection{Reproduction of the sweep pitch shift \label{sec:mod:results:sweeps}}
The FM-feedback spectral model explains $R^2 = 0.88$ of the variance of the experimental data (Fig~\ref{fig:swMod}). Moreover, there was a significant correlation between the variance of the model responses and the standard error of the experimental data ($r_p = 0.60$, $p = 0.0005$), indicating that the larger variability in the pitch shift observed for the larger $\Delta f$ can be understood as a consequence of a wider spread of activation across the spectral populations.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{newFig9.eps}
\caption{\textbf{Predictions of the FM-feedback spectral model for FM-sweeps.} Heatmaps show the mean activation at different cochlear channels in the spectral layer of the FM-feedback spectral model in response to each sweep; units are arbitrary. Error bars point to the expected value and variance of the distribution across frequencies for each sweep. Each empty square denotes the expected channel elicited by a pure tone with the frequency of the average experimental data of the corresponding sweep.}
\label{fig:swMod}
\end{figure*}
Since up sweeps provoke a stronger overall activity in the auditory nerve \cite{Rupp2002}, facilitation currents were slightly higher for up than for down sweeps, resulting in a noticeably stronger absolute mean pitch shift for up than for down sweeps, reproducing the experimental data (Fig~\ref{fig:swModAsymm}). Note that this is not a trivial consequence of fitting the model to the data, as the expected difference between the absolute deviance $f_{\text{perceived}} - \bar{f}$ for up and down sweeps, $E[\text{asymm}^{\uparrow \downarrow}] \simeq 24$\,Hz, is significantly smaller than the average error of the model predictions with respect to the data ($E[\text{error}] \simeq 54$\,Hz).
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{newFig10.eps}
\caption{\textbf{Predictions of the FM-feedback spectral model for the up/down asymmetry.} Error bars show the model predictions of the up/down asymmetry coefficient asymm$^{\uparrow \downarrow}$ (see Eq~\eqref{eq:udAsymm}). Error bars are estimations of the standard error calculated based on the dispersion of the centroids for different $\bar{f}$ and the variance of the spectral distribution $\rho$ of each condition. Experimental data in the background is the same as in Fig~\ref{fig:swVar}, right.}
\label{fig:swModAsymm}
\end{figure*}
To study the dependence of the fit on the model parameters, we recomputed the explained variance $R^2$ across the parameter space of the model (Fig~\ref{fig:pitchparspaceSw}). The model explained the experimental data in a wide section of the parameter space, with an average $R^2$ across a 5-point-diameter sphere around the final parameters of $E[R^2] = 0.78\pm0.03$. To show that the fit of the model was not simply caused by an overall stronger activation provoked by the feedback currents, but by a decrease in the effective time constant of the populations, we also computed the dependence of $R^2$ on the conductivity of the feedback current $J\NMDA$ while keeping the population time constant $\tau$ fixed to $\tau = \tau\supix{memb}$ (see Methods). Even considering lower $\tau\supix{memb}$ than the physiologically valid nominal value $\tau\supix{memb} = 20\,$ms, without an adaptive $\tau$, much stronger NMDA currents ($J\NMDA \sim J\AMPA$) are necessary to drive the spectral distribution towards the experimental results.
\begin{figure}
\includegraphics[width=\columnwidth]{newFig11.eps}
\caption{\textbf{Experimental fit in relation to the model parametrisation.} Shading matrices show the explained variance of the experimental data $R^2$ (bright yellow means $R^2 = 1$, dark blue means $R^2 = 0$) for different points in the parameter space. Unless stated otherwise, parameters not varied in the matrices correspond to the values listed in the Methods section. The two leftmost plots show the dependence of $R^2$ on the conductivity of the feedback connections and on the dynamics of the excitatory population time constants. Different values of the nominal population time constant were used to illustrate that the dynamic effect (rather than the resulting shorter time constant) is crucial to explain the experimental results; however, during the parameter tuning the time constant was constrained to $\tau\supix{memb} = 20$\,ms based on physiological observations \cite{McCormick1985}. The rightmost plot shows the dependence of $R^2$ on the width and reach ($w_{\omega s}$ and $\Delta_{\omega s}$, respectively; see Methods) of the feedback connections. Black crosses in the parameter space signal the final parametrisation.}
\label{fig:pitchparspaceSw}
\end{figure}
\subsubsection{Reproduction of previous experimental results \label{sec:mod:results:brady}}
We tested whether the FM-feedback spectral model was able to predict the pitch shift of additional data from the study by Brady and colleagues \cite{Brady1961}. We chose their stimuli because this was the only study that investigated the dependence of the pitch shift on properties other than $\Delta f$. Specifically, in \emph{experiment II} from the original paper, Brady and colleagues considered FM-sweeps with a fixed 20\,ms transition between 1000\,Hz and 1500\,Hz that was located at six different positions within a 90\,ms stimulus (see schematics in Fig~\ref{fig:bradySchematic}, left). In \emph{experiment III}, they used FM-sweeps spanning the same frequency range but with transitions of six different durations (see schematics in Fig~\ref{fig:bradySchematic}, right). All stimuli had the same duration (90\,ms) and frequency span (1000-1500\,Hz); in each of the two experiments there was a total of 12 stimuli (six up, six down).
\begin{figure}
\includegraphics[width=\columnwidth]{newFig12.eps}
\caption{\textbf{Schematic view of the stimuli from \cite{Brady1961}.} In the stimuli from Brady's experiment II (left) the transient was fixed to a 20\,ms duration and its onset was systematically varied so that the transition falls at different segments of the stimulus. In the stimuli from Brady's experiment III (right) the stimulus offset was fixed at 90\,ms and the transient's onset varied between 10 and 50\,ms, resulting in transients of different durations. We extended the duration of these last stimuli to 95\,ms to prevent the ramping at the end of the stimulus from overlapping with the FM transient.}
\label{fig:bradySchematic}
\end{figure}
We compared the predictions of the FM-feedback spectral model with the experimental results reported in the original paper (Fig~\ref{fig:bradyPitch}). The experimental trend is well reproduced by the model. Predictions showed a strong Pearson's correlation with the reported sweep pitch shift across both experiments ($r_p = 0.87$, $p < 10^{-6}$) and a weaker but still significant correlation between the variance of the activation distribution $\rho$ and the experimental standard error ($r_p = 0.46$, $p = 0.03$). These correlations show that the participants' perception is well predicted by the model.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{newFig13.eps}
\caption{\textbf{Predictions of the FM-feedback spectral model for Brady's stimuli.} Shading matrices show the distribution of the activation across channels ($y$-axis) for different transient onsets ($x$-axis). Squares printed over the distributions mark the estimations of the experimental results in the channel space.}
\label{fig:bradyPitch}
\end{figure*}
\subsection{Testing the FM-feedback spectral model with a novel class of stimuli }
The results described so far are in favour of the hypothesis that a feedback system between FM-sweep-direction-encoding and frequency-encoding populations is responsible for the sweep pitch shift. To validate these findings, this section introduces a novel set of stimuli specifically designed to challenge the main hypothesis of the model. The novel stimuli, which we call \emph{sweep trains} in the following, consist of concatenations of several single sweeps with the same properties as the stimuli used in the first experiment. Sweep trains present the same acoustical properties as the single sweeps used in the first behavioural experiment and should nominally elicit the same pitch percept as their single-sweep subcomponents. However, the FM-feedback spectral model predicts that the feedback system will only reduce the time constant of the spectral populations during the processing of the first sweep in the train, because these populations will already have an elevated firing rate (and thus a low effective time constant) during the processing of the subsequent sweeps. Consequently, the model predicts that the sweep trains will elicit a much more subtle pitch shift than their single-sweep counterparts. We tested this prediction in a perceptual experiment analogous to the first experiment.
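Constructing a sweep train amounts to concatenating copies of a single sweep. A minimal sketch, reusing the hypothetical \texttt{make\_sweep} generator sketched in the first experiment, is:
\begin{verbatim}
import numpy as np

def make_train(f_mean, df, n_repeats=5, **kwargs):
    # Five identical sweeps back to back; the instantaneous frequency
    # jumps back to the starting frequency at each junction, which is
    # inherent to the stimulus design.
    return np.concatenate([make_sweep(f_mean, df, **kwargs)] * n_repeats)

train = make_train(1200, 300)  # five concatenated +300 Hz sweeps
\end{verbatim}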
\subsection{Sweep trains show minimal sweep pitch shift}
To ensure that each train was perceived as a single auditory object, we only used sweeps with $|\Delta f| \leq 333$\,Hz to assemble the sweep trains, resulting in a total of $3\times6 = 18$ stimuli. The magnitude of the pitch shift depended on $\Delta f$ (Fig~\ref{fig:trPitch}, Tab~\ref{tab:stats}, bottom). However, as qualitatively predicted by the FM-feedback spectral model, the effect sizes of the correlation were lower than in the single-sweep experiment (cf., Tab~\ref{tab:stats}, top). Data also showed much higher inter- and intra-subject variability than in the single-sweep experiment (Fig~\ref{fig:slopes}). After completing the experiment, participants reported that the sweep-train stimuli were harder to match than their single-sweep counterparts. Although trains with small $\Delta f$ were generally perceived as continuous tones, subjects reported that a few trains (putatively those with the largest $\Delta f$) elicited a ringing-phone-like percept. Stimuli are available in the supporting information (\nameref{sounds}).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{newFig14.eps}
\caption{\textbf{Sweep pitch shift for sweep trains.} Kernel density estimations on the perceived pitch are plotted separately for each of the 18 sweep trains used in the experiment. The $y$-axis of each plot shows the magnitude of the sweep pitch shift $\Delta p$. The $x$-axis lists the gap $\Delta f$ of each sweep train. Red crosses show the mean and standard error of the data. Dark dashed lines show the group linear fit of the data.}
\label{fig:trPitch}
\end{figure*}
Sweep-train stimuli showed only a subtle up/down asymmetry that did not reach statistical significance ($p = 0.67$, $p = 0.96$ and $p = 0.36$ for $|\Delta f| = 333$, $|\Delta f| = 200$ and $|\Delta f| = 66$, respectively; according to two-sided Wilcoxon signed rank tests with 24 samples per condition).
\subsection{The FM-feedback spectral model explains the diminished pitch shift in the sweep trains}
Next, we assessed the ability of the FM-feedback spectral model to quantitatively explain the effect size of the pitch shift observed in the sweep trains. The fit with the experimental data was comparable to that of the single-sweep stimuli: the model explained $R^2 = 0.83$ of the variance of the data (Fig~\ref{fig:trMod}) and the expected value of the response distribution was strongly correlated with the observed pitch shift ($r_p = 0.99$, $p < 10^{-18}$).
\begin{figure*}
\includegraphics[width=\textwidth]{newFig15.eps}
\caption{\textbf{Predictions of the FM-feedback spectral model for sweep trains.} Shading matrices show the distribution of the activation across channels ($y$-axis) for different sweep $\Delta f$ ($x$-axis). Squares printed over the distributions mark the expected channel $E[k]$ as defined in Eq~\eqref{eq:spectralPitch}. Solid error bars are estimations of the experimental results in the channel space. The expected value agrees with the experimental data. Moreover, stimuli with larger $\Delta f$ seem to elicit wider activation distributions than stimuli with smaller $\Delta f$, mirroring the generally larger variance observed in the behavioural data corresponding to the larger $\Delta f$.}
\label{fig:trMod}
\end{figure*}
As in the first experiment, the variance of the experimental data was strongly correlated with the width of the model responses ($r_p = 0.60$, $p = 0.0005$; Fig~\ref{fig:trModAsymm}, left).
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{newFig16.eps}
\caption{\textbf{Predictions of the FM-feedback spectral model for the variance and up/down asymmetry in sweep trains.} Left: Kernel density estimations of the intra-subject standard deviation of the sweep pitch shift $\Delta p$, plotted separately for the different frequency spans $\Delta f$. Each sample in the distributions corresponds to the standard deviation of the perceived pitch of a sweep in one subject (i.e., in each distribution there are $8 \times 3$ points, one for each subject and $\bar{f}$). The variance is monotonically correlated to the absolute gap $|\Delta f|$ ($r_s = 0.63$, $p < 10^{-27}$). Right: Kernel density estimations of the up/down asymmetry distributions as defined in Eq~\eqref{eq:udAsymm}. Each sample of the distributions corresponds to the difference of the average absolute deviation from centre frequency between up and down sweeps of the same $|\Delta f|$ for a given subject and centre frequency ($N = 8\times3 = 24$). Red crosses show the mean and the standard error of the data. Error bars show the model predictions of the up/down asymmetry coefficient asymm$^{\uparrow \downarrow}$ (see Eq~\eqref{eq:udAsymm}). Error bars are estimations of the standard error calculated based on the dispersion of the centroids for different $\bar{f}$ and the variance of the spectral distribution $\rho$ of each condition. Cf.~Fig~\ref{fig:swModAsymm}.}
\label{fig:trModAsymm}
\end{figure*}
Last, we tested whether the different up/down asymmetries (asymm$^{\uparrow \downarrow}$) observed in the single-sweep and sweep-train data could be quantitatively explained by the FM-feedback spectral model. In the single-sweep data, the model predicts a stronger pitch shift magnitude $|\Delta p|$ for up sweeps (Fig~\ref{fig:swModAsymm}) because, due to the compensation for the delay introduced by the basilar membrane in response to low frequencies, these elicit a more synchronous and stronger peak activation in the auditory nerve \cite{Rupp2002}, resulting in larger feedback currents. Qualitatively, a much weaker asymmetry was expected in the sweep-train data, since the effects of the feedback system are virtually absent during the processing of the final four fifths of the stimuli. Modelling results on the up/down asymmetry closely reproduced the empirical data (Fig~\ref{fig:trModAsymm}), fully explaining the observed differences between the two families of stimuli.
\section{Discussion}
In this work we have built a model describing how cross-feature feedback between two different representations of frequency modulation gives rise to a puzzling perceptual effect. This contrasts with the classical view of FM encoding as a bottom-up process \cite{Skorheim2014}. The predictive feedback proposed in this work aids efficient encoding of frequency modulation by decreasing its metabolic cost \cite{Alexandre2018}, shortening its processing time \cite{Jaramillo2011, Mazzucato2019}, and enhancing direction selectivity.
\subsection{Bottom-up pitch models and pitch codes}
Two codes of pitch-related information are available in the auditory nerve at early stages of the auditory pathway: 1) the spectral code, produced by the spectral decomposition of the stimuli performed by the basilar membrane; and 2) the temporal code, comprised in the spike timings of the neurons across the auditory nerve that are phase locked to the stimulus waveform (see~\cite{Oxenham2013} for a review).
Our simulations showed that current modelling approaches based on the temporal code do not suffice to explain the pitch of the FM-sweeps used in the experiments. This is most likely a consequence of the fast change rate in the periodicities of fast FM stimuli. Typically, pitch decisions based on the auditory nerve temporal code are made after integrating over four cycles of the period of the stimuli \cite{Tabas2019, Wiegrebe2001}, coinciding with the duration threshold for accurate pitch discrimination \cite{Krumbholz2003}. However, our stimuli presented an average change of $\sim25$\,Hz across four repetitions of their average frequency, making this integration virtually impossible. Thus, the FM-feedback spectral model assumes that the pitch of FM sweeps and pure tones is encoded in a spectral representation, siding with the idea that spatial information can still play a crucial role in pitch processing.
The bottom-up integration of the spectral representation, cornerstone of the classical spectral theories of pitch \cite{Helmholtz1863}, predicted a sweep pitch shift in the opposite direction of the experimental data; i.e., a shift towards the frequencies expressed at the beginning of the sweep. This is a direct consequence of the global adaptation effects experienced in the auditory nerve after the first few milliseconds of the stimuli \cite{Zilany2009}. Even without such adaptation, the plain integration proposed by the spectral models would predict a null sweep pitch shift. Feedback modulation facilitating the encoding of the predictable parts of the sweeps is thus crucial to account for the experimental data.
\subsection{Relation to predictive coding and hierarchical processing strategies}
The presence of predictive feedback modulation in the subcortical sensory pathway has been shown before in humans \cite{Suga2012, Tabas2020} and non-human mammals \cite{Malmierca2015}. Previous studies often interpreted it in the context of the predictive coding framework \cite{Mumford1992, Rao1999, Friston2005}, a theory of sensory processing that postulates that sensory information is encoded as prediction error; i.e., that neural activity at a given level of the processing hierarchy encodes the residuals of the sensory input with respect to a generative model encoded higher in the hierarchy.
The FM-feedback spectral model can also be understood in the light of this formalism: it presents three hierarchical layers of abstraction (the inputs from the peripheral system, the frequency network, and the sweep network), and the two top layers perform predictions on the sensory input incoming at the immediately lower representation of the hierarchy. In the case of the frequency network, the temporal integration can be interpreted as the prediction that the input's distribution across cochlear channels will change with a much longer time constant than that of the fluctuations introduced by neuronal noise. However, unlike the classical predictive coding microcircuit, where predictions and prediction error are kept in separate neural ensembles \cite{Bastos2012}, the frequency and sweep networks simultaneously hold a representation that is both descriptive of their own level and predictive of the immediately lower representation of the hierarchy.
Combining predictions and representations in the same neural code solves some of the open questions of classical predictive coding architectures recently summarised by Denham and Winkler \cite{Denham2018}: i) ``what precisely is meant by prediction?'', ii) ``which generative models [within the hierarchy] make the predictions?'', and iii) ``what within the predictive framework is proposed to correlate with perceptual experience?''. In the FM-feedback spectral model, the predictions can be summarised as the probability distribution of patterns of activation expected to come next in the lower level given what has been encoded so far in the higher level. These conditional probability distributions are hardcoded in the top-down connections stemming from the neurons holding the high-level representation and targeting the neurons holding the lower level representations. Such connectivity patterns would represent the statistics between the representations in the two levels if they were naturally formed through synaptic plasticity after sufficient exposure to the stimuli. Last, the perceptual experience in the FM-feedback spectral model is encoded in the activation along the two hierarchical stages, which encode different aspects of the stimuli.
Another key difference between the FM-feedback spectral model's architecture and the classical predictive coding microcircuit is that, rather than encoding the residuals of the spectral representation with respect to the FM-sweep representation, neurons in the spectral layer simply encode the spectral content of the stimulus. However, since the decoding of the predictable parts of the stimuli is faster and its metabolic cost lower, predictability potentially ensues a significant decrease on the amount of signal produced during the encoding. Such mechanism would explain why even expected stimuli, for which the residual should theoretically be zero, do still evoke measurable responses (as in, for instance, stimulus-specific adaptation \cite{Ulanovsky2003, Malmierca2015}).
\subsection{Comparison with previous measurements of the sweep pitch shift}
Our experimental findings qualitatively replicated the sweep pitch shift effect found in previous studies; namely, we found that the pitch elicited by FM-sweeps was biased towards the frequencies spanned in the ending part of the sweeps \cite{Brady1961}, and that the perceptual bias is monotonically related to the frequency span $\Delta f$ \cite{Nabelek1970, Rossi1978}. On average, we estimated a putative linear relation between the pitch shift $\Delta p$ and $\Delta f$ of around $m \simeq 0.38$, slightly higher than Brady's \cite{Brady1961} ($m \simeq 0.34$ with transitions of 50\,ms) and Nabelek's \cite{Nabelek1970} ($m \simeq 0.32$ with transitions of 40\,ms) reports, and significantly higher than Rossi's \cite{Rossi1978} ($m = 1/6 \simeq 0.17$ with transitions of 200\,ms) estimation. Since Rossi's transitions were 5 times longer than ours, the estimations are difficult to compare. However, the disagreement seems to indicate that the pitch shift is stronger with shorter durations. This observation would be fully compatible with the mechanism of predictive facilitation described in the FM-feedback spectral model: since the time to decode FM direction is independent of sweep duration, whilst only the most posterior part of the stimulus is facilitated in the short sweeps, in a long sweep the facilitation currents would affect a much larger portion of the sound, potentially including frequencies occurring before $\bar{f}$.
The subtle disagreement of our predictions with Brady's and Nabelek data has three possible explanations: 1) the differences are a result of the studies having relatively low sample sizes in comparison with the high inter-subject variability of the effect (see~\ref{fig:slopes}); 2) Brady's and Nabelek's studies do not report any participant selection criteria: perhaps the inclusion of listeners that were unable to perform the match resulted in experimental results biased towards a null effect (i.e., towards $\Delta p = 0$); and 3) Nabelek and Brady used analogue synthesisers to produce their stimuli, resulting in sweeps with a richer spectral contour than our digital FM-sinusoids, which might have resulted in a weaker effect (Fig~6 in \cite{Brady1961} indicates that the spectral properties of the sweep do indeed affect the pitch shift: sweeps of the same duration, spectral scope and $\Delta f$ produced different sweep pitch shift magnitudes).
\paragraph{}
The FM-feedback spectral model also provides for a mechanistic interpretation of these previous results. In Brady's experiment II, the transient duration of the sweep is kept constant but its onset is varied across the stimulus duration. When the transient is located near the beginning of the stimulus, the greatest part of the sounds excites neurons encoding frequencies close to the posterior parts of the transient pushing the distribution of the responses towards the ending frequencies of the sweep $f_1$. This shift is larger than it would be expected for a sound without a transient because of the feedback modulation of the later frequencies exerted by the sweep network. When the transient is located at the very end of the stimulus, the longer portion of the stimulus exciting $f_0$ compensates for the shift in the frequency distribution, bringing the perceived pitch closer to the starting frequencies of the stimulus.
In Brady's experiment III, the transient's onset is kept constant and it is the duration of the transient that is varied. The decreased sweep pitch shift observed for shorter in comparison to longer transient durations can be explained by the FM-feedback spectral model as a consequence of the stimuli presenting a larger segment with the initial frequency, thus shifting the distribution of the responses towards $f_0$.
\subsection{FM encoding and physiological location of the sweep and spectral layers}
FM direction selectivity was modelled according to the principles of delayed excitation \cite{Razak2008, Ye2010, Kuo2012}. Although both delayed excitation and sideband inhibition contribute to direction selectivity in the mammalian auditory pathway \cite{Williams2012, Fuzessery2011, Geis2013}, neural modelling based on each of the two mechanisms produces similar representations \cite{Skorheim2014}. We chose to use delayed excitation alone in order to prevent a disproportionate inflation in the number of free parameters of the model.
Although we did not attempt to model FM rate selectivity, the FM-feedback spectral model's DSIs monotonically increased with $\Delta f$, a property that could be exploited in further developments of the model to encode modulation rate. FM rate encoding has been reported in mice \cite{Geis2013, Trujillo2013}, rats \cite{Lui2003} and more extensively in bats (e.g.,~\cite{Gittelman2011, Fuzessery2011}).
The earliest neural centre within the auditory pathway showing FM direction selectivity in mammals is the inferior colliculus \cite{Kuo2012, Geis2013, Li2010, Hage2003}, although subsequent nuclei (medial geniculate body \cite{Kuo2012, Lui2003} and auditory cortex \cite{Issa2016, Trujillo2013, Ye2010, Li2010, Zhang2003}) show generally stronger DSIs. Thus, the sweep layer postulated in the FM-feedback spectral model could be implemented even at early stages of the auditory hierarchy. Similarly, since all the nodes in the ascending auditory pathway contain tonotopically arranged nuclei, the spectral layer could be putatively located as early as in the cochlear nucleus. The physiological location of the mechanisms described here remains an open question.
\section{Conclusion}
In this work we have harnessed a well-established perceptual phenomenon to inform a model of FM direction encoding. We have shown that neither phenomenological nor mechanistic bottom-up models of auditory processing are able to explain the experimental data. We concluded that FM direction-selective neurons must alter the way that spectral information is encoded via a feedback mechanism. The main contribution of this work is a specific theory of how this feedback modulation might be exerted. Given the paramount role played by fast FM-sweeps in speech, the predictive mechanisms described here could be part of a larger hierarchical network responsible for the encoding of speech sounds in the human auditory pathway.
\section{Materials and methods}
\subsection{Measuring the sweep pitch shift in single sweeps}
\subsubsection{Participants \label{sec:sw:methods:listeners}}
8 participants (4 female), aged 22 to 31 (average 26.9) years old, were included in the study. They all had normal hearing thresholds between 250\,Hz and 8\,kHz ($< 25\,$dB HL) according to pure tone audiometry (Micromate 304, Madsen Electronics). All reported at least five years of musical experience, but none of them was a professional musician.
The 8 participants were derived from a larger set of 22 candidates. Candidates were screened by a first behavioural test assessing their capacity to match pure tones against pure tones, and then by a second test measuring their consistency when matching sweeps against pure tones (see details bellow). From the 14 excluded participants, one failed the first test and 13 failed the second test. 6 of the excluded participants reported no previous musical experience; the remaining 8 had at least five years of musical training.
\subsubsection{Stimuli \label{sec:sw:methods:stimuli}}
Stimuli were $50$\,ms long frequency-modulated sweeps. Frequency was kept constant during the first and final 5\,ms of the sweeps. The modulation was asymptotic and carried out in 40\,ms. Stimuli were ramped-in and damped-out with 5\,ms Hanning windows overlapping the sections with constant frequency.
There were 30 single sweeps with 10 linearly distributed frequency spans $\Delta f \in [-600, 600]$\,Hz and 3 average frequencies $\bar{f} \in \{900, 1200, 1500\}$\,Hz. For each sweep with a given $\Delta f$ and $\bar{f}$, the initial and final frequencies were $f_0 = \bar{f} - \Delta f / 2$ and $f_1 = \bar{f} + \Delta f / 2$.
\subsubsection{Experimental design \label{sec:sw:methods:design}}
Each trial consisted of a sequential presentation of a target sweep and a probe pure tone. After the presentation, the participant was asked whether the second sound evokes a higher, equal, or lower pitch percept than the first sound. Participants were allowed to replay the sounds as many times as needed in case of doubt. After the response, the software adjusted the frequency of the probe tone by increments of $\pm \epsilon = \pm 25$\,Hz, bringing the pitch of the sound closer to the participant’s percept (e.g., if the participant judged the target sweep as having a lower pitch than the probe tone, the frequency of the probe tone was reduced by 25\,Hz). This procedure was repeated until the participant reported that the two sounds evoked the same pitch percept. Then, the frequency of the matched pure tone was stored as the perceived pitch of the sweep reported in that trial, and a new trial with a new target sweep began. The initial frequency of the probe tone was sampled from a Gaussian distribution centred on the average frequency $\bar{f}$ of the target sweep.
Each of the 30 sweeps was matched four times, so that there were 120 trials in total in the experiment. The relative order of the probe tone and the target sweep was reversed in half of the trials to assess if presentation order affects the sweep pitch shift. Thus, the experiment can be described as a 10 (10 different frequency spans) $\times$ 3 (3 average frequencies) $\times$ 2 (probe played first or last) factorial design.
\subsubsection{Experimental procedure \label{sec:sw:methods:structure}}
Before the experiments, all potential participants performed a brief training to ensure that they had understood the task. The trial structure of the training was exactly the same as in the experiment, but both probe and target consisted of pure tones. During the training, the software provided feedback after each trial informing the participant whether the response was correct or incorrect. The training was divided in batches of six trials, and it concluded when the participant correctly matched the pitch of every trial in one batch. Most participants completed the training in the first batch.
After the training, we evaluated the consistency of each potential participant when matching the pitch of FM-sweeps. During the evaluation, participants undertook a block of 12 trials consisting in 4 repetitions of the same 3 sweeps: $\{\Delta f = 67\,\text{Hz}, \bar{f} = 900\,\text{Hz}\}$, $\{\Delta f = -200\,\text{Hz}, \bar{f} = 1200\,\text{Hz}\}$, and $\{\Delta f = -67\,\text{Hz}, \bar{f} = 1500\,\text{Hz}\}$. We chose these sweeps to ensure consistency across several $\bar{f}$ and $\Delta f$ while keeping $|\Delta f|$ small enough to ensure that the sweeps would elicit an unequivocal pitch percept \cite{Hart1990}. After the completion of this block, we scored the participant's pitch matching consistency as the inverse of the average of the absolute differences between the reported pitch in each sweep. Only participants with an average deviation smaller than twice the frequency increment step $2\epsilon = 50$\,Hz were included in the experiment. We assumed that participants with larger deviations were either unable or unwilling to perform consistent pitch judgements on sweeps, and thus their inclusion would contaminate the data with random guesses that could bias our estimations of the sweep pitch shift towards $\Delta p = 0$.
The 8 included participants matched the remaining 27 sweeps in four additional blocks. No sweep type was repeated within a single block, and all sweeps were presented 4 times across the entire experiment, resulting in 27 trials per block. The order of the sweeps within each block was randomised and the relative position of the probe tone with respect to the target stimulus was pseudorandomised so that half of the trials in each block were presented in each direction. Participants were instructed to take rests between blocks and were allowed to take as many shorter rests between trials as needed. To encourage precision, a 5\euro{} award was offered to participants that kept their self-consistency along the main experiment with the same criterium as in the evaluation: a smaller variance than $2\epsilon = 50$\,Hz within each sweep type. Only sweeps expected to yield the most unequivocal pitch sensation according to Hart's law \cite{Hart1990} (i.e., $|\Delta f| \leq 200$\,Hz) were used to compute the overall self-consistency; participants were however unaware of this. Participants typically completed the experiment within 3 hours.
\subsection{Measuring the sweep pitch shift in sweep trains}
\subsubsection{Participants \label{sec:tr:methods:listeners}}
The same 8 participants who completed the first experiment were invited to repeat the measurements with the new stimuli.
\subsubsection{Stimuli \label{sec:tr:methods:stimuli}}
Stimuli were concatenations of 5 sweeps adding up to a total of $250$\,ms (sweep trains; see Fig~\ref{fig:stim}). The sweeps were taken from a subset of 18 elements from the first experiment with 6 different frequency spans $\Delta f \in [-333, 333]$\,Hz. To ensure continuity of the stimulus waveform, the sweeps were concatenated in the frequency domain. 5\,ms Hanning windows were applied only at the very beginning and very end of the sweep trains.
\begin{figure*}
\includegraphics[width=\textwidth]{newFig17.eps}
\caption{\textbf{Examples of the stimuli.} Waveform $s(t)$ and instantaneous frequency $f(t)$ of the sweep with $\bar{f} = 1200$\,Hz and $\Delta f = -200$\,Hz and its corresponding sweep train.}
\label{fig:stim}
\end{figure*}
\subsubsection{Experimental design \label{sec:tr:methods:design}}
The matching procedure was the same as in the first experiment: the participants matched the pitch of the sweep trains to probe pure tones whose frequency they could adjust with the aid of a computer software. To ensure that there were no effects of stimulus duration, the probe tones had the same duration as the sweep trains (i.e., 250\,ms). As in the first experiment, each of the 18 sweep trains was matched four times, so that there were 72 trials in the second experiment. The relative order of the probe tone and the target sweep train was also reversed in half of the trials. Thus, the second experiment can be described as a 6 (different frequency spans) $\times$ 3 (average frequencies) $\times$ 2 (probe played first or last) factorial design.
\subsubsection{Experimental procedure \label{sec:tr:methods:structure}}
Since the participants were already familiar with the task, the experiment contained no training. Four repetitions of the $18$ sweep-trains were distributed across $5$ blocks following the same principles as described in the first experiment. Participants typically completed the second experiment within 2 hours.
\subsection{Bottom-up models of pitch}
\subsubsection{Spectral models of pitch processing \label{sec:sw:models:spectral}}
The responses at the auditory nerve were computed with a model of the peripheral auditory system \cite{Zilany2014, Zilany2009}. The model's output represents the expected firing rate $p_n(t)$ in a fibre of the auditory nerve associated with the $n$th cochlear channel ($n = 1, 2, \dots, N$) at an instant $t$. The frequency range of the cochlear model was discretised in $N = 100$ channels, spanning frequencies from $f_{\text{min}} = 125$\,Hz to $f_{\text{max}} = 10$\,kHz.
The perceived pitch corresponded to the expected cochlear channel $k$, $E[k]$, according to a probability distribution $\rho$ derived from the integral of $p_n(t)$ over the duration of the stimulus $L$:
\begin{equation} \label{eq:peripheralPitch}
E[k] = \sum_n n \rho_n \quad \text{with} \quad
\rho_n = \frac{\int_0^{L} dt \, p_n(t)}{\sum_n \int_0^{L} dt \, p_n(t)}
\end{equation}
To compare the predictions of the model with the experimental data, we also computed the expected channels $E[k]$ associated to pure tones with the frequency of the average perceived pitch of each sweep.
\subsubsection{Temporal models of pitch processing \label{sec:sw:models:temporal}}
The SACF used in this work follows the original formulation by Meddis and O'Mard \cite{Meddis1997, Meddis2006}. Essentially, this model poses the existence of an array of $M$ periodicity detectors responding more saliently to a preferred period $\delta t_m$. The instantaneous firing rate $A_m(t)$ of the $m$th periodicity detector ($m = 1, 2, \dots, M$) follows:
\begin{equation} \label{eq:sacf}
\tau\supix{SACF}_m \dot{A}_m(t) = - A_m(t) + \sum_n p_n(t) \, p_n(t - \delta t_m)
\end{equation}
\noindent where the auditory nerve activity $p_n(t)$ in the cochlear channel $n$ at an instant $t$ is computed as in the previous section. The characteristic periods $\delta t_m$ are uniformly distributed between $\delta t_m = 0.5$\,ms and $\delta t_m = 30$\,ms, which allows the model to capture periodicities corresponding to frequencies between 2\,kHz and 135\,Hz up to four lower harmonics. We kept a fixed integration constant $\tau\supix{SACF}_m = 2.5\,$ms; using variable $\tau\supix{SACF}_m$ that depend linearly on $\delta t_m$ (see details in~\cite{Balaguer2008, Wiegrebe2004}) did not result in substantial changes in our results.
Stimuli presenting periodicities at a certain frequency $f$ typically elicit peaks of activation in the detectors tuned to the preferred period $\delta t_m = 1 / f = T_0$ and to the periods corresponding to all subsequent lower harmonics $\delta t_m = 2\,T_0 = T_1$, $\delta t_m = 3\,T_0 = T_2$, etc. Thus, evidence for the period $T$ at an instant $t$, $B(t)_T$ can be represented as the $B(t)_T = \sum_{m \in\{\mathcal M_T\}} A_m(t)$, where $\mathcal{M_T}$ are the indices of the periodicity detectors tuned to $T$, $2T$, $3T$, etc. (i.e., $\mathcal{M_T} = \{m: \delta t_m = n T \quad \forall \, n \in [1, 2, 3, ...]\}$). We estimated $B(t)_T$ using four harmonics; extending or reducing the number of harmonics used to estimate $B(t)_T$ did not significantly alter our results.
The perceived pitch corresponded to the expected period $T$, $E[T]$, according to a probability distribution $\rho$ derived from the integral of $B_T(t)$ over the duration of the stimulus $L$:
\begin{equation} \label{eq:temporalPitch}
E[T] = \sum_T T \rho_T \quad \text{with} \quad
\rho_T = \frac{\int_0^{L} dt \, B_T(t)}{\sum_n \int_0^{L} dt \, B_T(t)}
\end{equation}
To compare the predictions of the model with the experimental data, we computed the expected period $E[T]$ associated to pure tones with the frequency of the average perceived pitch of each sweep.
\subsection{Details on the predictive model of FM encoding}
\subsubsection{Spectral layer and pitch estimations}
The spectral layer consists on an array of $N = 100$ neural populations that integrate the output of the peripheral model. Neural populations are modelled according to a mean-field derivation \cite{Wong2006} that, although it was first formulated to describe dynamics in cortical regions dedicated to visual decision making, have been successfully used to describe the dynamics of many different cortical areas (e.g.,~\cite{Deco2013}). The firing rate $h_n(t)$ of the $n$th ensemble follows the dynamics of a leaky integrator:
\begin{equation} \label{eq:firingRate}
\tau\supix{pop} \, \dot{h}^f_n(t) = -h^f_n(t) + \phi(I^f_n(t))
\end{equation}
\noindent where $\tau\supix{pop}$ are adaptive time constant:
\begin{equation}
\tau\supix{pop}_{e,i}(h, I) = \tau\supix{memb}_{e,i} \, \Delta_T
\frac{\partial_{x}\phi(x)\arrowvert_{x = I}}{h}
\label{eq:taupop}
\end{equation}
\noindent $\Delta_T = 1\,$mV is the size of the spike initialisation of the neural model and $\tau\supix{memb}_e = 20\,$ms and $\tau\supix{memb}_i = 10$\,ms \cite{McCormick1985} are the neural membrane time constants for excitatory and inhibitory populations, respectively. Using adaptive integration time constants makes the populations to react faster to changes when they are marginally active and have weak synaptic inputs, a behaviour often reported in tightly connected populations of neurons \cite{Ostojic2011}. This component is the key of the feedback mechanism used to increase the responsiveness of the populations encoding the expected parts of the sweeps (Fig~\ref{fig:tauTrajectories}). The analytic formulation of $\tau\supix{pop}(h, I)$ stems from a theoretical study of networks of exponential-integrate-and-fire neurons \cite{Ostojic2011}.
Inputs $I^f_n(t)$ were modelled with AMPA synaptic dynamics \cite{Brunel2001}. AMPA synapses present short time constants that are able to preserve the fine temporal structure of auditory input, and thus are the major receptor type conveying bottom-up communication in the auditory pathway (e.g.,~\cite{Golding2012}).
\begin{equation} \label{eq:fInputBottomUp}
I^f_n(t) = J\subix{in}\AMPA \sum_k \omega\supix{in}_{nk} S\subix[,k]{in}\AMPA{}(t)
\end{equation}
\noindent We allowed some dispersion in the propagation from the peripheral model to the spectral layer by using a Gaussian-shaped connectivity matrix:
\begin{equation}
\omega\supix{in}_{nm} = \frac{1}{\sqrt{\sigma\subix{in}}}
e^{-\frac{(m - n)^2}{2 \sigma\supix{in}}}
\end{equation}
\noindent where the normalisation factor $\sqrt{\sigma\subix{in}}$ ensures that the total input to a population under a uniform peripheral input remains the same regardless of the dispersion $\sigma\subix{in}$. The synaptic gating variable $S\subix[,n]{in}\AMPA{}(t)$ follows \cite{Brunel2001}:
\begin{equation}
\tau\AMPA{} \dot{S}\subix[,n]{in}\AMPA{}(t) =
-S\subix[,n]{in}\AMPA{}(t) + p_n(t)
\end{equation}
\noindent Note that we used the index $^f$ to denote variables in the spectral layer. The perceived pitch corresponded to the expected cochlear channel $k$, $E[k]$, according to a probability distribution $\rho$ derived from the integral of $p_n(t)$ over the duration of the stimulus $L$ (cf. Eq~\eqref{eq:peripheralPitch}):
\begin{equation} \label{eq:spectralPitch}
E[k] = \sum_n n \rho_n \quad \text{with} \quad
\rho_n = \frac{\int_0^{L} dt \, h^f_n(t)}{\sum_n \int_0^{L} dt \, h^f_n(t)}
\end{equation}
The time constant $\tau\AMPA{} = 2$\,ms was taken from the literature \cite{Brunel2001}. The effective conductivity $J\subix{in}\AMPA{} = 0.38\,$nA was manually tuned within the realistic range such that the peripheral system would elicit firing rates on the range $5\,\text{Hz} \geq h_n(t) \geq 100$\,Hz in the integrator ensembles. The transfer function $\phi(x) = (c x - I_0)/(1 - e^{-g (c x - I_0)})$ and its parameters, empirically derived for networks of integrate-and-fire neurons, were taken from~\cite{Wong2006}.
\subsubsection{Sweep layer and direction selectivity}
The sweep layer consists on four arrays of $N = 100$ neural populations following the same dynamics described in the previous section (i.e., Eq~\eqref{eq:firingRate}). From the four arrays, two (one excitatory, one inhibitory) are tuned to up sweeps, and two (again, one excitatory and one inhibitory) are tuned to down sweeps (Figure~\ref{fig:diagram}). The instantaneous firing rate of the \emph{up} ($h^{\uparrow e}_n(t), h^{\uparrow i}_n(t)$) and \emph{down} ($h^{\downarrow e}_n(t), h^{\downarrow i}_n(t)$) neural population , with \emph{up} ($I^{\uparrow e}_n(t), I^{\uparrow i}_n(t)$) and \emph{down} ($I^{\downarrow e}_n(t), I^{\downarrow i}_n(t)$) synaptic inputs, respectively. Although the transfer functions $\phi(x)$ are the same for all the ensembles, the parameters $c$, $I_0$, and $g$ are different for excitatory and inhibitory populations \cite{Wong2006}.
Excitatory and inhibitory inputs to populations in the \emph{sweep layer} are modelled according to AMPA-like and GABA-like synaptic gating dynamics \cite{Brunel2001}:
\begin{eqnarray*}
\dot{S}_{\alpha, n}\AMPA{}(t) &=&
-\frac{S_{\alpha, n}\AMPA{}(t)}{\tau\AMPA{}} + h^{\alpha e}_n(t) + \sigma \xi,
\quad \alpha = \uparrow, \downarrow, f
\\
\dot{S}_{\alpha, n}\GABA{}(t) &=&
-\frac{S_{\alpha, n}\GABA{}(t)}{\tau\GABA{}} + h^{\alpha i}_n(t) + \sigma \xi,
\quad \alpha = \uparrow, \downarrow
\end{eqnarray*}
\noindent where $\xi$ is an uncorrelated Gaussian noise sampled independently for each synapse and instant $t$, and $\sigma = 0.0007$\,nA is the amplitude of the noise \cite{Wong2006}. The total synaptic input for each population is then:
\begin{eqnarray*}
I^{\uparrow e}_n(t) & = &
J_{f}\AMPA \sum_m \omega^{f \uparrow}_{nm} S_{f, m}\AMPA{}(t - \delta t_{nm}) - \\
& &
J\GABA \left(\sum_m \omega^{ie}_{nm} S_{\downarrow, m}\GABA{}(t)
+ S_{\uparrow, n}\GABA{}(t) \right) + I\subix{bkg}^E \\
I^{\uparrow i}_n(t) & = & J_{s}\AMPA \sum_m \omega^{ei}_{nm} S_{\uparrow, m}\AMPA{}(t)
+ I\subix{bkg}^I\\
I^{\downarrow e}_n(t) & = &
J_{f}\AMPA \sum_m \omega^{f \downarrow}_{nm} S_{f, m}\AMPA{}(t - \delta t_{nm}) - \\
& &
J\GABA \left(\sum_m \omega^{ie}_{nm} S_{\uparrow, m}\GABA{}(t)
+ S_{\downarrow, n}\GABA{}(t) \right) + I\subix{bkg}^E \\
I^{\downarrow i}_n(t) & = & J_{s}\AMPA \sum_m \omega^{ei}_{nm} S_{\downarrow, m}\AMPA{}(t)
+ I\subix{bkg}^I
\end{eqnarray*}
\noindent where $I\subix{bkg}^E$ and $I\subix{bkg}^I$ are constant background inputs putatively sourced in external neural populations \cite{Wong2006}.
The excitatory-to-inhibitory and inhibitory-to-excitatory connectivity matrices $\omega^{ei}$ and $\omega^{ie}$ are Gaussian shaped and centred in the identity matrix:
\begin{equation}
\omega^{\alpha}_{nm} = e^{-\frac{(n-m)^2}{2 \sigma_{\alpha}}}, \quad \alpha = ei, ie
\end{equation}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{newFig18.eps}
\caption{\textbf{Connectivity matrices.} Matrices show the connection between the first 25 ensembles of each source-target group. From left to right, matrices correspond to: excitatory-to-inhibitory $\omega^{ei}$, inhibitory-to-excitatory $\omega^{ie}$; bottom-up AMPA connections spectral-to-up $\omega^{f \uparrow}$, spectral-to-down $\omega^{f \uparrow}$; and feedback NMDA connections up-to-spectral $\omega^{\uparrow f}$, down-to-spectral $\omega^{\downarrow} f$. Labels are encircled in a white square in the top right of each plot. The free parameters of each connectivity matrix are defined geometrically in the plots.}
\label{fig:connectivity}
\end{figure}
The remaining connectivity matrices $\omega^{f \uparrow}$ and $\omega^{f \downarrow}$ are defined to constraint the up (down) feed to inputs from lower (higher) frequencies and to limit the range of the connection to a small number of populations $\Delta_{\omega f}$ of the spectral representation:
\begin{eqnarray*}
\omega^{f \uparrow}_{nm} & = & \left\{
\begin{array}{rl}
1 & \text{ if } \quad 0 \leq n - m \leq \Delta_{\omega f}\\
0 & \text{ otherwise}
\end{array} \right. \\
\omega^{f \downarrow}_{nm} & = & \left\{
\begin{array}{rl}
1 & \text{ if } \quad 0 \leq m - n \leq \Delta_{\omega f}\\
0 & \text{otherwise}
\end{array} \right.
\end{eqnarray*}
The free parameters were initialised to standard values (the effective conductivities $J_{f}\AMPA$, $J\GABA$, and $J_{s}\AMPA$, according to~\cite{Wong2006}; the baseline delay $\delta t_0$ to 2\,ms/channel; and the dispersion constants $\sigma\subix{in}$, $\sigma\subix{ei}$, $\sigma\subix{ei}$, and $\Delta_{\omega f}$, to $0.1\,N$) and manually tuned so that the networks showed direction selectivity for the FM-sweep characteristics (duration, rates, $\Delta f$) of the stimuli used in the first part of the study. Unless stated otherwise, all simulations listed in this work correspond to the parameters listed in Table~\ref{tab:pars}.
\begin{table}
\centering
\caption{\textbf{Model parameters.} Most parameters were taken from the original studies that derived the mean field approximations used in the model and are cited accordingly. Other free parameters, like the number of bins of the tonotopic axis $N$, were fixed to reasonable but arbitrary values at the beginning of the model construction and were not adjusted during the analyses (\emph{ad-hoc}). Free parameters that were manually tuned are labelled as \emph{tuned (x)}, where $x$ is: 1, for parameters tuned so that the spectral layer integrates the peripheral representation correctly (see Section~\ref{sec:sw:models:spectral}); 2, for parameters tuned to achieve FM-direction selectivity; and 3, for parameters tuned so that the feedback signalling resulted in a fair fit between the model's pitch predictions and the experimental observations.}
\begin{tabular}{cr@{\,}lc}
parameter & value & (unit) & source \\
$N$ & $100$ & channels & ad-hoc \\
$dt$ & $0.1$ & ms & ad-hoc \\
periph $dt$ & $0.01$ & ms & ad-hoc \\
periph $f_{\text{min}}$ & $125$ & Hz & \cite{Zilany2009} \\
periph $f_{\text{max}}$ & $10000$ & Hz & \cite{Zilany2009} \\
$\tau\supix{memb}_e$ & $20$ & ms & \cite{McCormick1985} \\
$\tau\supix{memb}_i$ & $10$ & ms & \cite{McCormick1985} \\
$\Delta$ & $1$ & mV & \cite{Ostojic2011} \\
$c\supix{excitatory}$ & $310$ & (V\,nC)$^-1$ & \cite{Wong2006} \\
$I_0\supix{excitatory}$ & $125$ & Hz & \cite{Wong2006} \\
$g\supix{excitatory}$ & $0.16$ & s & \cite{Wong2006} \\
$c\supix{inhibitory}$ & $615$ & (V\,nC)$^-1$ & \cite{Wong2006} \\
$I_0\supix{inhibitory}$ & $177$ & Hz & \cite{Wong2006} \\
$g\supix{inhibitory}$ & $0.087$ & s & \cite{Wong2006} \\
$I\subix{bkg}^E$ & $0.23$ & nA & \cite{Wong2006} \\
$I\subix{bkg}^I$ & $0.10$ & nA & \cite{Wong2006} \\
$\sigma$ & $0.0007$ & nA & \cite{Wong2006} \\
$\gamma$ & $0.641$ & & \cite{Brunel2001} \\
$\tau\AMPA$ & $2$ & ms & \cite{Brunel2001} \\
$\tau\GABA$ & $ 5$ & ms & \cite{Brunel2001} \\
$\tau\NMDA$ & $100$ & ms & \cite{Brunel2001} \\
$J\AMPA\subix{in}$ & $0.38$ & nC & tuned (1) \\
$J\AMPA_f$ & $0.55$ & nC & tuned (2) \\
$J\AMPA_s$ & $0.67$ & nC & tuned (2) \\
$J\GABA$ & $0.30$ & nC & tuned (2) \\
$J\NMDA$ & $0.04$ & nC & tuned (3) \\
$\sigma\subix{in}$ & $0.1\,N$ & channels & tuned (2) \\
$\sigma\subix{ie}$ & $0.5\,N$ & channels & tuned (2) \\
$\sigma\subix{ei}$ & $0.03\,N$ & channels & tuned (2) \\
$\Delta t_0$ & $3$ & ms$/$channel & tuned (2) \\
$\Delta_{\omega f}$ & $0.05\,N$ & channels & tuned (2) \\
$\Delta_{\omega s}$ & $0.05\,N$ & channels & tuned (3) \\
$w_{\omega s}$ & $0.03\,N$ & channels & tuned (3)
\end{tabular}
\label{tab:pars}
\end{table}
The direction selectivity index (DSI; e.g.,~\cite{Zhang2003}) described in the Results section was computed as the proportion of the activity elicited in a network by an up sweep minus the activity elicited in the same network by a down sweep with the same duration and frequency span:
\begin{equation} \label{eq:dsi}
\text{DSI}^{\alpha} = \frac{\sum_n \int\,dt \left(\left[h^{\alpha e}_n(t)\right]_{+\Delta f} - \left[h^{\alpha e}_n(t)\right]_{-\Delta f}\right)}
{\sum_n \int\,dt \left( \left[h^{\alpha e}_n(t)\right]_{+\Delta f} + \left[h^{\alpha e}_n(t)\right]_{-\Delta f}\right)}
\quad \alpha = \uparrow, \downarrow
\end{equation}
\noindent where $\left[h^{\alpha e}_n(t)\right]_{\Delta f}$ is the firing rate $h^{\alpha e}_n(t)$ elicited in the network by a sweep with a frequency span $\Delta f$.
\subsubsection{Feedback connections}
Feedback connections from the sweep layers to the spectral layer were modelled according to NMDA-like synaptic gating dynamics with a finite rising time constant \cite{Brunel2001}.
\begin{eqnarray*}
\dot{S}_{\alpha, n}\NMDA{}(t) & = & - \frac{S_{\alpha, n}\NMDA{}(t)}{\tau\NMDA{}}
+ \sigma \xi \\
& & + \left(1 - S_{\alpha, n}\NMDA{}(t)\right)
\gamma\, h^{\alpha e}_n(t), \quad \alpha = \uparrow, \downarrow
\end{eqnarray*}
\noindent with $\gamma = 0.641$. NMDA currents are added to the total synaptic input of the neurons in the spectral layer as an additional term in \eqref{eq:fInputBottomUp}:
\begin{equation*}
I^f_n(t) \rightarrow \hat{I}^f_n(t) = I^f_n(t) +
J\NMDA \sum_{\alpha = \uparrow, \downarrow} \sum_m
\omega^{\alpha f}_{nm} S_{\alpha, m}\NMDA{}(t)
\end{equation*}
The connectivity matrices $\omega^{\alpha \uparrow}_{nm}$, $\omega^{\alpha \downarrow}_{nm}$ were chosen such that the target of the NMDA-driven activation was limited to a number of $\Delta_{\omega s}$ and leave a gap of $w_{\omega s}$ populations between the centre frequency of the source and target ensembles (see Fig~\ref{fig:connectivity}, right):
\begin{eqnarray*}
\omega^{\uparrow f}_{nm} & = & \left\{
\begin{array}{rl}
1 & \text{ if } \quad w_{\omega s} \leq m - n \leq \Delta_{\omega s} + w_{\omega s}\\
0 & \text{ otherwise}
\end{array} \right. \\
\omega^{\downarrow f}_{nm} & = & \left\{
\begin{array}{rl}
1 & \text{ if } \quad w_{\omega s} \leq n - m \leq \Delta_{\omega s} + w_{\omega s}\\
0 & \text{otherwise}
\end{array} \right.
\end{eqnarray*}
\noindent The gap $ w_{\omega s} > 0$ is enforced to avoid resonances between sweep-selective and spectral populations with the same centre frequency during the encoding of pure tones. The free parameters were initialised to standard values (the NMDA conductivity $J\NMDA$ to the value recommended by~\cite{Wong2006}, and the connectivity parameters $w_{\omega s}$ and $\Delta_{\omega s}$ to $0.1\,N$) and manually tuned so that the pitch predictions of the model (as computed in Eq~\eqref{eq:spectralPitch}) matched the empirical data.
\section*{Acknowledgments}
This research was funded by the ERC Consolidator Grant SENSOCOM 647051.
The authors would like to thank Shih-Cheng \emph{Vincent} Chien for his enlightening suggestions during the writing of the manuscript.
\onecolumn
|
2,869,038,155,045 | arxiv | \section{Introduction}
We investigate a heat advection in a random flow which is supposed
to be "turbulent". The turbulence is a complex phenomenon
difficult to define and avoiding a description in precise
mathematical terms. The complexity of turbulence can be related to
its dependence on the length scale relevant for undergoing
experiments. In this paper we apply only some aspects of the
turbulent flow: randomness of the velocity field, its
self-similarity and long range correlations . The appearance of
the turbulence should have an impact on transport phenomena
described by an advection-diffusion equation of a passive scalar
\cite{nature}. Such an equation can describe a transport of heat,
a mass or some impurities. We are interested in the equilibrium
distribution of solutions of the random advection-diffusion
equation . The equilibrium is possible only under an external
forcing (a heat source). We are interested in the equilibrium
distribution at all scales. Such an equilibrium will depend on the
forcing. The universality is possible only in the inertial range
\cite{kolmogorov}\cite{frisch} \cite{gawedzki} where the external
forcing should not be relevant(see ref.\cite{staicu} for some
recent shear flow experiments) . Although the precise equilibrium
distribution depends on the form of the forcing the asymptotic
behavior of correlation functions depends solely on the asymptotic
behavior of the random forcing. We investigate the way the long
range correlations of the fluid velocity influence the long range
correlations of the temperature.
We assume that there is a distinguished direction of the fluid
velocity ${\bf V}$. We make a decomposition $X=({\bf x},{\bf
z})\in R^{D}$ with ${\bf x}\in R^{d}$ and ${\bf z}\in R^{D-d}$ ;
${\bf V}(\tau,{\bf x})$ depends only on ${\bf x}\in R^{d}$ and has
the non-vanishing components only in $R^{D-d}$ (in such a case it
satisfies automatically $\nabla {\bf V}=0$;for physical
applications $D=3$ and $d=2$ or $d=1$). As a typical example we
could consider a fluid flow $V_{z}(x,y)$ in the direction of the
$z$-axis which does not depend on $z$. We can impose such an
anisotropy of the flow by an external force ${\bf R}$ which
depends only on ${\bf x}$ and has non-zero components solely in
the ${\bf z}$ direction. So, we consider the Navier-Stokes
equation with such a random force ${\bf R}$
\begin{displaymath}
\partial_{t}{\bf V}+{\bf V}\nabla {\bf V}-\nu \triangle {\bf V}
={\bf R}
\end{displaymath}
The $({\bf 0},{\bf V}({\bf x}))$ solution of the Navier-Stokes
equation is the solution of the linear equation ( for the ${\bf
z}$-component)
\begin{displaymath}
\partial_{t}{\bf V}-\nu \triangle_{\bf x} {\bf V}
={\bf R}
\end{displaymath}( together with a zero solution for the ${\bf
x}$-component). By a proper choice of the external force ${\bf
R}$ we can simulate a large class of ${\bf x}$-dependent flows.
In secs.2-3 we discuss the advection-diffusion equation, the
random velocity and a random forcing.
The advection-diffusion equation can be
solved by means of the Feynman-Kac formula. The Feynman-Kac
solution has already been discussed by other authors
\cite{majda}-\cite{glimm}. These authors have been interested in
the asymptotic behavior of the advection-diffusion equation
without forcing. Our main interest (secs.4-5) is in the asymptotic
behavior for large time and distances of correlation functions of
the temperature field resulting from the advection-diffusion
equation with forcing describing the heat injection. First, in
sec.3 we simulate forcing by a constant gradient term in the
temperature. We obtain a simple soluble model of advection
illustrating some general features. In general, we can obtain some
lower and upper bounds on the correlation functions by means of
the Jensen inequalities (sec.5). For the sake of simplicity we
concentrate on the two-point correlations. In sec.6 we show how
our methods can be extended to multi-point correlations. We obtain
asymptotic behavior of the Fourier transform of the correlation
functions for small and large momenta.
We compare our methods
and results (in secs.4-6 and in the Appendix B) with an exactly
soluble model of Kraichnan
\cite{kraichnan}\cite{gawedzki}\cite{falk} (defined by a velocity
field which is a white noise in time). The random advection is
closely connected with a diffusion. In fact, under some natural
assumptions random advection enforces diffusion
\cite{kesten}\cite{ave}\cite{komorowski} and vice versa the
diffusion can be expressed as a white noise advection \cite{verg}.
However, when we choose no diffusion (zero molecular diffusivity)
in the initial equation of advection describing the temperature evolution then we
obtain a
model of advection (discussed in Appendix A) as a limit of the
solution of the random advection-diffusion equation. The limit of
zero molecular diffusivity has been discussed earlier in
refs.\cite{wei}\cite{hula}.
In the text some positive constants arise
(denoted usually as $K$,$c_{1}$, etc.) which are not described at
each case and are not related one to another.
\section{The advection-diffusion equation}We consider the advection in a random
velocity field ${\bf V}$ ( described in the Introduction)
forced by a random source $f$\begin{equation}
\partial_{\tau}\theta_{\tau}+{\bf V}\nabla
\theta_{\tau}-\frac{\mu^{2}}{2}\triangle \theta_{\tau}=f
\end{equation} where $\mu^{2}$ is the molecular diffusivity.
If the random velocity ${\bf V}$ has correlation functions
singular at small time then eq.(1) needs a careful
interpretation. If the singularity of the velocity's covariance is of the form
$\delta(t-t^{\prime})D({\bf x}-{\bf x}^{\prime})$ then there are two standard interpretations
either Ito or Stratonovitch \cite{ikeda} \cite{simon}. The difference between
them in eq.(1)is $\frac{1}{2}D({\bf 0})\nabla_{\bf z}^{2}\theta$. Hence,
choosing one of them will change only the diffusion constant.
We choose the Stratonovitch interpretation throughout the paper
and also in the Appendix B.
First, let us consider ${\bf V}=0$ and $f=0$. Let
$N$ be a (deterministic) solution of the heat equation \begin{equation}
\partial_{\tau}N_{\tau}-\frac{\mu^{2}}{2}\triangle N_{\tau}=0
\end{equation}
We expand $\theta$ around the solution $N$ of the diffusion equation
\begin{displaymath}
\theta=T+N
\end{displaymath}
(if the mean value of ${\bf V}$ is zero then $T$ describes
fluctuations of the temperature).
From eq.(1)
\begin{equation}
\partial_{\tau}T_{\tau}+{\bf V}\nabla
T_{\tau}-\frac{\mu^{2}}{2}\triangle T_{\tau}=F
\end{equation}
where
\begin{displaymath}
F=f-{\bf V}\nabla
N_{\tau} \end{displaymath}
As the simplest example of a physical relevance we consider the
mean gradient \cite{gradient}\cite{arnovitz}
\begin{equation}
N=-{\bf g}{\bf X}
\end{equation}
where ${\bf g}$ is a constant vector. The mean gradient is a
stationary solution of the heat equation between two planes kept
at fixed temperatures. For such a static solution
\begin{equation}
F=f+{\bf V}{\bf g}
\end{equation}
We can see that even if $f=0$ then $F$ is non-trivial. This is a
frequent realization of an advection in experiments
\cite{gollub}\cite{biferale}. In such a case the source $F$ has
the same distribution as the velocity. A constant mean gradient is
distinguishing a direction in space. It breaks the rotational
symmetry. As a model we could consider ${\bf g}=(0,0,g_{z})$ and
${\bf V}=(0,0,V_{z})$.
We define the spectral measure $\rho$ of the temperature $T$ which
is directly measurable in experiments \cite{lesieur}
\begin{equation}\begin{array}{l}
\langle T_{\tau}({\bf x},{\bf z})T_{\tau}({\bf x}^{\prime},{\bf
z}^{\prime})\rangle-\langle T_{\tau}({\bf x},{\bf
z})\rangle\langle T_{\tau}({\bf x}^{\prime},{\bf
z}^{\prime})\rangle\cr=\int d{\bf k}d{\bf p}\exp(i{\bf k}({\bf
x}-{\bf x}^{\prime})+i{\bf p}({\bf z}-{\bf
z}^{\prime}))\rho_{\tau}({\bf k},{\bf p})
\end{array}\end{equation}We have
\begin{equation}
\begin{array}{l}
\int d{\bf x}(\langle \tilde{T}_{\tau}({\bf x},{\bf
p})\tilde{T}_{\tau}({\bf x}^{\prime},{\bf
p}^{\prime})\rangle-\langle \tilde{T}_{\tau}({\bf x},{\bf
p})\rangle\langle \tilde{T}_{\tau}({\bf x}^{\prime},{\bf
p}^{\prime})\rangle)\cr=\delta({\bf p}+{\bf
p}^{\prime})\rho_{\tau}({\bf 0},{\bf p}) \end{array}\end{equation}
and
\begin{displaymath}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x},{\bf p}^{\prime})\rangle-\langle \tilde{T}_{\tau}({\bf x},{\bf
p})\rangle\langle \tilde{T}_{\tau}({\bf x},{\bf
p})\rangle=\delta({\bf p}+{\bf p}^{\prime})\int d{\bf
k}\rho_{\tau}({\bf k},{\bf p})
\end{displaymath}
\begin{displaymath}
\langle T_{\tau}({\bf x},{\bf z})T_{\tau}({\bf x},{\bf
z})\rangle-\langle T_{\tau}({\bf x},{\bf z})\rangle\langle
T_{\tau}({\bf x},{\bf z})\rangle=\int d{\bf p}\int d{\bf
k}\rho_{\tau}({\bf k},{\bf p})
\end{displaymath}
When the spectral function has singularities at low momenta then
the Fourier transform in eq.(6) may need a careful definition in
the sense of generalized functions. Instead of the correlation
functions of $T_{\tau}({\bf x},{\bf y}) $ we could consider the
structure functions
\begin{displaymath}
{\cal G}^{(2n)}_{\tau}({\bf x},{\bf z})=\langle (T_{\tau}({\bf
0},{\bf 0})- \langle T_{\tau}({\bf 0},{\bf
0})\rangle-T_{\tau}({\bf x},{\bf z})+\langle T_{\tau}({\bf x},{\bf
z})\rangle)^{2n}\rangle
\end{displaymath}
For $ n=1$ we have
\begin{displaymath}
\begin{array}{l} {\cal G}^{(2)}_{\tau}({\bf x},{\bf z})=2\int d{\bf k}d
{\bf p}\rho_{\tau}({\bf k},{\bf p})(1-\exp(i{\bf k}{\bf x}+i{\bf
p}{\bf z})) \end{array}\end{displaymath} ${\cal G}^{(2)}_{\tau} $
scales in the same way as $\langle T T\rangle$ but has better
infrared behaviour.The structure functions ${\cal G}^{(2n)}$ are
expressed by the correlation functions of the Fourier transforms
of $T_{\tau}$.
It can be seen
that the spectral measure $\rho$ of the temperature $T$ depends on
the spectral measure of the source $f$ and the scaling properties
of the random velocity field.
\section{Gaussian model
of the shear flow }
We decompose the fluid velocity
\begin{displaymath}
{\bf V}={\bf U}+{\bf v}
\end{displaymath}
into the mean value ${\bf U}$ and random fluctuations ${\bf v}$.
We assume that the velocity ${\bf
v}$ is a Gaussian Euclidean $R^{d}$ invariant random field with
the mean zero and the covariance
\begin{equation} \langle v_{j}(s,{\bf x})v_{k}(s^{\prime},{\bf
x}^{\prime})\rangle= G_{jk}(s-s^{\prime},{\bf x},{\bf x}^{\prime})
\end{equation}
where $j,k=d+1,...,D$. For the sake of simplicity of the arguments
we shall sometimes separate the time-dependence choosing $G$ of
the product form $\Gamma D$. If G is a decaying function of the
distance $\vert {\bf x}-{\bf x}^{\prime}\vert$ then a model of the
vector field ${\bf v}$ can be determined by a translation
invariant G, e.g.,
\begin{equation}
G_{jk}(s-s^{\prime},{\bf x},{\bf
x}^{\prime})\equiv\delta_{jk}\Gamma(s-s^{\prime})D({\bf x}-{\bf
x}^{\prime})=\delta_{jk}\Gamma(s-s^{\prime})\int d{\bf p}
\exp(i{\bf p}( {\bf x}-{\bf x}^{\prime}))\tilde{D}({\bf p})
\end{equation}
where $\tilde{D}$ is a locally integrable function.
In a description of the turbulence we consider growing long range
correlations. In such a case $G$ cannot be translation invariant .
We consider a model with Euclidean $R^{d}$ invariant correlation
functions of ${\bf v}({\bf x})-{\bf v}({\bf x}^{\prime})$. Then
\begin{equation}
G_{jk}(s-s^{\prime},{\bf x},{\bf
x}^{\prime})=\delta_{jk}\Gamma(s-s^{\prime})(\vert {\bf
x}\vert^{2\beta}+\vert {\bf x}^{\prime}\vert^{2\beta}- \vert {\bf
x}-{\bf x}^{\prime}\vert^{2\beta})
\end{equation}
This $G$ is positive definite if $\Gamma$ is positive definite and
$0<\beta<1$ (the covariance (10) determines Levy's model
\cite{levy} of the Brownian motion depending on $d$-parameters ).
When $2\beta<2$ then the vector field ${\bf v}({\bf x})$ does not
satisfy the Lipschitz condition. In such a case we could expect
difficulties with the uniqueness of the flow and the uniqueness of
the solution of eq.(1) at $\mu=0$. Fortunately, a definition of
the unique solution of eq.(1) in a weak probabilistic sense is
possible \cite{raymond}\cite{eiden}even without the Lipschitz
condition.
The source $f$ is an independent Gaussian field with the
covariance
\begin{equation}
\langle f(s,{\bf x},{\bf z})f(s^{\prime},{\bf x}^{\prime},{\bf
z}^{\prime})\rangle= M(s-s^{\prime},{\bf x}-{\bf x}^{\prime},{\bf
z}-{\bf z}^{\prime})
\end{equation}
We take the Fourier transform of eq.(3) in the ${\bf z}$ variable.
Then, this equation reads
\begin{equation}
\partial_{\tau}\tilde{T}_{\tau}({\bf x},{\bf p})+(i{\bf p}{\bf
V}(\tau,{\bf x})+\frac{\mu^{2}{\bf
p}^{2}}{2}-\frac{\mu^{2}}{2}\triangle_{\bf
x})\tilde{T}_{\tau}({\bf x},{\bf p})=\tilde{F}(\tau,{\bf x},{\bf
p})
\end{equation}
We apply the Feynman-Kac formula \cite{simon} in order to express
the solution of eq.(12) with the initial condition $T_{0}\in
L^{2}(dX)$ in the form (the uniqueness of the solution is
discussed in \cite{raymond}\cite{eiden})
\begin{equation}
\begin{array}{l}
\tilde{T}_{\tau}({\bf x},{\bf p})=\exp(-\frac{\mu^{2}{\bf
p}^{2}\tau}{2})E[\exp(-i{\bf p}\int_{0}^{\tau}{\bf V}(\tau -s,{\bf
x}+\mu{\bf b}(s))ds)\tilde{T}_{0}({\bf x}+\mu{\bf b}(\tau),{\bf
p})] +\cr \int_{0}^{\tau}dt\exp(-\frac{\mu^{2}{\bf
p}^{2}(\tau-t)}{2})E[\exp(-i{\bf p}\int_{0}^{\tau-t}{\bf V}(\tau
-s,{\bf x}+\mu{\bf b}(s))ds)\tilde{F}(t,{\bf x}+\mu{\bf
b}(\tau-t),{\bf p})]
\end{array}
\end{equation}
In eq.(13) $b_{j}$ ($j=1,2,...,d$) is the Brownian motion defined
as the Gaussian process with the covariance \cite{simon}
\begin{displaymath}
E[b_{j}(s)b_{k}(t)]=\delta_{jk}min(s,t) \end{displaymath}
We are
interested in the equilibrium distribution of $T_{\tau}$, i.e.,in
the limit $\tau\rightarrow\infty$. When $\tau\rightarrow \infty$
and $T_{0}\in L^{2}(dX)$ then the first term in eq.(13) is
vanishing . For this reason we may set $T_{0}=0$ from the
beginning. The stationary solutions $N$ being harmonic functions
are not square integrable in $R^{D}$. Admitting such functions as
initial conditions we could regain the solution $N$ from eq.(13)
(with $F=0$ ). In particular, the mean gradient (4) comes from a
generalized function $\tilde{T}_{0}$ with its support concentrated
at ${\bf p}=0$.
Before discussing more general correlations let us consider the
constant mean gradient (eqs.(4)-(5)) with $f=0$ and
$F={\bf gV}$. Then, from eq.(13) (with $T_{0}=0$)\begin{equation}
\begin{array}{l}
\tilde{T}_{\tau}({\bf x},{\bf p})= \delta({\bf
p})E[\int_{0}^{\tau}dt{\bf gV} (t,{\bf x}+\mu {\bf b}(\tau-t))]
\end{array}
\end{equation}
We shall see that some properties of the general advection (3)
appear already at the level of the simple model (14). It follows
from eq.(14) that \begin{displaymath} \langle T_{\tau}({\bf
X})\rangle =\delta({\bf p})E[\int_{0}^{\tau}dt{\bf gU} (t,{\bf
x}+\mu {\bf b}(\tau-t))]
\end{displaymath}
and
\begin{displaymath}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle-\langle \tilde{T}_{\tau}({\bf
x},{\bf p})\rangle\langle \tilde{T}_{\tau}({\bf x}^{\prime},{\bf
p}^{\prime})\rangle= \delta({\bf p})\delta({\bf
p}^{\prime})\cr\int_{0}^{\tau}dt\int_{0}^{\tau}dt^{\prime} E[{\bf
g}G(t-t^{\prime}, {\bf x}-{\bf x}^{\prime}+\mu {\bf
b}(\tau-t)-\mu{\bf b}^{\prime}(\tau-t^{\prime})){\bf g}]
\end{array}
\end{displaymath}
We calculate the integral over time. First, if the covariance $G$
is time-independent (a steady flow) then
\begin{equation}
\begin{array}{l}\langle T_{\tau}({\bf x},{\bf z})T_{\tau}({\bf
x}^{\prime},{\bf z}^{\prime})\rangle-\langle T_{\tau}({\bf
X})\rangle\langle T_{\tau}({\bf X}^{\prime})\rangle\cr
=4\mu^{-4}\int d{\bf k}\exp(i{\bf k}({\bf x}-{\bf
x}^{\prime})){\bf g}\tilde{G}({\bf k}){\bf g}\vert{\bf
k}\vert^{-4} (1-\exp(-\frac{\mu^{2}}{2}{\bf k}^{2}\tau))^{2}
\end{array}
\end{equation} Next, let us consider \begin{equation} G(t-t^{\prime},{\bf
x}-{\bf x}^{\prime})=\delta(t-t^{\prime})D({\bf x}-{\bf
x}^{\prime})
\end{equation}The covariance (16) does not have
any physical foundations but the virtue of the assumption (16) is
the solubility of the model (3)
\cite{kraichnan}(the Kraichnan model) in the sense that one can
obtain a closed set of partial differential equations for the
correlation functions (see the Appendix B). In our simplified version (14)
\begin{equation}
\begin{array}{l}
\langle T_{\tau}({\bf x},{\bf z})T_{\tau}({\bf x}^{\prime},{\bf
z}^{\prime})\rangle-\langle T_{\tau}({\bf X})\rangle\langle
T_{\tau}({\bf X}^{\prime})\rangle\cr=\mu^{-2}\int d{\bf
k}\exp(i{\bf k}({\bf x}-{\bf x}^{\prime})){\bf g}\tilde{D}({\bf
k}){\bf g}\vert{\bf k}\vert^{-2} (1-\exp(-\mu^{2}{\bf
k}^{2}\tau))\end{array}
\end{equation}
If the ${\bf v}$ correlations are growing as in eq.(10) then the
expression (17) can be infrared divergent (especially at
$\tau=\infty$). In such a case we should rather consider
\begin{equation}\begin{array}{l} \langle (T_{\tau}({\bf 0},{\bf
0})-\langle T_{\tau}({\bf 0},{\bf 0})\rangle- T_{\tau}({\bf
x},{\bf z})+\langle T_{\tau}({\bf x},{\bf
z})\rangle)^{2}\rangle\cr=8\mu^{-4}\int d{\bf k}(1-\exp(i{\bf
k}{\bf x}))\tilde{G}({\bf k})\vert{\bf k}\vert^{-4}
(1-\exp(-\frac{\mu^{2}}{2}{\bf k}^{2}\tau))^{2}
\end{array}\end{equation} In general, let
\begin{equation} G(t-t^{\prime},{\bf x}-{\bf x})
=\int d\omega d{\bf k}\tilde{G}(\omega,{\bf
k})\exp(i\omega(t-t^{\prime})+i{\bf k}({\bf x}-{\bf x}^{\prime}))
\end{equation}
then\begin{displaymath}\begin{array}{l} \langle
\tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle-\langle \tilde{T}_{\tau}({\bf
x},{\bf p})\rangle\langle T_{\tau}({\bf x}^{\prime},{\bf
p}^{\prime})\rangle= \delta({\bf p})\delta({\bf
p}^{\prime})\cr\int_{0}^{\tau}dt\int_{0}^{\tau}dt^{\prime}\int
d{\bf k} {\bf g}\tilde{G}(t-t^{\prime},{\bf k}){\bf g}\exp(i{\bf
k}( {\bf x}-{\bf x}^{\prime})-\frac{1}{2}\mu^{2}{\bf k}^{2}
(2\tau-t-t^{\prime}))
\end{array}
\end{displaymath}After the time integration
\begin{equation}\begin{array}{l}
\langle T_{\tau}({\bf x},{\bf z})T_{\tau}({\bf x}^{\prime},{\bf
z}^{\prime})\rangle-\langle T_{\tau}(X)\rangle\langle
T_{\tau}(X^{\prime})\rangle=\int d{\bf k}d\omega\exp(i{\bf k}({\bf
x}-{\bf x}^{\prime})){\bf g}\tilde{G}(\omega,{\bf k}){\bf g}\cr
(\frac{1}{4}\mu^{4}\vert{\bf k}\vert^{4}+\omega^{2})^{-1}\vert
1-\exp(-\frac{1}{2}\mu^{2}{\bf k}^{2}\tau-i\omega\tau)\vert^{2}
\end{array}
\end{equation}
We assume that $G$ is scale invariant
\begin{equation} G(ct,\lambda {\bf x})=c^{-\alpha}
\lambda^{-2\gamma}G(t,{\bf x})
\end{equation}
($\alpha+\gamma<1$ if the time integral in eq.(14) is to be
finite). This assumption has simple consequences for the heat
transport. It may be not exact in mathematical models. As an
example, for the shear flow solution of the Navier-Stokes equation
discussed in the Introduction if $C_{jl}(\omega,{\bf k})$ is the
spectral function of the force distribution ${\bf R}$ then the
spectral function of the stationary velocity distribution
(obtained as a solution of the Navier-Stokes equation with the
initial condition at $t_{0}$ and then letting $t_{0}\rightarrow
-\infty$)
is
\begin{equation}\tilde{G}_{jl}(\omega,{\bf k})
=C_{jl}(\omega,{\bf k})\Big((\frac{\nu}{2}{\bf
k}^{2})^{2}+\omega^{2}\Big)^{-1}
\end{equation}
We must choose a specific $C$ in order to obtain a scale invariant
$\tilde{G}$.
We can see from eqs.(15)-(20) that at finite $\tau$ the
large distance behavior of the temperature correlations is the
same as that of the velocity correlations because the behavior of
$\rho_{\tau}$ for small momenta does not change. However,if
$\langle {\bf v}({\bf x}){\bf v}({\bf 0})\rangle\simeq\vert {\bf
x}\vert^{2\beta}$ then at $\tau=\infty$ for a steady flow we
obtain in eq.(18)\begin{displaymath} \langle (T_{\infty}({\bf
x},{\bf z})-\langle T_{\infty}({\bf x},{\bf
z})\rangle-T_{\infty}({\bf 0},{\bf 0})+\langle T_{\infty}({\bf
0},{\bf 0})\rangle)^{2}\rangle\simeq \vert {\bf x}\vert^{2\beta+4}
\end{displaymath}
and for the Kraichnan model \cite{kraichnan}
\begin{displaymath}
\langle (T_{\infty}({\bf x},{\bf z})-\langle T_{\infty}({\bf
x},{\bf z})\rangle-T_{\infty}({\bf 0},{\bf 0})+\langle
T_{\infty}({\bf 0},{\bf 0})\rangle)^{2}\rangle\simeq \vert {\bf
x}\vert^{2\beta+2}
\end{displaymath}
in eq.(17) . For a general time dependent $G(t,{\bf x}) $ of the
form (19) we shall have the $\vert{\bf x}
\vert^{2\beta-2\alpha+4}$ behavior of the structure functions
$S^{(2)}_{\infty}$ in eq.(20) if $G$ scales as in eq.(21) (
$\gamma=-\beta$). We can establish the behavior for large ${\bf
x}-{\bf x}^{\prime}$ by means of a change of variables in the
integrals (15)-(20) ${\bf k}=\tilde{{\bf k}}\vert{\bf x}-{\bf
x}^{\prime}\vert^{-1}$ and $\omega=\tilde{\omega}\vert{\bf x}-{\bf
x}^{\prime}\vert^{-2}$ and an estimate of the remainder. Note that
the long range correlations of the velocity field ($\gamma<0$)
lead to an increase of the temperature correlations.
\section{ Gaussian white noise source}
In this section we consider $F=f$ as a Gaussian random field
independent of ${\bf v}$. Estimates on the equilibrium
distribution are simplified if the sources at different times are
independent
\begin{equation}
\begin{array}{l}
M(t-t^{\prime},{\bf x}-{\bf x}^{\prime},{\bf z}-{\bf z}^{\prime})
=\delta(t-t^{\prime})m({\bf x}-{\bf x}^{\prime},{\bf z}-{\bf
z}^{\prime})
\end{array}
\end{equation}
We assume the form (23) of $M$ as a technical simplification. It
is a mathematical idealization which is nevertheless justified for
physical sources of heat (such as heat injections acting
independently at each instant of time).
For a lower bound we need an assumption that the dependence on
${\bf x}-{\bf x}^{\prime}$ is of the form of the Laplace transform
(such an assumption includes the scale invariant distributions
$m$ which do not increase at large distances) either in the form
\begin{equation}
\begin{array}{l} m({\bf x}-{\bf x}^{\prime},{\bf z}-{\bf
z}^{\prime})\equiv m_{1}({\bf x}-{\bf x}^{\prime})m_{0}({\bf
z}-{\bf z}^{\prime})\cr =\int d{\bf k}d{\bf p}\exp(i{\bf k}({\bf
x}-{\bf x}^{\prime})+i{\bf p}({\bf z}-{\bf
z}^{\prime}))\tilde{m}_{1}({\bf k})\tilde{m}_{0}({\bf p})\cr
=
\int_{0}^{\infty}da \nu_{1}(a)\exp(-a\vert {\bf x}-{\bf
x}^{\prime}\vert^{2})m_{0}({\bf z}-{\bf z}^{\prime})
\end{array}
\end{equation}
or in the Euclidean invariant way
\begin{equation}
\begin{array}{l} m({\bf x}-{\bf x}^{\prime},{\bf z}-{\bf
z}^{\prime}) = \int_{0}^{\infty}da \nu(a)\exp(-a(\vert {\bf
x}-{\bf x}^{\prime}\vert^{2}+\vert {\bf z}-{\bf
z}^{\prime}\vert^{2})) \cr \equiv \int_{0}^{\infty}da\int d{\bf
p}\exp(i{\bf p}({\bf z}-{\bf z}^{\prime}))\exp(-a\vert {\bf
x}-{\bf x}^{\prime}\vert^{2})\nu(a,{\bf p})
\end{array}
\end{equation}
In eqs.(24)-(25) $\nu_{1}$ and $\nu$ are non-negative functions.
${\bf v}$ in eq.(13) enters $T_{\tau}$ in the form
\begin{displaymath}
\exp(i{\bf v}({\bf J}))
\end{displaymath}
where
\begin{displaymath}
{\bf v}({\bf J})=\int d{\bf u}\int_{0}^{\tau} ds {\bf v}(s,{\bf
u}){\bf J}(s,{\bf u})
\end{displaymath}
with
\begin{displaymath} {\bf J}(s,{\bf u})=-\theta(s){\bf
p}\delta({\bf u}-{\bf x}-\mu{\bf b}(\tau-s))\end{displaymath} It
follows that the expectation values of $n$ products of $T_{\tau}$
are expressed by
\begin{displaymath}
\langle \exp(i{\bf v}({\bf J}_{n}))\rangle=S({\bf J}_{n})
\end{displaymath}
where $S({\bf J})$ is the characteristic function of the random
field ${\bf v}$. For a Gaussian random field
\begin{equation}
S({\bf J})=\exp(-\frac{1}{2}{\bf J}G{\bf J})
\end{equation}
Let us note that because of the translation invariance in the
${\bf z}$ variable of the source $f$ we have a conservation of
momenta
\begin{equation}
\langle \tilde{T}_{\tau}({\bf x}_{1},{\bf
p}_{1}).....\tilde{T}_{\tau}({\bf x}_{n},{\bf p}_{n})\rangle
=\delta({\bf p}_{1}+...+{\bf p}_{n}){\cal H}\end{equation} The
correlation functions (27) are expressed by the characteristic
function (26) with ${\bf J}_{n}$ satisfying the condition (for
$n>1$)
\begin{equation} \int{\bf J}_{n}(s,{\bf u})d{\bf u}=0
\end{equation}
It follows that in the Gaussian case with the covariance (10) the
part of $G$ which is not translation invariant does not
contribute to the correlation functions.
We calculate the equal-time expectation values of $T_{\tau}$
(eq.(13) with the zero initial condition) under the assumption that
the random fields $f$ and ${\bf v}$ are independent
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle= \delta({\bf p}+{\bf
p}^{\prime})\int_{0}^{\tau}dt \exp(-\mu^{2}{\bf p}^{2}(\tau-t))
\cr E[\exp(-i{\bf p}\int_{0}^{\tau-t}{\bf U}(\tau -s,{\bf
x}+\mu{\bf b}(s))ds)\cr \tilde{m}( {\bf x}-{\bf x}^{\prime}+\mu
{\bf b}(\tau-t)-\mu{\bf b}^{\prime}(\tau-t),{\bf p})S({\bf
J}_{2})]
\end{array}
\end{equation}
where \begin{displaymath} {\bf J}_{2}({\bf u})= {\bf p}\theta
(s)\delta({\bf u}-{\bf x}-{\bf b}(\tau-s))-{\bf p}\theta
(s)\delta({\bf u}-{\bf x}^{\prime}-{\bf b}^{\prime}(\tau-s))
\end{displaymath} For the Gaussian field (26)
\begin{equation}
\begin{array}{l}
S({\bf
J}_{2})=\exp\Big(-\frac{1}{2}\int_{0}^{\tau-t}\int_{0}^{\tau-t}dsds^{\prime}
{\bf p} G_{0}(s-s^{\prime},\mu{\bf b}(s)-\mu{\bf
b}(s^{\prime})){\bf p} \cr -
\frac{1}{2}\int_{0}^{\tau-t}\int_{0}^{\tau-t}dsds^{\prime} {\bf p}
G_{0}(s-s^{\prime},\mu{\bf b}^{\prime}(s)-\mu{\bf
b}^{\prime}(s^{\prime})){\bf p} \cr +\int_{0}^{\tau
-t}\int_{0}^{\tau-t}dsds^{\prime} {\bf p} G_{0}(s-s^{\prime},{\bf
x}-{\bf x}^{\prime}+\mu{\bf b}(s)-\mu{\bf
b}^{\prime}(s^{\prime})){\bf p} \Big)
\end{array}
\end{equation}
where $G_{0}$ is the translation invariant part of $G$.
If \begin{equation} \vert \tilde{m}( {\bf x},{\bf p})\vert\leq
K\vert\tilde{m}_{0}\vert({\bf p})
\end{equation}
then from $\vert S({\bf J})\vert\leq 1$ there follows the
bound\begin{equation} \vert\langle \tilde{T}_{\tau}({\bf x},{\bf
p})\tilde{T}_{\tau}({\bf x}^{\prime},{\bf p}^{\prime})\rangle\vert
\leq K\delta({\bf p}+{\bf p}^{\prime})\vert\tilde{m}_{0}\vert({\bf
p})\mu^{-2}{\bf p}^{-2}
(1-\exp(-\mu^{2}{\bf p}^{2}\tau))\end{equation}
For a small ${\bf p}$ and a finite $\tau$ the correlations (32)
are bounded by $\tau\vert\tilde{m}_{0}\vert({\bf p})$ whereas at
$\tau=\infty$ by $\vert\tilde{m}_{0}\vert({\bf p}){\bf p}^{-2}$.
Next, we apply the scale invariance of the Brownian motion
\begin{equation}
{\bf b}(at)=\sqrt{a}{\bf b}(t)
\end{equation}
in eq.(29). We write $s=(\tau-t)\sigma$.
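The scaling relation (33) holds in law; as a quick sanity check
one may compare sample statistics of ${\bf b}(at)$ and
$\sqrt{a}\,{\bf b}(t)$. A minimal Monte Carlo sketch (Python, one
dimension, illustrative values of $a$ and $t$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a, t, n = 4.0, 1.5, 200_000        # illustrative values

# b(at) ~ N(0, a t) and sqrt(a) b(t) ~ N(0, a t)
b_at = rng.normal(0.0, np.sqrt(a * t), size=n)           # b(at)
sb_t = np.sqrt(a) * rng.normal(0.0, np.sqrt(t), size=n)  # sqrt(a) b(t)
print(b_at.var(), sb_t.var(), a * t)   # all close to a*t
\end{verbatim}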
Then, using the scaling properties (21) and (33) and denoting by $G_{0}$
the translation invariant part of $G$ we can rewrite eqs.(29)-(30) in
the form
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle= \delta({\bf p}+{\bf
p}^{\prime})\int_{0}^{\tau}dt \exp(-\mu^{2}{\bf p}^{2}(\tau-t))\cr
E[\exp(-i{\bf p}\int_{0}^{\tau-t}{\bf U}(\tau -s,{\bf x}+\mu{\bf
b}(s))ds)\tilde{m}({\bf x}-{\bf x}^{\prime}+\mu\sqrt{\tau-t}{\bf
b}(1)-\mu\sqrt{\tau-t}{\bf b}^{\prime}(1),{\bf p})\cr
\exp\Big(-\frac{1}{2}(\tau-t)^{2-\alpha-\gamma}
\int_{0}^{1}\int_{0}^{1}d\sigma d\sigma^{\prime} {\bf p}
G_{0}(\sigma-\sigma^{\prime},\mu{\bf b}(\sigma)-\mu{\bf
b}(\sigma^{\prime})){\bf p} \cr
-\frac{1}{2}(\tau-t)^{2-\alpha-\gamma}\int_{0}^{1}\int_{0}^{1}d\sigma
d\sigma^{\prime} {\bf p} G_{0}(\sigma-\sigma^{\prime},\mu{\bf
b}^{\prime}(\sigma)-\mu{\bf b}^{\prime}(\sigma^{\prime})){\bf p}
\cr +(\tau-t)^{2-\alpha-\gamma} \int_{0}^{1}\int_{0}^{1}d\sigma
d\sigma^{\prime} {\bf p}
G_{0}(\sigma-\sigma^{\prime},(\tau-t)^{-\frac{1}{2}}({\bf x}-{\bf
x}^{\prime})+\mu{\bf b}(\sigma)-\mu{\bf
b}^{\prime}(\sigma^{\prime})){\bf p}\Big)]
\end{array}
\end{equation}
For the Kraichnan model \cite{kraichnan},
$\Gamma(s-s^{\prime})=\delta(s-s^{\prime})$ in eqs.(9)-(10); then
$\alpha=1$ in eq.(21) and the formula (34) reads
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle= \delta({\bf p}+{\bf
p}^{\prime})\int_{0}^{\tau}dt \exp(-\mu^{2}{\bf p}^{2}(\tau-t))\cr
E[\exp(-i{\bf p}\int_{0}^{\tau-t}{\bf U}(\tau -s,{\bf x}+\mu{\bf
b}(s))ds)\tilde{m}({\bf x}-{\bf x}^{\prime}+\mu\sqrt{\tau-t}{\bf
b}(1)-\mu\sqrt{\tau-t}{\bf b}^{\prime}(1),{\bf p})\cr
\exp\Big(-(\tau-t)^{1-\gamma} {\bf p} D({\bf 0}){\bf p}
+(\tau-t)^{1-\gamma}\int_{0}^{1}d\sigma {\bf p}
D((\tau-t)^{-\frac{1}{2}}({\bf x}-{\bf x}^{\prime})+\mu{\bf
b}(\sigma)-\mu{\bf b}^{\prime}(\sigma)){\bf p}\Big)]
\end{array}
\end{equation}
\section{Jensen inequalities for the temperature correlations}
We are going to estimate
the spectral measure (6)-(7) by an application of the Jensen
inequality. We can obtain an upper bound on the correlation
functions applying the Jensen inequality ($\int d\nu\exp f\geq
\exp(\int d\nu f)$ if $\int d\nu=1$) \cite{jensen} to the time
integral in eqs.(29)-(30)
\begin{equation}
\begin{array}{l}
\vert\langle \tilde{T}_{\tau}({\bf x},{\bf
p})\tilde{T}_{\tau}({\bf x}^{\prime},{\bf p}^{\prime})\rangle
\vert\leq \cr 2\delta({\bf p}+{\bf
p}^{\prime})\int_{0}^{\tau}dr\int_{0}^{1}d\sigma
\int_{0}^{\sigma}d\sigma^{\prime}\exp(-\mu^{2}{\bf p}^{2}r) \cr
E[\vert\tilde{m}( {\bf x}-{\bf x}^{\prime}+\mu\sqrt{r}{\bf
b}(1)-\mu\sqrt{r}{\bf b}^{\prime}(1),{\bf p})\vert
\cr\exp\Big(-\frac{1}{2}r^{2-\alpha-\gamma} {\bf p}
G_{0}(\sigma-\sigma^{\prime},\mu{\bf b}(\sigma)-\mu{\bf
b}(\sigma^{\prime})){\bf p} \cr -\frac{1}{2}r^{2-\alpha-\gamma}
{\bf p} G_{0}(\sigma-\sigma^{\prime},\mu{\bf
b}^{\prime}(\sigma)-\mu{\bf b}^{\prime}(\sigma^{\prime})){\bf p}
\cr +r^{2-\alpha-\gamma} {\bf p}
G_{0}(\sigma-\sigma^{\prime},r^{-\frac{1}{2}}({\bf x}-{\bf
x}^{\prime})+\mu{\bf b}(\sigma)-\mu{\bf
b}^{\prime}(\sigma^{\prime})){\bf p}\Big)]
\end{array}
\end{equation}
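The elementary inequality invoked above is easily checked
numerically; a minimal sketch (Python, with $\nu$ the uniform
measure on $[0,1]$ and an arbitrary illustrative $f$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

f = lambda s: np.sin(3*s) - s**2    # arbitrary illustrative f

# nu = uniform probability measure on [0,1]
lhs, _ = quad(lambda s: np.exp(f(s)), 0, 1)  # int dnu exp(f)
mean_f, _ = quad(f, 0, 1)                    # int dnu f
print(lhs, np.exp(mean_f), lhs >= np.exp(mean_f))  # True
\end{verbatim}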
Let $p(s,{\bf u};t,{\bf w})$ be the transition function for the
Brownian motion to pass from ${\bf u}$ at time $s$ to ${\bf w}$ at
time $t$. Then, the expectation value (36) reads
\begin{equation}
\begin{array}{l}
\vert\langle \tilde{T}_{\tau}({\bf x},{\bf
p})\tilde{T}_{\tau}({\bf x}^{\prime},{\bf
p}^{\prime})\rangle\vert\leq \cr 2\delta({\bf p}+{\bf
p}^{\prime})\int d{\bf u}d{\bf u}^{\prime}d{\bf w}d{\bf
w}^{\prime}d{\bf z}d{\bf z}^{\prime}\int_{0}^{\tau}dr\int_{0}^{1}d\sigma
\int_{0}^{\sigma}d\sigma^{\prime}\exp(-\mu^{2}{\bf p}^{2}r) \cr
p(0,{\bf 0};\sigma^{\prime},{\bf u}) p(\sigma^{\prime},{\bf
u};\sigma,{\bf w}) p(\sigma,{\bf w};1,{\bf z}) p(0,{\bf
0};\sigma^{\prime},{\bf u}^{\prime}) p(\sigma^{\prime},{\bf
u}^{\prime};\sigma,{\bf w}^{\prime}) p(\sigma,{\bf
w}^{\prime};1,{\bf z}^{\prime}) \cr \vert\tilde{m}( {\bf x}-{\bf
x}^{\prime}+\mu\sqrt{r}{\bf z}-\mu\sqrt{r}{\bf z}^{\prime},{\bf
p})\vert \cr\exp\Big(-\frac{1}{2}r^{2-\alpha-\gamma} {\bf p}
G_{0}(\sigma-\sigma^{\prime},\mu{\bf w}-\mu{\bf u}){\bf p}
-\frac{1}{2}r^{2-\alpha-\gamma} {\bf p}
G_{0}(\sigma-\sigma^{\prime},\mu{\bf w}^{\prime}-\mu{\bf
u}^{\prime}){\bf p} \cr +\frac{1}{2}r^{2-\alpha-\gamma} {\bf p}
G_{0}(\sigma-\sigma^{\prime},r^{-\frac{1}{2}}({\bf x}-{\bf
x}^{\prime})+\mu{\bf w}-\mu{\bf u}^{\prime}){\bf p} \cr
+\frac{1}{2}r^{2-\alpha-\gamma} {\bf p}
G_{0}(\sigma-\sigma^{\prime},r^{-\frac{1}{2}}({\bf x}-{\bf
x}^{\prime})+\mu{\bf w}^{\prime}-\mu{\bf u}){\bf p}\Big)
\end{array}
\end{equation}
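The transition function $p(s,{\bf u};t,{\bf w})$ appearing in
eq.(37) is the Gaussian heat kernel; a minimal sketch of its
evaluation together with a Chapman--Kolmogorov consistency check
(Python, one dimension, unit diffusion, illustrative arguments):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def p(s, u, t, w):
    """Gaussian transition density of standard Brownian motion."""
    dt = t - s
    return np.exp(-(w - u)**2 / (2*dt)) / np.sqrt(2*np.pi*dt)

# Chapman-Kolmogorov: int dv p(0,u;s,v) p(s,v;t,w) = p(0,u;t,w)
u, w, s, t = 0.3, -0.8, 0.4, 1.0
conv, _ = quad(lambda v: p(0, u, s, v) * p(s, v, t, w),
               -np.inf, np.inf)
print(conv, p(0, u, t, w))   # the two values agree
\end{verbatim}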
Till now we have kept the mean velocity ${\bf U}$ as an arbitrary
non-zero function. We can obtain a lower bound only if
\begin{displaymath} {\bf U}=0
\end{displaymath}
As claimed by some authors (see, e.g., the standard textbook
\cite{landau}), the mean velocity does not play any essential role
in turbulence, so by setting it equal to zero we do not lose much.
Moreover, for the lower bound we must assume $m$ of the form (24)
(or (25)) with $\tilde{m}_{0}({\bf p})\geq 0$. Then, we can apply
the Jensen inequality to the expectation value over the Brownian
motion
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle\geq \cr \delta({\bf p}+{\bf
p}^{\prime})\int_{0}^{\tau}dr \exp(-\mu^{2}{\bf
p}^{2}r)\int_{0}^{\infty} da \nu_{1}(a)\cr\tilde{m}_{0}({\bf p})
\exp E\Big[-\frac{1}{2}r^{2-\alpha-\gamma}
\int_{0}^{1}\int_{0}^{1}d\sigma d\sigma^{\prime} {\bf p}
G_{0}(\sigma-\sigma^{\prime},\mu{\bf b}(\sigma)-\mu{\bf
b}(\sigma^{\prime})){\bf p} \cr
-\frac{1}{2}r^{2-\alpha-\gamma}\int_{0}^{1}\int_{0}^{1}d\sigma
d\sigma^{\prime} {\bf p} G_{0}(\sigma-\sigma^{\prime},\mu{\bf
b}^{\prime}(\sigma)-\mu{\bf b}^{\prime}(\sigma^{\prime})){\bf p}
\cr +r^{2-\alpha-\gamma} \int_{0}^{1}\int_{0}^{1}d\sigma
d\sigma^{\prime} {\bf p}
G_{0}(\sigma-\sigma^{\prime},r^{-\frac{1}{2}}({\bf x}-{\bf
x}^{\prime})+\mu{\bf b}(\sigma)-\mu{\bf
b}^{\prime}(\sigma^{\prime})){\bf p}\cr-a\vert {\bf x}-{\bf
x}^{\prime}+\mu\sqrt{r}{\bf b}(1)-\mu\sqrt{r}{\bf
b}^{\prime}(1)\vert^{2}\Big]
\end{array}
\end{equation}
For $m$ of the form (25) the inequality (38) reads
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle\geq \cr \delta({\bf p}+{\bf
p}^{\prime})\int_{0}^{\tau}dr \exp(-\mu^{2}{\bf
p}^{2}r)\int_{0}^{\infty} da \nu(a,{\bf p})\cr \exp
E\Big[-\frac{1}{2}r^{2-\alpha-\gamma}
\int_{0}^{1}\int_{0}^{1}d\sigma d\sigma^{\prime} {\bf p}
G_{0}(\sigma-\sigma^{\prime},\mu{\bf b}(\sigma)-\mu{\bf
b}(\sigma^{\prime})){\bf p} \cr
-\frac{1}{2}r^{2-\alpha-\gamma}\int_{0}^{1}\int_{0}^{1}d\sigma
d\sigma^{\prime} {\bf p} G_{0}(\sigma-\sigma^{\prime},\mu{\bf
b}^{\prime}(\sigma)-\mu{\bf b}^{\prime}(\sigma^{\prime})){\bf p}
\cr +r^{2-\alpha-\gamma} \int_{0}^{1}\int_{0}^{1}d\sigma
d\sigma^{\prime} {\bf p}
G_{0}(\sigma-\sigma^{\prime},r^{-\frac{1}{2}}({\bf x}-{\bf
x}^{\prime})+\mu{\bf b}(\sigma)-\mu{\bf
b}^{\prime}(\sigma^{\prime})){\bf p}\cr-a\vert {\bf x}-{\bf
x}^{\prime}+\mu\sqrt{r}{\bf b}(1)-\mu\sqrt{r}{\bf
b}^{\prime}(1)\vert^{2}\Big]
\end{array}
\end{equation}
The correlation functions (36)-(39) in general will essentially
depend on the source distribution $m$. We consider $m$ such
that: i) $m_{1}$ is bounded from above by a constant (eq.(31))
and, in addition, ii) $m_{1}({\bf x})$ decreases like the power
$2\Omega$ of $\vert {\bf x}\vert$.
From eq.(32) it
follows that under the assumption (31) the limit $\tau\rightarrow
\infty$ exists. We wish to estimate the correlation functions at
$\tau=\infty$ under various conditions on $m_{1}({\bf x})$ . Using
the inequality (for $A\geq 0$)
\begin{displaymath}
2\exp(-\mu^{2}{\bf p}^{2}r-A({\bf x}-{\bf x}^{\prime},{\bf b})r^{2-\alpha-\gamma}{\bf p}^{2})
\leq \exp(-\mu^{2}{\bf p}^{2}r)+
\exp(-A({\bf x}-{\bf x}^{\prime},{\bf b}) r^{2-\alpha-\gamma}{\bf p}^{2})
\end{displaymath}
and a change of variables in the $r$-integral in eqs.(36)-(37)
$r=t\vert{\bf p}\vert^{-\frac{2}{2-\alpha-\gamma}}$ we
obtain (when $m_{1}$ is a bounded function, eq.(31))
\begin{equation}
\begin{array}{l}\langle \tilde{T}_{\infty}({\bf x},{\bf
p})\tilde{T}_{\infty}({\bf x},{\bf p}^{\prime})\rangle\cr\leq
\delta({\bf p}+{\bf p}^{\prime}) \vert\tilde{m}_{0}\vert({\bf
p})\Big(c_{1}\theta(\vert {\bf p}\vert -\frac{1}{\mu})\vert {\bf
p}\vert^{-2}+c_{2}\theta(\frac{1}{\mu}-\vert {\bf p}\vert )\vert
{\bf p}\vert^{-\frac{2}{2-\alpha-\gamma}}\Big)
\end{array}
\end{equation}
where on the rhs of eq.(36) after an integral over $r$ (which can
be performed by a change of variables) we obtain a function
$\vert A({\bf x}-{\bf x}^{\prime},{\bf
b})\vert^{-\frac{1}{2-\alpha-\gamma}}$ (where ${\bf b}$ depends on
$\sigma$ and $\sigma^{\prime}$) whose expectation value is
expressed by the rhs of eq.(37). This is an integrable function of
${\bf u},{\bf w},{\bf z},{\bf u}^{\prime},{\bf w}^{\prime}$ and
${\bf z}^{\prime}$. Hence, it can be bounded by a constant
$c_{2}$. Under a stronger assumption that
\begin{equation}
\int d{\bf x}m_{1}({\bf x})<\infty
\end{equation}
from eq.(37) we obtain in a similar way the bound
\begin{equation}
\begin{array}{l} \int d{\bf
x}\langle \tilde{T}_{\infty}({\bf x},{\bf
p})\tilde{T}_{\infty}({\bf x}^{\prime},{\bf
p}^{\prime})\rangle\cr\leq \delta({\bf p}+{\bf p}^{\prime})
\vert\tilde{m}_{0}\vert({\bf p})\Big(c_{3}\theta(\vert {\bf
p}\vert -\frac{1}{\mu})\vert {\bf
p}\vert^{-2}+c_{4}\theta(\frac{1}{\mu}-\vert {\bf p}\vert )\vert
{\bf p}\vert^{-\frac{2}{2-\alpha-\gamma}}\Big)
\end{array}
\end{equation}
This is a bound on the spectral measure on the rhs of eq.(7).
We wish to estimate the dependence of the correlation functions
(34) on ${\bf x}-{\bf x}^{\prime}$ in a more explicit form. Note
that if the velocity correlations are defined by eq.(9) where
$\tilde{D}({\bf k})$ is an integrable function then on the basis
of the Lebesgue lemma $G$ is vanishing at large $\vert {\bf
x}-{\bf x}^{\prime}\vert$. In such a case the term depending on
${\bf x}-{\bf x}^{\prime}$ in the exponential on the rhs of
eq.(34) can be neglected. If $m$ is in addition a slowly varying
function of ${\bf x}-{\bf x}^{\prime}$ then
\begin{equation} \langle \tilde{T}_{\tau}({\bf x},{\bf
p})\tilde{T}_{\tau}({\bf x}^{\prime},{\bf
p}^{\prime})\rangle\simeq\langle \tilde{T}_{\tau}({\bf x},{\bf
p})\tilde{T}_{\tau}({\bf x},{\bf p}^{\prime})\rangle
\end{equation}
There remains to discuss the turbulent flow (10). We are unable to
prove precise upper bounds for large $\vert {\bf x}-{\bf
x}^{\prime}\vert$ and general $\beta$. However, if $0<2\beta<1$
and $d=1$ then $g({\bf x})=-\vert {\bf x}\vert^{2\beta}$ is a
convex function \cite{jensen}
\begin{displaymath}
g(\frac{1}{2}({\bf x}+{\bf y}))\leq \frac{1}{2}g({\bf x})
+\frac{1}{2}g({\bf y})
\end{displaymath}
As a consequence
\begin{displaymath}
\begin{array}{l}
\exp\Big(-r^{2-\alpha+\beta} {\bf p}^{2}
\Gamma(\sigma-\sigma^{\prime})\vert r^{-\frac{1}{2}}({\bf x}-{\bf
x}^{\prime})+\mu{\bf b}(\sigma)-\mu{\bf
b}^{\prime}(\sigma^{\prime})\vert^{2\beta}\Big)\cr \leq
\exp\Big(-\frac{1}{2}r^{2-\alpha+\beta} {\bf p}^{2}
\Gamma(\sigma-\sigma^{\prime})\vert 2 r^{-\frac{1}{2}}({\bf
x}-{\bf x}^{\prime})\vert^{2\beta}-\frac{1}{2}\vert 2\mu{\bf
b}(\sigma)-2\mu{\bf
b}^{\prime}(\sigma^{\prime})\vert^{2\beta}\Big)
\end{array}
\end{displaymath}
Hence, under the assumption (31) (after the $r$-integration) the
inequalities (36)-(37) at $\tau=\infty$ for $0<2\beta<1$ read
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\infty}({\bf x},{\bf p})\tilde{T}_{\infty}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle\cr
\leq K \vert {\bf p}\vert^{-\frac{2}{2-\alpha}}\vert\tilde{m}_{0}\vert({\bf p})
\vert {\bf x}-{\bf x}^{\prime}\vert^{-\frac{2\beta}{2-\alpha}}
\end{array}
\end{equation}
We expect the inequality (44) to hold true in general
(under the assumption (31)) for large
$\vert {\bf x}-{\bf x}^{\prime}\vert$ because we obtain
such a behavior of the two-point function if in a formal way
we take the limit $\vert {\bf x}-{\bf x}^{\prime}\vert\rightarrow \infty$
in eq.(34) neglecting terms of order
$\vert {\bf x}-{\bf x}^{\prime}\vert^{-1}$.
We discuss now the Jensen inequality (38) for the lower bound.
It is sufficient to calculate the expectation value in
the exponential (38).
First, in
the Kraichnan model (35) for the term $-W$ in the exponential
appearing in eq.(38) we obtain
\begin{equation}
\begin{array}{l}
\exp(-W({\bf x}-{\bf x}^{\prime}))=\exp\Big(-r^{1-\gamma}\int
d{\bf k}{\bf p}\tilde{D}\left({\bf k}\right){\bf p}
\cr\left(1-\mu^{-2}{\bf k}^{-2}\exp\left(i{\bf
k}r^{-\frac{1}{2}}\left({\bf x}- {\bf x}^{\prime}\right)\right)
\left(1-\exp\left(-\mu^{2}{\bf k }^{2}\right)\right)\right)\Big)
\end{array}
\end{equation}
It is easy to see that
\begin{equation}
\begin{array}{l}
\exp(-W({\bf 0}))=\exp\Big(-r^{1-\gamma}\int d{\bf k}{\bf
p}\tilde{D}\left({\bf k}\right){\bf p} \cr\left(1-\mu^{-2}{\bf
k}^{-2} \left(1-\exp\left(-\mu^{2}{\bf k
}^{2}\right)\right)\right)\Big)\geq \exp(-cr^{1-\gamma}{\bf
p}^{2})
\end{array}
\end{equation}
under the assumptions that $ {\bf p}\tilde{D}{\bf p}\geq
\vert\tilde{D}\vert {\bf p}^{2}$, $\int d{\bf
k}\vert\tilde{D}\vert({\bf k})\theta (\vert {\bf k}\vert
-\frac{1}{\mu})<\infty$ and
\begin{displaymath}
\int d{\bf k}\vert\tilde{D}\vert({\bf k}){\bf k}^{2}\theta
(\frac{1}{\mu}-\vert {\bf k}\vert)<\infty.
\end{displaymath}
In such a case we can take the limit $\tau\rightarrow \infty$. In
this limit
\begin{equation}
\begin{array}{l}\langle \tilde{T}_{\infty}({\bf x},{\bf
p})\tilde{T}_{\infty}({\bf x},{\bf p}^{\prime})\rangle\geq
\delta({\bf p}+{\bf p}^{\prime})\tilde{m}_{0}({\bf
p})\int_{0}^{\infty}dr \int da \nu_{1}(a)\cr \exp(-\mu^{2}{\bf
p}^{2}r-c {\bf p}^{2}r^{1-\gamma}-2a\mu^{2} r)
\cr=\tilde{m}_{0}({\bf p})\delta({\bf p}+{\bf
p}^{\prime})\int_{0}^{\infty}drm_{1}( \mu\sqrt{2
r})\exp(-\mu^{2}{\bf p}^{2}r-c {\bf p}^{2}r^{1-\gamma})
\end{array}
\end{equation}
The behavior of the integral (47) depends on the behavior of the
source correlations $m_{1}$ as a function of $\vert {\bf x}-{\bf
x}^{\prime}\vert$. If
\begin{equation} m_{1}(\mu\sqrt{2 r})\geq K
\end{equation}
then
\begin{equation}\langle \tilde{T}_{\infty}({\bf x},{\bf
p})\tilde{T}_{\infty}({\bf x},{\bf p}^{\prime})\rangle\geq
\delta({\bf p}+{\bf p}^{\prime})\tilde{m}_{0}({\bf p})
\Big(c_{5}\theta(\vert {\bf p}\vert -\frac{1}{\mu})\vert{\bf
p}\vert^{-2}+c_{6}\theta(\frac{1}{\mu}-\vert {\bf p}\vert )\vert
{\bf p}\vert^{-\frac{2}{1-\gamma}}\Big)
\end{equation}
This lower bound coincides with the upper bound (40) (where
$\alpha=1$). If $m_{1}$ satisfies a stronger condition
($\Omega<1$)\begin{equation} m_{1}(\vert {\bf x}\vert)\geq K \vert
{\bf x}\vert^{ -2\Omega}
\end{equation}
($\Omega\geq 0$ if it is to be of the form (24), i.e.,
$\nu_{1}(a)\geq Ka^{\Omega-1}$) then
\begin{equation}\langle \tilde{T}_{\infty}({\bf x},{\bf
p})\tilde{T}_{\infty}({\bf x},{\bf p}^{\prime})\rangle\geq
\delta({\bf p}+{\bf p}^{\prime})\tilde{m}_{0}({\bf p})
\Big(c_{7}\theta(\vert {\bf p}\vert -\frac{1}{\mu})\vert{\bf
p}\vert^{-2+2\Omega}+c_{8}\theta(\frac{1}{\mu}-\vert {\bf p}\vert
)\vert {\bf p}\vert^{-\frac{2-2\Omega}{1-\gamma}}\Big)
\end{equation}
The inequality (51) results from the following estimate (for
$\alpha+\gamma<1$)
\begin{equation}
\begin{array}{l}
\int_{0}^{\infty}dr r^{-\Omega}\exp(-\mu^{2}{\bf p}^{2}r-c {\bf
p}^{2}r^{2-\alpha-\gamma})\cr=\int_{0}^{1}drr^{-\Omega}
\exp(-\mu^{2}{\bf p}^{2}r-c{\bf p}^{2}r^{2-\alpha-\gamma})
+\int_{1}^{\infty}drr^{-\Omega}\exp(-\mu^{2}{\bf p}^{2}r-c{\bf
p}^{2}r^{2-\alpha-\gamma}) \cr\geq \int_{0}^{1}drr^{-\Omega}
\exp(-(\mu^{2}{\bf p}^{2}+c{\bf p}^{2})r)
+\int_{1}^{\infty}drr^{-\Omega}\exp(-(\mu^{2}{\bf p}^{2}+c{\bf
p}^{2})r^{2-\alpha-\gamma}) \cr = \vert{\bf
p}\vert^{-2+2\Omega}\int_{0}^{{\bf
p}^{2}}t^{-\Omega}\exp(-(\mu^{2}+c)t)dt + \vert{\bf
p}\vert^{-\frac{2-2\Omega}{2-\alpha-\gamma}}\int_{a({\bf
p})}^{\infty}t^{-\Omega}\exp(-(\mu^{2}+c)t^{2-\alpha-\gamma})dt
\end{array}
\end{equation}
where $a({\bf p})=\vert{\bf p}\vert^{\frac{2}{2-\alpha-\gamma}}$
and $\alpha=1$ in application to eq.(47).
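The two regimes in eq.(52) can be confirmed numerically by
inspecting the local log-log slope of the integral as a function
of $\vert{\bf p}\vert$; a minimal sketch (Python, illustrative
parameter values):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

mu, c, Om, alpha, gamma = 1.0, 1.0, 0.3, 1.0, -0.4  # illustrative
e = 2 - alpha - gamma

def I(p):
    f = lambda r: r**(-Om) * np.exp(-mu**2*p**2*r - c*p**2*r**e)
    val, _ = quad(f, 0, np.inf, limit=200)
    return val

# local slopes of log I vs log p in the two regimes
for p in (1e-3, 2e-3, 1e2, 2e2):
    print(p, np.log(I(2*p) / I(p)) / np.log(2))
# -> -(2-2*Om)/e for small p and -(2-2*Om) for large p
\end{verbatim}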
Next, we wish to estimate the behavior of the
temperature correlations at large ${\bf x}-{\bf x}^{\prime}$ in
the turbulent case (10) when $\gamma=-\beta<0$ (if $\gamma>0$ and
$m_{1}$ is a bounded function then the temperature correlations
are bounded from below and from above as functions of ${\bf
x}-{\bf x}^{\prime}$, eq.(43)). First, we consider the Kraichnan
model (35) ($\Gamma(s-s^{\prime})=\delta(s-s^{\prime})$ in
eq.(10)) with the mean velocity ${\bf U}=0$ and
\begin{equation}
\tilde{D}({\bf k})\simeq \vert{\bf k}\vert^{-d+2\gamma}
\end{equation}
The integral in eq.(45) is convergent for large ${\bf k}$ if
$\gamma<0$ and for small ${\bf k}$ if $-\gamma<1$. We consider the
model (10) with $0<\beta=-\gamma<1$. Let us change the integration
variable in eq.(45)
\begin{equation}
{\bf k}=\vert{\bf x}-{\bf x}^{\prime}\vert^{-1}\sqrt{r}{\bf q}
\end{equation}
Then, after an estimate of the remainder
\begin{equation} \exp(-W({\bf x}-{\bf
x}^{\prime}))\geq\exp(-cr{\bf p}^{2} \vert{\bf x}-{\bf
x}^{\prime}\vert^{2\beta})
\end{equation}
As a consequence \begin{displaymath}\begin{array}{l} \langle
\tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle\cr \geq \delta({\bf p}+{\bf
p}^{\prime})\tilde{m}_{0}({\bf p}) (\mu^{2}{\bf p}^{2} +c{\bf
p}^{2} \vert{\bf x}-{\bf x}^{\prime}\vert^{2\beta})^{-1}
\Big(1-\exp(-\tau (\mu^{2}{\bf p}^{2} +c{\bf p}^{2} \vert{\bf
x}-{\bf x}^{\prime}\vert^{2\beta}))\Big)
\end{array}\end{displaymath}
Hence, for large $\vert{\bf x}-{\bf x}^{\prime}\vert$ we obtain
\begin{equation} \langle \tilde{T}_{\infty}({\bf
x},{\bf p})\tilde{T}_{\infty}({\bf x}^{\prime},{\bf
p}^{\prime})\rangle\geq \delta({\bf p}+{\bf
p}^{\prime})\tilde{m}_{0}({\bf p}) c^{-1}{\bf p}^{-2} \vert{\bf
x}-{\bf x}^{\prime}\vert^{-2\beta}
\end{equation}
This lower bound for the Kraichnan model is the same as the upper
bound (44) (here $\alpha=1$).
Let
us calculate the expectation value in the exponential of eq.(38)
(denoted by $-W$) for the general $G$ of eq.(19)
\begin{equation}
\begin{array}{l}
W({\bf x}-{\bf x}^{\prime})=r^{2-\alpha-\gamma}\int d\omega d{\bf
k}{\bf p}\tilde{G}\left(\omega,{\bf k}\right){\bf p} \cr
\Big(2\left(\frac{1}{2}\mu^{2}{\bf
k}^{2}-i\omega\right)^{-1}\left(1-\left(\frac{1}{2}\mu^{2}{\bf
k}^{2}-i\omega\right)^{-1}\left(1-\exp\left(-\frac{1}{2}\mu^{2}{\bf
k}^{2}+i\omega\right)\right)\right)\cr
-\left(\frac{1}{4}\mu^{4}\vert{\bf
k}\vert^{4}+\omega^{2}\right)^{-1}\exp\left(i{\bf
k}r^{-\frac{1}{2}}\left({\bf x}- {\bf x}^{\prime}\right)\right)
\vert 1 -\exp\left(-\frac{1}{2}\mu^{2}{\bf
k}^{2}+i\omega\right)\vert^{2}\Big)
\end{array}
\end{equation}
We estimate this integral at ${\bf x}={\bf x}^{\prime}$ first.
Similarly as in eq.(46) the scale invariance (21) leads to
\begin{equation} W({\bf 0})\geq cr^{2-\alpha-\gamma}{\bf p}^{2}
\end{equation} if
\begin{displaymath}
\int d{\bf k}d\omega \tilde{G}\left(\omega,{\bf k}\right)
\left(\frac{1}{2}\mu^{2}{\bf k}^{2}-i\omega\right)^{-1}\theta
(\vert{\bf k}\vert -\frac{1}{\mu})<\infty
\end{displaymath} and
\begin{displaymath}
\int\int d{\bf k}d\omega \tilde{G}\left(\omega,{\bf k}\right)
\left(\frac{1}{4}\mu^{4}\vert{\bf
k}\vert^{4}+\omega^{2}\right)^{\frac{1}{2}}\theta(\frac{1}{\mu}-\vert
{\bf k}\vert)<\infty \end{displaymath} Hence, under the assumption
(50) on the basis of the inequalities (52) and (58) we have the
lower bound (generalizing that of eq.(51) to $\alpha\neq
1$)\begin{equation}\langle \tilde{T}_{\infty}({\bf x},{\bf
p})\tilde{T}_{\infty}({\bf x},{\bf p}^{\prime})\rangle\geq
\delta({\bf p}+{\bf p}^{\prime})\tilde{m}_{0}({\bf p})
\Big(c\theta(\vert {\bf p}\vert -\frac{1}{\mu})\vert{\bf
p}\vert^{-2+2\Omega}+c^{\prime}\theta(\frac{1}{\mu}-\vert {\bf
p}\vert )\vert {\bf
p}\vert^{-\frac{2-2\Omega}{2-\alpha-\gamma}}\Big)
\end{equation}
(at $\Omega=0$ this lower bound coincides with the upper bound
(40)).
Next, if $\vert{\bf x}-{\bf
x}^{\prime}\vert$ is large then for $0<-\gamma=\beta<1$ we obtain
from eqs.(21) and (57) the lower bound
\begin{displaymath}
\exp(-W({\bf x}-{\bf x}^{\prime}))\geq\exp(-c{\bf p}^{2}r^{2-
\alpha }\vert{\bf x}-{\bf x}^{\prime}\vert^{2\beta})
\end{displaymath}
where the form of the rhs comes from a change of variables ${\bf
k}={\bf k}^{\prime}\vert{\bf x}-{\bf x}^{\prime}\vert^{-1}$ and
$\omega=\omega^{\prime} \vert {\bf x}-{\bf x}^{\prime}\vert^{-2}$
and an estimate of the remainder in eq.(57).
If we restrict ourselves to $G$ of the form (10) and $2\beta\geq
1$ then we can derive a more precise lower bound for $\exp(-W)$
with an application of the H\"older inequality
\begin{displaymath}
\vert {\bf x}+{\bf y}\vert^{2\beta}\leq 2^{2\beta-1} (\vert {\bf
x}\vert^{2\beta}+\vert {\bf y}\vert^{2\beta})
\end{displaymath}
From eq.(38) and the H\"older inequality we obtain after an
elementary calculation of the expectation value over the Brownian
paths
\begin{equation}
\begin{array}{l}
\exp(-W({\bf x}-{\bf x}^{\prime})) \geq\exp \Big( -C
r^{2-\alpha-\gamma} {\bf p}^{2} -cr^{2-\alpha}\vert {\bf x}-{\bf
x}^{\prime}\vert^{2\beta}{\bf p}^{2}\Big)
\end{array}
\end{equation}
Hence, after a calculation of the expectation value in the
exponential in eq.(38) the remaining $r$ and the $a$ integrals
(from the representation (24)) in the correlation function (38)
read
\begin{displaymath}\begin{array}{l} \int_{0}^{\infty}dr\int da
\nu_{1}(a)\exp(-W -a\vert {\bf x}-{\bf
x}^{\prime}\vert^{2}-2\mu^{2}ra) \cr\geq
\frac{1}{2}\int_{0}^{\infty}dr\int da \nu_{1}(a)\exp(-W -a\vert
{\bf x}-{\bf x}^{\prime}\vert^{2})+
\frac{1}{2}\int_{0}^{\infty}dr\int da \nu_{1}(a)\exp(-W
-2\mu^{2}ra)
\end{array}\end{displaymath} where $\exp(-W) $ is lower bounded by
eq.(60). An easy estimate of this integral leads to the following
inequality for large ${\bf x}-{\bf x}^{\prime}$\begin{equation}
\begin{array}{l}\langle \tilde{T}_{\infty}({\bf x},{\bf
p})\tilde{T}_{\infty}({\bf x}^{\prime},{\bf
p}^{\prime})\rangle\cr\geq\delta({\bf p}+{\bf
p}^{\prime})\tilde{m}_{0}({\bf p}) (K_{1}\vert{\bf
p}\vert^{-\frac{2-2\Omega}{2-\alpha}} \vert{\bf x}-{\bf
x}^{\prime}\vert^{-2\sigma}+K_{2}\vert{\bf
p}\vert^{-\frac{2}{2-\alpha}} \vert{\bf x}-{\bf
x}^{\prime}\vert^{-\frac{2\beta}{2-\alpha}-2\Omega})
\end{array}\end{equation} where
\begin{equation}
\sigma=\frac{\beta(1-\Omega)}{2-\alpha}
\end{equation}
For $0<2\beta<1$ this lower bound coincides with the upper bound
(44) (derived for $\Omega=0$). We expect that eq.(61) gives the
asymptotic behavior of the two-point correlation function for any
$0<2\beta<2$ because such a behavior is a consequence of a formal
exchange of the limit $\vert {\bf x}-{\bf
x}^{\prime}\vert\rightarrow \infty$ with the integral over $t$ and
the expectation value over the Brownian motion in eq.(34).
The lower bound (59) for small ${\bf p}$ is obtained by neglecting
the $\vert {\bf x}-{\bf x}^{\prime}\vert$-dependent term on the
rhs of eq.(60). We can see from eq.(59) that if
$\tilde{m}_{0}({\bf p})\simeq \vert {\bf p}\vert^{-\nu}$ and
$\tilde{m}_{1}({\bf k})\simeq \vert{\bf k}\vert^{-d +2\Omega}$ then the
$\langle \tilde{T}\tilde{T}\rangle$ correlations behave as $\vert
{\bf p}\vert^{-2-\nu+2\Omega}$ for large momenta (short distances
in the ${\bf z}$ direction), whereas the low momentum behavior
(large distance) is $\vert {\bf
p}\vert^{-\nu-\frac{2-2\Omega}{2-\alpha-\gamma}}$. These estimates
show the effect of the random flow on the temperature correlations
in the ${\bf z}$ direction. The effect on the temperature
correlations in the ${\bf x}$ direction is described by the lower
bound (61) and the upper bound (44). Again the decay of
temperature correlations is determined by scaling indices of the
velocity and source correlations.
\section{Higher order correlation functions}
Let us consider the multi-point correlation functions
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x}_{1},{\bf
p}_{1})......\tilde{T}_{\tau}({\bf x}_{2n},{\bf
p}_{2n})\rangle=\sum_{pairs} \int_{0}^{\tau}dt_{1}....dt_{2n}\cr
\prod_{(j,k)}\delta({\bf p}_{j}+{\bf p}_{k})\delta(t_{j}-t_{k})
\exp(-\frac{1}{2}\mu^{2}\sum_{j}{\bf p}_{j}^{2}(\tau-t_{j}))\cr
E[\prod_{(j,k)}\tilde{m}({\bf x}_{j}-{\bf x}_{k}+\mu{\bf
b}_{j}(\tau-t_{j})-\mu{\bf b}_{k}(\tau-t_{k}),{\bf p}_{j})\cr
\exp\Big(-\frac{1}{2}\sum_{il}
\int_{0}^{\tau-t_{i}}\int_{0}^{\tau-t_{l}}ds ds^{\prime} {\bf
p}_{i} G_{0}(s-s^{\prime},{\bf x}_{i}-{\bf x}_{l}+\mu{\bf
b}_{i}(s)-\mu{\bf b}_{l}(s^{\prime})){\bf p}_{l}\Big)]
\end{array}
\end{equation}
where the
sum is over all pairings in accordance with the Gaussian
combinatorics. From (63) we have
\begin{displaymath}
\begin{array}{l}
\vert\langle \tilde{T}_{\tau}({\bf x}_{1},{\bf
p}_{1})......\tilde{T}_{\tau}({\bf x}_{2n},{\bf
p}_{2n})\rangle\vert\leq\sum_{pairs}
\int_{0}^{\tau}dt_{1}....dt_{2n}\cr \prod_{(j,k)}\delta({\bf
p}_{j}+{\bf p}_{k})\delta(t_{j}-t_{k})
\exp(-\frac{1}{2}\mu^{2}\sum_{j}{\bf p}_{j}^{2}(\tau-t_{j}))\cr
E[\prod_{(j,k)}\vert\tilde{m}({\bf x}_{j}-{\bf x}_{k}+\mu{\bf
b}_{j}(\tau-t_{j})-\mu{\bf b}_{k}(\tau-t_{k}),{\bf
p}_{j})\vert]<\infty
\end{array}
\end{displaymath}
Hence, the equilibrium limit $\tau\rightarrow \infty$ exists.
If $m$ is either of the form (24) or (25) then we can apply the
Jensen inequality to the expectation value in the form $E[\exp
f]\geq \exp E[f]$. We obtain an analogue of the lower bound (38).
For the upper bound we apply the Jensen inequality to the time
integral\begin{equation}
\begin{array}{l}\exp(-\frac{1}{2}\int_{0}^{\tau}\int_{0}^{\tau}dsds^{\prime}
\int \int J_{k}(s) J_{l}(s^{\prime})\langle
v_{k}(s)v_{l}(s^{\prime})\rangle)\cr \leq \tau^{-2}
\int_{0}^{\tau}\int_{0}^{\tau}dsds^{\prime}\exp(-\frac{\tau^{2}}{2}
\int \int J_{k}(s) J_{l}(s^{\prime})\langle
v_{k}(s)v_{l}(s^{\prime})\rangle)\end{array}
\end{equation}
where
\begin{displaymath} {\bf J}(s,{\bf
u})=-\theta(s)\sum_{k=1}^{2n}{\bf p}_{k}\delta({\bf u}-{\bf
x}_{k}-\mu{\bf b}_{k}(\tau-s))\end{displaymath} and the additional
integral in eq.(64) is over the spatial variable ${\bf u}$.
We can repeat the basic estimates concerning the behavior for low
${\bf z}$ momenta and large ${\bf x}$ distances by means of the
methods applied for the two-point correlations. First, by means of
the Jensen inequalities we reduce the estimates of the expectation
values to finite dimensional integrals. From the Jensen
inequalities we can see that the correlation functions are bounded
in $\tau$ when $\tau\rightarrow\infty$. Next, the results
concerning the scaling behavior for $2n$-point functions can be
obtained by an introduction of spherical coordinates in the
$dt_{1}...dt_{n}$ integral in eq.(63). Then, the correlation
functions scale in a simple way with respect to the temporal
radius $r$. Let us explain such estimates in more detail for
$n=2$. Then,
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x}_{1},{\bf
p}_{1})......\tilde{T}_{\tau}({\bf x}_{4},{\bf
p}_{4})\rangle=\delta({\bf p}_{1}+{\bf p}_{3}) \delta({\bf
p}_{2}+{\bf p}_{4}) \cr\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}
\exp(-\mu^{2}{\bf p}_{1}^{2}(\tau-t_{1})-\mu^{2}{\bf
p}_{2}^{2}(\tau-t_{2}))\cr E[\tilde{m}({\bf x}_{1}-{\bf
x}_{3}+\mu{\bf b}_{1}(\tau-t_{1})-\mu{\bf b}_{3}(\tau-t_{1}),{\bf
p}_{1})\cr \tilde{m}({\bf x}_{2}-{\bf x}_{4}+\mu{\bf
b}_{2}(\tau-t_{2})-\mu{\bf b}_{4}(\tau-t_{2}),{\bf p}_{2})\cr
\exp\Big(-\sum_{j=1,2}
\int_{0}^{\tau-t_{j}}\int_{0}^{\tau-t_{j}}dt dt^{\prime} {\bf
p}_{j} G_{0}(t-t^{\prime},\mu{\bf b}_{j}(t)-\mu{\bf
b}_{j}(t^{\prime})){\bf p}_{j}\cr +\sum_{j< k}
\int_{0}^{\tau-t_{j}}\int_{0}^{\tau-t_{k}}dt dt^{\prime} {\bf
p}_{j} G_{0}(t-t^{\prime},{\bf x}_{j}-{\bf x}_{k}+\mu{\bf
b}_{j}(t)-\mu{\bf b}_{k}(t^{\prime})){\bf p}_{k}\Big)] + permut.
\end{array}
\end{equation}
where the sum is over permutations of the numbers from 1 to 4 in
accordance with the Gaussian combinatorics; in the sum in the
exponential we set $t_{1}=t_{3}$ and $t_{2}=t_{4}$. Let
$\tau-t_{1}=r\cos\theta$, $\tau-t_{2}=r\sin\theta$,
$t=r\sigma\cos\theta$ and $t^{\prime}=r\sigma^{\prime}\sin\theta$.
In such a case $r$ scales in the exponential in the same way as in
eqs.(37)-(38). The integral $dt_{1}dt_{2}=r\,dr\,d\theta$ adds an
additional power of $r$. Under the assumption (31) the small ${\bf
p}$ behavior of the correlation functions (65) at $ \tau=\infty$
is determined by the integral
\begin{equation}
\begin{array}{l}
\vert\langle \tilde{T}_{\infty}({\bf x}_{1},{\bf
p}_{1})......\tilde{T}_{\infty}({\bf x}_{4},{\bf
p}_{4})\rangle\vert \cr \simeq \vert\tilde{m}_{0}\vert({\bf
p}_{1})\vert\tilde{m}_{0}\vert({\bf
p}_{2})\int_{0}^{\infty}drrE[\exp(-r^{2-\alpha-\gamma}\sum_{jk}{\bf
p}_{j}G_{0}{\bf p}_{k} )]+permut.
\cr\simeq\vert\tilde{m}_{0}\vert({\bf
p}_{1})\vert\tilde{m}_{0}\vert({\bf p}_{2})E[\vert\sum_{jk}{\bf
p}_{j}G_{0}{\bf p}_{k}\vert^{-\frac{2}{2-\alpha-\gamma}}]+permut.
\end{array}
\end{equation}
For large distances, $\gamma=-\beta<0$ and $G$ of eq.(10), we can
expand the dependence on the Brownian motion in eq.(66) in powers
of $\mu\vert {\bf x}_{j}-{\bf x}_{k}\vert^{-1}$. The leading order
reads
\begin{equation}
\begin{array}{l}
\vert\langle \tilde{T}_{\infty}({\bf x}_{1},{\bf
p}_{1})......\tilde{T}_{\infty}({\bf x}_{4},{\bf
p}_{4})\rangle\vert \cr\simeq\vert\tilde{m}_{0}\vert({\bf
p}_{1})\vert\tilde{m}_{0}\vert({\bf p}_{2})\vert\sum_{jk}{\bf
p}_{j}{\bf p}_{k}\vert {\bf x}_{j}-{\bf
x}_{k}\vert^{2\beta}\vert^{-\frac{2}{2-\alpha}}+permut.
\end{array}
\end{equation}
Note that
the power describing the low ${\bf p}$ behavior in eq.(66) and
large ${\bf x}$ behavior in eq.(67) is twice as big as that for
the two-point function (34) and (40) indicating the asymptotic
scale invariance of the temperature $\tilde{T}_{\infty}({\bf
x},{\bf p})$ at low momenta or large distances. This property can
be extended to the $2n$-point correlation functions, where the scaling
index is proportional to $n$ as a consequence of the $drr^{n-1}$
time integral in the spherical time coordinates. Such a behavior
of the integrals suggests that if the velocities and the sources
are scale invariant then the temperatures scale at large distances
with the scale dimension determined by the two-point function.
\section{Discussion}
The power-law behavior of turbulent velocity correlation functions
and passive scalar correlation functions in a homogeneous
isotropic turbulent flow has been widely discussed in the
literature since the basic papers of Kolmogorov \cite{kolmogorov}
followed by Obukhov \cite{obukhov}, Corrsin \cite{corsin} and
Batchelor \cite{batchelor} (concerning the scalar advection). The
universal Kolmogorov $\frac{5}{3}$ law for spectral velocity
distribution as well as passive scalar distribution is derived by
means of dimensional arguments (independent of any dynamical
model). A statistical homogeneity and isotropy of the turbulence
at a microscale in a sufficiently large space interval (called the
" inertial range") is at the base of the Kolmogorov theory. Under
these assumptions the velocity (or passive scalar) correlation
functions are universal, i.e., independent of the source
distribution $m$. An experimental verification is not simple.
Turbulent flows are usually non-homogeneous and non-isotropic at a
macro scale. However, if a flow satisfying Kolmogorov assumptions
is created then the spectral Kolmogorov law is satisfied in the
inertial range \cite{goto}. Nevertheless, it is common for flows
in nature that Kolmogorov assumptions are not satisfied (for some
studies of such turbulent flows see
\cite{celani}\cite{mydlarski}). Even if the velocity satisfies
the Kolmogorov law, the analogous Obukhov law for $\rho$ may fail
\cite{celani}\cite{mydlarski}\cite{anton}. As the authors in
\cite{celani} point out, some problems with the verification of
Kolmogorov's theory concern the construction of a flow which would
be homogeneous and isotropic in a sufficiently large inertial
range (usually boundary conditions or sources violate a global
symmetry). They suggest a study of non-isotropic flows.
An investigation of a general class of dynamical models of
randomly forced Navier-Stokes and passive scalar equations is
still beyond the reach of analytical as well as numerical methods.
Substantial progress has been achieved in the white noise
randomly forced passive scalar (Kraichnan model)\cite{kraichnan}
\cite{gawedzki}\cite{falk}. However, the white noise distribution
of velocities is quite unrealistic. Our main motivation in these
studies was a derivation of the scaling behavior for velocities
which are not of the white noise type. A passive scalar in a shear
flow independent of the coordinates in the direction of the flow
was studied before in \cite{majda}\cite{glimm}. However, these
authors were interested in the anomalous free decay of solutions
of the advection-diffusion equation.
Our results predict a power-law of the passive scalar correlations in
non-isotropic flows. The results depend on the source distribution
$m$ because the source $f$ is present at any scale. We do not
specify any inertial range in our model. In general, the
correlations must depend on the source (for a discussion of random
forcing see \cite{stephen}). This can be seen from the detailed
calculations in \cite{gawedzki2}\cite{proc} performed in the
isotropic Kraichnan model (white noise in
time)\cite{kraichnan}\cite{gawedzki}\cite{falk}. The two-point
passive scalar correlations depend explicitly on the source and
on the molecular diffusivity $\mu^{2}$. Only in a proper limit of
the source covariance $m$ and $\mu\rightarrow 0$ does the
universal scaling law come out.
Before we summarize our results let us begin with simple models.
First, consider a pure diffusion corresponding to ${\bf V}=0$.
Then
\begin{displaymath}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle= 2\delta({\bf p}+{\bf
p}^{\prime})\int_{0}^{\tau}dr\exp(-\mu^{2}{\bf p}^{2}r) \cr
E[\tilde{m}( {\bf x}-{\bf x}^{\prime}+\mu\sqrt{r}{\bf
b}(1)-\mu\sqrt{r}{\bf b}^{\prime}(1),{\bf p}) ]\cr = 2\delta({\bf
p}+{\bf
p}^{\prime})(2\pi)^{-D+d}\int_{0}^{\tau}dr\exp(-\mu^{2}{\bf
p}^{2}r) \cr \int d{\bf u}d{\bf w}\exp(-\frac{{\bf
u}^{2}}{2}-\frac{{\bf w}^{2}}{2})\tilde{m}( {\bf x}-{\bf
x}^{\prime}+\mu\sqrt{r}{\bf u}-\mu\sqrt{r}{\bf w},{\bf p})\cr
=\mu^{-2}\delta({\bf p}+{\bf p}^{\prime})\int d{\bf k} \exp(i{\bf
k}({\bf x}-{\bf x}^{\prime}))\tilde{m}_{1}({\bf
k})\tilde{m}_{0}({\bf p})({\bf p}^{2}+{\bf
k}^{2})^{-1}\cr\Big(1-\exp(-\mu^{2}({\bf p}^{2}+{\bf
k}^{2})\tau)\Big)\end{array}
\end{displaymath}
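The Gaussian average over ${\bf u}$, ${\bf w}$ used in the last
step reduces, mode by mode, to $E[\exp(ik\mu\sqrt{r}(u-w))]
=\exp(-\mu^{2}k^{2}r)$; a minimal Monte Carlo check (Python,
illustrative values):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
k, mu, r = 1.2, 0.8, 0.6           # illustrative values
u = rng.normal(size=500_000)
w = rng.normal(size=500_000)
lhs = np.mean(np.exp(1j * k * mu * np.sqrt(r) * (u - w)))
print(lhs.real, np.exp(-mu**2 * k**2 * r))  # agree up to MC error
\end{verbatim}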
In the limit
$\tau\rightarrow \infty$
\begin{equation}
\langle T_{\infty}({\bf x},{\bf z})T_{\infty}({\bf
x}^{\prime},{\bf z}^{\prime})\rangle= \mu^{-2}\int d{\bf k} d{\bf
p}\exp(i{\bf k}({\bf x}-{\bf x}^{\prime}))\exp(i{\bf p}({\bf
z}-{\bf z}^{\prime}))\tilde{m}({\bf k},{\bf p})({\bf p}^{2}+{\bf
k}^{2})^{-1} \end{equation} Hence\begin{equation}
\rho_{\infty}({\bf k},{\bf p})=\mu^{-2}({\bf p}^{2}+{\bf
k}^{2})^{-1}\tilde{m}_{1}({\bf k})\tilde{m}_{0}({\bf p})
\end{equation}
Let us note that the behavior of the temperature
correlations changes abruptly for large $\vert{\bf z}-{\bf
z}^{\prime}\vert$
at $\tau=\infty$ in this simple model. At finite
$\tau$ it is the same as that of the source (say $\vert{\bf
z}-{\bf z}^{\prime}\vert^{-d+\nu}$) whereas at $\tau=\infty$ it
becomes $\vert{\bf z}-{\bf z}^{\prime}\vert^{-d+\nu+2}$. However,
it can be seen from eq.(68) that after the limit $\tau\rightarrow
\infty$ the limit $\mu\rightarrow 0$ does not exist in the model
without the advection. If we first take $\mu\rightarrow 0$ then
the subsequent limit $\tau\rightarrow \infty$ is linearly
divergent in $\tau$. The strong $\mu$-dependence of the asymptotic
behavior means that this parameter sets a scale on time and space
which determines different scaling behavior. In Appendix A we show
that the limits $\mu\rightarrow 0$ and $\tau\rightarrow \infty$
can be interchanged in the model with a random advection. The
correlation functions ${\cal S}^{(2n)}$ in a non-isotropic
Kraichnan model are discussed in Appendix B. The correlation
functions ${\cal S}^{(2n)}({\bf x}_{1},{\bf p}_{1},.....,{\bf
x}_{2n},{\bf p}_{2n})$ can be calculated exactly in the limit
$\mu\rightarrow 0$ (eq.(86)). They show no anomalous scaling
(encountered in the isotropic model
\cite{kraichnan}\cite{gawedzki}) as long as the points ${\bf
x}_{j}$ are different. The scaling behavior can change after a
transformation to the configuration space (the Fourier transform
does not exist in the usual sense).
Let us compare the two-point temperature correlation function
(68) with the one
in a random flow which is bounded in space and time,
i.e., $G=\langle {\bf v}{\bf v}\rangle\simeq const$.
Under the assumption (31) we obtain
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\infty}({\bf x},{\bf p})\tilde{T}_{\infty}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle\simeq K\delta({\bf p}+{\bf
p}^{\prime})\tilde{m}_{0}({\bf
p})\int_{0}^{\infty}dr\exp(-\mu^{2}{\bf p}^{2}r -c{\bf p}^{2}r^{2}
)\end{array}
\end{equation}
The integral (70) behaves as $\tilde{m}_{0}({\bf p}){\bf
p}^{-2}$ for large ${\bf p}$ and as $\tilde{m}_{0}({\bf
p})\vert{\bf p}\vert^{-1}$ for small ${\bf p}$, in agreement with
eq.(59) for $\Omega=\alpha=\gamma=0$. Our results of secs.4 and 5
give an extension of the simple observations on the temperature
correlation functions derived in this section for a pure diffusion
and for an advection by a uniformly bounded random flow.
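Both regimes quoted for the integral (70) are easy to confirm
numerically; a minimal sketch (Python, illustrative $\mu$ and $c$)
exhibiting the plateaus of ${\bf p}^{2}I({\bf p})$ and
$\vert{\bf p}\vert I({\bf p})$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

mu, c = 1.0, 1.0   # illustrative values

def I(p):
    val, _ = quad(lambda r: np.exp(-mu**2*p**2*r - c*p**2*r**2),
                  0, np.inf, limit=200)
    return val

for p in (50.0, 100.0, 200.0):
    print(p, p**2 * I(p))    # -> 1/mu^2 for large |p|
for p in (1e-3, 5e-4, 2.5e-4):
    print(p, p * I(p))       # -> sqrt(pi/c)/2 for small |p|
\end{verbatim}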
In our model (defined by the assumption that the velocity does not
depend on coordinates in the direction of the flow) the spectral
distribution in the corresponding momentum is proportional to the
source distribution $\tilde{m}$ as can be seen from eq.(34). We
could consider a source $f$ with the covariance $m({\bf x},{\bf
z})$ which (approximately in a certain range as in
refs.\cite{gawedzki2}\cite{proc}) is independent of ${\bf x}$. In
such a case the spectral equilibrium distribution (6) for a pure
diffusion $\rho_{\infty}({\bf k},{\bf p})$ (eq.(69)) is $\delta({\bf
k})\tilde{m}_{0}({\bf p}){\bf p}^{-2}$ where the ${\bf p}^{-2}$
behavior comes from the molecular diffusivity. The temperature
correlations remain independent of ${\bf x}$ and the limit
$\mu\rightarrow 0$ does not exist. A random advection changes
the behavior of temperature correlations in ${\bf x}$ as well as
in ${\bf p}$. This change involves a non-perturbative mechanism
which could not be seen in an expansion in ${\bf V}$. It comes
from an exponential of $G$ in eq.(34). In particular, a steady
flow bounded in ${\bf x}$ gives $\rho_{\infty}({\bf k},{\bf
p})=\delta({\bf k})\vert {\bf p}\vert^{-1}$ for $\vert {\bf
p}\vert\ll \frac{1}{\mu}$ whereas for the random velocity growing
in space with the index $\beta$ (eq.(10)) we have for a small
${\bf k}$
the behavior $\rho_{\infty}({\bf k},{\bf p})\simeq \vert {\bf
k}\vert^{-d+\frac{2\beta}{2-\alpha}} $ as follows from eq.(61).
In experiments ($D=3$) we could create an anisotropic flow with
the Kolmogorov index (10) $\beta=\frac{1}{3}$ in $d=2$ or $d=1$.
In such a case we obtain definite predictions concerning the
temperature distribution. This will be $\vert {\bf
x}-{\bf x}^{\prime}\vert^{-\frac{2}{3}}\vert{\bf z}-{\bf
z}^{\prime}\vert^{\nu}$, where
\begin{displaymath}
\nu= \frac{2}{2-\alpha}-(D-d)
\end{displaymath}
and $D-d$ is either $1$ or $2$ and there is a restriction $\alpha
-\beta<1$ coming from the requirement of the integrability of the
expression in the exponential of (34).
In general, we can see from eqs.(40),(42),(44),(59) and (61) that
the turbulent behavior $\gamma=-\beta<0$ of the velocity field
will (in comparison to pure diffusion) decrease the temperature
correlations in the direction orthogonal to the flow and increase
the correlations (at fixed $\alpha$) in the direction of the
flow. These effects contribute to the more coherent heat
distribution in a turbulent stream.
\section*{Appendix A: The limit
$\mu\rightarrow 0$} If there is no diffusion ($\mu=0$) then our
formulas in secs.4-6 at finite $\tau$ remain valid but need some
interpretation. There is no expectation value over the Brownian
motion. In such a case in some formulas (as in eqs.(34)-(35))
$\gamma=0$. Let us consider as an example the formula (34) at
$\mu=0$
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle= \delta({\bf p}+{\bf
p}^{\prime})\tilde{m}({\bf x}-{\bf x}^{\prime},{\bf
p})\int_{0}^{\tau}dt \exp(-i{\bf p}\int_{0}^{\tau-t}ds{\bf
U}(\tau -s,{\bf x}))\cr \exp\Big(-(\tau-t)^{2-\alpha}
\int_{0}^{1}\int_{0}^{1}d\sigma d\sigma^{\prime} {\bf p}
G_{0}(\sigma-\sigma^{\prime},{\bf 0}){\bf p} \cr
+(\tau-t)^{2-\alpha} \int_{0}^{1}\int_{0}^{1}d\sigma
d\sigma^{\prime} {\bf p} G_{0}(\sigma-\sigma^{\prime},{\bf x}-{\bf
x}^{\prime}){\bf p}\Big)
\end{array}
\end{equation}
For the Kraichnan model \cite{kraichnan} (35) the formula (71)
reads (with the Stratonovich interpretation of the gradient term,
see the discussion at the beginning of sec.2).
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x},{\bf p})\tilde{T}_{\tau}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle= \delta({\bf p}+{\bf
p}^{\prime})\tilde{m}({\bf x}-{\bf x}^{\prime},{\bf
p})\int_{0}^{\tau}dt \exp(-i{\bf p}\int_{0}^{\tau-t}{\bf U}(\tau
-s,{\bf x})ds)\cr \exp\Big(-(\tau-t) {\bf p} D_{0}({\bf 0}){\bf p}
+(\tau-t) {\bf p} D_{0}({\bf x}-{\bf x}^{\prime}){\bf p}\Big)
\end{array}
\end{equation}
In the limit $\tau \rightarrow \infty$ and for ${\bf U}=0$ we can
calculate the integral over time in eq.(71) with the result
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\infty}({\bf x},{\bf p})\tilde{T}_{\infty}({\bf
x}^{\prime},{\bf p}^{\prime})\rangle= C\delta({\bf p}+{\bf
p}^{\prime})\tilde{m}({\bf x}-{\bf x}^{\prime},{\bf p})\cr\Big(
\int_{0}^{1}\int_{0}^{1}d\sigma d\sigma^{\prime} {\bf p}
G_{0}(\sigma-\sigma^{\prime},{\bf 0}){\bf p} -
\int_{0}^{1}\int_{0}^{1}d\sigma d\sigma^{\prime} {\bf p}
G_{0}(\sigma-\sigma^{\prime},{\bf x}-{\bf x}^{\prime}){\bf
p}\Big)^{-\frac{1}{2-\alpha}}
\end{array}
\end{equation}
in agreement with the bounds (56) and (61). We can also calculate
the higher order correlation functions. As an example, the four
point function (65) reads
\begin{displaymath}
\begin{array}{l}
\langle \tilde{T}_{\tau}({\bf x}_{1},{\bf
p}_{1})......\tilde{T}_{\tau}({\bf x}_{4},{\bf p}_{4})\rangle\cr =
\delta({\bf p}_{2}+{\bf p}_{4}) \delta({\bf
p}_{1}+{\bf p}_{3})\tilde{m}({\bf x}_{1}-{\bf x}_{3},{\bf
p}_{1})\tilde{m}({\bf x}_{2}-{\bf x}_{4},{\bf
p}_{2})\cr\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}
\exp\Big(-\sum_{j=1,2}
\int_{0}^{\tau-t_{j}}\int_{0}^{\tau-t_{j}}dt dt^{\prime} {\bf
p}_{j} G_{0}(t-t^{\prime},{\bf 0}){\bf p}_{j}\cr +\sum_{j< k}
\int_{0}^{\tau-t_{j}}\int_{0}^{\tau-t_{k}}dt dt^{\prime} {\bf
p}_{j} G_{0}(t-t^{\prime},{\bf x}_{j}-{\bf x}_{k}){\bf p}_{k}\Big)
+ permut.
\end{array}
\end{displaymath}
We can obtain detailed estimates of the
time integrals for any $\alpha$. In some special cases the
integrals can be explicitly calculated. In Appendix B we give the
formula (eq.(88)) for the Kraichnan model ($\alpha=1$). For a
steady flow ($\Gamma(s)=1$ in eq.(10), $\alpha=0$) at $\tau=\infty$
the integration over $t_{j}$ gives
\begin{equation}
\begin{array}{l}
\langle \tilde{T}_{\infty}({\bf x}_{1},{\bf
p}_{1})......\tilde{T}_{\infty}({\bf x}_{4},{\bf p}_{4})\rangle\cr
=
\delta({\bf p}_{2}+{\bf p}_{4}) \delta({\bf
p}_{1}+{\bf p}_{3})\tilde{m}({\bf x}_{1}-{\bf x}_{3},{\bf
p}_{1})\tilde{m}({\bf x}_{2}-{\bf x}_{4},{\bf p}_{2}) \cr
\Big(4{\bf p}_{1}^{2}{\bf p}_{2}^{2}D_{0}({\bf x}_{1}-{\bf
x}_{3})D_{0}({\bf x}_{2}-{\bf x}_{4})\cr - ({\bf p}_{1}{\bf
p}_{2})^{2}(D_{0}({\bf x}_{1}-{\bf x}_{4})+D_{0}({\bf x}_{2}-{\bf
x}_{3})+D_{0}({\bf x}_{1}-{\bf x}_{2}) +D_{0}({\bf x}_{3}-{\bf
x}_{4}))^{2}\Big)^{-\frac{1}{2}}+permut.
\end{array}
\end{equation}
where $D_{0}({\bf x}_{j}-{\bf x}_{k})=-\vert {\bf x}_{j}-{\bf
x}_{k}\vert^{2\beta}$ in the model (10).
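The inverse power in eq.(73) comes from the elementary identity
$\int_{0}^{\infty}ds\,\exp(-As^{2-\alpha})=\frac{1}{2-\alpha}
\Gamma\big(\frac{1}{2-\alpha}\big)A^{-\frac{1}{2-\alpha}}$
(obtained after the substitution $s=\tau-t$); a minimal numerical
check (Python, illustrative $A$ and $\alpha$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

A, alpha = 2.3, 0.5      # illustrative values
q = 2 - alpha

lhs, _ = quad(lambda s: np.exp(-A * s**q), 0, np.inf)
rhs = gamma(1/q) / q * A**(-1/q)
print(lhs, rhs)          # agree to quadrature accuracy
\end{verbatim}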
\section*{Appendix B: The Kraichnan model}
If $\Gamma(t-t^{\prime})=\delta(t-t^{\prime})$ then we obtain a
closed set of equations for the correlation functions
\begin{equation}
{\cal S}_{\tau}^{(n)}({\bf x}_{1},...,{\bf x}_{n};{\bf
p}_{1},....,{\bf p}_{n})=\langle \tilde{T}_{\tau}({\bf x}_{1},{\bf
p}_{1})....\tilde{T}_{\tau}({\bf x}_{n},{\bf p}_{n})\rangle
\end{equation}
These equations have been derived by Kraichnan \cite{kraichnan}
for velocities depending on all coordinates. In our simplified
model (9)-(10) they read (the odd order correlation functions are
zero)
\begin{equation}
\begin{array}{l}
\partial_{\tau}{\cal S}_{\tau}^{(2n)}=
\frac{1}{2}\mu^{2}\sum_{j=1}^{j=2n}\triangle_{j}{\cal
S}_{\tau}^{(2n)}-\frac{1}{2}(\mu^{2}+D_{0}({\bf
0}))\sum_{j=1}^{j=2n}{\bf p}_{j}^{2}{\cal S}_{\tau}^{(2n)}\cr
+\sum_{<j,k>}{\bf p}_{j}D_{0}({\bf x}_{j}-{\bf x}_{k}){\bf
p}_{k}{\cal S}_{\tau}^{(2n)}+\sum_{<j,k>}\delta({\bf p}_{j}+{\bf
p}_{k})\tilde{m}({\bf x}_{j}-{\bf x}_{k},{\bf p}_{j}){\cal
S}_{\tau}^{(2n-2)}(jk)\cr \equiv {\cal M}{\cal S}_{\tau}^{(2n)}+ {\cal
R}{\cal S}_{\tau}^{(2n-2)}
\end{array}
\end{equation}
where $D_{0}$ is the translation invariant part of $D$
and ${\cal S}(jk)$ means that the coordinates ${\bf x}_{j}$
and ${\bf x}_{k}$ are omitted in ${\cal S}$. The term $D({\bf 0})$
(adding to $\mu^{2}$) comes from the Stratonovich interpretation
of eq.(1). The solution of eq.(76) reads
\begin{equation}
\begin{array}{l} {\cal S}_{\tau}^{(2n)} =\exp(\tau {\cal
M}){\cal S}_{0}^{(2n)}+ \int_{0}^{\tau}dt\exp((\tau-t) {\cal M}){\cal
R}{\cal S}_{t}^{(2n-2)} \end{array} \end{equation} If the operator
${\cal M}$ is strictly negative in the space $L^{2}(R^{2dn})$ then
the limit $\tau\rightarrow \infty$ exists and does not depend on
the initial condition ${\cal S}_{0}^{(2n)}$.
We can express the solution of eq.(76) by means of the Feynman-Kac
formula for the heat kernel
\begin{equation}
(\exp (r{\cal M})g)({\bf x}_{1},.....,{\bf x}_{2n})=
E[\exp(\int_{0}^{r}ds W({\bf b}(s)))g({\bf x}_{1}+\mu{\bf
b}_{1}(r),...,{\bf x}_{2n}+\mu{\bf b}_{2n}(r))]
\end{equation}
where
\begin{equation}
\begin{array}{l}
W(s)=-\frac{1}{2}(\mu^{2}+D_{0}({\bf 0}))\sum_{j=1}^{j=2n}{\bf
p}_{j}^{2} +\sum_{<j,k>}{\bf p}_{j}D_{0}({\bf x}_{j}+\mu{\bf
b}_{j}(s)-{\bf x}_{k}-\mu{\bf b}_{k}(s)){\bf p}_{k} \end{array}
\end{equation} We obtain an upper bound on the correlation
functions (78) from the Jensen inequality as applied to the time
integral
\begin{equation}(\exp(r{\cal
M})g)({\bf x}_{1},.....,{\bf x}_{2n})\leq
\frac{1}{r}\int_{0}^{r}ds E[\exp(r W({\bf b}(s)))\vert g\vert({\bf
x}_{1}+\mu{\bf b}_{1}(r),...,{\bf x}_{2n}+\mu{\bf b}_{2n}(r))]
\end{equation}
If $g=\exp h$ (or a superposition with positive coefficients of
such functions as in eq.(25)) then we have the lower bound from the
Jensen inequality as applied to the expectation value
\begin{equation}
(\exp (r{\cal M})\exp h)({\bf x}_{1},.....,{\bf x}_{2n})\geq \exp
E[\int_{0}^{r}ds W({\bf b}(s))+h({\bf x}_{1}+\mu{\bf
b}_{1}(r),...,{\bf x}_{2n}+\mu{\bf b}_{2n}(r))]
\end{equation}
As an example, the formula for the two-point function (in the
limit $\tau\rightarrow \infty$) with the velocity correlations
defined by eq.(10) reads
\begin{equation}
\begin{array}{l}{\cal S}_{\infty}^{(2)}({\bf x}_{1},{\bf x}_{2},{\bf p}_{1},{\bf
p}_{2})= \delta({\bf p}_{1}+{\bf
p}_{2})\int_{0}^{\infty}dr\exp(-r\mu^{2}{\bf p}_{1}^{2}) \cr
E[\exp(-{\bf p}_{1}^{2}\int_{0}^{r}ds\vert{\bf x}_{1}-{\bf
x}_{2}+\mu{\bf b}_{1}(s)-\mu{\bf b}_{2}(s)\vert^{2\beta})\tilde{
m}({\bf x}_{1}+\mu{\bf b}_{1}(r)-{\bf x}_{2}-\mu{\bf
b}_{2}(r),{\bf p}_{1})]
\end{array}
\end{equation}
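The representation (82) also lends itself to a direct Monte Carlo
evaluation; a minimal sketch (Python, one ${\bf x}$ dimension, an
illustrative Gaussian choice of $\tilde{m}(\cdot,{\bf p})$ at
fixed ${\bf p}$, the $r$-integral truncated at a finite horizon,
and the overall $\delta({\bf p}_{1}+{\bf p}_{2})$ factor dropped):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
mu, beta, p, X = 0.5, 0.4, 1.0, 2.0    # illustrative parameters
m_tilde = lambda x: np.exp(-x**2)      # illustrative m~( . , p)

R, n_t, n_paths = 8.0, 400, 2000       # r-horizon, grid, samples
dt = R / n_t
t = np.linspace(dt, R, n_t)

# increments of the two independent Brownian motions b_1, b_2
db = rng.normal(0.0, np.sqrt(dt), (2, n_paths, n_t))
b1, b2 = np.cumsum(db[0], axis=1), np.cumsum(db[1], axis=1)

# running integral int_0^r |x_1-x_2+mu(b_1-b_2)|^{2 beta} ds
d = X + mu * (b1 - b2)
S = np.cumsum(np.abs(d)**(2*beta), axis=1) * dt

# integrand of eq.(82) on the r-grid, averaged over paths
integrand = np.exp(-mu**2 * p**2 * t) * np.mean(
    np.exp(-p**2 * S) * m_tilde(d), axis=0)
print(np.trapz(integrand, t))   # Monte Carlo estimate
\end{verbatim}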
Then, the resulting correlation functions
are controlled from below and from above by the Jensen
inequalities. For the lower bound (81) we obtain an explicit
formula (using the representation (24) for $m_{1}$)
\begin{equation}
\begin{array}{l}{\cal S}_{\infty}^{(2)}({\bf x}_{1},{\bf x}_{2},{\bf p}_{1},{\bf
p}_{2})\geq \delta({\bf p}_{1}+{\bf p}_{2})\tilde{m}_{0}({\bf
p})\int d\nu_{1}(a)\int_{0}^{\infty}dr\exp(-r\mu^{2}{\bf
p}_{1}^{2}) \cr
\exp(-{\bf p}^{2}r^{\beta +1}h(r^{-\frac{1}{2}}\vert {\bf x}_{1}-{\bf
x}_{2}\vert)-a\vert{\bf x}_{1}-{\bf x}_{2}\vert^{2}-2\mu^{2}ra)
\end{array}
\end{equation}
where
\begin{equation}
\begin{array}{l}
h(\rho)=K\rho^{2(1+\beta)}\int_{0}^{\rho^{-2}}d\lambda\int_{0}^{\infty}
db b^{-1-\beta}
\Big(1-(1+2\mu^{2}\lambda
b)^{-\frac{d}{2}}\exp(-\frac{b}{2(1+2\mu^{2}b\lambda)})\Big)
\end{array}\end{equation}
Here $K$ is a positive constant.
From eq.(84) it can easily be seen that for large ${\bf x}_{1}-{\bf
x}_{2}$ (large $\rho$ in eq.(84)) the $r$-integrand in
eq.(83) behaves as
\begin{displaymath}
\exp(-K r{\bf p}^{2}\vert {\bf x}_{1}-{\bf
x}_{2}\vert^{2\beta}-a\vert {\bf x}_{1}-{\bf
x}_{2}\vert^{2}-2\mu^{2}ra)
\end{displaymath}
(as shown in another way in eq.(60); here $\alpha=1$), leading as a
consequence to the estimate (61) for the correlation functions. We
can continue the Jensen inequalities for higher correlation
functions as from eqs.(77) and (78) it follows that the
correlation functions are again in the form of superpositions of
exponentials.
For lower order correlations a direct study of the differential
equation (76) can be equally efficient. As an example, if $D=3$
and $d=2$ then the equation (76) at $\tau=\infty$ (with the
velocity covariance (10)) reads (here $\rho=\vert{\bf x}_{1}-{\bf
x}_{2}\vert$)
\begin{equation}
(\mu^{2}\frac{1}{\rho}\partial_{\rho}\rho\partial_{\rho}-\mu^{2}p^{2}-p^{2}\rho^{2\beta})
{\cal T}_{\infty}^{(2)}(\rho,p;\mu)=\tilde{m}(\rho,p)
\end{equation}
where we defined
\begin{displaymath}
{\cal S}^{(2)}(\rho,p_{1},p_{2};\mu)=\delta( p_{1}+p_{2}){\cal
T}^{(2)}(\rho,p_{1};\mu)
\end{displaymath}
In contradistinction to the spherically symmetric case
\cite{kraichnan}, eq.(85) is not explicitly soluble, but its
asymptotic behavior (61) is easy to obtain. This asymptotic
behavior is the same as that of the $\mu=0$ limit of the solution
of eq.(85)
\begin{equation}
{\cal T}_{\infty}^{(2)}(\rho,p;0)=-p^{-2}\rho^{-2\beta}
\tilde{m}(\rho,p)
\end{equation}
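The relation between eq.(85) and its $\mu=0$ limit (86) can be
inspected with a simple finite-difference solver; a minimal sketch
(Python, illustrative $p$, $\beta$ and $\tilde{m}$, with Dirichlet
data taken from (86) at the ends of the grid):
\begin{verbatim}
import numpy as np

p, beta, mu = 1.0, 0.4, 0.05        # illustrative; mu small
m = lambda r: np.exp(-r**2 / 4)     # illustrative m~(rho,p)

n = 2000
rho = np.linspace(0.5, 20.0, n)     # stay away from rho = 0
h = rho[1] - rho[0]

# discretize mu^2 (T'' + T'/rho) - (mu^2 p^2 + p^2 rho^{2b}) T = m
A = np.zeros((n, n))
for i in range(1, n-1):
    A[i, i-1] = mu**2 * (1/h**2 - 1/(2*h*rho[i]))
    A[i, i] = -2*mu**2/h**2 - (mu**2*p**2 + p**2*rho[i]**(2*beta))
    A[i, i+1] = mu**2 * (1/h**2 + 1/(2*h*rho[i]))
rhs = m(rho).copy()

T0 = -m(rho) / (p**2 * rho**(2*beta))  # the mu = 0 solution (86)
A[0, 0] = A[-1, -1] = 1.0              # ends pinned to T0
rhs[0], rhs[-1] = T0[0], T0[-1]

T = np.linalg.solve(A, rhs)
print(np.max(np.abs(T - T0)))          # small for small mu
\end{verbatim}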
In general, from eq.(76) the limit $\mu=0$ can be obtained
inductively \begin{equation}\begin{array}{l}{\cal
S}_{\infty}^{(2n)}({\bf x}_{1},...,{\bf x}_{2n};{\bf
p}_{1},...,{\bf p}_{2n};0)=\Big(\frac{1}{2}D_{0}({\bf
0})\sum_{j=1}^{j=2n}{\bf p}_{j}^{2}- \sum_{<j,k>}{\bf
p}_{j}D_{0}({\bf x}_{j}-{\bf x}_{k}){\bf
p}_{k}\Big)^{-1}\cr\sum_{<i,l>}\delta({\bf p}_{i}+{\bf
p}_{l})\tilde{m}({\bf x}_{i}-{\bf x}_{l},{\bf p}_{i}){\cal
S}_{\infty}^{(2n-2)}(il;0) \end{array}\end{equation} The formulas
for the asymptotic behavior (61) ($\alpha=1$) and (66)
($\alpha=1,\gamma=0$) agree with the exact solution (87). For
$n=1$ the solution (82) takes the form (86) whereas for $n=2$ we
have
\begin{equation}\begin{array}{l}{\cal
S}_{\infty}^{(4)}({\bf x}_{1},...,{\bf x}_{4};{\bf p}_{1},...,{\bf
p}_{4};0)=\Big(\frac{1}{2}D_{0}({\bf 0})\sum_{j=1}^{j=4}{\bf
p}_{j}^{2}- \sum_{<j,k>}{\bf p}_{j}D_{0}({\bf x}_{j}-{\bf
x}_{k}){\bf p}_{k}\Big)^{-1}\cr\Big(\delta({\bf p}_{1}+{\bf
p}_{2})\delta({\bf p}_{3}+{\bf p}_{4})\big({\bf p}_{1}D_{0}({\bf
0}){\bf p}_{1}-{\bf p}_{1}D_{0}({\bf x}_{1}-{\bf x}_{2}){\bf
p}_{1}\big)^{-1}\cr\tilde{m}({\bf x}_{1}-{\bf x}_{2},{\bf
p}_{1})\tilde{m}({\bf x}_{3}-{\bf x}_{4},{\bf p}_{3})
+permut.\Big)
\end{array}\end{equation}
For the scale invariant random velocity field (10) $D_{0}({\bf
0})=0$ and $D_{0}({\bf x}_{j}-{\bf x}_{k})=-\vert{\bf x}_{j}-{\bf
x}_{k}\vert^{2\beta}$. It follows from eqs.(87)-(88) that the
temperature correlation functions are scale invariant under scale
transformations of the coordinates ${\bf x}_{j}$ as well as ${\bf
p}_{j}$. When $\mu=0$ then the correlation functions ${\cal
S}^{(2n)}_{\infty}$ are singular at coinciding points (the limit
$\mu\rightarrow 0$ has been studied earlier by other methods in
\cite{wei}\cite{hula}). The bound (40) is valid for $\mu>0$.
Throughout this paper, we let $(\O,\cF,\dbF,\dbP)$ be a complete
filtered probability space on which a one-dimensional standard
Brownian motion $W(\cd)$ is defined with $\dbF=\{\cF_t\}_{t\ge0}$
being its natural filtration augmented by all the $\dbP$-null sets.
Let us begin with the following stochastic differential equation
(SDE, for short) in $\dbR$:
\bel{1.1}\left\{\ba{ll}
\ns\ds dX(t)=b(X(t),\m(t))dt+dW(t),\qq t\in[0,T],\\
\ns\ds X(0)=x,\ea\right.\ee
where
\bel{b}\ba{ll}
\ns\ds b(X(t),\m(t))=\int_\O
b(X(t,\o),X(t;\o'))\dbP(d\o')\\
\ns\ds\qq\qq\q~\equiv\int_\dbR
b(\xi,y)\m(t;dy)\Big|_{\xi=X(t)}\equiv\dbE[b(\xi,X(t))]
\Big|_{\xi=X(t)},\ea\ee
where $b:\dbR\times\dbR\to\dbR$ is a (locally) bounded Borel
measurable function and $\m(t;\cd)$ is the probability distribution
of the unknown process $X(t)$:
\bel{m}\m(t;A)=\dbP(X(t)\in A),\qq\forall A\in\cB(\dbR).\ee
Here $\cB(\dbR^n)$ is the Borel $\si$-field of $\dbR^n$ ($n\ge1$).
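For instance, if $b(\xi,y)=-(\xi-y)$, then (\ref{b}) gives
$$b(X(t),\m(t))=\int_\dbR-(X(t)-y)\m(t;dy)=\dbE[X(t)]-X(t),$$
so that (\ref{1.1}) describes a diffusion which reverts to the mean
of its own distribution.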
Equation (\ref{1.1}) is called a {\it McKean--Vlasov} SDE. Such an
equation was suggested by Kac \cite{Kac 1956} as a stochastic toy
model for the Vlasov kinetic equation of plasma and the study of
which was initiated by McKean \cite{McKean 1966}. Since then, many
authors made contributions on McKean--Vlasov type SDEs and
applications, see, for examples, Dawson \cite{Dawson 1983},
Dawson--G\"artner \cite{Dawson-Gartner 1987}, G\'artner
\cite{Gartner 1988}, Scheutzow \cite{Scheutzow 1987}, Sznitman
\cite{Sznitman 1989}, Graham \cite{Graham 1992}, Chan \cite{Chan
1994}, Chiang \cite{Chiang 1994}, Ahmed--Ding \cite{Ahmed-Ding
1995}. In recent years, related topics and problems have attracted
more and more attention; see, for example, Veretennikov
\cite{Veretennikov 2003}, Huang--Malham\'e--Caines
\cite{Huang-Malhame-Caines 2006}, Ahmed \cite{Ahmed 2007},
Mahmudov--McKibben \cite{Mahmudov-Mckibben 2007}, Lasry--Lions
\cite{Lasry-Lions 2007}, Borkar--Kumar \cite{Borkar-Kumar 2010},
Crisan--Xiong \cite{Crisan-Xiong 2010}, Kotelenez--Kurtz
\cite{Kotelenez-Kurtz 2010}, Park--Balasubramaniam--Kang
\cite{Park-Balasubramaniam-Kang 2008}, Andersson--Djehiche
\cite{Andersson-Djehice 2011}, Meyer-Brandis--Oksendal--Zhou
\cite{Meyer-Brandis-Oksandal-Zhou 2011}, and so on.
\ms
Inspired by (\ref{1.1}), one can consider the following more general
SDE:
\bel{MF-SDE}\left\{\ba{ll}
\ns\ds dX(t)=b(t,X(t),\dbE[\th^b(t,\xi,X(t))]_{\xi=X(t)})dt\\
\ns\ds\qq\qq\qq\qq+\si
(t,X(t),\dbE[\th^\si(t,\xi,X(t))]_{\xi=X(t)})dW(t),\qq t\in[0,T],\\
\ns\ds X(0)=x,\ea\right.\ee
where $\th^b$ and $\th^\si$ are some suitable maps. We call the
above a {\it mean-field} (forward) stochastic differential equation
(MF-FSDE, for short). From (\ref{b}) and (\ref{MF-SDE}), we see that
(\ref{1.1}) is a special case of (\ref{MF-SDE}). Note also that
(\ref{MF-SDE}) is an extension of classical It\^o type SDEs. Due to
the dependence of $b$ and $\si$ on
$\dbE[\th^b(t,\xi,X(t))]_{\xi=X(t)}$ and
$\dbE[\th^\si(t,\xi,X(t))]_{\xi=X(t)}$, respectively, MF-FSDE
(\ref{MF-SDE}) is {\it nonlocal} with respect to the event
$\o\in\O$.
\ms
It is easy to see that the equivalent integral form of
(\ref{MF-SDE}) is as follows:
\bel{1.3}\ba{ll}
\ns\ds X(t)=x+\int_0^tb(s,X(s),\dbE[\th^b(s,\xi,X(s))]_{\xi=X(s)})ds\\
\ns\ds\qq\qq\qq+\int_0^t\si(s,X(s),\dbE[\th^\si(s,\xi,X(s))]_{\xi
=X(s)})dW(s),\qq t\in[0,T].\ea\ee
This suggests a natural extension of the above to the following:
\bel{MF-FSVIE}\ba{ll}
\ns\ds X(t)=\f(t)+\int_0^tb(t,s,X(s),\dbE[\th^b(t,s,\xi,X(s))]_{\xi=X(s)})ds\\
\ns\ds\qq\qq\qq+\int_0^t\si(t,s,X(s),\dbE[\th^\si(t,s,\xi,X(s))]_{\xi
=X(s)})dW(s),\qq t\ge0.\ea\ee
We call the above a mean-field (forward) stochastic Volterra
integral equation (MF-FSVIE, for short). It is worth pointing
out that when the drift $b$ and diffusion $\si$ in (\ref{MF-FSVIE})
are independent of the nonlocal terms
$\dbE[\th^b(t,s,\xi,X(s))]_{\xi=X(s)}$ and
$\dbE[\th^\si(t,s,\xi,X(s))]_{\xi=X(s)}$, respectively,
(\ref{MF-FSVIE}) is reduced to a so-called (forward) stochastic
Volterra integral equation (FSVIE, for short):
\bel{FSVIE}\ba{ll}
\ns\ds
X(t)=\f(t)+\int_0^tb(t,s,X(s))ds+\int_0^t\si(t,s,X(s))dW(s),\qq
t\ge0.\ea\ee
Such equations have been studied by a number of researchers;
see, for example, Berger--Mizel \cite{Berger-Mizel 1980}, Protter
\cite{Protter 1985}, Pardoux--Protter \cite{Pardoux-Protter 1990},
Tudor \cite{Tudor 1989}, Zhang \cite{Zhang 2010}, and so on.
Needless to say, the theory for (\ref{MF-FSVIE}) is very rich and
has great application potential in various areas.
\ms
On the other hand, a general (nonlinear) backward stochastic
differential equation (BSDE, for short) introduced in Pardoux--Peng
\cite{Pardoux-Peng 1990} is equivalent to the following:
\bel{}Y(t)=\xi+\int_t^Tg(s,Y(s),Z(s))ds-\int_t^TZ(s)dW(s),\qq
t\in[0,T].\ee
Extending the above, the following general stochastic integral
equation was introduced and studied in Yong \cite{Yong 2006, Yong
2007, Yong 2008}:
\bel{BSVIE}Y(t)=\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s),Z(s,t))ds-\int_t^TZ(t,s)dW(s),\qq
t\in[0,T].\ee
Such an equation is called a backward stochastic Volterra integral
equation (BSVIE, for short). A special case of (\ref{BSVIE}) with
$g(\cd)$ independent of $Z(s,t)$ and $\psi(t)\equiv\xi$ was studied
by Lin \cite{Lin 2002} and Aman--N'zi \cite{Aman-N'Zi 2005} a little
earlier. Some relevant studies of (\ref{BSVIE}) can be found in
Wang--Zhang \cite{Wang-Zhang 2007}, Wang--Shi \cite{Wang-Shi 2010},
Ren \cite{Ren 2010}, and Anh--Grecksch--Yong \cite{Anh-Grecksch-Yong
2011}. Inspired by BSVIEs, it is very natural for us to introduce
the following stochastic integral equation:
\bel{MF-BSVIE}\ba{ll}
\ns\ds Y(t)=\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s),Z(s,t),\Gamma
(t,s,Y(s),Z(t,s),Z(s,t)))ds\\
\ns\ds\qq\qq\qq-\int_t^TZ(t,s)dW(s),\qq t\in[0,T],\ea\ee
where $(Y(\cd),Z(\cd\,,\cd))$ is the pair of unknown processes,
$\psi(\cd)$ is a given {\it free term} which is $\cF_T$-measurable
(not necessarily $\dbF$-adapted), $g(\cd)$ is a given mapping, called
the {\it generator}, and
\bel{G}\G(t,s,Y,Z,\h Z)=\dbE\[\th(t,s,y,z,\hat z,Y,Z,\h
Z)\]_{(y,z,\hat z)=(Y,Z,\h Z)}\ee
with $(Y,Z,\h Z)$ being some random variables, for some mapping
$\th(\cd)$ (see the next section for the precise meaning of the above).
We call (\ref{MF-BSVIE}) a {\it mean-field backward stochastic
Volterra integral equation} (MF-BSVIE, for short). Relevant to the
current paper, let us mention that in Buckdahn--Djehiche--Li--Peng
\cite{Buckdahn-Djehiche-Li-Peng 2009}, mean-field backward
stochastic differential equations (MF-BSDEs, for short) were
introduced and in Buckdahn--Li--Peng \cite{Buckdahn-Li-Peng 2009} a
class of nonlocal PDEs is studied with the help of an MF-BSDE and a
McKean-Vlasov forward equation.
\ms
We see that MF-BSVIE (\ref{MF-BSVIE}) not only includes MF-BSDEs
(which, of course, also include standard BSDEs) introduced in
\cite{Buckdahn-Djehiche-Li-Peng 2009,Buckdahn-Li-Peng 2009}, but
also generalizes BSVIEs studied in \cite{Yong 2006, Yong 2008,
Wang-Shi 2010}, etc., in a natural way. Besides, investigating
MF-BSVIEs meets a need arising in the study of optimal control
for MF-FSVIEs. As a matter of fact, in the statement of a Pontryagin
type maximum principle for optimal control of a forward
(deterministic or stochastic) control system, the adjoint equation
of variational state equation is a corresponding (deterministic or
stochastic) backward system, see \cite{Yong-Zhou 1999} for the case
of classical optimal control problems, \cite{Andersson-Djehice 2011,
Buckdahn-Djehiche-Li 2011, Meyer-Brandis-Oksandal-Zhou 2011} for the
case of MF-FSDEs, and \cite{Yong 2006, Yong 2008} for the case of
FSVIEs. When the state equation is an MF-FSVIE, the adjoint equation
will naturally be an MF-BSVIE. Hence the study of well-posedness for
MF-BSVIEs is unavoidable when we want to study optimal control
problems for MF-FSVIEs.
\ms
The novelty of this paper mainly consists of the following. First,
well-posedness of general MF-BSVIEs will be established. In doing
that, we discover that the growth of the generator and the nonlocal
term with respect to $Z(s,t)$ plays a crucial role; a better
understanding of which enables us to have found a neat way of
treating term $Z(s,t)$. Even for BSVIEs, our new method will
significantly simplify the proof of well-posedness of the equation
(comparing with \cite{Yong 2008}). Second, we establish two slightly
different duality principles, one starts from linear MF-FSVIEs, and
the other starts from linear MF-BSVIEs. We found that ``{\sl Twice
adjoint of a linear MF-FSVIE is itself}'', whereas, ``{\sl Twice
adjoint of a linear MF-BSVIE is not necessarily itself}''. Third,
some comparison theorems will be established for MF-FSVIEs and
MF-BSVIEs. It turns out that the situation is surprisingly different
from the differential equation cases. Some mistakes found in
\cite{Yong 2006, Yong 2007} will be corrected. Finally, as an
application of the duality principle for MF-FSVIEs, we establish a
Pontryagin type maximum principle for an optimal control problem of
MF-FSVIEs.
\ms
The rest of the paper is organized as follows. Section 2 is devoted
to presenting some preliminary results. In Section 3, we prove the
existence and uniqueness of adapted M-solutions to MF-BSVIE
(\ref{MF-BSVIE}). In Section 4 we obtain duality principles.
Comparison theorems will be presented in Section 5. In Section 6, we
deduce a maximum principle for optimal controls of MF-FSVIEs.
\section{Preliminary Results.}
In this section, we present some preliminary results.
\subsection{Formulation of MF-BSVIEs.}
Let us first introduce some spaces. For $H=\dbR^n$, etc., and $p>1$,
$t\in[0,T]$, let
$$\ba{ll}
\ns\ds L^p(0,T;H)=\Big\{x:[0,T]\to
H\Bigm|\int_0^T|x(s)|^pds<\infty\Big\},\\
\ns\ds L_{\cF_t}^p(\O;H)=\Big\{\xi:\O\to H\Bigm|\xi\hb{ is
$\cF_t$-measurable, }\dbE|\xi|^p<\infty\Big\},\\
\ns\ds L_{\cF_t}^p(0,T;H)=\Big\{X:[0,T]\times\O\to H\Bigm|X(\cd)\hb{
is
$\cF_t$-measurable, }\dbE\int_0^T|X(s)|^pds<\infty\Big\},\\
\ns\ds L_\dbF^p(0,T;H)=\Big\{X:[0,T]\times\O\to H\Bigm|X(\cd)\hb{ is
$\dbF$-adapted, }\dbE\int_0^T|X(s)|^pds<\infty\Big\},\\
\ns\ds L^p_{\dbF}(\Omega;L^2(0,T;H))=\Big\{X:[0,T]\times\O\to
H\Bigm| X(\cdot)\hb{ is $\dbF$-adapted, }
\dbE\(\int_0^T|X(s)|^2ds\)^{p\over2}<\infty\Big\}.\ea$$
Also, let (with $q\ge1$)
$$\ba{ll}
\ns\ds L^p(0,T;L^q_\dbF(0,T;H))=\Big\{Z:[0,T]^2\times\O\to
H\Bigm|Z(t,\cd)\hb{ is $\dbF$-adapted for almost all $t\in[0,T]$,
}\\
\ns\ds\qq\qq\qq\qq\qq\qq\dbE\int_0^T\(\int_0^T|Z(t,s)|^qds\)^{p\over q}dt<\infty\Big\},\\
\ns\ds C_\dbF^p([0,T];H)=\Big\{X:[0,T]\times\O\to H\Bigm|X(\cd)\hb{
is $\dbF$-adapted, $t\mapsto X(t)$ is continuous}\\
\ns\ds\qq\qq\qq\qq\qq\qq\hb{ from $[0,T]$ to $L^p_{\cF_T}(\O;H)$,
}\sup_{t\in[0,T]}\dbE[|X(t)|^p]<\infty\Big\}.\ea$$
We denote
$$\ba{ll}
\ns\ds\cH^p[0,T]=L^p_\dbF(0,T;H)\times
L^p(0,T;L_\dbF^2(0,T;H)),\\
\ns\ds\dbH^p[0,T]=C^p_\dbF([0,T];H)\times
L^p_{\dbF}(\Omega;L^2(0,T;H)).\ea$$
Next, let $(\O^2,\cF^2,\dbP^2)=(\O\times\O,\cF\otimes
\cF,\dbP\otimes\dbP)$ be the completion of the product probability
space of the original $(\O,\cF,\dbP)$ with itself, where we define
the filtration as $\dbF^2=\{\cF_t\otimes\cF_t,~t\in[0,T]\}$ with
$\cF_t\otimes\cF_t$ being the completion of $\cF_t\times\cF_t$. It
is worth noting that any random variable $\xi=\xi(\o)$ defined
on $\O$ can be extended naturally to $\O^2$ as
$\xi'(\o,\o')=\xi(\o)$, with $(\o,\o')\in\O^2$. Similar to the
above, we define
$$L^1(\O^2,\cF^2,\dbP^2;H)=\Big\{\xi:\O^2\1n\to\1n H\Bigm|\xi\hb{ is
$\cF^2$-measurable, }
\dbE^2|\xi|\1n\equiv\2n\int_{\O^2}|\xi(\o',\o)|\dbP(d\o')\dbP(d\o)<\infty\Big\}.$$
For any $\eta\in L^1(\O^2,\cF^2,\dbP^2;H)$, we denote
$$\dbE'\eta(\o,\cd)=\int_\O\eta(\o,\o')\dbP(d\o')\in
L^1(\O,\cF,\dbP).$$
Note that if $\eta(\o,\o')=\eta(\o')$, then
$$\dbE'\eta=\int_\O\eta(\o')\dbP(d\o')=\int_\O\eta(\o)\dbP(d\o)=\dbE\eta.$$
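For instance, if $\eta(\o,\o')=\xi(\o)\z(\o')$ with $\xi,\z\in
L^1(\O,\cF,\dbP;\dbR)$, then
$$\dbE'\eta(\o,\cd)=\xi(\o)\int_\O\z(\o')\dbP(d\o')=\xi(\o)\dbE\z,$$
which is in general a nontrivial random variable rather than a
constant.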
In what follows, $\dbE'$ will be used when we need to distinguish
$\o'$ from $\o$, which is the case when both $\o$ and $\o'$ appear
at the same time. Finally, we denote
$$\D=\Big\{(t,s)\in[0,T]^2\Bigm|t\le s\Big\},\q\D^*=\Big\{(t,s)\in[0,T]^2\Bigm|t\ge s\Big\}\equiv
\cl{\D^c}.$$
Let
\bel{g,f}
g:\D\times\O\times\dbR^{3n}\times\dbR^m\to\dbR^n,\qq\th:\D\times\O^2\times\dbR^{6n}\to\dbR^m,\ee
be some suitable maps (see below for precise conditions) and define
\bel{G2}\ba{ll}
\ns\ds\G(t,s,Y,Z,\h Z)=\dbE'\[\th(t,s,y,z,\h z,Y,Z,\h
Z)\]_{(y,z,\hat
z)=(Y,Z,\h Z)}\\
\ns\ds=\int_\O\th(t,s,\o,\o',Y(\o),Z(\o),\h Z(\o),Y(\o'),Z(\o'),\h
Z(\o'))\dbP(d\o'),\ea\ee
for all reasonable random variables $(Y,Z,\h Z)$. This gives the
precise meaning of (\ref{G}). Hereafter, when we talk about MF-BSVIE
(\ref{MF-BSVIE}), the mapping $\G$ is defined by (\ref{G2}). With
such a mapping, we have
$$\ba{ll}
\ns\ds\G(t,s,Y(s),Z(t,s),Z(s,t))\equiv\G(t,s,\o,Y(s,\o),Z(t,s,\o),Z(s,t,\o))\\
\ns\ds=\int_\O\th(t,s,\o,\o',Y(s,\o),Z(t,s,\o),Z(s,t,\o),Y(s,\o'),
Z(t,s,\o'),Z(s,t,\o'))\dbP(d\o').\ea$$
Clearly, the operator $\G$ is {\it nonlocal} in the sense that the
value $\G(t,s,\o,Y(s,\o),Z(t,s,\o),Z(s,t,\o))$ of
$\G(t,s,Y(s),Z(t,s),Z(s,t))$ at $\o$ depends on the whole set
$$\{(Y(s,\o'),Z(t,s,\o'),Z(s,t,\o'))\bigm|\o'\in\O\},$$
not just on $(Y(s,\o),Z(t,s,\o),Z(s,t,\o))$. To get some feeling
about such an operator, let us look at a simple but nontrivial
special case.
\ms
\bf Example 2.1. \rm Let
$$\ba{ll}
\ns\ds\th(t,s,\o,\o',y,z,\hat z,y',z',\hat
z')=\th_0(t,s,\o)+A_0(t,s,\o)y+B_0(t,s,\o)z+C_0(t,s,\o)\hat
z\\
\ns\ds\qq\qq\qq\qq\qq\qq\q+A_1(t,s,\o,\o')y'+B_1(t,s,\o,\o')z'+C_1(t,s,\o,\o')\hat
z'.\ea$$
We should carefully distinguish $\o'$ and $\o$ in the above. Then
(suppressing $\o$)
$$\ba{ll}
\ns\ds\G(t,s,Y(s),Z(t,s),Z(s,t))=\th_0(t,s)+A_0(t,s)Y(s)+B_0(t,s)Z(t,s)
+C_0(t,s)Z(s,t)\\
\ns\ds\qq\qq\qq\qq\qq\qq\q+\dbE'[A_1(t,s)Y(s)]+\dbE'[B_1(t,s)Z(t,s)]+\dbE'[C_1(t,s)Z(s,t)],\ea$$
where, for example,
$$\dbE'[B_1(t,s)Z(t,s)]=\int_\O B_1(t,s,\o,\o')Z(t,s,\o')\dbP(d\o').$$
For such a case,
$(Y(\cd),Z(\cd\,,\cd))\mapsto\G(\cd\,,\cd\,,Y(\cd),Z(\cd\,,\cd),Z(\cd\,,\cd))$
is affine.
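\ms

In particular, if $B_1$ does not depend on $\o'$, i.e.,
$B_1(t,s,\o,\o')=B_1(t,s,\o)$, then
$$\dbE'[B_1(t,s)Z(t,s)]=B_1(t,s)\int_\O
Z(t,s,\o')\dbP(d\o')=B_1(t,s)\dbE[Z(t,s)],$$
and similarly for $A_1$ and $C_1$; in this case the nonlocal terms
reduce to (random) coefficients multiplying expectations of the
unknowns.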
\ms
Having some feeling about the operator $\G$ from the above, let us
look at some useful properties of the operator $\G$ in general. To
this end, we make the following assumption.
\ms
{\bf(H0)$_q$} The map $\th:\D\times\O^2\times\dbR^{6n}\to\dbR^m$ is
measurable and for all $(t,y,z,\hat z,y',z',\hat
z')\in[0,T]\times\dbR^{6n}$, the map
$(s,\o,\o')\mapsto\th(t,s,\o,\o',y,z,\hat z,y',z',\hat z')$ is
$\dbF^2$-progressively measurable on $[t,T]$. Moreover, there exist
constants $L>0$ and $q\ge2$ such that
\bel{th-Lip}\ba{ll}
\ns\ds|\th(t,s,\o,\o',y_1,z_1,\hat z_1,y_1',z_1',\hat
z_1')-\th(t,s,\o,\o',y_2,z_2,\hat z_2,y_2',z_2',\hat z_2')|\\
\ns\ds\le L\(|y_1-y_2|+|z_1-z_2|+|\hat z_1-\hat
z_2|+|y_1'-y_2'|+|z_1'-z_2'|+|\hat z_1'-\hat z_2'|\),\\
\ns\ds\qq\qq\qq\forall(t,s,\o,\o')\in\D\times\O^2,~(y_i,z_i,\hat
z_i,y_i',z_i',\hat z_i')\in\dbR^{6n},i=1,2,\ea\ee
and
\bel{th-growth}\ba{ll}
\ns\ds|\th(t,s,\o,\o',y,z,\hat z,y',z',\hat z')|\le
L\(1+|y|+|z|+|\hat z|^{2\over q}+|y'|
+|z'|+|\hat z'|^{2\over q}\),\\
\ns\ds\qq\qq\qq\forall(t,s,\o',\o)\in\D\times\O^2,~(y,z,\hat
z,y',z',\hat z')\in\dbR^{6n}.\ea\ee
In the above, we may replace constant $L$ by some function $L(t,s)$
with certain integrability (similar to \cite{Yong 2008}). However,
for the simplicity of presentation, we prefer to take a constant
$L$. Also, we note that $(\hat z,\hat
z')\mapsto\th(t,s,\o,\o',y,z,\hat z,y',z',\hat z')$ is assumed to
grow no more than $|\hat z|^{2\over q}+|\hat z'|^{2\over q}$. If
$q=2$, then the growth is linear, and if $q>2$, the growth is
sublinear. This condition is very subtle in showing that the
solution $(Y(\cd),Z(\cd\,,\cd))$ of an MF-BSVIE belongs to
$\cH^q[0,T]$. We would like to mention that (H0)$_\infty$ is
understood to mean that (\ref{th-growth}) is replaced by the following:
\bel{th-growth2}\ba{ll}
\ns\ds|\th(t,s,\o,\o',y,z,\hat z,y',z',\hat z')|\le
L\(1+|y|+|z|+|y'|+|z'|\),\\
\ns\ds\qq\qq\qq\forall(t,s,\o',\o)\in\D\times\O^2,~(y,z,\hat
z,y',z',\hat z')\in\dbR^{6n}.\ea\ee
Under (H0)$_q$, for any $(Y(\cd),Z(\cd\,,\cd))\in\cH^q[0,T]$, we see
that for each $t\in[0,T]$, the map
$$\ba{ll}
\ns\ds(s,\o)\mapsto\G(t,s,\o,Y(s),Z(t,s),Z(s,t))\\
\ns\ds\equiv\int_\O\th(t,s,\o,\o',Y(s,\o),Z(t,s,\o),Z(s,t,\o),Y(s,\o'),Z(t,s,\o'),Z(s,t,\o'))\dbP(d\o')\ea$$
is $\dbF$-progressively measurable on $[t,T]$. Also,
\bel{2.6}\ba{ll}
\ns\ds|\th(t,s,\o,\o',Y(s,\o),Z(t,s,\o),Z(s,t,\o),y,z,\hat z)|\\
\ns\ds\le L\(1+|Y(s,\o)|+|Z(t,s,\o)|+|Z(s,t,\o)|^{2\over
q}+|y|+|z|+|\hat z|^{2\over q}\).\ea\ee
Consequently,
\bel{2.7}\ba{ll}
\ns\ds|\G(t,s,Y(s),Z(t,s),Z(s,t))|\\
\ns\ds\le L\(1+|Y(s)|+|Z(t,s)|+|Z(s,t)|^{2\over
q}+\dbE|Y(s)|+\dbE|Z(t,s)|+\dbE|Z(s,t)|^{2\over q}\).\ea\ee
Likewise, for any
$(Y_1(\cd),Z_1(\cd\,,\cd)),(Y_2(\cd),Z_2(\cd\,,\cd))\in\cH^q[0,T]$,
we have
\bel{2.8}\ba{ll}
\ns\ds|\G(t,s,Y_1(s),Z_1(t,s),Z_1(s,t))-\G(t,s,Y_2(s),Z_2(t,s),Z_2(s,t))|\\
\ns\ds\le
L\(|Y_1(s)-Y_2(s)|+|Z_1(t,s)-Z_2(t,s)|+|Z_1(s,t)-Z_2(s,t)|\\
\ns\ds\qq+\dbE|Y_1(s)-Y_2(s)|+\dbE|Z_1(t,s)-Z_2(t,s)|+\dbE|Z_1(s,t)-Z_2(s,t)|\).\ea\ee
The above two estimates will play an interesting role later. We now
introduce the following definition.
\ms
\bf Definition 2.2. \rm A pair
$(Y(\cd),Z(\cd\,,\cd))\in\cH^p[0,T]$ is called an {\it adapted
M-solution} of MF-BSVIE (\ref{MF-BSVIE}) if (\ref{MF-BSVIE}) is
satisfied in the It\^o sense and the following holds:
\bel{M}Y(t)=\dbE Y(t)+\int_0^tZ(t,s)dW(s),\qq\qq t\in[0,T].\ee
\ms
It is clear that (\ref{M}) implies
\bel{M2}Y(t)=\dbE[Y(t)\bigm|\cF_S]+\int_S^tZ(t,s)dW(s),\qq0\le S\le
t\le T.\ee
This suggests that we define $\cM^p[0,T]$ as the set of all elements
$(y(\cd),z(\cd\,,\cd))\in\cH^p[0,T]$ satisfying:
\bel{2.10}y(t)=\dbE\[y(t)\bigm|\cF_S\]+\int_S^tz(t,s)dW(s),\qq
t\in[S,T],\q S\in[0,T).\ee
Obviously $\cM^p[0,T]$ is a closed subspace of $\cH^p[0,T]$. Note
that for any $(y(\cd),z(\cd\,,\cd))\in\cM^2[0,T]$,
\bel{2.11}\dbE|y(t)|^2=\dbE\big|\dbE\[y(t)\bigm|\cF_S\]\big|^2+\dbE\int_S^t|z(t,s)|^2ds\ge\dbE\int_S^t|z(t,s)|^2ds.\ee
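Indeed, (\ref{2.11}) follows by squaring (\ref{2.10}), taking
expectations, and noting that the stochastic integral has mean zero,
is orthogonal to the $\cF_S$-measurable random variable
$\dbE[y(t)|\cF_S]$, and satisfies the It\^o isometry.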
Relation (\ref{2.11}) can be generalized a little bit more. To see
this, let us present the following lemma.
\ms
\bf Lemma 2.3. \sl Let $0\le S<t\le T$, $\eta\in
L^p_{\cF_S}(\O;\dbR^n)$ and $\z(\cd)\in
L^p_\dbF(\O;L^2(S,t;\dbR^n))$. Then
\bel{2.12}\dbE\[|\eta|^p+\(\int_S^t|\z(s)|^2ds\)^{p\over2}\]\le
K\dbE\Big|\eta+\int_S^t\z(s)dW(s)\Big|^p.\ee
Hereafter, $K>0$ stands for a generic constant which can be
different from line to line.
\ms
\it Proof. \rm For fixed $(S,t)\in\D$ (which means $0\le S\le t\le
T$) with $S<t$, let
$$\xi=\eta+\int_S^t\z(s)dW(s),$$
which is $\cF_t$-measurable. Let $(Y(\cd),Z(\cd))$ be the adapted
solution to the following BSDE:
$$Y(r)=\xi-\int_r^tZ(s)dW(s),\qq r\in[S,t].$$
Then it is standard that
\bel{2.13}\dbE\[\sup_{r\in[S,t]}|Y(r)|^p+\(\int_S^t|Z(s)|^2ds\)^{p\over2}\]\le
K\dbE|\xi|^p.\ee
Now,
$$Y(S)+\int_S^tZ(s)dW(s)=\xi=\eta+\int_S^t\z(s)dW(s).$$
By taking conditional expectation $\dbE[\cd\,|\,\cF_S]$, we see that
$$Y(S)=\eta.$$
Consequently,
$$\int_S^t\(Z(s)-\z(s)\)dW(s)=0,$$
which leads to
$$Z(s)=\z(s),\qq s\in[S,t],~\as$$
Then (\ref{2.12}) follows from (\ref{2.13}). \endpf
\ms
We have the following interesting corollary for elements in
$\cM^p[0,T]$ (comparing with (\ref{2.11})).
\ms
\bf Corollary 2.4. \sl For any $(y(\cd),z(\cd\,,\cd))\in\cM^p[0,T]$,
the following holds:
\bel{2.14}\dbE\(\int_S^t|z(t,s)|^2ds\)^{p\over2}\le
K\dbE|y(t)|^p,\qq\forall S\in[0,t].\ee
\ms
\it Proof. \rm Applying (\ref{2.12}) to (\ref{2.10}), we have
$$\dbE\(\int_S^t|z(t,s)|^2ds\)^{p\over2}\le\dbE\[\big|\dbE[y(t)\bigm|\cF_S]\big|^p+
\(\int_S^t|z(t,s)|^2ds\)^{p\over2}\]\le K\dbE|y(t)|^p.$$
This proves the corollary. \endpf
\ms
From the above, we see that for any
$(y(\cd),z(\cd\,,\cd))\in\cM^p[0,T]$, and any $\b>0$,
\bel{}\ba{ll}
\ns\ds K\dbE\int_0^Te^{\b t}|y(t)|^pdt\ge\dbE\int_0^Te^{\b
t}\[|\dbE y(t)|^p+\(\int_0^t|z(t,s)|^2ds\)^{p\over2}\]dt\\
\ns\ds\qq\qq\qq\qq\ge\dbE\int_0^Te^{\b
t}\(\int_0^t|z(t,s)|^2ds\)^{p\over2}dt.\ea\ee
Hence,
$$\ba{ll}
\ns\ds\|(y(\cd),z(\cd\,,\cd))\|_{\cH^p[0,T]}^p\equiv\dbE\[\int_0^T|y(t)|^pdt+\int_0^T\(\int_0^T
|z(t,s)|^2ds\)^{p\over2}dt\]\\
\ns\ds\le
K\dbE\[\int_0^T|y(t)|^pdt+\int_0^T\(\int_0^t|z(t,s)|^2ds\)^{p\over2}dt
+\int_0^T\(\int_t^T|z(t,s)|^2ds\)^{p\over2}dt\]\\
\ns\ds\le K\dbE\[\int_0^Te^{\b t}|y(t)|^pdt+\int_0^Te^{\b
t}\(\int_0^t|z(t,s)|^2ds\)^{p\over2}dt+
\int_0^Te^{\b t}\(\int_t^T|z(t,s)|^2ds\)^{p\over2}dt\]\\
\ns\ds\le K\dbE\[\int_0^Te^{\b t}|y(t)|^pdt+\int_0^Te^{\b
t}\(\int_t^T|z(t,s)|^2ds\)^{p\over2}dt\]\le
K\|(y(\cd),z(\cd\,,\cd))\|_{\cH^p[0,T]}^p.\ea$$
This means that we can use the following as an equivalent norm in
$\cM^p[0,T]$:
$$\|(y(\cd),z(\cd\,,\cd))\|_{\cM^p[0,T]}\equiv\left\{\dbE\int_0^Te^{\b
t}|y(t)|^pdt+\dbE\int_0^Te^{\b
t}\(\int_t^T|z(t,s)|^2ds\)^{p\over2}dt\right\}^{1\over p}.$$
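(These weighted norms are equivalent to the unweighted ones, since
$1\le e^{\b t}\le e^{\b T}$ on $[0,T]$; the constants of equivalence
depend only on $\b$, $p$ and $T$.)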
Sometimes we use $\cM^p_\b[0,T]$ for $\cM^p[0,T]$ to emphasize the
involved parameter $\b$.
\ms
To conclude this subsection, we state the following corollary of
Lemma 2.3 relevant to BSVIEs, whose proof is straightforward.
\ms
\bf Corollary 2.5. \sl Suppose $(\eta(\cd),\z(\cd ,\cd))$ is an
adapted M-solution to the following BSVIE:
\bel{2.15}\eta(t)=\xi(t)+\int_t^Tg(t,s)ds-\int_t^T\z(t,s)dW(s),\q
t\in[0,T],\ee
for $\xi(\cd)\in L^p_{\cF_T}(0,T;\dbR^n)$ and $g(\cd\,,\cd)\in
L^p(0,T;L^1_\dbF(0,T;\dbR^n))$. Then
\bel{2.16}\dbE\[|\eta(t)|^p+\(\int_t^T|\z(t,s)|^2ds\)^{p\over2}\]\le
K\dbE\[|\xi(t)|^p+\(\int_t^T|g(t,s)|ds\)^p\], \q \forall t\in[0,T].\ee
\rm
\ms
\subsection{Mean-field forward stochastic Volterra integral
equations.}
In this subsection, we study the following MF-FSVIE:
\bel{MF-FSVIE1}\ba{ll}
\ns\ds X(t)=\f(t)+\int_0^tb(t,s,X(s),\G^b(t,s,X(s)))ds\\
\ns\ds\qq\qq\qq+\int_0^t\si(t,s,X(s),\G^\si(t,s,X(s)))dW(s),\qq
t\in[0,T],\ea\ee
where
\bel{Gb-Gsi}\left\{\ba{ll}
\ns\ds\G^b(t,s,X)=\dbE'\[\th^b(t,s,\xi,X)\]_{\xi=X}\equiv\int_\O\th^b(t,s,\o,\o',X(\o),X(\o'))
\dbP(d\o'),\\
\ns\ds\G^\si(t,s,X)=\dbE'\[\th^\si(t,s,\xi,X)\]_{\xi=X}\equiv\int_\O\th^\si(t,s,\o,\o',X(\o),X(\o'))
\dbP(d\o'). \ea\right.\ee
We see that MF-FSVIE (\ref{MF-FSVIE1}) is slightly more general than
MF-FSVIE (\ref{MF-FSVIE}) because of the above definition
(\ref{Gb-Gsi}) of the operators $\G^b$ and $\G^\si$.
\ms
An $\dbF$-adapted process $X(\cd)$ is called a solution to
(\ref{MF-FSVIE1}) if (\ref{MF-FSVIE1}) is satisfied in the usual
It\^o sense. To guarantee the well-posedness of (\ref{MF-FSVIE1}),
let us make the following hypotheses.
\ms
{\bf(H1)} The maps $b:
\D^*\times\O\times\dbR^n\times\dbR^{m_1}\to\dbR^n$ and
$\si:\D^*\times\O\times\dbR^n\times\dbR^{m_2}\to\dbR^n$ are
measurable, and for all
$(t,x,\g,\g')\in[0,T]\times\dbR^n\times\dbR^{m_1}\times\dbR^{m_2}$,
the map
$$(s,\o)\mapsto(b(t,s,\o,x,\g),\si(t,s,\o,x,\g'))$$
is $\dbF$-progressively measurable on $[0,t]$. Moreover, there
exists some constant $L>0$ such that
\bel{b-si-Lip}\ba{ll}
\ns\ds
|b(t,s,\o,x_1,\g_1)-b(t,s,\o,x_2,\g_2)|+|\si(t,s,\o,x_1,\g'_1)-\si(t,s,\o,x_2,\g'_2)|\\
\ns\ds\qq\le
L(|x_1-x_2|+|\g_1-\g_2|+|\g'_1-\g'_2|),\\
\ns\ds\qq\qq\qq(t,s,\o)\in\D^*\times\O,~(x_i,\g_i,\g_i')\in\dbR^n\times\dbR^{m_1}\times\dbR^{m_2},~
i=1,2.\ea\ee
Moreover,
\bel{b-si-growth}\ba{ll}
\ns\ds|b(t,s,\o,x,\g)|+|\si(t,s,\o,x,\g')|\le
L(1+|x|+|\g|+|\g'|),\\
\ns\ds\qq\qq\qq(t,s,\o,x,\g,\g')\in\D^*\times\O\times\dbR^n\times\dbR^{m_1}\times\dbR^{m_2}.\ea\ee
\ms
{\bf(H2)} The maps
$\th^b:\D^*\times\O^2\times\dbR^{2n}\to\dbR^{m_1}$ and
$\th^\si:\D^*\times\O^2\times\dbR^{2n}\to\dbR^{m_2}$ are measurable,
and for all $(t,x,x')\in[0,T]\times\dbR^{2n}$, the map
$$(s,\o,\o')\mapsto(\th^b(t,s,\o,\o',x,x'),\th^\si(t,s,\o,\o',x,x'))$$
is $\dbF^2$-progressively measurable on $[0,t]$. Moreover, there
exists some constant $L>0$ such that
\bel{th-b-Lip}\ba{ll}
\ns\ds
|\th^b(t,s,\o,\o',x_1,x'_1)-\th^b(t,s,\o,\o',x_2,x'_2)|+|\th^\si(t,s,\o,\o',x_1,x'_1)
-\th^\si(t,s,\o,\o',x_2,x'_2)|\\
\ns\ds\qq\le
L(|x_1-x_2|+|x'_1-x'_2|),\qq(t,s,\o,\o')\in\D^*\times\O^2,
(x_i,x_i')\in\dbR^{2n},i=1,2,\ea\ee
and
\bel{th-b-growth}\ba{ll}
\ns\ds|\th^b(t,s,\o,\o',x,x')|+|\th^\si(t,s,\o,\o',x,x')|\le
L(1+|x|+|x'|),\\
\ns\ds\qq\qq\qq\qq\qq\qq(t,s,\o,\o')\in\D^*\times\O^2,
x,x'\in\dbR^n.\ea\ee
We will also need the following assumptions.
\ms
{\bf(H1)$'$} In addition to (H1), the map
$t\mapsto(b(t,s,\o,x,\g),\si(t,s,\o,x,\g'))$ is continuous on
$[s,T]$.
\ms
{\bf(H2)$'$} In addition to (H2), the map
$t\mapsto(\th^b(t,s,\o,\o',x,x'),\th^\si(t,s,\o,\o',x,x'))$ is
continuous on $[s,T]$.
\ms
Now, let us state and prove the following result concerning MF-FSVIE
(\ref{MF-FSVIE1}).
\ms
\bf Theorem 2.6. \sl Let {\rm(H1)--(H2)} hold. Then for any $p\ge2$,
and $\f(\cd)\in L^p_\dbF(0,T;\dbR^n)$, MF-FSVIE $(\ref{MF-FSVIE1})$
admits a unique solution $X(\cd)\in L^p_\dbF(0,T;\dbR^n)$, and the
following estimate holds:
\bel{|X|Lp-estimate}\dbE\int_0^T|X(t)|^pdt\le
K\(1+\dbE\int_0^T|\f(t)|^pdt\).\ee
Further, for $i=1,2$, let $X_i(\cd)\in L^p_\dbF(0,T;\dbR^n)$ be the
solutions of $(\ref{MF-FSVIE1})$ corresponding to $\f_i(\cd)\in
L^p_\dbF(0,T;\dbR^n)$ and
$b_i(\cd),\si_i(\cd),\th_i^b(\cd),\th_i^\si(\cd)$ satisfying
{\rm(H1)--(H2)}. Let
$$\left\{\ba{ll}
\ns\ds\G_i^b(t,s,X)=\dbE'\[\th_i^b(t,s,\xi,X)\]_{\xi=X}\equiv\int_\O\th_i^b(t,s,\o,\o',X(\o),X(\o'))
\dbP(d\o'),\\
\ns\ds\G^\si_i(t,s,X)=\dbE'\[\th_i^\si(t,s,\xi,X)\]_{\xi=X}\equiv\int_\O\th_i^\si(t,s,\o,\o',X(\o),
X(\o'))\dbP(d\o'),\ea\right.\q i=1,2.$$
Then the following stability estimate
holds:
\bel{|X-X|Lp-estimate}\ba{ll}
\ns\ds\dbE\int_0^T|X_1(t)-X_2(t)|^pdt\le K\Big\{\dbE\int_0^T|\f_1(t)-\f_2(t)|^pdt\\
\ns\ds\q+\dbE\int_0^T\(\int_0^t|b_1(t,s,X_1(s),\G^b_1(t,s,X_1(s)))-b_2(t,s,X_1(s),\G^b_2(t,s,X_1(s)))|ds\)^pdt\\
\ns\ds\q+\dbE\int_0^T\(\int_0^t|\si_1(t,s,X_1(s),\G^\si_1(t,s,X_1(s)))-\si_2(t,s,X_1(s),\G^\si_2(t,s,X_1(s)))|^2
ds\)^{p\over2}dt\Big\}.\ea\ee
Moreover, let {\rm(H1)$'$--(H2)$'$} hold. Then for any $p\ge2$, and
any $\f(\cd)\in C^p_\dbF([0,T];\dbR^n)$, the unique solution
$X(\cd)\in C^p_\dbF([0,T];\dbR^n)$, and estimate
$(\ref{|X|Lp-estimate})$ is replaced by the following:
\bel{|X|-estimate}\sup_{t\in[0,T]}\dbE|X(t)|^p\le
K\Big\{1+\sup_{t\in[0,T]}\dbE|\f(t)|^p\Big\}.\ee
Also, for $i=1,2$, let $X_i(\cd)\in C^p_\dbF([0,T];\dbR^n)$ be the
solutions of $(\ref{MF-FSVIE1})$ corresponding to $\f_i(\cd)\in
C^p_\dbF([0,T];\dbR^n)$ and
$b_i(\cd),\si_i(\cd),\th_i^b(\cd),\th_i^\si(\cd)$ satisfying
{\rm(H1)$'$--(H2)$'$}. Then $(\ref{|X-X|Lp-estimate})$ is replaced
by the following:
\bel{|X-X|-estimate}\ba{ll}
\ns\ds\sup_{t\in[0,T]}\dbE|X_1(t)-X_2(t)|^p\le K\Big\{\sup_{t\in[0,T]}\dbE|\f_1(t)-\f_2(t)|^p\\
\ns\ds+\sup_{t\in[0,T]}\dbE\(\int_0^t|b_1(t,s,X_1(s),\G^b_1(t,s,X_1(s)))-b_2(t,s,X_1(s),\G^b_2(t,s,X_1(s)))|ds\)^p\\
\ns\ds+\sup_{t\in[0,T]}\dbE\(\int_0^t|\si_1(t,s,X_1(s),\G^\si_1(t,s,X_1(s)))-\si_2(t,s,X_1(s),\G^\si_2(t,s,X_1(s)))|^2
ds\)^{p\over2}\Big\}.\ea\ee
\ms
\it Proof. \rm By (H2), similar to (\ref{2.7})--(\ref{2.8}), making
use of (\ref{th-b-growth}), for any $X(\cd)\in
L^p_\dbF(0,T;\dbR^n)$, we have
\bel{}|\G^b(t,s,X(s))|+|\G^\si(t,s,X(s))|\le
L\(1+\dbE|X(s)|+|X(s)|\).\ee
Thus, if $X(\cd)\in L^p_\dbF(0,T;\dbR^n)$ is a solution to
(\ref{MF-FSVIE1}) with $\f(\cd)\in L^p_\dbF(0,T;\dbR^n)$, then by
(\ref{b-si-growth}),
\bel{2.29}\ba{ll}
\ns\ds\dbE|X(t)|^p\le
3^{p-1}\dbE\Big\{|\f(t)|^p+\Big|\int_0^tb(t,s,X(s),\G^b(t,s,X(s)))ds\Big|^p\\
\ns\ds\qq\qq\qq\qq+\Big|\int_0^t\si(t,s,X(s),\G^\si(t,s,X(s)))dW(s)\Big|^p\Big\}\\
\ns\ds\le
3^{p-1}\Big\{\dbE|\f(t)|^p+\dbE\(\int_0^tL\[1+|X(s)|+|\G^b(t,s,X(s))|\]ds\)^p\\
\ns\ds\qq\qq+\dbE\(\int_0^tL^2\[1+|X(s)|+|\G^\si(t,s,X(s))|\]^2ds\)^{p\over2}\Big\}\\
\ns\ds\le K\Big\{1+\dbE|\f(t)|^p+\int_0^t\dbE|X(s)|^pds\Big\}.\ea\ee
Consequently,
$$\int_0^t\dbE|X(r)|^pdr\le
K\Big\{1+\int_0^t\dbE|\f(r)|^pdr+\int_0^t\[\int_0^r\dbE|X(s)|^pds\]dr\Big\},\qq0\le
t\le T.$$
Using Gronwall's inequality, we obtain (\ref{|X|Lp-estimate}).
\ms
Now, let $\d>0$ be undetermined. For any $x(\cd)\in
L^p_\dbF(0,\d;\dbR^n)$, define
$$\ba{ll}
\ns\ds\cG(x(\cd))(t)=\f(t)+\int_0^tb(t,s,x(s),\G^b(t,s,x(s)))ds\\
\ns\ds\qq\qq\qq+\int_0^t\si(t,s,x(s),\G^\si(t,s,x(s)))dW(s),\qq
t\in[0,\d].\ea$$
Then we have
$$\ba{ll}
\ns\ds\dbE\int_0^\d|\cG(x(\cd))(t)|^pdt\le
K\dbE\Big\{\int_0^\d|\f(t)|^pdt+\int_0^\d\(\int_0^t\(1+|x(s)|+|\G^b(t,s,x(s))|\)ds\)^pdt\\
\ns\ds\qq\qq\qq\qq\qq\qq+\int_0^\d\Big|\int_0^t\si(t,s,x(s),\G^\si(t,s,x(s)))dW(s)\Big|^pdt
\Big\}\\
\ns\ds\le
K\Big\{1+\dbE\int_0^\d|\f(t)|^pdt+\dbE\int_0^\d|x(t)|^pdt\Big\}.\ea$$
Thus, $\cG:L^p_\dbF(0,\d;\dbR^n)\to L^p_\dbF(0,\d;\dbR^n)$. Next,
for any $x_1(\cd),x_2(\cd)\in L^p_\dbF(0,\d;\dbR^n)$, we have
(making use of (\ref{b-si-Lip}) and (\ref{th-b-Lip}))
$$\ba{ll}
\ns\ds\dbE\int_0^\d|\cG(x_1(\cd))(t)-\cG(x_2(\cd))(t)|^pdt\\
\ns\ds\le2^{p-1}\Big\{\dbE\int_0^\d\[\int_0^tL\(|x_1(s)-x_2(s)|+|\G^b(t,s,x_1(s))-\G^b(t,s,x_2(s))|\)ds\]^pdt\\
\ns\ds\q+\int_0^\d\dbE\[\int_0^tL^2\(|x_1(s)-x_2(s)|^2+|\G^\si(t,s,x_1(s))-\G^\si(t,s,x_2(s))|^2\)
ds\]^{p\over2}dt\Big\}\\
\ns\ds\le K_0\d\dbE\int_0^\d|x_1(t)-x_2(t)|^pdt,\ea$$
with $K_0>0$ being an absolute constant (only depending on $L$ and
$p$). Then letting $\d={1\over2K_0}$, we see that
$\cG:L^p_\dbF(0,\d;\dbR^n)\to L^p_\dbF(0,\d;\dbR^n)$ is a
contraction. Hence, MF-FSVIE (\ref{MF-FSVIE1}) admits a unique
solution $X(\cd)\in L^p_\dbF(0,\d;\dbR^n)$.
\ms
Next, for $t\in[\d,2\d]$, we write (\ref{MF-FSVIE1}) as
\bel{MF-FSVIE2}\ba{ll}
\ns\ds X(t)=\h\f(t)+\int_\d^tb(t,s,X(s),\G^b(t,s,X(s)))ds\\
\ns\ds\qq\qq\qq+\int_\d^t\si(t,s,X(s),\G^\si(t,s,X(s)))dW(s),\ea\ee
with
$$\ba{ll}
\ns\ds\h\f(t)=\f(t)+\int_0^\d
b(t,s,X(s),\G^b(t,s,X(s)))ds+\int_0^\d\si(t,s,X(s),\G^\si(t,s,X(s)))dW(s).\ea$$
Then a similar argument as above applies to obtain a unique solution
of (\ref{MF-FSVIE2}) on $[\d,2\d]$. It is important to note that the
step-length $\d>0$ is uniform. Hence, by induction, we obtain the
unique solvability of (\ref{MF-FSVIE1}) on $[0,T]$.
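(Since $\d$ depends only on $L$ and $p$, at most $\lceil
T/\d\rceil$ such iterations are needed to cover $[0,T]$.)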
\ms
Now, for $i=1,2$, let $X_i(\cd)\in L^p_\dbF(0,T;\dbR^n)$ be the
solutions of (\ref{MF-FSVIE1}) corresponding to $\f_i(\cd)\in
L^p_\dbF(0,T;\dbR^n)$ and
$b_i(\cd),\si_i(\cd),\th_i^b(\cd),\th_i^\si(\cd)$ (satisfying
(H1)--(H2)). Then
$$\ba{ll}
\ns\ds\dbE|X_1(t)-X_2(t)|^p\le3^{p-1}\dbE\Big\{|\f_1(t)-\f_2(t)|^p\\
\ns\ds\qq+\(\int_0^t|b_1(t,s,X_1(s),\G^b_1(t,s,X_1(s)))-b_2(t,s,X_2(s),\G^b_2(t,s,X_2(s)))|ds\)^p\\
\ns\ds\qq+\Big|\int_0^t\(\si_1(t,s,X_1(s),\G^\si_1(t,s,X_1(s)))-\si_2(t,s,X_2(s),\G^\si_2(t,s,X_2(s)))\)
dW(s)\Big|^p\Big\}\\
\ns\ds\le
K\Big\{\dbE|\f_1(t)-\f_2(t)|^p+\dbE\int_0^t|X_1(s)-X_2(s)|^pds\\
\ns\ds\qq+\dbE\(\int_0^t|b_1(t,s,X_1(s),\G^b_1(t,s,X_1(s)))-b_2(t,s,X_1(s),\G^b_2(t,s,X_1(s)))|ds\)^p\\
\ns\ds\qq+\dbE\Big|\int_0^t\(\si_1(t,s,X_1(s),\G^\si_1(t,s,X_1(s)))-\si_2(t,s,X_1(s),\G^\si_2(t,s,X_1(s)))\)
dW(s)\Big|^p\Big\}.\ea$$
Then, by Gronwall's inequality, we obtain estimate (\ref{|X-X|Lp-estimate}).
\ms
The conclusions under (H1)$'$--(H2)$'$ are easy to obtain. \endpf
\ms
\subsection{Linear MF-FSVIEs and MF-BSVIEs.}
Let us now look at linear MF-FSVIEs, by which we mean the
following:
\bel{L-MF-FSVIE}\ba{ll}
\ns\ds X(t)=\f(t)+\int_0^t\(A_0(t,s)X(s)+\dbE'\[C_0(t,s)X(s)\]\)ds\\
\ns\ds\qq\qq\q+\int_0^t\(A_1(t,s)X(s)+\dbE'\[C_1(t,s)X(s)\]\)dW(s),\qq
t\in[0,T].\ea\ee
For such an equation, we introduce the following hypotheses.
\ms
{\bf(L1)} The maps
$$A_0,A_1:\D^*\times\O\to\dbR^{n\times n},\q
C_0,C_1:\D^*\times\O^2\to\dbR^{n\times n},$$
are measurable and uniformly bounded. For any $t\in[0,T]$,
$s\mapsto(A_0(t,s),A_1(t,s))$ is $\dbF$-progressively measurable on
$[0,t]$, and $s\mapsto(C_0(t,s),C_1(t,s))$ is $\dbF^2$-progressively
measurable on $[0,t]$.
\ms
{\bf(L1)$'$} In addition to (L1), the map
$$t\mapsto(A_0(t,s,\o),A_1(t,s,\o),C_0(t,s,\o,\o'),C_1(t,s,\o,\o'))$$
is continuous on $[s,T]$.
\ms
Clearly, by defining
$$\left\{\ba{ll}
\ns\ds
b(t,s,\o,x,\g)=A_0(t,s,\o)x+\g,\q\th^b(t,s,\o,\o',x,x')=C_0(t,s,\o,\o')x',\\
\ns\ds\si(t,s,\o,x,\g')=A_1(t,s,\o)x+\g',\q\th^\si(t,s,\o,\o',x,x')=C_1(t,s,\o,\o')x',\ea\right.$$
we see that (\ref{L-MF-FSVIE}) is a special case of
(\ref{MF-FSVIE1}). Moreover, (L1) implies (H1)--(H2), and (L1)$'$
implies (H1)$'$--(H2)$'$. Hence, we have the following corollary of
Theorem 2.6.
\ms
\bf Corollary 2.7. \sl Let {\rm(L1)} hold, and $p\ge2$. Then for any
$\f(\cd)\in L^p_\dbF(0,T;\dbR^n)$, $(\ref{L-MF-FSVIE})$ admits a
unique solution $X(\cd)\in L^p_\dbF(0,T;\dbR^n)$, and estimate
$(\ref{|X|Lp-estimate})$ holds. Further, let $p>2$. If for $i=1,2$,
$A_0^i(\cd),A_1^i(\cd),C_0^i(\cd)$, $C_1^i(\cd)$ satisfy {\rm(L1)},
$\f_i(\cd)\in L^p_\dbF(0,T;\dbR^n)$, and $X_i(\cd)\in
L^p_\dbF(0,T;\dbR^n)$ are the corresponding solutions to
$(\ref{L-MF-FSVIE})$, then for any $r\in(2,p)$,
\bel{stability3}\2n\ba{ll}
\ns\ds\dbE\int_0^T|X_1(t)-X_2(t)|^rdt\le
K\dbE\int_0^T|\f_1(t)-\f_2(t)|^rdt+K\(1+\dbE\int_0^T|\f_1(t)|^pdt\)^{r\over
p}\\
\ns\ds\cd\2n\int_0^T\3n\Big\{\[\dbE\(\1n\int_0^t\3n|A^1_0(t,s)\1n-\1n
A^2_0(t,s)|^{r\over r-1}ds\1n\)\1n^{(r-1)p\over p-r}\]\1n^{p-r\over
p}\3n+\2n\[\dbE^2\1n\(\1n\int_0^t\2n|C_0^1(t,s)\1n-\1n
C_0^2(t,s)|^{r\over
r-1}ds\1n\)\1n^{(r-1)p\over p-r}\]^{p-r\over p}\\
\ns\ds+\1n\[\dbE\(\1n\int_0^t\2n|A^1_1(t,s)\1n-\1n
A_1^2(t,s)|^{2r\over
r-2}ds\1n\)\1n^{(r-2)p\over2(p-r)}\]\1n^{p-r\over
p}\3n+\2n\[\dbE^2\(\1n\int_0^t\2n|C_1^1(t,s)\1n-\1n
C^2_1(t,s)|^{2r\over r-2}ds\)^{(r-2)p\over2(p-r)}\]^{p-r\over
p}\1n\Big\}dt.\ea\ee
Moreover, let {\rm(L1)$'$} hold. Then for any $\f(\cd)\in
C_\dbF^p([0,T];\dbR^n)$, $(\ref{L-MF-FSVIE})$ admits a unique
solution $X(\cd)\in C_\dbF^p([0,T];\dbR^n)$, and estimate
$(\ref{|X|-estimate})$ holds. Now for $i=1,2$, let
$A_0^i(\cd),A_1^i(\cd),C_0^i(\cd)$, $C_1^i(\cd)$ satisfy
{\rm(L1)$'$}, $\f_i(\cd)\in C^p_\dbF([0,T];\dbR^n)$, and
$X_i(\cd)\in C^p_\dbF([0,T];\dbR^n)$ be the corresponding solutions
to $(\ref{L-MF-FSVIE})$, then for any $2<r<p$,
\bel{|X-X|}\2n\ba{ll}
\ns\ds\sup_{t\in[0,T]}\dbE|X_1(t)-X_2(t)|^r\le
K\sup_{t\in[0,T]}\dbE|\f_1(t)-\f_2(t)|^r\\
\ns\ds\q+K\(1+\sup_{t\in[0,T]}\dbE|\f_1(t)|^p\)^{r\over
p}\Big\{\sup_{t\in[0,T]}\[\dbE\(\1n\int_0^t\3n|A^1_0(t,s)\1n-\1n
A^2_0(t,s)|^{r\over r-1}ds\1n\)\1n^{(r-1)p\over p-r}\]\1n^{p-r\over
p}\\
\ns\ds\q+\sup_{t\in[0,T]}\[\dbE^2\1n\(\1n\int_0^t\2n|C_0^1(t,s)\1n-\1n
C_0^2(t,s)|^{r\over
r-1}ds\1n\)\1n^{(r-1)p\over p-r}\]^{p-r\over p}\\
\ns\ds\q+\sup_{t\in[0,T]}\[\dbE\(\1n\int_0^t\2n|A^1_1(t,s)\1n-\1n
A_1^2(t,s)|^{2r\over
r-2}ds\1n\)\1n^{(r-2)p\over2(p-r)}\]\1n^{p-r\over
p}\\
\ns\ds\q+\sup_{t\in[0,T]}\[\dbE^2\(\1n\int_0^t\2n|C_1^1(t,s)\1n-\1n
C^2_1(t,s)|^{2r\over r-2}ds\)^{(r-2)p\over2(p-r)}\]^{p-r\over
p}\1n\Big\}.\ea\ee
\ms
\it Proof. \rm We need only to prove the stability estimate. Let
$X_i(\cd)\in L^p_\dbF(0,T;\dbR^n)$ be the solutions to the linear
MF-FSVIEs corresponding to the coefficients
$(A_0^i(\cd),C^i_0(\cd),A^i_1(\cd),C^i_1(\cd))$ satisfying (L1) and
free term $\f_i(\cd)\in L^p_\dbF(0,T;\dbR^n)$. Then we have
$$\dbE\int_0^T|X_i(s)|^pds\le K\(1+\dbE\int_0^T|\f_i(s)|^pds\).$$
Now, for any $2<r<p$,
$$\3n\3n\3n\3n\ba{ll}
\ns\ds\dbE\int_0^T|X_1(t)-X_2(t)|^rdt\le K\Big\{\dbE\int_0^T|\f_1(t)-\f_2(t)|^rdt\\
\ns\ds+\dbE\int_0^T\[\int_0^t\(|A_0^1(t,s)-A_0^2(t,s)||X_1(s)|+\dbE'[\,|C_0^1(t,s)
-C_0^2(t,s)||X_1(s)|\,]\)
ds\]^rdt\\
\ns\ds+\dbE\int_0^T\[\int_0^t\(|A^1_1(t,s)-A_1^2(t,s)|^2|X_1(s)|^2
+\dbE'[\,|C_1^1(t,s)-C^2_1(t,s)|^2|X_1(s)|^2\,]\)
ds\]^{r\over2}dt\Big\}\\
\ns\ds\le\1n
K\1n\Big\{\dbE\int_0^T|\f_1(t)-\f_2(t)|^rdt+\dbE\int_0^T\(\int_0^t|A^1_0(t,s)-A^2_0(t,s)|
|X_1(s)|ds\)^rdt\\
\ns\ds\q+\dbE^2\3n\int_0^T\3n\1n\(\1n\int_0^t\2n|C_0^1(t,s)\1n-\1n
C_0^2(t,s)||X_1\1n(s)|ds\)^r\2n
dt\1n+\1n\dbE\2n\int_0^T\3n\1n\(\1n\int_0^t\2n|A^1_1(t,s)\1n-\1n
A_1^2(t,s)|^2|X_1\1n(s)|^2
ds\1n\)^{\1n{r\over2}}dt\\
\ns\ds\q+\dbE^2\int_0^T\(\int_0^t|C_1^1(t,s)-C^2_1(t,s)|^2|X_1(s)|^2ds\)^{r\over2}dt\Big\}\\
\ns\ds\le\1n K\1n\Big\{\1n\dbE\3n\int_0^T\3n|\f_1(t)-\f_2(t)|^r\1n
dt\1n+\1n\dbE\2n\int_0^T\3n\(\int_0^t|A^1_0(t,s)-A^2_0(t,s)|^{r\over
r-1}ds\)^{r-1}\(\int_0^t
|X_1(s)|^rds\)dt\\
\ns\ds\q+\dbE^2\int_0^T\(\int_0^t|C_0^1(t,s)-C_0^2(t,s)|^{r\over
r-1}ds\)^{r-1}
\(\int_0^t|X_1(s)|^rds\)dt\\
\ns\ds\q+\dbE\int_0^T\(\int_0^t|A^1_1(t,s)-A_1^2(t,s)|^{2r\over
r-2}ds\)^{r-2\over2}\(\int_0^t|X_1(s)|^r
ds\)dt\\
\ns\ds\q+\dbE^2\int_0^T\(\int_0^t|C_1^1(t,s)-C^2_1(t,s)|^{2r\over
r-2}ds\)^{r-2\over2}\(\int_0^t|X_1(s)|^rds\)dt\Big\}\\
\ns\ds\le
K\dbE\int_0^T|\f_1(t)-\f_2(t)|^rdt+K\dbE\int_0^T\[\dbE\(\int_0^t
|X_1(s)|^rds\)^{p\over r}\]^{r\over p}\\
\ns\ds\q\cd\Big\{\1n\[\dbE\(\1n\int_0^t\2n|A^1_0(t,s)\1n-\1n
A^2_0(t,s)|^{r\over r-1}ds\)\1n^{(r-1)p\over p-r}\]\1n^{p-r\over
p}\3n+\2n\[\dbE^2\(\1n\int_0^t\2n|C_0^1(t,s)\1n-\1n
C_0^2(t,s)|^{r\over
r-1}ds\1n\)\1n^{(r-1)p\over p-r}\]^{p-r\over p}\\
\ns\ds\q+\1n\[\dbE\(\1n\int_0^t\2n|A^1_1(t,s)\1n-\1n
A_1^2(t,s)|^{2r\over r-2}ds\)^{(r-2)p\over2(p-r)}\]\1n^{p-r\over
p}\3n+\2n\[\dbE^2\1n\(\1n\int_0^t|C_1^1(t,s)\1n-\1n
C^2_1(t,s)|^{2r\over
r-2}ds\1n\)\1n^{(r-2)p\over2(p-r)}\]\1n^{p-r\over
p}\1n\Big\}dt.\ea$$
Then (\ref{stability3}) follows. The case where (L1)$'$ holds can be
proved similarly. \endpf
\ms
We point out that linear MF-FSVIE (\ref{L-MF-FSVIE}) is general
enough in some sense. To see this, let us formally look at the
variational equation of (\ref{MF-FSVIE1}). More precisely, let
$X^\d(\cd)$ be the unique solution of (\ref{MF-FSVIE1}) with
$\f(\cd)$ replaced by $\f(\cd)+\d\bar\f(\cd)$. We formally let
$$\bar X(t)=\lim_{\d\to0}{X^\d(t)-X(t)\over\d}.$$
Then $\bar X(\cd)$ should satisfy the following linear MF-FSVIE:
\bel{bar X}\ba{ll}
\ns\ds\bar X(t)=\bar\f(t)+\int_0^t\(b_x(t,s)\bar
X(s)+b_\g(t,s)\dbE'\[\th^b_x(t,s)\bar X(s,\o)+\th^b_{x'}(t,s)\bar
X(s,\o')\]\)ds\\
\ns\ds\qq\q+\int_0^t\(\si_x(t,s)\bar
X(s)+\si_\g(t,s)\dbE'\[\th^\si_x(t,s)\bar
X(s,\o)+\th^\si_{x'}(t,s)\bar X(s,\o')\]\)dW(s),\ea\ee
where (with a slight abuse of the notation $\g$)
\bel{bar X2}\left\{\ba{ll}
\ns\ds
b_x(t,s)=b_x(t,s,\o,X(s,\o),\G^b(t,s,X(s,\o))),\q\th^b_x(t,s)=\th^b_x(t,s,\o,\o',X(s,\o),X(s,\o')),\\
\ns\ds
b_\g(t,s)=b_\g(t,s,\o,X(s,\o),\G^b(t,s,X(s,\o))),\q\th^b_{x'}(t,s)=\th^b_{x'}(t,s,\o,\o',X(s,\o),X(s,\o')),\\
\ns\ds
\si_x(t,s)=\si_x(t,s,\o,X(s,\o),\G^\si(t,s,X(s,\o))),\q\th^\si_x(t,s)=\th^\si_x(t,s,\o,\o',X(s,\o),X(s,\o')),\\
\ns\ds
\si_\g(t,s)=\si_\g(t,s,\o,X(s,\o),\G^\si(t,s,X(s,\o))),\q\th^\si_{x'}(t,s)=\th^\si_{x'}(t,s,\o,\o',X(s,\o),X(s,\o')).
\ea\right.\ee
It is interesting to note that (\ref{bar X}) can be written as
follows:
\bel{bar X3}\ba{ll}
\ns\ds\bar
X(t)=\bar\f(t)+\int_0^t\Big\{\(b_x(t,s)+b_\g(t,s)\dbE'[\th^b_x(t,s)]\)\bar
X(s)+\dbE'\[b_\g(t,s)\th^b_{x'}(t,s)\bar
X(s)\]\Big\}ds\\
\ns\ds\qq\q+\int_0^t\Big\{\(\si_x(t,s)+\si_\g(t,s)\dbE'[\th^\si_x(t,s)]\)\bar
X(s,\o)+\dbE'\[\si_\g(t,s)\th^\si_{x'}(t,s)\bar
X(s,\o')\]\Big\}dW(s),\ea\ee
which is a special case of (\ref{L-MF-FSVIE}).
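Here we have used the fact that, for fixed $\o$, $\bar X(s,\o)$ does
not depend on $\o'$, so that
$\dbE'[\th^b_x(t,s)\bar X(s,\o)]=\dbE'[\th^b_x(t,s)]\bar X(s,\o)$,
and similarly for $\th^\si_x$.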
\ms
Mimicking the above, we see that a general linear MF-BSVIE should take
the following form:
\bel{L-MF-BSVIE}\ba{ll}
\ns\ds
Y(t)=\psi(t)+\int_t^T\(\bar A_0(t,s)Y(s)+\bar B_0(t,s)Z(t,s)+\bar C_0(t,s)Z(s,t)\\
\ns\ds\qq\qq+\dbE'\[\bar A_1(t,s)Y(s)+\bar B_1(t,s)Z(t,s)+\bar
C_1(t,s)Z(s,t)\]\)ds-\int_t^TZ(t,s)dW(s).\ea\ee
For the coefficients, we should adopt the following hypothesis.
\ms
{\bf(L2)} The maps
$$\bar A_0,\bar B_0,\bar C_0:\D\times\O\to\dbR^{n\times n},\q
\bar A_1,\bar B_1,\bar C_1:\D\times\O^2\to\dbR^{n\times n}$$
are measurable and uniformly bounded. Moreover, for any $t\in[0,T]$,
$s\mapsto(\bar A_0(t,s),\bar B_0(t,s),\bar C_0(t,s))$ is
$\dbF$-progressively measurable on $[t,T]$, and $s\mapsto(\bar
A_1(t,s),\bar B_1(t,s),\bar C_1(t,s))$ is $\dbF^2$-progressively
measurable on $[t,T]$.
\ms
We expect that under (L2), for reasonable $\psi(\cd)$, the above
(\ref{L-MF-BSVIE}) will have a unique adapted M-solution. Such a
result will be a consequence of the main result of the next section.
\ms
\section{Well-posedness of MF-BSVIEs.}
In this section, we are going to establish the well-posedness of our
MF-BSVIEs. To begin with, let us introduce the following
hypothesis.
\ms
{\bf(H3)$_q$} The map $\th:\D\times\O^2\times\dbR^{6n}\to\dbR^m$
satisfies (H0)$_q$. The map
$g:\D\times\O\times\dbR^{3n}\times\dbR^m\to\dbR^n$ is measurable and
for all $(t,y,z,\h z,\g)\in[0,T]\times\dbR^{3n}\times\dbR^m$, the
map $(s,\o)\mapsto g(t,s,\o,y,z,\h z,\g)$ is $\dbF$-progressively
measurable on $[t,T]$. Moreover, there exist constants $L>0$ and $q\ge2$ such
that
\bel{g-Lip}\ba{ll}
\ns\ds|g(t,s,\o,y_1,z_1,\h z_1,\g_1)-g(t,s,\o,y_2,z_2,\h z_2,\g_2)|\\
\ns\ds\le L\(|y_1-y_2|+|z_1-z_2|+|\h z_1-\h
z_2|+|\g_1-\g_2|\),\\
\ns\ds\qq\qq\qq\qq\forall(t,s,\o)\in\D\times\O,(y_i,z_i,\h
z_i,\g_i)\in\dbR^{3n}\times\dbR^m,i=1,2,\ea\ee
and
\bel{g-growth}\ba{ll}
\ns\ds|g(t,s,\o,y,z,\h z,\g)|\le L\(1+|y|+|z|+|\h z|^{2\over q}+|\g|\),\\
\ns\ds\qq\qq\qq\qq\qq\forall(t,s,\o)\in\D\times\O,(y,z,\h
z,\g)\in\dbR^{3n}\times\dbR^m.\ea\ee
Similar to (H0)$_\infty$ in the previous section, (H3)$_\infty$ is
understood to mean that (\ref{th-growth2}) holds and (\ref{g-growth}) is
replaced by the following:
\bel{g-growth2}\ba{ll}
\ns\ds|g(t,s,\o,y,z,\h z,\g)|\le L\(1+|y|+|z|+|\g|\),\\
\ns\ds\qq\qq\qq\qq\qq\forall(t,s,\o)\in\D\times\O,(y,z,\h
z,\g)\in\dbR^{3n}\times\dbR^m.\ea\ee
\subsection{A special MF-BSVIE.}
In this subsection, we first consider the following special
MF-BSVIE:
\bel{MF-BSVIE-s}Y(t)=\psi(t)+\int_t^T\wt g(t,s,Z(t,s),\wt
\G(t,s,Z(t,s)))ds-\int_t^TZ(t,s)dW(s),\q t\in[0,T],\ee
where
$$\left\{\ba{ll}
\ns\ds\wt g(t,s,Z,\g)=g(t,s,y(s),Z,z(s,t),\g),\\
\ns\ds\wt\G(t,s,Z)=\G(t,s,y(s),Z,z(s,t))\equiv\dbE'\[\th(t,s,y(s),Z,z(s,t),
y',z',\h z')\]_{(y',z',\h z')=(y(s),Z,z(s,t))},\ea\right.$$
for some given $(y(\cd),z(\cd\,,\cd))\in\cM^p[0,T]$. Therefore,
$$\wt g(t,s,Z(t,s),\wt\G(t,s,Z(t,s)))=g(t,s,y(s),Z(t,s),z(s,t),\G(t,s,y(s),Z(t,s),z(s,t))).$$
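For instance, if $g(t,s,y,z,\h z,\g)=\h z$ and $\th(\cd)\equiv0$,
then $\wt g(t,s,Z(t,s),\wt\G(t,s,Z(t,s)))=z(s,t)$, and
(\ref{MF-BSVIE-s}) reduces to an equation of the form treated in
Example 3.3 below.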
Note that we may take much more general $\wt g(\cd)$ and
$\wt\G(\cd)$. But the above is sufficient for our purpose, and by
restricting to such a case, we avoid stating a lengthy assumption
similar to (H3)$_q$. We now state and prove the following result
concerning MF-BSVIE (\ref{MF-BSVIE-s}).
\ms
\bf Proposition 3.1. \sl Let {\rm(H3)$_q$} hold. Then for any $p>1$
and $\psi(\cd)\in L^p_{\cF_T}(0,T;\dbR^n)$, MF-BSVIE
$(\ref{MF-BSVIE-s})$ admits a unique M-solution
$(Y(\cd),Z(\cd\,,\cd))\in\cM^p[0,T]$. Moreover, the following
estimate holds:
\bel{Lp-s}\ba{ll}
\ns\ds\dbE\[|Y(t)|^p+\(\int_t^T|Z(t,s)|^2ds\)^{p\over2}\]\le
K\dbE\[|\psi(t)|^p+\(\int_t^T|\wt
g(t,s,0,\wt\G(t,s,0))|ds\)^p\].\ea\ee
Further, for $i=1,2$, let $\psi_i(\cd)\in L^p_{\cF_T}(0,T;\dbR^n)$,
$(y_i(\cd),z_i(\cd\,,\cd))\in\cM^p[0,T]$, and
$$\left\{\ba{ll}
\ns\ds\wt
g_i(t,s,Z(t,s),\wt\G_i(t,s,Z(t,s)))=g_i(t,s,y_i(s),Z(t,s),z_i(s,t),\G_i(t,s,y_i(s),Z(t,s),z_i(s,t))),\\
\ns\ds\wt\G_i(t,s,Z)=\dbE'\[\th_i(t,s,y_i(s),Z,z_i(s,t),y',z',\h
z')\]_{(y',z',\hat z')=(y_i(s),Z,z_i(s,t))}\ea\right.$$
with $g_i(\cd)$ and $\th_i(\cd)$ satisfying {\rm(H3)$_q$}. Then the
corresponding M-solutions $(Y_i(\cd),Z_i(\cd\,,\cd))$ satisfy the
following stability estimate:
\bel{stability-s}\ba{ll}
\ns\ds\dbE\[|Y_1(t)-Y_2(t)|^p+\(\int_t^T|Z_1(t,s)-Z_2(t,s)|^2ds\)^{p\over2}\]\le
K\dbE\[|\psi_1(t)-\psi_2(t)|^p\\
\ns\ds\qq+\(\int_t^T|\wt g_1(t,s,Z_1(t,s),\wt\G_1(t,s,Z_1(t,s)))-\wt
g_2(t,s,Z_1(t,s),\wt\G_2(t,s,Z_1(t,s)))|ds\)^p\].\ea\ee
\ms
\it Proof. \rm Fix $t\in[0,T)$. Consider the following MF-BSDE
(parameterized by $t$):
\bel{BSVIE-s}\eta(r)=\psi(t)+\int_r^T\wt g(t,s,\z(s),\wt
\G(t,s,\z(s)))ds-\int_r^T\z(s)dW(s),\q r\in[t,T].\ee
If $p\in(1,2]$, it follows from (H3)$_{q}$ that
$$ \ba{ll} \ns\ds
\dbE\int_0^T\(\int_t^T|\wt g(t,s,0,\wt\G(t,s,0))|ds\)^pdt\\
\ns\ds \le K \dbE\int_0^T\(\int_t^T(|y(s)|+|z(s,t)|
+\dbE|y(s)|+\dbE|z(s,t)|)ds\)^pdt + K \\
\ns\ds \le
K \dbE\int_0^T\int_t^T|y(s)|^pdsdt + K \dbE\int_0^T\(\int_0^s |z(s,t)|^2dt\)^{p\over
2}ds + K <\infty. \ea $$
As to the case of $p=q>2$, similarly we have
$$ \ba{ll} \ns\ds
\dbE\int_0^T\(\int_t^T|\wt g(t,s,0,\wt\G(t,s,0))|ds\)^qdt\\
\ns\ds \le K \dbE\int_0^T\(\int_t^T(|y(s)|+|z(s,t)|^{2\over
q}+\dbE|y(s)|+\dbE|z(s,t)|^{2\over q})ds\)^qdt + K\\
\ns\ds \le
K \dbE\int_0^T\int_t^T|y(s)|^qdsdt + K \dbE\int_0^T \int_t^T|z(s,t)|^2ds dt + K <\infty.\ea $$
Similar to a standard argument for BSDEs, making use of the contraction
mapping theorem, we can show that the above MF-BSDE admits a unique
adapted solution
$$(\eta(\cd),\z(\cd))\equiv\(\eta(\cd\,;t,\psi(t)),\z(\cd\,;t,\psi(t))\).$$
Moreover, the following estimate holds:
\bel{Lp-BSDE}\ba{ll}
\ns\ds\dbE\[\sup_{r\in[t,T]}|\eta(r;t,\psi(t))|^p+\(\int_t^T|\z(s;t,\psi(t))|^2ds\)^{p\over2}\]\\
\ns\ds\le K\dbE\[|\psi(t)|^p+\(\int_t^T|\wt
g(t,s,0,\wt\G(t,s,0))|ds\)^p\].\ea\ee
Further, for $i=1,2$, let $\psi_i(\cd)\in L^p_{\cF_T}(0,T;\dbR^n)$,
$(y_i(\cd),z_i(\cd\,,\cd))\in\cM^p[0,T]$, and
$$\left\{\ba{ll}
\ns\ds\wt
g_i(t,s,Z(t,s),\wt\G_i(t,s,Z(t,s)))=g_i(t,s,y_i(s),Z(t,s),z_i(s,t),\G_i(t,s,y_i(s),Z(t,s),z_i(s,t))),\\
\ns\ds\wt\G_i(t,s,Z)=\dbE'\[\th_i(t,s,y_i(s),Z,z_i(s,t),y',z',\h
z')\]_{(y',z',\hat z')=(y_i(s),Z,z_i(s,t))}\ea\right.$$
with $g_i(\cd)$ and $\th_i(\cd)$ satisfying (H3)$_q$. Then let
$(\eta_i(\cd),\z_i(\cd))$ be the adapted solutions of the
corresponding BSDE. It follows that
\bel{stability-BSDE}\ba{ll}
\ns\ds\dbE\[\sup_{r\in[t,T]}|\eta_1(r)-\eta_2(r)|^p\]
+\dbE\(\int_t^T|\z_1(s)-\z_2(s)|^2ds\)^{p\over2}\\
\ns\ds\le\2n
K\dbE\[|\psi_1(t)-\psi_2(t)|^p\2n+\2n\(\1n\int_t^T\3n|\wt
g_1(t,s,\z_1(s),\wt\G_1(t,s,\z_1(s)))-\wt
g_2(t,s,\z_1(s),\wt\G_2(t,s,\z_1(s)))|ds\)^p\].\ea\ee
Now, we define
$$Y(t)=\eta(t;t,\psi(t)),\q Z(t,s)=\z(s;t,\psi(t)),\qq(t,s)\in\D,$$
and $Z(t,s)$ on $\D^c$ through the martingale representation:
$$Y(t)=\dbE Y(t)+\int_0^tZ(t,s)dW(s),\qq t\in[0,T].$$
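Note that, by Lemma 2.3 (as in Corollary 2.4), this definition of
$Z(t,\cd)$ on $[0,t]$ satisfies
$\dbE\big(\int_0^t|Z(t,s)|^2ds\big)^{p\over2}\le K\dbE|Y(t)|^p$, so
that $(Y(\cd),Z(\cd\,,\cd))$ indeed belongs to $\cM^p[0,T]$.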
Then $(Y(\cd),Z(\cd\,,\cd))\in\cM^p[0,T]$ is the unique M-solution
to (\ref{MF-BSVIE-s}). Estimates (\ref{Lp-s}) and
(\ref{stability-s}) follow easily from (\ref{Lp-BSDE}) and
(\ref{stability-BSDE}), respectively. \endpf
\ms
Note that the cases we are interested in are $p=2$ and $p=q$; we
will use them below.
\subsection{The general case.}
Now, we consider our MF-BSVIEs. For convenience, let us rewrite
(\ref{MF-BSVIE}) here:
\bel{MF-BSVIE2}\ba{ll}
\ns\ds
Y(t)=\psi(t)+\int_t^Tg(t,s,Y(s),Z(t,s),Z(s,t),\G(t,s,Y(s),Z(t,s),Z(s,t)))ds\\
\ns\ds\qq\qq\qq-\int_t^TZ(t,s)dW(s),\qq\qq t\in[0,T],\ea\ee
with
\bel{G(t,s)}\ba{ll}
\ns\ds\G(t,s,Y(s),Z(t,s),Z(s,t))=\dbE'\[\th(t,s,Y(s),Z(t,s),Z(s,t))\]\\
\ns\ds=\int_\O\th(t,s,\o',\o,Y(s,\o'),Z(t,s,\o'),Z(s,t,\o'),Y(s,\o),
Z(t,s,\o),Z(s,t,\o))\dbP(d\o').\ea\ee
Our main result of this section is the following.
\ms
\bf Theorem 3.2. \sl Let {\rm(H3)$_q$} hold with $2\le q<\infty$.
Then for any $\psi(\cd)\in L^q_{\cF_T}(0,T;\dbR^n)$, MF-BSVIE
$(\ref{MF-BSVIE2})$ admits a unique adapted M-solution
$(Y(\cd),Z(\cd\,,\cd))\in\cM^q[0,T]$, and the following estimate
holds:
\bel{Lp-estimate}\|(Y(\cd),Z(\cd\,,\cd))\|_{\cM^q[0,T]}\le
K\(1+\|\psi(\cd)\|_{L^q_{\cF_T}(0,T;\dbR^n)}\).\ee
Moreover, for $i=1,2$, let $g_i(\cd)$ and $\th_i(\cd)$ satisfy
{\rm(H3)$_q$}, and $\psi_i(\cd)\in L^q_{\cF_T}(0,T;\dbR^n)$. Let
$(Y_i(\cd),Z_i(\cd\,,\cd))\in\cM^q[0,T]$ be the corresponding
adapted M-solutions. Then
\bel{stability}\ba{ll}
\ns\ds\|(Y_1(\cd),Z_1(\cd\,,\cd))-(Y_2(\cd),Z_2(\cd\,,\cd))\|^2_{\cM^2[0,T]}\\
\ns\ds\le
K\dbE\Big\{\int_0^T|\psi_1(t)-\psi_2(t)|^2dt+\int_0^T\(\int_t^T|(g_1-g_2)(t,s)|ds\)^2dt\Big\},\ea\ee
where
$$\ba{ll}
\ns\ds(g_1-g_2)(t,s)=g_1(t,s,Y_1(s),Z_1(t,s),Z_1(s,t),\G_1(t,s,Y_1(s),Z_1(t,s),Z_1(s,t)))\\
\ns\ds\qq\qq\qq\qq-g_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t),\G_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t))),\ea$$
with
$$\ba{ll}
\ns\ds\G_i(t,s,Y_i(s),Z_i(t,s),Z_i(s,t))\\
\ns\ds=\dbE'\[\th_i(t,s,Y_i(s),Z_i(t,s),Z_i(s,t),y,z,\h
z)\]_{(y,z,\hat z)
=(Y_i(s),Z_i(t,s),Z_i(s,t))}\\
\ns\ds\equiv\int_\O\th_i(t,s,\o',\o,Y_i(s,\o'),Z_i(t,s,\o'),Z_i(s,t,\o'),Y_i(s,\o),
Z_i(t,s,\o),Z_i(s,t,\o))\dbP(d\o').\ea$$
\ms
\it Proof. \rm We split the proof into several steps.
\ms
\it Step 1. \rm Existence and uniqueness of M-solutions of
(\ref{MF-BSVIE2}) in $\cM^p[0,T]$ with $p\in(1,2]$.
\ms
Let $\psi(\cd)\in L_{\cF_T}^p(0,T;\dbR^n)$ be given. For any
$(y(\cd),z(\cd\,,\cd))\in\cM^p[0,T]$, we consider the following
MF-BSVIE
\bel{BSVIE(yz)}\ba{ll}
\ns\ds Y(t)=\psi(t)+\int_t^Tg(t,s,y(s),Z(t,s),z(s,t),\G(t,s,y(s),Z(t,s),z(s,t)))ds\\
\ns\ds\qq\qq\qq-\int_t^TZ(t,s)dW(s),\qq\qq t\in[0,T].\ea\ee
According to Proposition 3.1, there exists a unique adapted
M-solution $(Y(\cd),Z(\cd\,,\cd))\in\cM^p[0,T]$. Moreover, the
following estimate holds (making use of (\ref{2.7}) and (\ref{2.8})):
\bel{3.12}\ba{ll}
\ns\ds\dbE\Big\{|Y(t)|^p+\(\int_t^T|Z(t,s)|^2ds\)^{p\over2}\Big\}\\
\ns\ds\le
K\dbE\Big\{|\psi(t)|^p+\(\int_t^T|g(t,s,y(s),0,z(s,t),\G(t,s,y(s),0,z(s,t)))|ds\)^p\Big\}\\
\ns\ds\le K\dbE\Big\{1+|\psi(t)|^p+\[\int_t^T\(|y(s)|+|z(s,t)|+|\G(t,s,y(s),0,z(s,t))|\)ds\]^p\Big\}\\
\ns\ds\le
K\dbE\Big\{\1n1\1n+\1n|\psi(t)|^p\2n+\1n\[\int_t^T\2n\(|y(s)|+|z(s,t)|+\dbE|y(s)|+\dbE|z(s,t)|+|y(s)|+|z(s,t)|\)ds\]^p\Big\}\\
\ns\ds\le
K\dbE\Big\{1+|\psi(t)|^p+\[\int_t^T\(|y(s)|+|z(s,t)|\)ds\]^p\Big\}\\
\ns\ds\le
K\dbE\Big\{1+|\psi(t)|^p+\int_t^T|y(s)|^pds+\int_t^T|z(s,t)|^pds\Big\}.\ea\ee
Consequently, (making use of (\ref{2.14}) for
$(y(\cd),z(\cd\,,\cd))\in\cM^p[0,T]$)
$$\ba{ll}
\ns\ds\|(Y(\cd),Z(\cd\,,\cd))\|^p_{\cH^p[0,T]}\equiv\dbE\Big\{\int_0^T|Y(t)|^pdt
+\int_0^T\(\int_0^T|Z(t,s)|^2ds\)^{p\over2}dt\Big\}\\
\ns\ds\le
K\dbE\Big\{1+\int_0^T|\psi(t)|^pdt+\int_0^T\int_t^T|y(s)|^pdsdt+\int_0^T\int_t^T|z(s,t)|^pdsdt\Big\}\\
\ns\ds\le
K\dbE\Big\{1+\int_0^T|\psi(t)|^pdt+\int_0^T|y(t)|^pdt+\int_0^T\int_0^t|z(t,s)|^pdsdt\Big\}\\
\ns\ds\le
K\dbE\Big\{1+\int_0^T|\psi(t)|^pdt+\int_0^T|y(t)|^pdt+\int_0^T\(\int_0^t|z(t,s)|^2ds\)^{p\over2}dt\Big\}\\
\ns\ds\le
K\dbE\Big\{1+\int_0^T|\psi(t)|^pdt+\int_0^T|y(t)|^pdt\Big\}\\
\ns\ds\le
K\Big\{1+\|\psi(\cd)\|^p_{L^p_{\cF_T}(0,T;\dbR^n)}+\|(y(\cd),z(\cd\,,\cd))\|^p_{\cM^p[0,T]}\Big\}.\ea$$
Hence, if we define
$\Th(y(\cd),z(\cd\,,\cd))=(Y(\cd),Z(\cd\,,\cd))$, then $\Th$ maps
from $\cM^p[0,T]$ to itself. We now show that the mapping $\Th$ is
contractive. To this end, take any
$(y_i(\cd),z_i(\cd\,,\cd))\in\cM^p[0,T]$ ($i=1,2$), and let
$$(Y_i(\cd),Z_i(\cd\,,\cd))=\Th(y_i(\cd),z_i(\cd\,,\cd)).$$
Then by Proposition 3.1, we have (note (\ref{2.8}))
$$\ba{ll}
\ns\ds\dbE\[|Y_1(t)-Y_2(t)|^p+\(\int_t^T|Z_1(t,s)-Z_2(t,s)|^2ds\)^{p\over2}\]\\
\ns\ds\le K\dbE\(\int_t^T|g(t,s,y_1(s),Z_1(t,s),z_1(s,t),\G(t,s,y_1(s),Z_1(t,s),z_1(s,t)))\\
\ns\ds\qq-
g(t,s,y_2(s),Z_1(t,s),z_2(s,t),\G(t,s,y_2(s),Z_1(t,s),z_2(s,t)))|ds\)^p\\
\ns\ds\le K\dbE\[\int_t^T\(|y_1(s)-y_2(s)|+|z_1(s,t)-z_2(s,t)|\\
\ns\ds\qq+|\G(t,s,y_1(s),Z_1(t,s),z_1(s,t))-\G(t,s,y_2(s),Z_1(t,s),z_2(s,t))|\)ds\]^p\\
\ns\ds\le K\dbE\[\int_t^T\(|y_1(s)-y_2(s)|+|z_1(s,t)-z_2(s,t)|\\
\ns\ds\qq+|y_1(s)-y_2(s)|+|z_1(s,t))-z_2(s,t)|+\dbE|y_1(s)-y_2(s)|+\dbE|z_1(s,t)-z_2(s,t)|\)ds\]^p\\
\ns\ds\le
K\dbE\[\int_t^T\(|y_1(s)-y_2(s)|+|z_1(s,t)-z_2(s,t)|\)ds\]^p\\
\ns\ds\le
K\dbE\[\int_t^T|y_1(s)-y_2(s)|^pds+\(\int_t^T|z_1(s,t)-z_2(s,t)|ds\)^p\].\ea$$
Hence,
\bel{3.15}\ba{ll}
\ns\ds\|\Th(y_1(\cd),z_1(\cd\,,\cd))-\Th(y_2(\cd),z_2(\cd\,,\cd))\|^p_{\cM^p_\b[0,T]}\\
\ns\ds\equiv
\int_0^Te^{\b t}\dbE\[|Y_1(t)-Y_2(t)|^p+\(\int_t^T|Z_1(t,s)-Z_2(t,s)|^2ds\)^{p\over2}\]dt\\
\ns\ds\le K\int_0^Te^{\b
t}\dbE\[\int_t^T|y_1(s)-y_2(s)|^pds+\(\int_t^T|z_1(s,t)-z_2(s,t)|ds\)^p\]dt\\
\ns\ds=\1n K\dbE\[\1n\int_0^T\3n\(\1n\int_0^s\3n e^{\b
t}dt\)|y_1(s)\1n-\1n y_2(s)|^pds\1n+\2n\int_0^T\3n e^{\b
t}\(\1n\int_t^T\3n e^{-{\b\over p}s}e^{{\b\over p}s}|z_1(s,t)\1n-\1n z_2(s,t)|ds\)^pdt\]\\
\ns\ds\le\1n K\dbE\[{1\over\b}\2n\int_0^T\3n e^{\b t}|y_1(t)\1n-\1n
y_2(t)|^pdt\1n+\3n\int_0^T\3n e^{\b t}\(\1n\int_t^T\3n e^{-q\b
s\over
p}ds\)^{p\over q}\(\int_t^T\3n e^{\b s}|z_1(s,t)-z_2(s,t)|^pds\)dt\]\\
\ns\ds\le{K\over\b}\dbE\int_0^Te^{\b
t}|y_1(t)-y_2(t)|^pdt+K\({p\over\b q}\)^{p\over
q}\dbE\int_0^T\int_0^se^{\b
s}|z_1(s,t)-z_2(s,t)|^pdtds\\
\ns\ds\le{K\over\b}\dbE\int_0^Te^{\b
t}|y_1(t)-y_2(t)|^pdt+K\({1\over\b}\)^{p\over q}\dbE\int_0^Te^{\b
s}\(\int_0^s|z_1(s,t)-z_2(s,t)|^2dt\)^{p\over2}ds\\
\ns\ds\le\({K\over\b}+K\({1\over\b}\)^{p\over q}\)
\dbE\int_0^Te^{\b t}|y_1(t)-y_2(t)|^pdt\\
\ns\ds\le\({K\over\b}+K\({1\over\b}\)^{p\over
q}\)\|(y_1(\cd),z_1(\cd\,,\cd))-(y_2(\cd),z_2(\cd\,,\cd))\|^p_{\cM^p_\b[0,T]}.\ea\ee
Since the constant $K>0$ appearing on the right-hand side of the above
is independent of $\b$, by choosing $\b>0$ large, we obtain that
$\Th$ is a contraction. Hence, there exists a unique fixed point
$(Y(\cd),Z(\cd\,,\cd))\in\cM^p[0,T]$, which is the unique adapted
M-solution of (\ref{MF-BSVIE}).
\ms
\it Step 2. \rm The adapted M-solution
$(Y(\cd),Z(\cd\,,\cd))\in\cM^q[0,T]$ if $\psi(\cd)\in
L^q_{\cF_T}(0,T;\dbR^n)$.
\ms
Let $\psi(\cd)\in L^q_{\cF_T}(0,T;\dbR^n)\subseteq
L^2_{\cF_T}(0,T;\dbR^n)$. According to Step 1, there exists a unique
adapted M-solution $(Y(\cd),Z(\cd\,,\cd))\in\cM^2[0,T]$. We want to
show that in the current case, $(Y(\cd),Z(\cd\,,\cd))$ is actually
in $\cM^q[0,T]$. To show this, for the obtained adapted M-solution
$(Y(\cd),Z(\cd\,,\cd))$, let us consider the following MF-BSVIE:
\bel{MF-BSVIE3}\ba{ll}
\ns\ds
\wt Y(t)=\psi(t)+\int_t^Tg(t,s,\wt Y(s),\wt Z(t,s),Z(s,t),\G(t,s,\wt Y(s),\wt Z(t,s),Z(s,t)))ds\\
\ns\ds\qq\qq\qq-\int_t^T\wt Z(t,s)dW(s),\qq\qq t\in[0,T].\ea\ee
For any $(y(\cd),z(\cd\,,\cd))\in\cM^q[0,T]$, by Proposition 3.1
(with $p=q$), the following MF-BSVIE admits a unique adapted
M-solution $(\wt Y(\cd),\wt Z(\cd\,,\cd))\in\cM^q[0,T]$:
\bel{MF-BSVIE3s}\ba{ll}
\ns\ds
\wt Y(t)=\psi(t)+\int_t^Tg(t,s,y(s),\wt Z(t,s),Z(s,t),\G(t,s,y(s),\wt Z(t,s),Z(s,t)))ds\\
\ns\ds\qq\qq\qq-\int_t^T\wt Z(t,s)dW(s),\qq\qq t\in[0,T].\ea\ee
Thus, if we define $\wt\Th(y(\cd),z(\cd\,,\cd))=(\wt Y(\cd),\wt
Z(\cd\,,\cd))$, then $\wt\Th:\cM^q[0,T]\to\cM^q[0,T]$. We now show
that $\wt\Th$ is a contraction on $\cM^q[0,T]$ (Compare that $\Th$
in Step 1 is a contraction on $\cM^2[0,T]$). To this end, let
$(y_i(\cd),z_i(\cd\,,\cd))\in\cM^q[0,T]$ and let
$$(\wt Y_i(\cd),\wt
Z_i(\cd\,,\cd))=\wt\Th(y_i(\cd),z_i(\cd\,,\cd)),\qq i=1,2.$$
Then by Proposition 3.1 (with $p=q$), we have
$$\ba{ll}
\ns\ds\dbE\[|\wt Y_1(t)-\wt Y_2(t)|^q+\(\int_t^T|\wt Z_1(t,s)-\wt Z_2(t,s)|^2ds\)^{q\over2}\]\\
\ns\ds\le K\dbE\(\int_t^T|g(t,s,y_1(s),\wt Z_1(t,s),Z(s,t),\G(t,s,y_1(s),\wt Z_1(t,s),Z(s,t)))\\
\ns\ds\qq-
g(t,s,y_2(s),\wt Z_1(t,s),Z(s,t),\G(t,s,y_2(s),\wt Z_1(t,s),Z(s,t)))|ds\)^q\\
\ns\ds\le K\dbE\[\int_t^T\(|y_1(s)-y_2(s)|
+|\G(t,s,y_1(s),\wt Z_1(t,s),Z(s,t))-\G(t,s,y_2(s),\wt Z_1(t,s),Z(s,t))|\)ds\]^q\\
\ns\ds\le
K\dbE\[\int_t^T\(|y_1(s)-y_2(s)|+\dbE|y_1(s)-y_2(s)|\)ds\]^q\le
K\dbE\int_t^T|y_1(s)-y_2(s)|^qds.\ea$$
Then
$$\ba{ll}
\ns\ds\dbE\[\int_0^Te^{\b t}|\wt Y_1(t)-\wt
Y_2(t)|^qdt+\int_0^Te^{\b t}
\(\int_t^T|\wt Z_1(t,s)-\wt Z_2(t,s)|^2ds\)^{q\over2}dt\]\\
\ns\ds\le K\dbE\int_0^Te^{\b
t}\int_t^T|y_1(s)-y_2(s)|^qdsdt=K\dbE\int_0^T\int_0^se^{\b
t}|y_1(s)-y_2(s)|^qdtds\\
\ns\ds\le{K\over\b}\int_0^Te^{\b t}|y_1(t)-y_2(t)|^qdt.\ea$$
Hence, $\wt\Th$ is a contraction on $\cM^q[0,T]$ (with the
equivalent norm), so that (\ref{MF-BSVIE3}) admits a unique adapted
M-solution $(\wt Y(\cd),\wt
Z(\cd\,,\cd))\in\cM^q[0,T]\subseteq\cM^2[0,T]$. Then by the
uniqueness of adapted M-solutions in $\cM^2[0,T]$ of
(\ref{MF-BSVIE3}), it is necessary that
$$(Y(\cd),Z(\cd\,,\cd))=(\wt Y(\cd),\wt
Z(\cd\,,\cd))\in\cM^q[0,T].$$
\ms
\it Step 3. \rm Some estimates.
\nobreak
According to Proposition 3.1, we have
$$\ba{ll}
\ns\ds\dbE\Big\{|Y(t)|^q+\(\int_t^T|Z(t,s)|^2ds\)^{q\over2}\Big\}\\
\ns\ds\le
K\dbE\Big\{|\psi(t)|^q+\(\int_t^T|g(t,s,Y(s),0,Z(s,t),\G(t,s,Y(s),0,Z(s,t)))|ds\)^q\Big\}\\
\ns\ds\le
K\dbE\Big\{1+|\psi(t)|^q+\[\int_t^T\(|Y(s)|+|Z(s,t)|^{2\over q}
+|\G(t,s,Y(s),0,Z(s,t))|\)ds\]^q\Big\}\\
\ns\ds\le
K\dbE\Big\{1+|\psi(t)|^q+\[\int_t^T\(|Y(s)|+|Z(s,t)|^{2\over q}\)ds\]^q\Big\}\\
\ns\ds\le
K\dbE\Big\{1+|\psi(t)|^q+\int_t^T|Y(s)|^qds+\(\int_t^T|Z(s,t)|^{2\over
q}ds\)^q\Big\}.\ea$$
Then
$$\ba{ll}
\ns\ds\dbE\Big\{\int_0^Te^{\b t}|Y(t)|^qdt
+\int_0^Te^{\b t}\(\int_t^T|Z(t,s)|^2ds\)^{q\over2}dt\Big\}\\
\ns\ds\le K\dbE\Big\{\int_0^Te^{\b
t}\(1+|\psi(t)|^q\)dt+\int_0^Te^{\b
t}\int_t^T|Y(s)|^qdsdt+\int_0^Te^{\b t}\(\int_t^T|Z(s,t)|^{2\over
q}ds\)^qdt\Big\}.\ea$$
Note that
$$\ba{ll}
\ns\ds\dbE\int_0^Te^{\b t}\(\int_t^T|Z(s,t)|^{2\over
q}ds\)^qdt=\dbE\int_0^Te^{\b t}\(\int_t^Te^{-{\b\over
q}s}e^{{\b\over
q}s}|Z(s,t)|^{2\over q}ds\)^qdt\\
\ns\ds\le\dbE\int_0^Te^{\b t}\(\int_t^Te^{-{\b\over
q-1}s}ds\)^{q-1}\(\int_t^Te^{\b s}|Z(s,t)|^2ds\)dt\\
\ns\ds=\dbE\int_0^Te^{\b t}\[{q-1\over\b}\(e^{-\b t\over
q-1}-e^{-{\b
T\over q-1}}\)\]^{q-1}\(\int_t^Te^{\b s}|Z(s,t)|^2ds\)dt\\
\ns\ds\le{(q-1)^{q-1}\over\b^{q-1}}\dbE\int_0^T\int_0^se^{\b
s}|Z(s,t)|^2dtds\le{(q-1)^{q-1}\over\b^{q-1}}\dbE\int_0^Te^{\b t
}|Y(t)|^2dt.\ea$$
Hence,
$$\ba{ll}
\ns\ds\dbE\Big\{\int_0^Te^{\b t}|Y(t)|^qdt
+\int_0^Te^{\b t}\(\int_t^T|Z(t,s)|^2ds\)^{q\over2}dt\Big\}\\
\ns\ds\le K\dbE\Big\{\int_0^Te^{\b
t}\(1+|\psi(t)|^q\)dt+{1\over\b}\int_0^Te^{\b
t}|Y(t)|^qdt+{1\over\b^{q-1}}\int_0^Te^{\b t}|Y(t)|^2dt\Big\}.\ea$$
By choosing $\b>0$ large and noting that $|Y(t)|^2\le1+|Y(t)|^q$ (since $q\ge2$), we obtain
$$\ba{ll}
\ns\ds\dbE\Big\{\int_0^Te^{\b t}|Y(t)|^qdt +\int_0^Te^{\b
t}\(\int_t^T|Z(t,s)|^2ds\)^{q\over2}dt\Big\}\le
K\dbE\(1+\int_0^Te^{\b t}|\psi(t)|^qdt\).\ea$$
Then (\ref{Lp-estimate}) follows.
\ms
Finally, let $\psi_i(\cd)\in L^q_{\cF_T}(0,T;\dbR^n)$, $g_i(\cd)$
and $\th_i(\cd)$ satisfy (H3)$_q$. Observe that
$$\ba{ll}
\ns\ds
Y_1(t)=\psi_1(t)+\int_t^Tg_1(t,s,Y_1(s),Z_1(t,s),Z_1(s,t),\G_1(t,s,Y_1(s),Z_1(t,s),Z_1(s,t)))ds\\
\ns\ds\qq\qq\qq-\int_t^TZ_1(t,s)dW(s)\\
\ns\ds=\psi_1(t)+\int_t^Tg_1(t,s,Y_1(s),Z_1(t,s),Z_1(s,t),\G_1(t,s,Y_1(s),Z_1(t,s),Z_1(s,t)))ds\\
\ns\ds\qq\qq-\int_t^Tg_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t),\G_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t)))ds\\
\ns\ds\qq\qq+\int_t^Tg_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t),\G_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t)))ds\\
\ns\ds\qq\qq-\int_t^Tg_2(t,s,Y_2(s),Z_1(t,s),Z_2(s,t),\G_2(t,s,Y_2(s),Z_1(t,s),Z_2(s,t)))ds\\
\ns\ds\qq\qq+\int_t^Tg_2(t,s,Y_2(s),Z_1(t,s),Z_2(s,t),\G_2(t,s,Y_2(s),Z_1(t,s),Z_2(s,t)))ds\\
\ns\ds\qq\qq\qq-\int_t^TZ_1(t,s)dW(s)\\
\ns\ds\equiv\h\psi_1(t)+\int_t^Tg_2(t,s,Y_2(s),Z_1(t,s),Z_2(s,t),\G_2(t,s,Y_2(s),Z_1(t,s),Z_2(s,t)))ds\\
\ns\ds\qq\qq\qq-\int_t^TZ_1(t,s)dW(s),\ea$$
with $\h\psi_1(\cd)$ defined in an obvious way. Then by Proposition
3.1 (with $p=2$), we obtain
$$\ba{ll}
\ns\ds\dbE\[|Y_1(t)-Y_2(t)|^2+\int_t^T|Z_1(t,s)-Z_2(t,s)|^2ds\]\le K\dbE|\h\psi_1(t)-\psi_2(t)|^2\\
\ns\ds\le K\dbE\Big\{|\psi_1(t)-\psi_2(t)|^2\\
\ns\ds\qq+\(\int_t^T|g_1(t,s,Y_1(s),Z_1(t,s),Z_1(s,t),\G_1(t,s,Y_1(s),Z_1(t,s),Z_1(s,t)))\\
\ns\ds\qq\qq-g_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t),\G_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t)))|ds\)^2\\
\ns\ds\qq+\(\int_t^T|g_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t),\G_2(t,s,Y_1(s),Z_1(t,s),Z_1(s,t)))\\
\ns\ds\qq\qq-g_2(t,s,Y_2(s),Z_1(t,s),Z_2(s,t),\G_2(t,s,Y_2(s),Z_1(t,s),Z_2(s,t)))|ds\)^2\Big\}\\
\ns\ds\le K\dbE\Big\{|\psi_1(t)-\psi_2(t)|^2+\(\int_t^T|(g_1-g_2)(t,s)|ds\)^2\\
\ns\ds\qq+\[\int_t^T\(|Y_1(s)-Y_2(s)|+|Z_1(s,t)-Z_2(s,t)|\)ds\]^2\Big\}.\ea$$
Then, arguing as in the proof of the contraction property of $\Th$,
we obtain the stability estimate (\ref{stability}). \endpf
\ms
Let us make some remarks on the above result, together with its
proof.
\ms
First of all, we have seen that the growth of the maps
\bel{3.17}\hat z\mapsto g(t,s,y,z,\hat z),\q(\hat z,\hat
z')\mapsto\th(t,s,y,z,\hat z,y',z',\hat z')\ee
plays an important role in proving the well-posedness of MF-BSVIEs,
especially for the case of $p>2$. When $p\in(1,2]$, adapted
M-solutions for BSVIEs were discussed in \cite{Wang 2011}. It is
possible to adopt the idea of \cite{Wang 2011} to treat MF-BSVIEs
for $p\in(1,2)$. If (H3)$_\infty$ holds, then for any $p>1$, as long
as $\psi(\cd)\in L^p_{\cF_T}(0,T;\dbR^n)$, (\ref{MF-BSVIE2}) admits
a unique adapted M-solution $(Y(\cd),Z(\cd\,,\cd))\in\cM^p[0,T]$. On
the other hand, if the maps in (\ref{3.17}) grow linearly, the
adapted M-solution $(Y(\cd),Z(\cd\,,\cd))$ of (\ref{MF-BSVIE2}) may
not be in $\cM^p[0,T]$ for $p>2$, even if $\psi(\cd)\in
L^p_{\cF_T}(0,T;\dbR^n)$. This can be seen from the following
example.
\ms
\bf Example 3.3. \rm Consider BSVIE:
\bel{3.20}Y(t)=\psi(t)+\int_t^TZ(s,t)ds-\int_t^TZ(t,s)dW(s),\q
t\in[0,T].\ee
Let
$$\psi(t) \equiv \int_0^T\psi_1(s)dW(s), \qq \forall t\in[0,T],$$
with $\psi_1(\cd)$ being deterministic and
$$\psi_1(\cd)\in
L^2(0,T;\dbR)\setminus\bigcup_{p>2}L^p(0,T-\d;\dbR),$$
for some fixed $\d\in(0,T)$. Thus, for any $p>1$,
$$\ba{ll}
\ns\ds\dbE\int_0^T|\psi(t)|^pdt=T\dbE\Big|\int_0^T\psi_1(s)dW(s)\Big|^p\le
C\(\int_0^T|\psi_1(s)|^2ds\)^{p\over2},\ea$$
which means $\psi(\cd)\in L^p_{\cF_T}(0,T;\dbR)$ for any $p>1$. If
we define
$$\left\{\ba{ll}
\ns\ds Y(t)=\int_0^t\psi_1(s)dW(s)+\psi_1(t)(T-t),\qq
t\in[0,T],\\
\ns\ds Z(t,s)=\psi_1(s),\qq(s,t)\in[0,T]^2,\ea\right.$$
then
$$\ba{ll}
\ns\ds Y(t)=\int_0^t\psi_1(s)dW(s)+\psi_1(t)(T-t)\\
\ns\ds\qq=\psi(t)-\int_t^T\psi_1(s)dW(s)+\int_t^T\psi_1(t)ds\\
\ns\ds\qq=\psi(t)-\int_t^TZ(s,t)ds+\int_t^TZ(t,s)dW(s).\ea$$
This means that $(Y(\cd),Z(\cd\,,\cd))$ is the adapted M-solution of
(\ref{3.20}). We claim that $Y(\cd)\notin L^p_\dbF(0,T;\dbR)$, for
any $p>2$. In fact, if $Y(\cd)\in L^p_\dbF(0,T;\dbR)$ for some
$p>2$, then
$$\ba{ll}
\ns\ds\d^p\dbE\int_0^{T-\d}|\psi_1(t)|^pdt\le\int_0^T(T-t)^p|\psi_1(t)|^pdt\le2^{p-1}\dbE\int_0^T\(|Y(t)|^p
+\Big|\int_0^t\psi_1(s)dW(s)\Big|^p\)dt\\
\ns\ds\qq\qq\qq\qq\q\le
K\Big\{\int_0^T\dbE|Y(t)|^pdt+\(\int_0^T|\psi_1(s)|^2ds\)^{p\over2}\Big\}<\infty.\ea$$
This is a contradiction.
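We remark that a $\psi_1(\cd)$ with the required integrability
indeed exists; one concrete illustration (supplied here for
completeness) is
$$\psi_1(s)=s^{-{1\over2}}\(1+|\ln s|\)^{-1},\qq s\in(0,T].$$
Indeed, the substitution $s=e^{-u}$ shows that
$\int_0^T|\psi_1(s)|^2ds<\infty$, while for any $p>2$ the factor
$s^{-{p\over2}}$ is non-integrable near $s=0$, so that
$\psi_1(\cd)\notin L^p(0,T-\d;\dbR)$.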
\ms
Next, if
$$g_i(t,s,y,z,\hat z)=g_i(t,s,y,z),\qq\th_i(t,s,y,z,\hat
z,y',z',\hat z')=\th_i(t,s,y,z,y',z'),$$
then the stability estimate (\ref{stability}) can be improved to
\bel{stability2}\ba{ll}
\ns\ds\|(Y_1(\cd),Z_1(\cd\,,\cd))-(Y_2(\cd),Z_2(\cd\,,\cd))\|^q_{\cM^q[0,T]}\\
\ns\ds\le
K\dbE\Big\{\int_0^T|\psi_1(t)-\psi_2(t)|^qdt+\int_0^T\(\int_t^T|(g_1-g_2)(t,s)|ds\)^qdt\Big\},\ea\ee
for any $q>2$.
\ms
We point out that even for the special case of BSVIEs, the proof we
provided here significantly simplifies that given in \cite{Yong
2008}. The key is that we have a better understanding of the term
$Z(s,t)$ in the drift, and find a new way to treat it (see
(\ref{3.15})).
\ms
Now, let us look at linear MF-BSVIE (\ref{L-MF-BSVIE}). It is not
hard to see that under (L2), we have (H3)$_q$ with $q=2$. Hence, we
have the following corollary.
\ms
\bf Corollary 3.4. \rm Let {\rm(L2)} hold. Then for any
$\psi(\cd)\in L^2_{\cF_T}(0,T;\dbR^n)$, (\ref{L-MF-BSVIE}) admits a
unique adapted M-solution $(Y(\cd),Z(\cd\,,\cd))\in\cM^2[0,T]$.
\ms
\section{Duality Principles.}
In this section, we are going to establish two duality principles
between linear MF-FSVIEs and linear MF-BSVIEs. Let us first consider
the following linear MF-FSVIE (\ref{L-MF-FSVIE}) which is rewritten
below (for convenience):
\bel{4.1}\ba{ll}
\ns\ds X(t)=\f(t)+\int_0^t\(A_0(t,s)X(s)+\dbE'\[C_0(t,s)X(s)\]\)ds\\
\ns\ds\qq\qq\q+\int_0^t\(A_1(t,s)X(s)+\dbE'\[C_1(t,s)X(s)\]\)dW(s),\qq
t\in[0,T].\ea\ee
Let (L1) hold and $\f(\cd)\in L^2_\dbF(0,T;\dbR^n)$. Then by
Corollary 2.7, (\ref{4.1}) admits a unique solution $X(\cd)\in
L^2_\dbF(0,T;\dbR^n)$. Now, let $(Y(\cd),Z(\cd\,,\cd))\in\cM^2[0,T]$
be undetermined, and we observe the following:
$$\ba{ll}
\ns\ds\dbE\int_0^T\lan Y(t),\f(t)\ran dt=\dbE\int_0^T\lan
Y(t),X(t)-\int_0^t\(A_0(t,s)X(s)+\dbE'[C_0(t,s)X(s)]\)ds\ran dt\\
\ns\ds\qq\qq-\dbE\int_0^T\lan
Y(t),\int_0^t\(A_1(t,s)X(s)+\dbE'[C_1(t,s)X(s)]\)dW(s)\ran dt\\
\ns\ds\qq\qq\equiv\dbE\int_0^T\lan Y(t),X(t)\ran
dt-\sum_{i=1}^4I_i.\ea$$
We now look at each term $I_i$. First, for $I_1$, we have
$$I_1=\dbE\int_0^T\int_0^t\lan Y(t),A_0(t,s)X(s)\ran
dsdt=\dbE\int_0^T\lan X(t),\int_t^TA_0(s,t)^TY(s)ds\ran dt.$$
Next, for $I_2$, let us pay some extra attention to $\o$ and $\o'$:
$$\ba{ll}
\ns\ds I_2=\dbE\int_0^T\3n\int_0^t\2n\lan
Y(t),\dbE'[C_0(t,s)X(s)]\ran
dsdt=\dbE'\dbE\int_0^T\3n\int_s^T\2n\lan
C_0(t,s,\o,\o')^TY(t,\o),X(s,\o')\ran dtds\\
\ns\ds\q=\dbE\dbE^*\3n\int_0^T\3n\int_s^T\3n\lan
C_0(t,s,\o^*,\o)^TY(t,\o^*),X(s,\o)\ran
dtds\1n=\1n\dbE\2n\int_0^T\3n\lan
X(t),\int_t^T\2n\dbE^*[C_0(s,t)^TY(s)]ds\ran dt.\ea$$
Here, we have introduced the notation $\dbE^*$, which denotes
integration with respect to the starred sample-point variable,
$\dbE^*[F(\o^*,\o)]=\int_\O F(\o^*,\o)\dbP(d\o^*)$, to distinguish
it from $\dbE$ (and $\dbE'$). For $I_3$, we have
$$\ba{ll}
\ns\ds I_3=\dbE\int_0^T\lan Y(t),\int_0^tA_1(t,s)X(s)dW(s)\ran
dt\\
\ns\ds\q=\dbE\int_0^T\lan\dbE
Y(t)+\int_0^tZ(t,s)dW(s),\int_0^tA_1(t,s)X(s)dW(s)\ran dt\\
\ns\ds\q=\dbE\int_0^T\int_0^t\lan Z(t,s),A_1(t,s)X(s)\ran
dsdt=\dbE\int_0^T\lan X(t),\int_t^TA_1(s,t)^TZ(s,t)ds\ran dt.\ea$$
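Here, the second equality uses the defining relation of adapted
M-solutions, $Y(t)=\dbE Y(t)+\int_0^tZ(t,s)dW(s)$, and the third
follows from the It\^o isometry
$$\dbE\lan\int_0^t\xi(s)dW(s),\int_0^t\zeta(s)dW(s)\ran
=\dbE\int_0^t\lan\xi(s),\zeta(s)\ran ds,$$
the term involving $\dbE Y(t)$ dropping out since stochastic
integrals have mean zero.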
Finally, we look at $I_4$.
$$\ba{ll}
\ns\ds I_4=\dbE\int_0^T\lan
Y(t),\int_0^t\dbE'[C_1(t,s)X(s)]dW(s)\ran dt\\
\ns\ds\q=\dbE\int_0^T\int_0^t\lan Z(t,s),\dbE'[C_1(t,s)X(s)]\ran dsdt\\
\ns\ds\q=\dbE'\dbE\int_0^T\int_s^T\lan
Z(t,s,\o),C_1(t,s,\o,\o')X(s,\o')\ran dtds\\
\ns\ds\q=\dbE\int_0^T\lan
X(t),\int_t^T\dbE^*[C_1(s,t)^TZ(s,t)]ds\ran dt.\ea$$
Hence, we obtain
$$\ba{ll}
\ns\ds\dbE\int_0^T\lan Y(t),\f(t)\ran dt=\dbE\int_0^T\lan
X(t),Y(t)-\int_t^T\(A_0(s,t)^TY(s)+A_1(s,t)^TZ(s,t)\\
\ns\ds\qq\qq\qq\qq\qq\qq+\dbE^*\[C_0(s,t)^TY(s)+C_1(s,t)^TZ(s,t)\]\)ds\ran
dt.\ea$$
On the other hand, if (L1)$'$ holds and $\f(\cd)\in
C^p_\dbF([0,T];\dbR^n)$, then $X(\cd)\in C^p_\dbF([0,T];\dbR^n)$ as
well. We now obtain the following {\it duality principle for
MF-FSVIEs}, whose proof is clear from the above.
\ms
\bf Theorem 4.1. \sl Let {\rm(L1)} hold, and $\f(\cd),\psi(\cd)\in
L^2_\dbF(0,T;\dbR^n)$. Let $X(\cd)\in L^2_\dbF(0,T;\dbR^n)$ be the
solution to the linear MF-FSVIE $(\ref{4.1})$, and
$(Y(\cd),Z(\cd\,,\cd))\in\cM^2[0,T]$ be the adapted M-solution to
the following linear MF-BSVIE:
\bel{4.5}\ba{ll}
\ns\ds Y(t)=\psi(t)+\int_t^T\(A_0(s,t)^TY(s)+A_1(s,t)^TZ(s,t)\\
\ns\ds\qq\qq\qq\qq+\dbE^*\[C_0(s,t)^TY(s)+C_1(s,t)^TZ(s,t)\]\)ds-\int_t^TZ(t,s)dW(s).\ea\ee
Then
\bel{duality2}\dbE\int_0^T\lan X(t),\psi(t)\ran dt=\dbE\int_0^T\lan
Y(t),\f(t)\ran dt.\ee
\ms
\rm
We call (\ref{4.5}) the adjoint equation of (\ref{4.1}). The above
duality principle will be used in establishing a Pontryagin type
maximum principle for optimal controls of MF-FSVIEs.
\ms
Next, different from the above, we want to start from the following
linear MF-BSVIE:
\bel{L-MF-BSVIE1}\ba{ll}
\ns\ds
Y(t)=\psi(t)+\int_t^T\(\bar A_0(t,s)Y(s)+\bar C_0(t,s)Z(s,t)+\dbE'[\bar A_1(t,s)Y(s)+\bar C_1(t,s)Z(s,t)]\)ds\\
\ns\ds\qq\qq\qq\qq-\int_t^TZ(t,s)dW(s),\qq\qq t\in[0,T].\ea\ee
This is a special case of (\ref{L-MF-BSVIE}) in which
$$\bar B_0(t,s)=0,\qq\bar B_1(t,s)=0.$$
Under (L2), by Corollary 3.4, for any $\psi(\cd)\in
L^2_\dbF(0,T;\dbR^n)$, (\ref{L-MF-BSVIE1}) admits a unique adapted
M-solution $(Y(\cd),Z(\cd\,,\cd))\in\cM^2[0,T]$. We point out here
that for each $t\in[0,T)$, the maps
$$s\mapsto\bar C_0(t,s),\q s\mapsto\bar C_1(t,s)$$
are $\dbF$-progressively measurable and $\dbF^2$-progressively
measurable on $[t,T]$, respectively. Now, we let a process
$X(\cd)\in L^2_\dbF(0,T;\dbR^n)$ be undetermined, and make the
following calculation:
$$\ba{ll}
\ns\ds\dbE\int_0^T\lan X(t),\psi(t)\ran dt=\dbE\int_0^T\lan
X(t),Y(t)-\int_t^T\(\bar A_0(t,s)Y(s)+\bar C_0(t,s)Z(s,t)\\
\ns\ds\qq\q+\dbE'[\bar A_1(t,s)Y(s)+\bar
C_1(t,s)Z(s,t)]\)ds-\int_t^TZ(t,s)dW(s)\ran
dt\\
\ns\ds=\dbE\int_0^T\lan X(t),Y(t)\ran dt-\dbE\int_0^T\int_t^T\lan
X(t),\bar A_0(t,s)Y(s)\ran dsdt\\
\ns\ds\qq-\dbE\int_0^T\int_t^T\lan X(t),\bar C_0(t,s)Z(s,t)\ran
dsdt-\dbE\int_0^T\int_t^T\lan X(t),\dbE'[\bar A_1(t,s)Y(s)]\ran
dsdt\\
\ns\ds\qq-\dbE\int_0^T\int_t^T\lan X(t),\dbE'[\bar
C_1(t,s)Z(s,t)]\ran dsdt\equiv\dbE\int_0^T\lan X(t),Y(t)\ran
dt-\sum_{i=1}^4I_i.\ea$$
Similar to the above, we now look at the terms $I_i$ ($i=1,2,3,4$)
one by one. First, we look at $I_1$:
$$\ba{ll}
\ns\ds I_1=\dbE\int_0^T\int_t^T\lan X(t),\bar A_0(t,s)Y(s)\ran
dsdt=\dbE\int_0^T\int_0^s\lan\bar A_0(t,s)^TX(t),Y(s)\ran dtds\\
\ns\ds\qq=\dbE\int_0^T\lan\int_0^t\bar A_0(s,t)^TX(s)ds,Y(t)\ran
dt.\ea$$
Next, for $I_2$, one has
$$\ba{ll}
\ns\ds I_2=\dbE\int_0^T\int_t^T\lan X(t),\bar C_0(t,s)Z(s,t)\ran
dsdt=\dbE\int_0^T\int_0^s\lan\bar C_0(t,s)^TX(t),Z(s,t)\ran dtds\\
\ns\ds\q=\int_0^T\dbE\int_0^t\lan\bar C_0(s,t)^TX(s),Z(t,s)\ran
dsdt\\
\ns\ds\q=\int_0^T\dbE\lan\int_0^t\dbE[\bar
C_0(s,t)^T\bigm|\cF_s]X(s)dW(s),\int_0^tZ(t,s)dW(s)\ran
dt\\
\ns\ds\q=\int_0^T\dbE\lan\int_0^t\dbE[\bar
C_0(s,t)^T\bigm|\cF_s]X(s)dW(s),Y(t)-\dbE
Y(t)\ran dt\\
\ns\ds\q=\dbE\int_0^T\lan\int_0^t\dbE[\bar
C_0(s,t)^T\bigm|\cF_s]X(s)dW(s),Y(t)\ran dt.\ea$$
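The insertion of the conditional expectation $\dbE[\bar
C_0(s,t)^T\bigm|\cF_s]$ in the above is justified by the tower
property: for $s\le t$, both $X(s)$ and $Z(t,s)$ are
$\cF_s$-measurable, so that
$$\dbE\lan\bar C_0(s,t)^TX(s),Z(t,s)\ran=\dbE\lan\dbE[\bar
C_0(s,t)^T\bigm|\cF_s]X(s),Z(t,s)\ran,$$
after which the It\^o isometry and the M-solution representation of
$Y(t)$ apply as before.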
Now, for $I_3$,
$$\ba{ll}
\ns\ds I_3=\dbE\int_0^T\int_t^T\lan X(t),\dbE'[\bar
A_1(t,s)Y(s)]\ran
dsdt=\dbE\dbE'\int_0^T\int_0^s\lan\bar A_1(t,s,\o,\o')^TX(t,\o),Y(s,\o')\ran dtds\\
\ns\ds\q=\dbE'\int_0^T\lan\int_0^t\dbE[\bar
A_1(s,t,\o,\o')^TX(s,\o)]ds,Y(t,\o')\ran
dt\\
\ns\ds\q=\dbE\int_0^T\lan\int_0^t\dbE^*[\bar
A_1(s,t,\o^*,\o)^TX(s,\o^*)]ds,Y(t,\o)\ran
dt\\
\ns\ds\q\equiv\dbE\int_0^T\lan\int_0^t\dbE^*[\bar
A_1(s,t)^TX(s)]ds,Y(t)\ran dt.\ea$$
Finally, similar to the above, one has
$$\ba{ll}
\ns\ds I_4=\dbE\int_0^T\int_t^T\lan X(t),\dbE'[\bar
C_1(t,s)Z(s,t)]\ran dsdt\\
\ns\ds\q=\dbE\dbE'\int_0^T\int_0^s\lan\bar C_1(t,s,\o,\o')^TX(t,\o),Z(s,t,\o')\ran dtds\\
\ns\ds\q=\dbE'\int_0^T\int_0^t\lan \dbE[\bar
C_1(s,t,\o,\o')^TX(s,\o)],Z(t,s,\o')\ran
dsdt\\
\ns\ds\q=\dbE\int_0^T\int_0^t\lan
\dbE^*[\bar C_1(s,t)^TX(s)],Z(t,s)\ran dsdt\\
\ns\ds\q=\int_0^T\dbE\int_0^t\lan
\dbE\[\dbE^*[\bar C_1(s,t)^TX(s)]\bigm|\cF_s\],Z(t,s)\ran dsdt\\
\ns\ds\q=\int_0^T\dbE\lan\int_0^t
\dbE^*\[\dbE[\bar C_1(s,t)^T\bigm|\cF_s]X(s)\]dW(s),\int_0^tZ(t,s)dW(s)\ran dt\\
\ns\ds\q=\int_0^T\dbE\lan\int_0^t
\dbE^*\[\dbE[\bar C_1(s,t)^T\bigm|\cF_s]X(s)\]dW(s),Y(t)-\dbE Y(t)\ran dt\\
\ns\ds\q=\dbE\int_0^T\lan\int_0^t\dbE^*\[\dbE[\bar
C_1(s,t)^T\bigm|\cF_s]X(s)\]dW(s),Y(t)\ran dt.\ea$$
Combining the above, we obtain
\bel{4.2}\ba{ll}
\ns\ds\dbE\int_0^T\lan X(t),\psi(t)\ran dt=\dbE\int_0^T\lan
X(t),Y(t)\ran dt-\sum_{i=1}^4I_i\\
\ns\ds=\dbE\int_0^T\lan
Y(t),X(t)-\int_0^t\(\bar A_0(s,t)^TX(s)+\dbE^*[\bar A_1(s,t)^TX(s)]\)ds\\
\ns\ds\qq-\int_0^t\(\dbE[\bar
C_0(s,t)^T\bigm|\cF_s]X(s)+\dbE^*\[\dbE[\bar
C_1(s,t)^T\bigm|\cF_s]X(s)\]\)dW(s)\ran dt.\ea\ee
Now, we are in a position to state and prove the following {\it
duality principle for MF-BSVIEs}.
\ms
\bf Theorem 4.2. \sl Let {\rm(L2)} hold and $\psi(\cd)\in
L^2_{\cF_T}(0,T;\dbR^n)$. Let $(Y(\cd),Z(\cd\,,\cd))\in\cM^2[0,T]$
be the unique adapted M-solution of linear MF-BSVIE
$(\ref{L-MF-BSVIE1})$. Further, let $\f(\cd)\in
L_\dbF^2(0,T;\dbR^n)$ and $X(\cd)\in L^2_\dbF(0,T;\dbR^n)$ be the
solution to the following linear MF-FSVIE:
\bel{L-MF-FSVIE2}\ba{ll}
\ns\ds X(t)=\f(t)+\int_0^t\(\bar A_0(s,t)^TX(s)+\dbE^*[\bar A_1(s,t)^TX(s)]\)ds\\
\ns\ds\qq\qq+\int_0^t\(\dbE[\bar
C_0(s,t)^T\bigm|\cF_s]X(s)+\dbE^*\[\dbE[\bar
C_1(s,t)^T\bigm|\cF_s]X(s)\]\) dW(s),\qq t\in[0,T].\ea\ee
Then
\bel{duality}\dbE\int_0^T\lan Y(t),\f(t)\ran dt=\dbE\int_0^T\lan
X(t),\psi(t)\ran dt.\ee
\ms
\it Proof. \rm For linear MF-FSVIE (\ref{L-MF-FSVIE2}), when (L2)
holds, we have (L1). Hence, for any $\f(\cd)\in
L^2_\dbF(0,T;\dbR^n)$, (\ref{L-MF-FSVIE2}) admits a unique solution
$X(\cd)\in L^2_\dbF(0,T;\dbR^n)$. Then (\ref{duality}) follows from
(\ref{4.2}) immediately. \endpf
\ms
We call MF-FSVIE (\ref{L-MF-FSVIE2}) the adjoint equation of
MF-BSVIE (\ref{L-MF-BSVIE1}). Such a duality principle will be used
to establish comparison theorems for MF-BSVIEs. Note that since for
$s<t$, $\bar C_0(s,t)^T$ is $\cF_t$-measurable and not necessarily
$\cF_s$-measurable, we have
\bel{4.8}\dbE[\bar C_0(s,t)^T\bigm|\cF_s]\ne\bar C_0(s,t)^T,\qq
t\in(s,T],\ee
in general. Likewise, in general,
\bel{4.9}\dbE[\bar C_1(s,t)^T\bigm|\cF_s]\ne\bar C_1(s,t)^T,\qq
t\in(s,T],\ee
\ms
We now make some comparison between Theorems 4.1 and 4.2.
First, we begin with linear MF-FSVIE (\ref{4.1}) which is rewritten
here for convenience:
\bel{4.7}\ba{ll}
\ns\ds X(t)=\f(t)+\int_0^t\(A_0(t,s)X(s)+\dbE'[C_0(t,s)X(s)]\)ds\\
\ns\ds\qq\qq\q+\int_0^t\(A_1(t,s)X(s)+\dbE'[C_1(t,s)X(s)]\)dW(s),\qq
t\in[0,T].\ea\ee
According to Theorem 4.1, the adjoint equation of (\ref{4.7}) is
MF-BSVIE (\ref{4.5}). Now, we want to use Theorem 4.2 to find the
adjoint equation of (\ref{4.5}) which is regarded as
(\ref{L-MF-BSVIE1}) with
$$\left\{\ba{ll}
\ns\ds\bar A_0(t,s)=A_0(s,t)^T,\qq\bar A_1(t,s,\o,\o')=C_0(s,t,\o',\o)^T,\\
\ns\ds\bar C_0(t,s)=A_1(s,t)^T,\qq\bar
C_1(t,s,\o,\o')=C_1(s,t,\o',\o)^T.\ea\right.$$
Then, by Theorem 4.2, we obtain the adjoint equation
(\ref{L-MF-FSVIE2}) with the coefficients:
$$\left\{\ba{ll}
\ns\ds\bar A_0(s,t)^T=A_0(t,s),\qq\bar
A_1(s,t,\o',\o)^T=C_0(t,s,\o,\o'),\\
\ns\ds\dbE[\bar
C_0(s,t)^T\bigm|\cF_s]=\dbE[A_1(t,s)\bigm|\cF_s]=A_1(t,s),\\
\ns\ds\dbE[\bar
C_1(s,t,\o',\o)^T\bigm|\cF_s]=\dbE[C_1(t,s,\o,\o')\bigm|\cF_s]=C_1(t,s,\o,\o').\ea\right.$$
Hence, (\ref{4.7}) is the adjoint equation of (\ref{4.5}). Thus, we
have the following conclusion:
$$\hb{\sl The twice adjoint equation of a linear MF-FSVIE is itself.}$$
\ms
Next, we begin with linear MF-BSVIE (\ref{L-MF-BSVIE1}). From
Theorem 4.2, we know that the adjoint equation is linear MF-FSVIE
(\ref{L-MF-FSVIE2}). Now, we want to use Theorem 4.1 to find the
adjoint equation of (\ref{L-MF-FSVIE2}) which is regarded as
(\ref{4.7}) with
$$\left\{\ba{ll}
\ns\ds A_0(t,s)=\bar A_0(s,t)^T,\qq C_0(t,s,\o,\o')=\bar A_1(s,t,\o',\o)^T,\\
\ns\ds A_1(t,s)=\dbE[\bar C_0(s,t)^T\bigm|\cF_s],\qq
C_1(t,s,\o,\o')=\dbE[\bar C_1(s,t,\o',\o)^T\bigm|\cF_s].\ea\right.$$
Then by Theorem 4.1, the adjoint equation is given by (\ref{4.5})
with coefficients:
$$\left\{\ba{ll}
\ns\ds A_0(s,t)^T=\bar A_0(t,s),\qq C_0(s,t,\o',\o)^T=\bar A_1(t,s,\o,\o'),\\
\ns\ds A_1(s,t)^T=\dbE[\bar C_0(t,s)\bigm|\cF_t],\qq
C_1(s,t,\o',\o)^T=\dbE[\bar C_1(t,s,\o,\o')\bigm|\cF_t].\ea\right.$$
In other words, the twice adjoint equation of linear MF-BSVIE
(\ref{L-MF-BSVIE1}) is the following:
\bel{}\ba{ll}
\ns\ds Y(t)=\psi(t)+\int_t^T\(\bar A_0(t,s)Y(s)+\dbE[\bar
C_0(t,s)\bigm|\cF_t]Z(s,t)\\
\ns\ds\qq\qq+\dbE'\[\bar A_1(t,s)Y(s)+\dbE[\bar
C_1(t,s)\bigm|\cF_t]Z(s,t)\]\)ds-\int_t^TZ(t,s)dW(s),\q
t\in[0,T],\ea\ee
which is different from (\ref{L-MF-BSVIE1}), unless $\bar C_0(t,s)$
and $\bar C_1(t,s)$ are $\cF_t$-measurable for all $(t,s)\in\D$.
Thus, we have the following conclusion:
$$\hb{\sl The twice adjoint equation of a linear MF-BSVIE is not necessarily itself.}$$
\section{Comparison Theorems.}
\rm
In this section, we are going to establish some comparison theorems
for MF-FSVIEs and MF-BSVIEs, allowing the dimension to be larger
than 1. Let
$$\dbR^n_+=\Big\{(x_1,\cds,x_n)\in\dbR^n\bigm|x_i\ge0,~1\le i\le n\Big\}.$$
When $x\in\dbR^n_+$, we also denote it by $x\ge0$, and say that $x$
is {\it nonnegative}. By $x\le0$ and $x\ge y$ (if $x,y\in\dbR^n$),
we mean $-x\ge0$ and $x-y\ge0$, respectively. Moreover, if $X(\cd)$
is a process, then by $X(\cd)\ge0$, we mean
$$X(t)\ge0,\qq t\in[0,T],\q\as$$
Also, $X(\cd)$ is said to be {\it nondecreasing} if it is
componentwise nondecreasing. Likewise, we may define $X(\cd)\le0$
and $X(\cd)\ge Y(\cd)$ (if both $X(\cd)$ and $Y(\cd)$ are
$\dbR^n$-valued processes), and so on.
\ms
In what follows, we let $e_i\in\dbR^n$ be the vector whose $i$-th
entry is 1 and all other entries are zero. Also, we let
$$\left\{\ba{ll}
\ns\ds\dbM^n_+=\Big\{A=(a_{ij})\in\dbR^{n\times
n}\bigm|a_{ij}\ge0,~i\ne j\Big\}\equiv \Big\{A\in\dbR^{n\times
n}\bigm|\lan Ae_i,e_j\ran\ge0,~i\ne j\Big\},\\
\ns\ds\h\dbM^{n\times m}_+=\Big\{A=(a_{ij})\in\dbR^{n\times
m}\bigm|a_{ij}\ge0,~1\le i\le n,~1\le j\le m\Big\},\\
\ns\ds\dbM^n_0=\Big\{A=(a_{ij})\in\dbR^{n\times
n}\bigm|a_{ij}=0,~i\ne j\Big\}\equiv \Big\{A\in\dbR^{n\times
n}\bigm|\lan Ae_i,e_j\ran=0,~i\ne j\Big\}.\ea\right.$$
Note that $\h\dbM^{n\times m}_+$ is the set of all $(n\times m)$
matrices with all the entries being nonnegative, $\dbM^n_+$ is the
set of all $(n\times n)$ matrices with all the off-diagonal entries
being nonnegative, and $\dbM^n_0$ is actually the set of all
$(n\times n)$ diagonal matrices. Clearly, $\dbM^n_+$ and
$\h\dbM^{n\times m}_+$ are closed convex cones of $\dbR^{n\times n}$
and $\dbR^{n\times m}$, respectively, and $\dbM^n_0$ is a proper
subspace of $\dbR^{n\times n}$. In contrast, for $n=m=1$, one has
\bel{5.1}\dbM^1_+=\dbM^1_0=\dbR,\qq\h\dbM^{1\times1}_+=\dbR_+\equiv[0,\infty).\ee
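For $n\ge2$ these cones are genuinely different; for instance (an
illustration added here), the matrix $A\in\dbR^{2\times2}$ with
entries $a_{11}=a_{22}=-1$ and $a_{12}=a_{21}=1$ belongs to
$\dbM^2_+$ but not to $\h\dbM^2_+$, while
$-I\in\dbM^2_0\subseteq\dbM^2_+$ and $-I\notin\h\dbM^2_+$.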
We have the following simple result, which will be useful below;
its proof is immediate (for necessity, take $x=e_j$, $1\le j\le m$;
for sufficiency, write $x=\sum_{j=1}^mx_je_j$ with $x_j\ge0$).
\ms
\bf Proposition 5.1. \sl Let $A\in\dbR^{n\times m}$. Then
$A\in\h\dbM^{n\times m}_+$ if and only if
\bel{}Ax\ge0,\qq\forall x\in\dbR^m_+.\ee
\rm
In what follows, we will denote $\h\dbM^n_+=\h\dbM^{n\times n}_+$.
\subsection{Comparison of solutions to MF-FSVIEs.}
In this subsection, we would like to discuss comparison of solutions
to linear MF-FSVIEs. There are some positive and also negative
results. To begin with, let us first present an example of MF-FSDEs.
\ms
\bf Example 5.2. \rm Consider the following one-dimensional linear
MF-FSDE, written in the integral form:
$$X(t)=1+\int_0^t\dbE X(s)dW(s),\qq t\in[0,T].$$
Taking expectation, we have
$$\dbE X(t)=1,\qq\forall t\in[0,T].$$
Consequently, the solution $X(\cd)$ is given by
$$X(t)=1+\int_0^tdW(s)=1+W(t),\qq t\in[0,T].$$
Thus, although $X(0)=1>0$, the following fails:
$$X(t)\ge0,\qq t\in[0,T],\q\as$$
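In fact, the failure is quantitative: since $X(t)=1+W(t)$ with
$W(t)\sim N(0,t)$, we have
$$\dbP\(X(t)<0\)=\dbP\(W(t)<-1\)=\Phi\(-t^{-{1\over2}}\)>0,\qq\forall
t>0,$$
where $\Phi$ denotes the standard normal distribution function.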
The above example shows that if the diffusion of an MF-FSDE
contains a nonlocal term, the expected comparison of solutions
fails in general. Therefore, for linear MF-FSDEs, it is natural to
restrict attention to the following:
\bel{LSDE}\ba{ll}
\ns\ds
X(t)=x+\int_0^t\(A_0(s)X(s)+\dbE'[C_0(s)X(s)]\)ds+\int_0^tA_1(s)X(s)dW(s),\q
t\in[0,T],\ea\ee
in which the diffusion does not contain a nonlocal term. For the above,
we make the following assumption.
\ms
{\bf(C1)} The maps
$$A_0,A_1:[0,T]\times\O\to\dbR^{n\times n},\q
C_0:[0,T]\times\O^2\to\dbR^{n\times n},$$
are uniformly bounded; $A_0(\cd)$ and $A_1(\cd)$ are
$\dbF$-progressively measurable, and $C_0(\cd)$ is
$\dbF^2$-progressively measurable.
\ms
Note that, due to (\ref{5.1}), the above (C1) is always true if
$n=1$. We now present the following comparison theorem for linear
MF-FSDEs.
\ms
\bf Proposition 5.3. \sl Let {\rm(C1)} hold and moreover,
\bel{5.4}A_0(s,\o)\in\dbM^n_+,\q C_0(s,\o,\o')\in\h\dbM^n_+,\q
A_1(s,\o)\in\dbM^n_0,\qq s\in[0,T],~~\as\o,\o'\in\O.\ee
Let $X(\cd)\in L^2_\dbF(0,T;\dbR^n)$ be the solution of linear
MF-FSDE $(\ref{LSDE})$ with $x\ge0$. Then
\bel{ge0}X(t)\ge0,\qq\forall t\in[0,T],\q\as\ee
\ms
\it Proof. \rm It is known from Theorem 2.6 that as a special case
of MF-FSVIE, the linear MF-FSDE (\ref{LSDE}) admits a unique
solution $X(\cd)\in L^p_\dbF(0,T;\dbR^n)$ for any $x\in\dbR^n$, and
any $p\ge2$. Further, it is not hard to see that $X(\cd)$ has
continuous paths. Since the equation is linear, it suffices to show
that $x\le0$ implies
\bel{le0}X(t)\le0,\qq t\in[0,T],\q\as\ee
To prove (\ref{le0}), we define a convex function
$$f(x)=\sum_{i=1}^n(x_i^+)^2,\qq\forall
x=(x_1,x_2,\cds,x_n)\in\dbR^n,$$
where $a^+=\max\{a,0\}$ for any $a\in\dbR$. Applying It\^o's formula
to $f(X(t))$, we get
$$\ba{ll}
\ns\ds f(X(t))-f(x)=\int_0^t\[\lan
f_x(X(s)),A_0(s)X(s)+\dbE'[C_0(s)X(s)]\ran\\
\ns\ds\qq+{1\over2}\lan
f_{xx}(X(s))A_1(s)X(s),A_1(s)X(s)\ran\]ds+\int_0^t\lan
f_x(X(s)),A_1(s)X(s)\ran dW(s).\ea$$
We observe the following: (noting $A_0(s)\in\dbM^n_+$)
$$\ba{ll}
\ns\ds\lan f_x(X(s)),A_0(s)X(s)\ran=\sum_{i,j=1}^n2X_i(s)^+\lan
e_i,A_0(s)e_j\ran X_j(s)\\
\ns\ds\qq=\sum_{i=1}^n2X_i(s)^+\lan e_i,A_0(s)e_i\ran
X_i(s)+\sum_{i\ne j}2X_i(s)^+\lan
e_i,A_0(s)e_j\ran X_j(s)\\
\ns\ds\qq\le\sum_{i=1}^n2[X_i(s)^+]^2\lan
e_i,A_0(s)e_i\ran+\sum_{i\ne j}2\lan e_i,A_0(s)e_j\ran X_i(s)^+
X_j(s)^+\le Kf(X(s)).\ea$$
Also, one has (making use of $C_0(s)\in\h\dbM^n_+$)
$$\ba{ll}
\ns\ds\dbE\lan
f_x(X(s)),\dbE'[C_0(s)X(s)]\ran\\
\ns\ds=2\int_{\O^2}\sum_{i,j=1}^nX_i(s,\o)^+\lan
e_i,C_0(s,\o,\o')e_j\ran X_j(s,\o')\dbP(d\o)\dbP(d\o')\\
\ns\ds\le2\int_{\O^2}\sum_{i,j=1}^nX_i(s,\o)^+\lan
e_i,C_0(s,\o,\o')e_j\ran X_j(s,\o')^+\dbP(d\o)\dbP(d\o')\\
\ns\ds\le K\(\dbE\[\sum_{i=1}^nX_i(s)^+\]\)^2\le K\dbE f(X(s)).\ea$$
Next, we have (noting $A_1(\cd)$ and $f_{xx}(\cd)$ are diagonal)
$$\ba{ll}
\ns\ds{1\over2}\dbE\lan
f_{xx}(X(s))A_1(s)X(s),A_1(s)X(s)\ran=\dbE\sum_{i=1}^nI_{(X_i(s)\ge0)}\(\lan
A_1(s)e_i,e_i\ran X_i(s)\)^2\\
\ns\ds=\dbE\sum_{i=1}^n\lan
A_1(s)e_i,e_i\ran{}^2[X_i(s)^+]^2\le K\dbE f(X(s)).\ea$$
Consequently,
$$\dbE f(X(t))\le f(x)+K\int_0^t\dbE f(X(s))ds,\qq t\in[0,T].$$
Hence, by Gronwall's inequality, we obtain
$$\sum_{i=1}^n\dbE|X_i(t)^+|^2\le K\sum_{i=1}^n|x_i^+|^2,\qq t\in[0,T].$$
Therefore, if $x\le0$ (component-wise), then
$$\sum_{i=1}^n\dbE|X_i(t)^+|^2=0,\qq\forall t\in[0,T].$$
This leads to (\ref{le0}). \endpf
\ms
We now make some observations on condition (\ref{5.4}).
\ms
\it 1. Let $C_0(\cd)=0$, $A_1(\cd)=0$, and $A_0(\cd)$ be continuous
and for some $i\ne j$,
$$\lan A_0(0)e_i,e_j\ran<0,$$
\rm i.e., at least one off-diagonal entry of $A_0(0)$ is negative.
Then by letting $x=e_i$, we have
$$X_j(t)=\lan X(t),e_j\ran=\int_0^t\lan A_0(s)X(s),e_j\ran ds=\lan A_0(0)e_i,e_j\ran t+o(t)<0,$$
for $t>0$ small. Thus, $X(0)\ge0$ does not imply $X(t)\ge0$.
\ms
\it 2. Let $A_0(\cd)=0$, $A_1(\cd)=0$, and $C_0(\cd)$ be continuous
and for some $i\ne j$,
$$\lan C_0(0)e_i,e_j\ran<0,$$
\rm i.e., at least one off-diagonal entry of $C_0(0)$ is negative.
Then by a similar argument as above, we have that $X(0)\ge0$ does
not imply $X(t)\ge0$.
\ms
\it 3. Let $A_0(\cd)=0$, $C_0(\cd)=0$ and for some $i\ne j$,
$$\int_0^T\dbP\(\lan A_1(s)e_i,e_j\ran\ne0\)ds>0,$$
\rm i.e., at least one off-diagonal entry of $A_1(\cd)$ is not
identically zero. Then by letting $x=e_i$, we have
$$X_j(t)=\int_0^t\lan A_1(s)X(s),e_j\ran dW(s)\not\equiv0,\qq t\in[0,T].$$
On the other hand,
$$\dbE X_j(t)=0,\qq t\in[0,T].$$
Hence,
$$X_j(t)\ge0,\qq\forall t\in[0,T],~\as$$
must fail.
\ms
\it 4. Let $n=1$, $A_0(\cd)=A_1(\cd)=0$ and $C_0(\cd)$ bounded,
$\dbF$-adapted with
$$C_0(s)\ne0,\q\dbE C_0(s)=0,\qq s\in[0,T].$$
\rm This means that ``$C_0(s)\ge0,~\forall s\in[0,T],~\as$'' fails
(or diagonal elements of $C_0(\cd)$ are not all nonnegative).
Consider the following MF-FSDE:
$$X(t)=1+\int_0^tC_0(s)\dbE X(s)ds,\qq t\in[0,T].$$
Then
$$\dbE X(t)=1,\qq t\in[0,T].$$
Hence,
$$X(t)=1+\int_0^tC_0(s)ds,\qq t\in[0,T].$$
It is easy to choose a $C_0(\cd)$ such that
$$X(t)\ge0,\qq\forall t\in[0,T],~\as$$
is violated.
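One concrete choice (an illustration supplied here; any $C_0(\cd)$
with the stated properties will do) is $C_0(s)=\l\,\hb{sgn}(W(s))$
with a constant $\l>1/T$: it is bounded, $\dbF$-adapted, a.s.
nonzero for each fixed $s>0$, and $\dbE C_0(s)=0$ by the symmetry
of $W(s)$. Then
$$X(t)=1+\l\int_0^t\hb{sgn}(W(s))ds=1+\l(2A_t-t),\qq
A_t\equiv\int_0^tI_{(W(s)>0)}ds,$$
and since, by L\'evy's arcsine law, the occupation time $A_t$ has
full support in $[0,t]$, one has $\dbP(X(t)<0)>0$ whenever $\l t>1$.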
\ms
The above observations show that, in some sense, conditions assumed
in (\ref{5.4}) are sharp for Proposition 5.3.
\ms
Based on the above, let us now consider the following linear
MF-FSVIE:
\bel{5.7}\ba{ll}
\ns\ds X(t)=\f(t)+\int_0^t\(A_0(t,s)X(s)+\dbE'\[C_0(t,s)X(s)\]\)ds\\
\ns\ds\qq\qq\qq\qq+\int_0^tA_1(s)X(s)dW(s),\qq\qq t\in[0,T].\ea\ee
Note that $A_1(\cd)$ is independent of $t$ here. According to
\cite{Tudor 1989}, we know that for (linear) FSVIEs (without the
nonlocal term, i.e., $C_0(\cd\,,\cd)=0$ in (\ref{5.7})), if the
diffusion depends on both $(t,s)$ and $X(\cd)$, i.e., $A_1(t,s)$
really depends on $(t,s)$, a comparison theorem will fail in
general. Next, let us look at an example which is concerned with the
free term $\f(\cd)$.
\ms
\bf Example 5.4. \rm Consider the following one-dimensional FSVIE:
$$X(t)=T-t+\int_0^tbX(s)ds+\int_0^t\si X(s)dW(s),\qq t\in[0,T],$$
for some $b,\si\in\dbR$. The above is equivalent to the following:
$$\left\{\ba{ll}
\ns\ds dX(t)=[bX(t)-1]dt+\si X(t)dW(t),\qq t\in[0,T],\\
\ns\ds X(0)=T.\ea\right.$$
The solution to the above is explicitly given by the following:
$$X(t)=e^{(b-{\si^2\over2})t+\si
W(t)}\[T-\int_0^te^{-(b-{\si^2\over2})s-\si W(s)}ds\],\qq
t\in[0,T].$$
We know that as long as $\si\ne0$, for any $t>0$ small and any
$K>0$,
$$\dbP\(\int_0^te^{-(b-{\si^2\over2})s-\si W(s)}ds\ge K\)>0.$$
Therefore, we must have
$$\dbP(X(t)<0)>0,\qq\forall t>0~\hb{(small)}.$$
On the other hand, if $\si=0$, then
$$X(t)=e^{bt}\[T-\int_0^te^{-bs}ds\],\qq t\in[0,T].$$
Thus, when $b=0$, one has
$$X(t)=T-t,\qq t\in[0,T],$$
and when $b\ne0$,
$$X(t)=e^{bt}T+{1\over b}(1-e^{bt})={e^{bt}\over b}\(e^{-bt}-1+bT\),\qq t\in[0,T].$$
Since
$$e^\l-1-\l>0,\qq\forall\l\ne0,$$
we have that $b<0$ implies
$$X(T)<0.$$
The above example tells us that when $\si\ne0$, or $\si=0$ and
$b<0$, although the free term $\f(t)=T-t$ is nonnegative on $[0,T]$,
the solution $X(\cd)$ of an FSVIE of the form (\ref{5.7}) does not
necessarily remain nonnegative on $[0,T]$. Consequently, nonnegativity of the
free term is not enough for the solution of the MF-FSVIE to be
nonnegative. Thus, besides the nonnegativity of the free term, some
additional conditions are needed.
\ms
To present positive results, we introduce the following assumption.
\ms
{\bf(C2)} The maps
$$A_0:\D^*\times\O\to\dbR^{n\times n},\q A_1:[0,T]\times\O\to\dbR^{n\times
n},\q C_0:\D^*\times\O^2\to\dbR^{n\times n},$$
are measurable and uniformly bounded. For any $t\in[0,T]$,
$s\mapsto(A_0(t,s),A_1(s))$ is $\dbF$-progressively measurable on
$[0,t]$, and $s\mapsto C_0(t,s)$ is $\dbF^2$-progressively
measurable on $[0,t]$.
\ms
We now present the following result which is simple but will be
useful later.
\ms
\bf Proposition 5.5. \sl Let {\rm(C2)} hold. Further,
\bel{}A_0(t,s,\o),C_0(t,s,\o,\o')\in\h\dbM^n_+,\q
A_1(s,\o)=0,\q\ae(t,s)\in\D^*,~\as\o,\o'\in\O.\ee
Let $X(\cd)$ be the solution to $(\ref{5.7})$, with $\f(\cd)\in
L^2_\dbF(0,T;\dbR^n)$ and $\f(\cd)\ge0$. Then
\bel{5.10}X(t)\ge\f(t)\ge0,\qq t\in[0,T].\ee
\rm
\it Proof. \rm Define
$$(\cA X)(t)=\int_0^t\(A_0(t,s)X(s)+\dbE'[C_0(t,s)X(s)]\)ds,\qq t\in[0,T].$$
By our condition, we see that
$$(\cA X)(\cd)\ge0,\qq\forall X(\cd)\in L^2_\dbF(0,T;\dbR^n),~X(\cd)\ge0.$$
Now, we define the following sequence
$$\left\{\ba{ll}
\ns\ds X_0(\cd)=\f(\cd),\\
\ns\ds X_k(\cd)=\f(\cd)+(\cA X_{k-1})(\cd),\qq k\ge1.\ea\right.$$
It is easy to see that
$$X_k(\cd)\ge\f(\cd),\qq\forall k\ge0,$$
and
$$\lim_{k\to\infty}\|X_k(\cd)-X(\cd)\|_{L^2_\dbF(0,T;\dbR^n)}=0,$$
with $X(\cd)$ being the solution to (\ref{5.7}). Since
$L^2$-convergence yields a.e. convergence along a subsequence, the
inequality $X_k(\cd)\ge\f(\cd)$ passes to the limit, and
(\ref{5.10}) holds. \endpf
\ms
For the case that the diffusion is nonzero in the equation, we have
the following result.
\ms
\bf Proposition 5.6. \sl Let {\rm(C2)} hold. Suppose
\bel{}\ba{ll}
\ns\ds A_0(t,s,\o)\in\dbM_+^n,\q C_0(t,s,\o,\o')\in\h\dbM_+^n,\q
A_1(s,\o)\in\dbM_0^n,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\ae(t,s)\in\D^*,~\as\o,\o'\in\O.\ea\ee
Moreover, let $t\mapsto(\f(t),A_0(t,s),C_0(t,s))$ be continuous, and
$\f(\cd)\in L^p_\dbF(0,T;\dbR^n)$ for some $p>2$. Further,
\bel{5.11}\ba{ll}
\ns\ds\f(t_1)\ge\f(t_0)\ge0,\q A_0(t_1,s)\h x\ge A_0(t_0,s)\h x,\q
C_0(t_1,s)\h x\ge C_0(t_0,s)\h x,\\
\ns\ds\qq\qq\qq\qq\forall\,s\le t_0<t_1\le T,~s\in[0,T],~\h
x\in\dbR^n_+,~\as\ea\ee
Let $X(\cd)$ be the solution of linear MF-FSVIE $(\ref{5.7})$. Then
\bel{5.12}X(t)\ge0,\qq t\in[0,T],~\as\ee
\ms
\it Proof. \rm Let $\Pi=\{\t_k,0\le k\le N\}$ be an arbitrary set of
finitely many $\dbF$-stopping times with $0=\t_0<\t_1<\cds<\t_N=T$,
and we define its mesh size by
$$\|\Pi\|=\esssup_{\o\in\O}\max_{1\le k\le N}|\t_k-\t_{k-1}|.$$
Let
$$\left\{\ba{ll}
\ns\ds
A^\Pi_0(t,s)=\sum_{k=0}^{N-1}A_0(\t_k,s)I_{[\t_k,\t_{k+1})}(t),\qq
C^\Pi_0(t,s)=\sum_{k=0}^{N-1}C_0(\t_k,s)I_{[\t_k,\t_{k+1})}(t),\\
\ns\ds\f^\Pi(t)=\sum_{k=0}^{N-1}
\f(\t_k)I_{[\t_k,\t_{k+1})}(t).\ea\right.$$
Clearly, each $A_0(\t_k,\cd)$ is an $\dbF$-adapted bounded process,
each $C_0(\t_k,\cd)$ is an $\dbF^2$-adapted bounded process, and
each $\f(\t_k)$ is an $\cF_{\t_k}$-measurable random variable.
Moreover, for each $k\ge0$,
\bel{}A_0(\t_k,s)\in\dbM^n_+,\q C_0(\t_k,s)\in\h\dbM^n_+,\qq
s\in[\t_k,\t_{k+1}],~~\as,\ee
and
\bel{5.13}0\le\f(\t_k)\le\f(\t_{k+1}),\qq\as\ee
Now, we let $X^\Pi(\cd)$ be the solution to the following MF-FSVIE:
\bel{}\ba{ll}
\ns\ds X^\Pi(t)=\f^\Pi(t)+\int_0^t\(A_0^\Pi(t,s)X^\Pi(s)+\dbE'\[C_0^\Pi(t,s)X^\Pi(s)\]\)ds\\
\ns\ds\qq\qq\qq\qq\qq\qq+\int_0^tA_1(s)X^\Pi(s)dW(s),\qq
t\in[0,T].\ea\ee
Then on interval $[0,\t_1)$, we have
$$
X^\Pi(t)=\f(0)+\int_0^t\(A_0(0,s)X^\Pi(s)+\dbE'\[C_0(0,s)X^\Pi(s)\]\)ds+\int_0^tA_1(s)X^\Pi(s)dW(s),$$
which is an MF-FSDE, and $X^\Pi(\cd)$ has continuous paths. From
Proposition 5.3, we have
$$X^\Pi(t)\ge0,\qq t\in[0,\t_1),~\as$$
In particular,
\bel{5.16}\ba{ll}
\ns\ds
X^\Pi(\t_1-0)=\f(0)+\int_0^{\t_1}\(A_0(0,s)X^\Pi(s)+\dbE'\[C_0(0,s)X^\Pi(s)\]\)ds\\
\ns\ds\qq\qq\qq\qq\qq+\int_0^{\t_1}A_1(s)X^\Pi(s)dW(s)\ge0.\ea\ee
Next, on $[\t_1,\t_2)$, we have (making use of the above)
$$\ba{ll}
\ns\ds
X^\Pi(t)=\f(\t_1)+\int_0^{\t_1}\(A_0(\t_1,s)X^\Pi(s)+\dbE'\[C_0(\t_1,s)X^\Pi(s)\]\)ds+\int_0^{\t_1}
A_1(s)X^\Pi(s)dW(s)\\
\ns\ds\qq\qq+\int_{\t_1}^t\(A_0(\t_1,s)X^\Pi(s)+\dbE'\[C_0(\t_1,s)X^\Pi(s)\]\)ds
+\int_{\t_1}^tA_1(s)X^\Pi(s)dW(s)\\
\ns\ds\qq=\f(\t_1)-\f(0)+X^\Pi(\t_1-0)\\
\ns\ds\qq\qq+\int_0^{\t_1}\Big\{\(A_0(\t_1,s)-A_0(0,s)\)X^\Pi(s)+\dbE'\[\(C_0(\t_1,s)-C_0(0,s)\)
X^\Pi(s)\]\Big\}ds\\
\ns\ds\qq\qq+\int_{\t_1}^t\(A_0(\t_1,s)X^\Pi(s)+\dbE'\[C_0(\t_1,s)X^\Pi(s)\]\)ds
+\int_{\t_1}^tA_1(s)X^\Pi(s)dW(s)\\
\ns\ds\qq\equiv\wt
X(\t_1)+\int_{\t_1}^t\(A_0(\t_1,s)X^\Pi(s)+\dbE'\[C_0(\t_1,s)X^\Pi(s)\]\)ds
+\int_{\t_1}^tA_1(s)X^\Pi(s)dW(s),\ea$$
where, by our conditions assumed in (\ref{5.11}), and noting
(\ref{5.16}),
$$\ba{ll}
\ns\ds\wt X(\t_1)\equiv\f(\t_1)-\f(0)+X^\Pi(\t_1-0)\\
\ns\ds\qq\qq+\int_0^{\t_1}\Big\{\(A_0(\t_1,s)-A_0(0,s)\)X^\Pi(s)+\dbE'\[\(C_0(\t_1,s)-C_0(0,s)\)
X^\Pi(s)\]\Big\}ds\ge0.\ea$$
Hence, by Proposition 5.3 again, one obtains
$$X^\Pi(t)\ge0,\qq t\in[\t_1,\t_2).$$
By induction, we see that
$$X^\Pi(t)\ge0,\qq t\in[0,T],~\as$$
On the other hand, it is straightforward to show that
$$\lim_{\|\Pi\|\to0}\|X^\Pi(\cd)-X(\cd)\|_{L^2_\dbF(0,T;\dbR^n)}=0.$$
Then (\ref{5.12}) follows from the stability estimate in Corollary
2.7. \endpf
\ms
We now look at the following (nonlinear) MF-FSVIEs with $i=0,1$:
\bel{5.5}\ba{ll}
\ns\ds
X_i(t)=\f_i(t)+\int_0^tb_i(t,s,X_i(s),\G^b_i(t,s,X_i(s)))ds+\int_0^t\si(s,X_i(s))dW(s),\q
t\in[0,T],\ea\ee
where
\bel{}\G^b_i(t,s,X_i(s))=\int_\O\th^b_i(t,s,\o,\o',X_i(s,\o),X_i(s,\o'))\dbP(d\o').\ee
Note that $\si(\cd)$ does not contain a nonlocal term, and it is
independent of $t\in[0,T]$ and of $i=0,1$. The following result
can be regarded as an extension of \cite{Tudor 1989} from FSVIEs to
MF-FSVIEs.
\ms
\bf Theorem 5.7. \sl For $i=0,1$, let
$b_i(\cd),\si(\cd),\th^b_i(\cd)$ appearing in $(\ref{5.5})$ satisfy
{\rm(H1)--(H2)} and $\f_i(\cd)\in L^2_\dbF(0,T;\dbR^n)$. Further,
for all $x,\bar x,x'\in\dbR^n$, $\h x\in\dbR^n_+$,
$\g\in\dbR^{m_1}$, almost all $(t,s)\in\D^*$, and almost all
$\o,\o'\in\O$,
\bel{5.19}(b_0)_\g(t,s,\o,x,\g)\in\h\dbM^{n\times
m_1}_+,\qq\si_x(s,\o,x)\in\dbM^n_0,\ee
and maps
\bel{5.20}\ba{ll}
\ns\ds t\mapsto(b_0)_x(t,s,\o,x,\g)\h x,\\
\ns\ds t\mapsto(b_0)_\g(t,s,\o,x,\g)(\th_0^b)_x(t,s,\o,\bar
x,x')\h x,\\
\ns\ds t\mapsto(b_0)_\g(t,s,\o,x,\g)(\th_0^b)_{x'}(t,s,\o,\bar
x,x')\h x,\\
\ns\ds t\mapsto b_1(t,s,\o,x,\g)-b_0(t,s,\o,x,\g),\\
\ns\ds
t\mapsto\th^b_1(t,s,\o,\o',x,x')-\th^b_0(t,s,\o,\o',x,x'),\\
\ns\ds
t\mapsto(b_0)_\g(t,s,\o,x,\g)\[\th^b_1(t,s,\o,\o',x,x')-\th^b_0(t,s,\o,\o',x,x')\],\\
\ns\ds t\mapsto\f_1(t)-\f_0(t)\ea\ee
are continuous, nonnegative and nondecreasing on $[s,T]$. Let
$X_i(\cd)\in L^p_\dbF(0,T;\dbR^n)$ be the solutions to the
corresponding equations $(\ref{5.5})$. Then
\bel{5.21}X_0(t)\le X_1(t),\qq\forall t\in[0,T],~\as\ee
\ms
\it Proof. \rm From the equations satisfied by $X_0(\cd)$ and
$X_1(\cd)$, we have the following:
$$\ba{ll}
\ns\ds X_1(t)-X_0(t)=\f_1(t)-\f_0(t)\\
\ns\ds\qq+\int_0^t\[b_1(t,s,X_1(s),\G^b_1(t,s,X_1(s)))-b_0(t,s,X_0(s),\G^b_0(t,s,X_0(s)))\]ds\\
\ns\ds\qq+\int_0^t\[\si(s,X_1(s))-\si(s,X_0(s))\]dW(s)\\
\ns\ds\qq=\h\f_1(t)-\h\f_0(t)\\
\ns\ds\qq+\int_0^t\[b_0(t,s,X_1(s),\G^b_0(t,s,X_1(s)))-b_0(t,s,X_0(s),\G^b_0(t,s,X_0(s)))\]ds\\
\ns\ds\qq+\int_0^t\[\si(s,X_1(s))-\si(s,X_0(s))\]dW(s),\ea$$
where (making use of Proposition 5.1 and (\ref{5.19})--(\ref{5.20}))
$$\ba{ll}
\ns\ds\h\f_1(t)-\h\f_0(t)=\f_1(t)-\f_0(t)\\
\ns\ds\qq+\int_0^t\[b_1(t,s,X_1(s),\G^b_1(t,s,X_1(s)))-b_0(t,s,X_1(s),\G^b_0(t,s,X_1(s)))\]ds\\
\ns\ds=\f_1(t)-\f_0(t)+\int_0^t\[b_1(t,s,X_1(s),\G^b_1(t,s,X_1(s)))-b_0(t,s,X_1(s),\G^b_1(t,s,X_1(s)))\]ds\\
\ns\ds\qq+\int_0^t\[\int_0^1(b_0)_\g(t,s,X_1(s),\wt\G^b_\l(t,s))d\l\]\(\G^b_1(t,s,X_1(s))
-\G^b_0(t,s,X_1(s))\)ds\ge0,\ea$$
and nondecreasing in $t$, where
$$\wt\G^b_\l(t,s)=(1-\l)\G^b_0(t,s,X_1(s))+\l\G^b_1(t,s,X_1(s)).$$
Now, we look at the following:
$$\ba{ll}
\ns\ds b_0(t,s,X_1(s),\G^b_0(t,s,X_1(s)))-b_0(t,s,X_0(s),\G^b_0(t,s,X_0(s)))\\
\ns\ds=\[\int_0^1(b_0)_x(t,s,X_\l(s),\G^b_\l(t,s))d\l\]\(X_1(s)-X_0(s)\)\\
\ns\ds\qq+\[\int_0^1(b_0)_\g(t,s,X_\l(s),\G^b_\l(t,s))d\l\]\(\G^b_0(t,s,X_1(s))-\G^b_0(t,s,X_0(s))\)\\
\ns\ds\equiv(b_0)_x(t,s)\(X_1(s)-X_0(s)\)+(b_0)_\g(t,s)\(\G^b_0(t,s,X_1(s))-\G^b_0(t,s,X_0(s))\),\ea$$
where
\bel{Xl}\left\{\ba{ll}
\ns\ds X_\l(s)=(1-\l)X_0(s)+\l X_1(s),\\
\ns\ds\G^b_\l(t,s)=(1-\l)\G^b_0(t,s,X_0(s))+\l\G^b_0(t,s,X_1(s)).\ea\right.\ee
and
$$\left\{\ba{ll}
\ns\ds(b_0)_x(t,s)=\int_0^1(b_0)_x(t,s,X_\l(s),\G^b_\l(t,s))d\l,\\
\ns\ds(b_0)_\g(t,s)=\int_0^1(b_0)_\g(t,s,X_\l(s),\G^b_\l(t,s))d\l.\ea\right.$$
Moreover,
$$\ba{ll}
\ns\ds\G^b_0(t,s,X_1(s))-\G^b_0(t,s,X_0(s))\\
\ns\ds=\int_\O\[\th_0^b(t,s,\o,\o',X_1(s,\o),X_1(s,\o'))-\th_0^b(t,s,\o,\o',X_0(s,\o),X_0(s,\o'))\]
\dbP(d\o')\\
\ns\ds=\Big\{\int_\O\[\int_0^1(\th_0^b)_x(t,s,\o,\o',X_\l(s,\o),X_\l(s,\o'))d\l\]
\dbP(d\o')\Big\}\(X_1(s,\o)-X_0(s,\o)\)\\
\ns\ds\qq+\int_\O\[\int_0^1(\th_0^b)_{x'}(t,s,\o,\o',X_\l(s,\o),X_\l(s,\o'))d\l\]
\(X_1(s,\o')-X_0(s,\o')\)\dbP(d\o')\\
\ns\ds=\dbE'\[(\th_0^b)_x(t,s)\]\(X_1(s)-X_0(s)\)+\dbE'\[(\th^b_0)_{x'}(t,s)\(X_1(s,\o')-X_0(s,\o')\)
\],\ea$$
where
\bel{}\left\{\ba{ll}
\ns\ds(\th^b_0)_x(t,s)=\int_0^1(\th^b_0)_x(t,s,\o,\o',X_\l(s,\o),
X_\l(s,\o'))d\l,\\
\ns\ds(\th^b_0)_{x'}(t,s)=\int_0^1(\th^b_0)_{x'}(t,s,\o,\o',X_\l(s,\o),X_\l(s,\o'))d\l,\ea\right.\ee
and $X_\l(\cd)$ is defined as (\ref{Xl}). Thus,
$$\ba{ll}
\ns\ds b_0(t,s,X_1(s),\G^b_0(t,s,X_1(s)))-b_0(t,s,X_0(s),\G^b_0(t,s,X_0(s)))\\
\ns\ds=\Big\{(b_0)_x(t,s)+\dbE'\[(b_0)_\g(t,s)(\th^b_0)_x(t,s)\]\Big\}\(X_1(s)-X_0(s)\)\\
\ns\ds\qq\qq+\dbE'\[(b_0)_\g(t,s)(\th^b_0)_{x'}(t,s)\(X_1(s,\o')-X_0(s,\o')\)\]\\
\ns\ds\equiv
A_0(t,s)\(X_1(s)-X_0(s)\)+\dbE'\[C_0(t,s)\(X_1(s)-X_0(s)\)\],\ea$$
where
\bel{5.24}\left\{\ba{ll}
\ns\ds A_0(t,s)=(b_0)_x(t,s)+\dbE'\[(b_0)_\g(t,s)(\th^b_0)_x(t,s)\]\in\dbM^n_+,\\
\ns\ds
C_0(t,s)=(b_0)_\g(t,s)(\th^b_0)_{x'}(t,s)\in\h\dbM^n_+,\ea\right.\qq(t,s)\in\D^*,~\as\ee
Similarly,
$$\ba{ll}
\ns\ds\si(s,X_1(s))-\si(s,X_0(s))\equiv
A_1(s)\(X_1(s)-X_0(s)\),\ea$$
where
\bel{5.25}A_1(s)\equiv\int_0^1\si_x(s,X_\l(s))d\l\in\dbM^n_0,\qq s\in[0,T],~\as\ee
Then we have
$$\ba{ll}
\ns\ds X_1(t)-X_0(t)=\h\f_1(t)-\h\f_0(t)+\int_0^t\Big\{A_0(t,s)\(X_1(s)-X_0(s)\)\\
\ns\ds\qq\qq\qq+\dbE'\[C_0(t,s)\(X_1(s)-X_0(s)\)\]\Big\}ds+\int_0^tA_1(s)\(X_1(s)-X_0(s)\)dW(s).\ea$$
From (\ref{5.19})--(\ref{5.20}), we see that the coefficients of the
above linear MF-FSVIE satisfy (C2), and $\h\f_1(\cd)-\h\f_0(\cd)$ is
nonnegative and nondecreasing. Then (\ref{5.21}) follows from
Proposition 5.6. \endpf
\ms
From the above proof, we see that one may replace $b_0(\cd)$ in
conditions (\ref{5.19}) by $b_1(\cd)$. Also, by an approximation
argument, we may replace the derivatives in (\ref{5.19}) of
$b_0(\cd)$ and $\si(\cd)$ by the corresponding difference quotients.
\subsection{Comparison theorems for MF-BSVIEs.}
In this subsection, we discuss the comparison property for MF-BSVIEs.
First, we consider the following linear MF-BSVIE:
\bel{5.26}\ba{ll}
\ns\ds Y(t)=\psi(t)+\int_t^T\(\bar A_0(t,s)Y(s)+\bar
C_0(t)Z(s,t)+\dbE'\[\bar A_1(t,s)Y(s)\]\)ds\\
\ns\ds\qq\qq\qq\qq-\int_t^TZ(t,s)dW(s),\qq t\in[0,T].\ea\ee
Note that $Z(t,s)$ does not appear in the whole drift term, and
$Z(s,t)$ does not appear in the nonlocal term. Further, the
coefficient of $Z(s,t)$ is independent of $s$. Let us introduce the
following assumption.
\ms
{\bf(C3)} The maps
$$\bar A_0:\D\times\O\to\dbR^{n\times n},\q\bar C_0:[0,T]\times\O\to\dbR^{n\times n},\q
\bar A_1:\D\times\O^2\to\dbR^{n\times n}$$
are uniformly bounded, $\bar C_0(\cd)$ is $\dbF$-progressively
measurable, and for each $t\in[0,T]$, $s\mapsto\bar A_0(t,s)$ and
$s\mapsto\bar A_1(t,s)$ are $\dbF$-progressively measurable and
$\dbF^2$-progressively measurable on $[t,T]$, respectively.
\ms
We have the following result.
\ms
\bf Theorem 5.8. \sl Let {\rm (C3)} hold. In addition, suppose
\bel{}\ba{ll}
\ns\ds\bar A_0(t,s,\o)\in\dbM_+^n,\q\bar
A_1(t,s,\o,\o')\in\h\dbM_+^n,\q
\bar C_0(s,\o)\in\dbM_0^n,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\ae(t,s)\in\D,~\as\o,\o'\in\O.\ea\ee
Moreover, let $t\mapsto(\bar A_0(s,t),\bar A_1(s,t))$ be continuous,
and
\bel{5.28}\ba{ll}
\ns\ds\bar A_0(s,t_1)^Tx\ge\bar A_0(s,t_0)^Tx,\q
\bar A_1(s,t_1)^Tx\ge\bar A_1(s,t_0)^Tx,\\
\ns\ds\qq\qq\qq\qq\forall\,s\le t_0<t_1\le
T,~s\in[0,T],~x\in\dbR^n_+,~\as\ea\ee
Let $(Y(\cd),Z(\cd\,,\cd))$ be the adapted M-solution to
$(\ref{5.26})$ with $\psi(\cd)\in L^2_{\cF_T}(0,T;\dbR^n)$,
$\psi(\cd)\ge0$. Then
\bel{5.29}\dbE\[\int_t^TY(s)ds\bigm|\cF_t\]\ge0,\qq\forall
t\in[0,T],~\as\ee
\it Proof. \rm We consider the following linear MF-FSVIE:
\bel{}\ba{ll}
\ns\ds X(t)=\f(t)+\int_0^t\(\bar A_0(s,t)^TX(s)+\dbE^*[\bar
A_1(s,t)^TX(s)]\)ds\\
\ns\ds\qq\qq\qq\qq\qq+\int_0^t\(\bar C_0(s)^TX(s)\)dW(s),\qq
t\in[0,T],\ea\ee
where
$$\f(t)=\int_0^t\eta(s)ds,\qq t\in[0,T],$$
for some $\eta(\cd)\in L^2_\dbF(0,T;\dbR^n)$ with $\eta(\cd)\ge0$.
By our conditions on $\bar A_0(\cd\,,\cd)$ and $\bar
A_1(\cd\,,\cd)$, using Proposition 5.6, we have
$$X(\cd)\ge0.$$
Then by Theorem 4.2, one obtains
$$\ba{ll}
\ns\ds0\le\dbE\int_0^T\lan\psi(t),X(t)\ran
dt=\dbE\int_0^T\lan\f(t),Y(t)\ran dt\\
\ns\ds\q=\dbE\int_0^T\int_0^t\lan\eta(s),Y(t)\ran
dsdt=\dbE\int_0^T\lan\eta(s),\int_s^TY(t)dt\ran ds.\ea$$
Since $\eta(\cd)\ge0$ is arbitrary, this proves (\ref{5.29}). \endpf
\ms
Since the conditions assumed in Proposition 5.6 are very close to
necessary conditions, we feel that it is very difficult (if not
impossible) to get better comparison results for general MF-BSVIEs.
However, if the drift term does not contain $Z(\cd\,,\cd)$, we are
able to get a much better-looking result. Let us now make this
precise. For $i=0,1$, we consider the following (nonlinear)
MF-BSVIEs:
\bel{5.31}\ba{ll}
\ns\ds
Y_i(t)=\psi_i(t)+\int_t^Tg_i(t,s,Y_i(s),\G_i(t,s,Y_i(s)))ds-\int_t^TZ_i(t,s)dW(s),\q
t\in[0,T],\ea\ee
where
\bel{5.26a}\ba{ll}
\ns\ds\G_i(t,s,Y_i(s))=\dbE'\[\th_i(t,s,Y_i(s),Y_i(s,\o'))\]\\
\ns\ds\qq\qq\q~~\equiv\int_\O\th_i(t,s,\o,\o',Y_i(s,\o),Y_i(s,\o'))\dbP(d\o').\ea\ee
Note that in the above, $Z_i(\cd\,,\cd)$ does not appear in the
drift term.
\ms
\bf Theorem 5.9. \sl Let
$g_i:\D\times\O\times\dbR^n\times\dbR^m\to\dbR^n$ and
$\th_i:\D\times\O^2\times\dbR^n\times\dbR^n\to\dbR^m$ satisfy
{\rm(H3)$_q$} for some $q\ge2$. Moreover, for all $y,y'\in\dbR^n$,
$\g\in\dbR^m$, almost all $(t,s)\in\D$, and almost all
$\o,\o'\in\O$, the following hold:
\bel{}\left\{\ba{ll}
\ns\ds(g_0)_\g(t,s,\o,y,\g)\in\h\dbM^{n\times
m}_+,\qq(\th_0)_{y'}(t,s,\o,\o',y,y')\in\h\dbM_+^{m\times n},\\
\ns\ds(g_0)_y(t,s,\o,y,\g)\in\h\dbM^n_+,\q(\th_0)_y(t,s,\o,\o',y,y')\in\h\dbM_+^{m\times
n},\ea\right.\ee
and
\bel{}\left\{\ba{ll}
\ns\ds g_1(t,s,\o,y,\g)\ge g_0(t,s,\o,y,\g),\\
\ns\ds\th_1(t,s,\o,\o',y,y')\ge\th_0(t,s,\o,\o',y,y').\ea\right.\ee
Let $\psi_i(\cd)\in L^2_{\cF_T}(0,T;\dbR^n)$ with
\bel{}\psi_0(t)\le\psi_1(t),\qq\forall t\in[0,T],~\as,\ee
and $(Y_i(\cd),Z_i(\cd\,,\cd))$ be the adapted M-solutions to the
corresponding MF-BSVIEs $(\ref{5.31})$. Then
\bel{5.36}Y_0(t)\le Y_1(t),\qq t\in[0,T],~\as\ee
\ms
\rm
\it Proof. \rm From the MF-BSVIEs satisfied by
$(Y_i(\cd),Z_i(\cd\,,\cd))$, we have
$$\ba{ll}
\ns\ds
Y_1(t)-Y_0(t)=\psi_1(t)-\psi_0(t)+\int_t^T\[g_1(t,s,Y_1(s),\G_1(t,s,Y_1(s)))\\
\ns\ds\qq\qq\qq-g_0(t,s,Y_0(s),\G_0(t,s,Y_0(s)))\]ds
-\int_t^T\[Z_1(t,s)-Z_0(t,s)\]dW(s)\\
\ns\ds=\h\psi_1(t)-\h\psi_0(t)+\int_t^T\[g_0(t,s,Y_1(s),\G_0(t,s,Y_1(s)))
-g_0(t,s,Y_0(s),\G_0(t,s,Y_0(s)))\]ds\\
\ns\ds\qq\qq\qq-\int_t^T\[Z_1(t,s)-Z_0(t,s)\]dW(s),\ea$$
where (making use of our condition)
$$\ba{ll}
\ns\ds\h\psi_1(t)-\h\psi_0(t)=\psi_1(t)-\psi_0(t)+\int_t^T\(g_1(t,s,Y_1(s),
\G_1(t,s,Y_1(s)))-g_0(t,s,Y_1(s),\G_0(t,s,Y_1(s)))\)ds\\
\ns\ds=\psi_1(t)-\psi_0(t)+\int_t^T\(g_1(t,s,Y_1(s),\G_1(t,s,Y_1(s)))
-g_0(t,s,Y_1(s),\G_1(t,s,Y_1(s)))\)ds\\
\ns\ds\qq\qq+\int_t^T\(g_0(t,s,Y_1(s),\G_1(t,s,Y_1(s)))-g_0(t,s,Y_1(s),\G_0(t,s,Y_1(s)))\)ds\\
\ns\ds=\psi_1(t)-\psi_0(t)+\int_t^T\(g_1(t,s,Y_1(s),\G_1(t,s,Y_1(s)))
-g_0(t,s,Y_1(s),\G_1(t,s,Y_1(s)))\)ds\\
\ns\ds\qq\qq+\int_t^T(\wt
g_0)_\g(t,s)\(\G_1(t,s,Y_1(s))-\G_0(t,s,Y_1(s))\)ds\ge0,\ea$$
with
$$\ba{ll}
\ns\ds(\wt
g_0)_\g(t,s)=\int_0^1(g_0)_\g(t,s,Y_1(s),\G_\l(t,s,Y_1(s)))d\l\in\h\dbM^{n\times m}_+,\\
\ns\ds\G_\l(t,s,Y_1(s))=(1-\l)\G_0(t,s,Y_1(s))+\l\G_1(t,s,Y_1(s)).\ea$$
Next, we note that
$$\ba{ll}
\ns\ds g_0(t,s,Y_1(s),\G_0(t,s,Y_1(s)))
-g_0(t,s,Y_0(s),\G_0(t,s,Y_0(s)))\\
\ns\ds=\int_0^1\Big\{(g_0)_y(t,s,Y_\l(s),\G_\l(t,s))\[Y_1(s)-Y_0(s)\]\\
\ns\ds\qq+(g_0)_\g(t,s,Y_\l(s),\G_\l(t,s))\[\G_0(t,s,Y_1(s))
-\G_0(t,s,Y_0(s))\]\Big\}d\l\\
\ns\ds\equiv(g_0)_y(t,s)\[Y_1(s)-Y_0(s)\]+(g_0)_\g(t,s)\[\G_0(t,s,Y_1(s))
-\G_0(t,s,Y_0(s))\],\ea$$
where
$$\left\{\ba{ll}
\ns\ds Y_\l(s)=(1-\l)Y_0(s)+\l Y_1(s),\\
\ns\ds\G_\l(t,s)=(1-\l)\G_0(t,s,Y_0(s))+\l\G_0(t,s,Y_1(s)),\ea\right.$$
and
$$\left\{\ba{ll}
\ns\ds(g_0)_y(t,s)=\int_0^1(g_0)_y(t,s,Y_\l(s),\G_\l(t,s))d\l\in\h\dbM^n_+,\\
\ns\ds(g_0)_\g(t,s)=\int_0^1(g_0)_\g(t,s,Y_\l(s),\G_\l(t,s))d\l\in\h\dbM^{n\times
m}_+.\ea\right.$$
Also,
$$\ba{ll}
\ns\ds\G_0(t,s,Y_1(s))-\G_0(t,s,Y_0(s))\\
\ns\ds=\dbE'\[\th_0(t,s,Y_1(s),
Y_1(s,\o'))-\th_0(t,s,Y_0(s),Y_0(s,\o'))\]\\
\ns\ds=\dbE'\int_0^1\Big\{(\th_0)_y(t,s,Y_\l(s),Y_\l(s,\o'))\(Y_1(s)-Y_0(s)\)\\
\ns\ds\qq\qq+(\th_0)_{y'}(t,s,Y_\l(s),Y_\l(s,\o'))\(Y_1(s,\o')-Y_0(s,\o')\)\Big\}d\l\\
\ns\ds\equiv\dbE'\[(\th_0)_y(t,s)\]\(Y_1(s)-Y_0(s)\)+\dbE'\[(\th_0)_{y'}(t,s)\(Y_1(s,\o')-Y_0(s,\o')\)\],\ea$$
with
$$\left\{\ba{ll}
\ns\ds(\th_0)_y(t,s)=\int_0^1(\th_0)_y(t,s,Y_\l(s),Y_\l(s,\o'))d\l,\\
\ns\ds(\th_0)_{y'}(t,s)=\int_0^1(\th_0)_{y'}(t,s,Y_\l(s),Y_\l(s,\o'))d\l.\ea\right.$$
Thus,
\bel{5.30}\ba{ll}
\ns\ds Y_1(t)-Y_0(t)=\h\psi_1(t)-\h\psi_0(t)+\int_t^T\Big\{\bar
A_0(t,s)\(Y_1(s)-Y_0(s)\)\\
\ns\ds\qq+\dbE'\[\bar A_1(t,s)
\(Y_1(s)-Y_0(s)\)\]\Big\}ds-\int_t^T\(Z_1(t,s)-Z_0(t,s)\)dW(s),\q
t\in[0,T],\ea\ee
with
\bel{5.38}\left\{\ba{ll}
\ns\ds\bar
A_0(t,s)=(g_0)_y(t,s)+\dbE'\[(g_0)_\g(t,s)(\th_0)_y(t,s)\]\in\h\dbM^n_+,\\
\ns\ds\bar
A_1(t,s)=(g_0)_\g(t,s)(\th_0)_{y'}(t,s)\in\h\dbM^n_+,\ea\right.\qq(t,s)\in\D,~\as\ee
Now, for any $\f(\cd)\in L^2_\dbF(0,T;\dbR^n)$ with $\f(\cd)\ge0$,
let $X(\cd)$ be the solution to the following linear MF-FSVIE:
\bel{}X(t)=\f(t)+\int_0^t\(\bar A_0(s,t)^TX(s)+\dbE^*\[\bar
A_1(s,t)^TX(s)\]\)ds,\qq t\in[0,T].\ee
By Proposition 5.5, we know that $X(\cd)\ge0$. Then by Theorem 4.2,
we have
$$\ba{ll}
\ns\ds0\le\dbE\int_0^T\lan\h\psi_1(t)-\h\psi_0(t),X(t)\ran
dt=\dbE\int_0^T\lan\f(t),Y_1(t)-Y_0(t)\ran dt.\ea$$
Since $\f(\cd)\ge0$ is arbitrary, (\ref{5.36}) follows. \endpf
\ms
Combining the above two results, we are able to get a comparison
theorem for the following MF-BSVIE:
\bel{5.34}\ba{ll}
\ns\ds
Y_i(t)=\psi_i(t)+\int_t^T\(g_i(t,s,Y_i(s),\G_i(t,s,Y_i(s)))+\bar
C_0(t)Z_i(s,t)\)ds\\
\ns\ds\qq\qq-\int_t^TZ_i(t,s)dW(s),\qq t\in[0,T],\ea\ee
where $\G_i(\cd)$ is as that in (\ref{5.31}). Under proper
conditions, we will have the following comparison:
\bel{}\dbE\[\int_t^TY_0(s)ds\bigm|\cF_t\]\le\dbE\[\int_t^TY_1(s)ds\bigm|\cF_t\],\qq\forall
t\in[0,T],~\as\ee
We omit the details here.
\ms
We note that in Proposition 5.6, the monotonicity conditions for
$\f(\cd)$, $A_0(\cd\,,\cd)$ and $C_0(\cd\,,\cd)$ play a crucial
role. Conditions of this kind were overlooked in \cite{Yong 2006,
Yong 2007, Yong 2008}. The following example shows that without
them, (\ref{5.36}) may fail in general.
\ms
\bf Example 5.10. \rm Consider
$$Y_0(t)=-\int_t^TY_0(s)ds,\qq t\in[0,T],$$
and
$$Y_1(t)=t-\int_t^TY_1(s)ds,\qq t\in[0,T].$$
Then
$$Y_0(t)=0,\qq t\in[0,T],$$
and the equation for $Y_1(\cd)$ is equivalent to the following:
$$\dot Y_1(t)=Y_1(t)+1,\qq Y_1(T)=T,$$
whose solution is given by
$$Y_1(t)=e^{t-T}(T+1)-1,\qq t\in[0,T].$$
It is easy to see that
$$Y_1(t)<0=Y_0(t),\qq\forall t\in[0,T-\ln(T+1)).$$
Hence, (\ref{5.36}) fails.
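We also note that $T-\ln(T+1)>0$ for every $T>0$ (since
$\ln(1+\l)<\l$ for all $\l>0$), so the interval on which the
comparison fails is always nonempty.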
\ms
To conclude this section, we would like to pose the following open
question: For general MF-BSVIEs, under what conditions on the
coefficients does one have a clean comparison theorem?
\ms
We hope to be able to report some results concerning the above
question before long.
\section{An Optimal Control Problem for MF-SVIEs.}
In this section, we will briefly discuss a simple optimal control
problem for MF-FSVIEs. This can be regarded as an application of
Theorem 4.1, a duality principle for MF-FSVIEs. The main clue is
similar to the relevant results presented in \cite{Yong 2006, Yong
2008}. We will omit some detailed derivations. General optimal
control problems for MF-FSVIEs will be much more involved and we
will present systematic results for that in our forthcoming
publications.
\ms
Let $U$ be a non-empty bounded convex set in $\dbR^m$, and let $\cU$
be the set of all $\dbF$-adapted processes $u:[0,T]\times\O\to U$.
Since $U$ is bounded, we see that $\cU\subseteq
L^\infty_\dbF(0,T;\dbR^m)$. For any $u(\cd)\in\cU$, consider the
following controlled MF-FSVIE:
\bel{6.1}\ba{ll}
\ns\ds
X(t)=\f(t)+\int_0^tb(t,s,X(s),u(s),\G^b(t,s,X(s),u(s)))ds\\
\ns\ds\qq\qq\qq+\int_0^t\si(t,s,X(s),u(s),\G^\si(t,s,X(s),u(s)))dW(s),\qq
t\in[0,T],\ea\ee
where
$$\left\{\ba{ll}
\ns\ds b:\D^*\times\O\times\dbR^n\times U\times\dbR^{m_1}\to\dbR^n,\\
\ns\ds \si:\D^*\times\O\times\dbR^n\times U\times\dbR^{m_2}\to
\dbR^n,\ea\right.$$
and
$$\left\{\ba{ll}
\ns\ds\G^b(t,s,X(s),u(s))=\int_\O\th^b(t,s,\o,\o',X(s,\o),u(s,\o),X(s,\o'),u(s,\o'))\dbP(d\o')\\
\ns\ds\qq\qq\qq\qq\equiv\dbE'\[\th^b(t,s,X(s),u(s),x',u')\]_{(x',u')=(X(s),u(s))},\\
\ns\ds\G^\si(t,s,X(s),u(s))=\int_\O\th^\si(t,s,\o,\o',X(s,\o),u(s,\o),X(s,\o'),u(s,\o'))\dbP(d\o')\\
\ns\ds\qq\qq\qq\qq\equiv\dbE'\[\th^\si(t,s,X(s),u(s),x',u')\]_{(x',u')=(X(s),u(s))},\ea\right.$$
with
$$\left\{\ba{ll}
\ns\ds\th^b:\D^*\times\O^2\times\dbR^n\times U\times\dbR^n\times U\to\dbR^{m_1},\\
\ns\ds\th^\si:\D^*\times\O^2\times\dbR^n\times U\times\dbR^n\times
U\to\dbR^{m_2}.\ea\right.$$
In the above, $X(\cd)$ is referred to as the {\it state process} and
$u(\cd)$ as the {\it control process}. We introduce the following
assumptions for the state equation (compare with (H1)--(H2)):
\ms
{\bf(H1)$''$} The maps
$$\left\{\ba{ll}
\ns\ds b:\D^*\times\O\times\dbR^n\times
U\times\dbR^{m_1}\to\dbR^n,\\
\ns\ds\si:\D^*\times\O\times\dbR^n\times
U\times\dbR^{m_2}\to\dbR^n,\ea\right.$$
are measurable, and for all
$(t,x,u,\g,\g')\in[0,T]\times\dbR^n\times
U\times\dbR^{m_1}\times\dbR^{m_2}$, the map
$$(s,\o)\mapsto(b(t,s,\o,x,u,\g),\si(t,s,\o,x,u,\g'))$$
is $\dbF$-progressively measurable on $[0,t]$. Moreover, for all
$(t,s,\o)\in\D^*\times\O$, the map
$$(x,u,\g,\g')\mapsto(b(t,s,\o,x,u,\g),\si(t,s,\o,x,u,\g'))$$
is continuously differentiable and there exists some constant $L>0$
such that
\bel{b-si-Lip2}\ba{ll}
\ns\ds
|b_x(t,s,\o,x,u,\g)|+|b_u(t,s,\o,x,u,\g)|+|b_\g(t,s,\o,x,u,\g)|\\
\ns\ds+|\si_x(t,s,\o,x,u,\g')|+|\si_u(t,s,\o,x,u,\g')|+|\si_{\g'}(t,s,\o,x,u,\g')|\le L,\\
\ns\ds\qq\qq(t,s,\o,x,u,\g,\g')\in\D^*\times\O\times \dbR^n\times
U\times\dbR^{m_1}\times\dbR^{m_2}.\ea\ee
Further,
\bel{b-si-growth2}\ba{ll}
\ns\ds|b(t,s,\o,x,u,\g)|+|\si(t,s,\o,x,u,\g')|\le
L(1+|x|+|\g|+|\g'|),\\
\ns\ds\qq\qq\qq(t,s,\o,x,u,\g,\g')\in\D^*\times\O\times\dbR^n\times
U\times\dbR^{m_1}\times\dbR^{m_2}.\ea\ee
{\bf(H2)$''$} The maps
$$\left\{\ba{ll}
\ns\ds\th^b:\D^*\times\O^2\times\dbR^n\times U\times\dbR^n\times
U\to\dbR^{m_1},\\
\ns\ds\th^\si:\D^*\times\O^2\times\dbR^n\times U\times\dbR^n\times
U\to\dbR^{m_2},\ea\right.$$
are measurable, and for all $(t,x,u,x',u')\in[0,T]\times\dbR^n\times
U\times\dbR^n\times U$, the map
$$(s,\o,\o')\mapsto(\th^b(t,s,\o,\o',x,u,x',u'),\th^\si(t,s,\o,\o',x,u,x',u'))$$
is $\dbF^2$-progressively measurable on $[0,t]$. Moreover, for any
$(t,s,\o,\o')\in\D^*\times\O^2$,
$$(x,u,x',u')\mapsto(\th^b(t,s,\o,\o',x,u,x',u'),\th^\si(t,s,\o,\o',x,u,x',u'))$$
is continuously differentiable and there exists some constant $L>0$
such that
\bel{th b-si-Lip2}\ba{ll}
\ns\ds
|\th^b_x(t,s,\o,\o',x,u,x',u')|+|\th^b_u(t,s,\o,\o',x,u,x',u')|\\
\ns\ds\q+|\th^b_{x'}(t,s,\o,\o',x,u,x',u')|+|\th^b_{u'}(t,s,\o,\o',x,u,x',u')|\\
\ns\ds
\q+|\th^\si_x(t,s,\o,\o',x,u,x',u')|+|\th^\si_u(t,s,\o,\o',x,u,x',u')|\\
\ns\ds\q+|\th^\si_{x'}(t,s,\o,\o',x,u,x',u')|+|\th^\si_{u'}(t,s,\o,\o',x,u,x',u')|\le L,\\
\ns\ds\qq\qq(t,s,\o,\o',x,u,x',u')\in\D^*\times\O^2\times\dbR^n\times
U\times\dbR^n\times U.\ea\ee
Further,
\bel{th b-si-growth2}\ba{ll}
\ns\ds|\th^b(t,s,\o,\o',x,u,x',u')|+|\th^\si(t,s,\o,\o',x,u,x',u')|\le
L(1+|x|+|x'|),\\
\ns\ds\qq\qq\qq(t,s,\o,\o',x,u,x',u')\in\D^*\times\O^2\times\dbR^n\times
U\times\dbR^n\times U.\ea\ee
It is easy to see that under (H1)$''$--(H2)$''$, for any given
$u(\cd)\in\cU$, the state equation (\ref{6.1}) satisfies (H1)--(H2).
Hence, for any $\f(\cd)\in L^p_\dbF(0,T;\dbR^n)$, (\ref{6.1}) admits
a unique solution $X(\cd)\in L^p_\dbF(0,T;\dbR^n)$.
\ms
To measure the performance of the control process $u(\cd)$, the
following (Lagrange type) {\it cost functional} is defined:
\bel{6.2}J(u(\cd))=\dbE\int_0^Tg(s,X(s),u(s),\G^g(s,X(s),u(s)))ds,\ee
where
$$g:[0,T]\times\O\times\dbR^n\times
U\times\dbR^\ell\to\dbR,$$
and
$$\ba{ll}
\ns\ds\G^g(s,X(s),u(s))=\int_\O\th^g(s,\o,\o',X(s,\o),u(s,\o),X(s,\o'),u(s,\o'))\dbP(d\o')\\
\ns\ds\qq\qq\qq\q~\equiv\dbE'\[\th^g(s,X(s),u(s),x',u')\]_{(x',u')=(X(s),u(s))},\ea$$
with
$$\th^g:[0,T]\times\O^2\times\dbR^n\times U\times\dbR^n\times
U\to\dbR^\ell.$$
For convenience, we make the following assumptions for the functions
involved in the cost functional.
\ms
{\bf(H1)$'''$} The map $g:[0,T]\times\O\times\dbR^n\times
U\times\dbR^\ell\to\dbR$ is measurable, and for all
$(x,u,\g)\in\dbR^n\times U\times\dbR^\ell$, the map $(t,\o)\mapsto
g(t,\o,x,u,\g)$ is $\dbF$-progressively measurable. Moreover, for
almost all $(t,\o)\in[0,T]\times\O$, the map $(x,u,\g)\mapsto
g(t,\o,x,u,\g)$ is continuously differentiable and there exists some
constant $L>0$ such that
\bel{g-Lip2}\ba{ll}
\ns\ds
|g_x(t,\o,x,u,\g)|+|g_u(t,\o,x,u,\g)|+|g_\g(t,\o,x,u,\g)|\le L,\\
\ns\ds\qq\qq(t,\o,x,u,\g)\in[0,T]\times\O\times\dbR^n\times
U\times\dbR^\ell.\ea\ee
Further,
\bel{g-growth2}\ba{ll}
\ns\ds|g(t,\o,x,u,\g)|\le L(1+|x|+|\g|),\\
\ns\ds\qq\qq\qq(t,\o,x,u,\g)\in[0,T]\times\O\times\dbR^n\times
U\times\dbR^\ell.\ea\ee
{\bf(H2)$'''$} The map $\th^g:[0,T]\times\O^2\times\dbR^n\times
U\times\dbR^n\times U\to\dbR^\ell$ is measurable, and for all
$(x,u,x',u')\in\dbR^n\times U\times\dbR^n\times U$, the map
$(t,\o,\o')\mapsto\th^g(t,\o,\o',x,u,x',u')$ is
$\dbF^2$-progressively measurable. Moreover, for almost all
$(t,\o,\o')\in[0,T]\times\O^2$, the map
$(x,u,x',u')\mapsto\th^g(t,\o,\o',x,u,x',u')$ is continuously
differentiable and there exists some constant $L>0$ such that
\bel{th g-si-Lip2}\ba{ll}
\ns\ds
|\th^g_x(t,\o,\o',x,u,x',u')|+|\th^g_u(t,\o,\o',x,u,x',u')|\\
\ns\ds\q+|\th^g_{x'}(t,\o,\o',x,u,x',u')|+|\th^g_{u'}(t,\o,\o',x,u,x',u')|\le L,\\
\ns\ds\qq\qq(t,\o,\o',x,u,x',u')\in[0,T]\times\O^2\times\dbR^n\times
U\times\dbR^n\times U.\ea\ee
Further,
\bel{th g-growth2}\ba{ll}
\ns\ds|\th^g(t,\o,\o',x,u,x',u')|\le
L(1+|x|+|x'|),\\
\ns\ds\qq\qq\qq(t,\o,\o',x,u,x',u')\in[0,T]\times\O^2\times\dbR^n\times
U\times\dbR^n\times U.\ea\ee
Under (H1)$''$--(H2)$''$ and (H1)$'''$--(H2)$'''$, the cost
functional $J(u(\cd))$ is well-defined. Then we can state our
optimal control problem as follows.
\ms
\bf Problem (C). \rm For given $\f(\cd)\in L^p_\dbF(0,T;\dbR^n)$,
find $\bar u(\cd)\in\cU$ such that
\bel{6.11}J(\bar u(\cd))=\inf_{u(\cd)\in\cU}J(u(\cd)).\ee
\ms
Any $\bar u(\cd)\in\cU$ satisfying (\ref{6.11}) is called an {\it
optimal control} of Problem (C), and the corresponding state process
$\bar X(\cd)$ is called an {\it optimal state process}. In this
case, we refer to $(\bar X(\cd),\bar u(\cd))$ as an {\it optimal
pair}.
\ms
We now briefly derive the Pontryagin type maximum principle for any
optimal pair $(\bar X(\cd),\bar u(\cd))$. To this end, we take any
$u(\cd)\in\cU$ and let
$$u^\e(\cd)=\bar u(\cd)+\e[u(\cd)-\bar u(\cd)]=(1-\e)\bar u(\cd)+\e
u(\cd)\in\cU.$$
Let $X^\e(\cd)$ be the corresponding state process. Then
$$X_1(\cd)=\lim_{\e\to0}{X^\e(\cd)-\bar X(\cd)\over\e}$$
satisfies the following:
$$\ba{ll}
\ns\ds X_1(t)=\int_0^t\Big\{b_x(t,s)X_1(s)+b_u(t,s)[u(s)-\bar
u(s)]\\
\ns\ds\qq\qq\qq+b_\g(t,s)\dbE'\[\th^b_x(t,s)X_1(s,\o)+\th^b_u(t,s)[u(s,\o)-\bar
u(s,\o)]\\
\ns\ds\qq\qq\qq+\th^b_{x'}(t,s)X_1(s,\o')+\th^b_{u'}(t,s)[u(s,\o')-\bar
u(s,\o')]\]\Big\}ds\\
\ns\ds\qq\qq+\int_0^t\Big\{\si_x(t,s)X_1(s)+\si_u(t,s)[u(s)-\bar
u(s)]\\
\ns\ds\qq\qq\qq+\si_\g(t,s)\dbE'\[\th^\si_x(t,s)X_1(s,\o)+\th^\si_u(t,s)[u(s,\o)-\bar
u(s,\o)]\\
\ns\ds\qq\qq\qq+\th^\si_{x'}(t,s)X_1(s,\o')+\th^\si_{u'}(t,s)[u(s,\o')-\bar
u(s,\o')]\]\Big\}dW(s)\\
\ns\ds\qq=\int_0^t\Big\{\[b_x(t,s)+b_\g(t,s)\dbE'\th^b_x(t,s)\]X_1(s)\\
\ns\ds\qq\qq\qq+\[b_u(t,s)+b_\g(t,s)\dbE'\th^b_u(t,s)\][u(s)-\bar
u(s)]\\
\ns\ds\qq\qq\qq+\dbE'\[b_\g(t,s)\th^b_{x'}(t,s)X_1(s)+b_\g(t,s)\th^b_{u'}(t,s)[u(s)-\bar
u(s)]\]\Big\}ds\\
\ns\ds\qq\qq+\int_0^t\Big\{\[\si_x(t,s)+\si_\g(t,s)\dbE'\th^\si_x(t,s)\]X_1(s)\\
\ns\ds\qq\qq\qq+\[\si_u(t,s)+\si_\g(t,s)\dbE'\th^\si_u(t,s)\][u(s)-\bar
u(s)]\\
\ns\ds\qq\qq\qq+\dbE'\[\si_\g(t,s)\th^\si_{x'}(t,s)X_1(s)+\si_\g(t,s)\th^\si_{u'}(t,s)[u(s)-\bar
u(s)]\]\Big\}dW(s)\\
\ns\ds\q\equiv\int_0^t\Big\{A_0(t,s)X_1(s)+B_0(t,s)[u(s)-\bar
u(s)]+\dbE'\[C_0(t,s)X_1(s)+D_0(t,s)[u(s)-\bar
u(s)]\]\Big\}ds\\
\ns\ds\q\q+\2n\int_0^t\2n\Big\{A_1(t,s)X_1(s)\1n+\1n
B_1(t,s)[u(s)\1n-\1n\bar u(s)]\1n+\1n\dbE'\[C_1(t,s)X_1(s)\1n+\1n
D_1(t,s)[u(s)\1n-\1n\bar
u(s)]\]\Big\}dW(s)\\
\ns\ds\q\equiv\h\f(t)+\int_0^t\Big\{A_0(t,s)X_1(s)+\dbE'\[C_0(t,s)X_1(s)\]\Big\}ds\\
\ns\ds\qq\qq\qq+\int_0^t\Big\{A_1(t,s)X_1(s)+\dbE'\[C_1(t,s)X_1(s)\]\Big\}dW(s),\ea$$
where
$$\left\{\ba{ll}
\ns\ds b_\xi(t,s)=b_\xi(t,s,\bar X(s),\bar u(s),\G^b(t,s,\bar
X(s),\bar
u(s))),\qq\xi=x,u,\g,\\
\ns\ds\th^b_\xi(t,s)=\th^b_\xi(t,s,\o,\o',\bar X(s,\o),\bar
u(s,\o),\bar
X(s,\o'),\bar u(s,\o')),\qq\xi=x,u,x',u',\\
\ns\ds\si_\xi(t,s)=\si_\xi(t,s,\bar X(s),\bar u(s),\G^\si(t,s,\bar
X(s),\bar
u(s))),\qq\xi=x,u,\g,\\
\ns\ds\th^\si_\xi(t,s)=\th^\si_\xi(t,s,\o,\o',\bar X(s,\o),\bar
u(s,\o),\bar X(s,\o'),\bar u(s,\o')),\qq\xi=x,u,x',u',\ea\right.$$
and
$$\left\{\ba{ll}
\ns\ds A_0(t,s)=b_x(t,s)+b_\g(t,s)\dbE'\th^b_x(t,s),\qq
B_0(t,s)=b_u(t,s)+b_\g(t,s)\dbE'\th^b_u(t,s),\\
\ns\ds C_0(t,s)=b_\g(t,s)\th^b_{x'}(t,s),\qq
D_0(t,s)=b_\g(t,s)\th^b_{u'}(t,s),\\
\ns\ds A_1(t,s)=\si_x(t,s)+\si_\g(t,s)\dbE'\th^\si_x(t,s),\qq
B_1(t,s)=\si_u(t,s)+\si_\g(t,s)\dbE'\th^\si_u(t,s),\\
\ns\ds C_1(t,s)=\si_\g(t,s)\th^\si_{x'}(t,s),\qq
D_1(t,s)=\si_\g(t,s)\th^\si_{u'}(t,s).\ea\right.$$
Also,
$$\ba{ll}
\ns\ds\h\f(t)=\int_0^t\Big\{B_0(t,s)[u(s)-\bar
u(s)]+\dbE'\[D_0(t,s)[u(s)-\bar
u(s)]\]\Big\}ds\\
\ns\ds\qq\qq+\int_0^t\Big\{B_1(t,s)[u(s)-\bar
u(s)]+\dbE'\[D_1(t,s)[u(s)-\bar u(s)]\]\Big\}dW(s).\ea$$
On the other hand, by the optimality of $(\bar X(\cd),\bar u(\cd))$,
we have
$$\ba{ll}
\ns\ds0\le\lim_{\e\to0}{J(u^\e(\cd))-J(\bar u(\cd))\over\e}\\
\ns\ds\q=\dbE\int_0^T\Big\{g_x(s)X_1(s)+g_u(s)[u(s)-\bar u(s)]\\
\ns\ds\qq\qq+g_\g(s)\dbE'\[\th^g_x(s)X_1(s,\o)+\th^g_u(s)[u(s,\o)-\bar
u(s,\o)]\\
\ns\ds\qq\qq+\th^g_{x'}(s)X_1(s,\o')+\th^g_{u'}(s)[u(s,\o')-\bar
u(s,\o')]\]\Big\}ds\\
\ns\ds\q=\dbE\int_0^T\Big\{\[g_x(s)+g_\g(s)\dbE'\th^g_x(s)\]X_1(s)+\[g_u(s)+g_\g(s)\dbE'\th^g_u(s)\]
[u(s)-\bar u(s)]\\
\ns\ds\qq\qq+\dbE'\[g_\g(s)\th^g_{x'}(s)X_1(s)+g_\g(s)\th^g_{u'}(s)[u(s)-\bar
u(s)]\]\Big\}ds\\
\ns\ds\q=\dbE\int_0^T\Big\{a_0(s)^TX_1(s)+b_0(s)^T[u(s)-\bar u(s)]\\
\ns\ds\qq\qq+\dbE'\[c_0(s)^TX_1(s)+d_0(s)^T[u(s)-\bar
u(s)]\]\Big\}ds\\
\ns\ds\q=\dbE\Big\{\h\f_0+\int_0^T\(a_0(s)^TX_1(s)+\dbE'\[c_0(s)^TX_1(s)\]\)ds\Big\},\ea$$
where
$$\left\{\ba{ll}
\ns\ds g_\xi(s)=g_\xi(s,\bar X(s),\bar u(s),\G^g(s,\bar X(s),\bar
u(s))),\qq\xi=x,u,\g,\\
\ns\ds\th^g_\xi(s)=\th^g_\xi(s,\o,\o',\bar X(s,\o),\bar u(s,\o),\bar
X(s,\o'),\bar u(s,\o')),\qq\xi=x,u,x',u',\ea\right.$$
and
$$\left\{\ba{ll}
\ns\ds a_0(s)^T=g_x(s)+g_\g(s)\dbE'\th^g_x(s),\qq
b_0(s)^T=g_u(s)+g_\g(s)\dbE'\th^g_u(s),\\
\ns\ds c_0(s)^T=g_\g(s)\th^g_{x'}(s),\qq
d_0(s)^T=g_\g(s)\th^g_{u'}(s),\\
\ns\ds\h\f_0=\int_0^T\Big\{b_0(s)^T[u(s)-\bar
u(s)]+\dbE'\[d_0(s)^T[u(s)-\bar u(s)]\]\Big\}ds.\ea\right.$$
Then, for an as yet undetermined pair $(Y(\cd),Z(\cd\,,\cd))\in\cM^2[0,T]$,
arguing as in the proof of Theorem 4.1, we have
$$\ba{ll}
\ns\ds\dbE\int_0^T\lan Y(t),\h\f(t)\ran dt=\dbE\int_0^T\lan
X_1(t),Y(t)-\int_t^T\(A_0(s,t)^TY(s)+A_1(s,t)^TZ(s,t)\\
\ns\ds\qq\qq\qq\qq\qq\qq+\dbE^*\[C_0(s,t)^TY(s)+C_1(s,t)^TZ(s,t)\]\)ds\ran
dt.\ea$$
Hence,
$$\ba{ll}
\ns\ds0\le\dbE\Big\{\h\f_0+\int_0^T\(a_0(s)^TX_1(s)+\dbE'\[c_0(s)^TX_1(s)\]\)ds\Big\}\\
\ns\ds\q=\dbE\Big\{\h\f_0-\int_0^T\lan Y(t),\h\f(t)\ran
dt+\int_0^T\lan
X_1(t),Y(t)-\int_t^T\(A_0(s,t)^TY(s)\\
\ns\ds\qq\qq+A_1(s,t)^TZ(s,t)+\dbE^*\[C_0(s,t)^TY(s)+C_1(s,t)^TZ(s,t)\]\)ds\ran
dt\\
\ns\ds\qq\qq+\int_0^T\(\lan X_1(t),a_0(t)\ran+\dbE'\[\lan
X_1(t),c_0(t)\ran\]\)dt\Big\}\\
\ns\ds\q=\dbE\Big\{\h\f_0-\int_0^T\lan Y(t),\h\f(t)\ran
dt+\int_0^T\lan
X_1(t),Y(t)+a_0(t)+\dbE^*c_0(t)\\
\ns\ds\qq-\int_t^T\(A_0(s,t)^TY(s)+A_1(s,t)^TZ(s,t)\\
\ns\ds\qq\qq+\dbE^*\[C_0(s,t)^TY(s)+C_1(s,t)^TZ(s,t)\]\)ds\ran
dt\Big\}.\ea$$
We now let $(Y(\cd),Z(\cd\,,\cd))\in\cM^2[0,T]$ be the adapted
M-solution to the following MF-BSVIE:
\bel{adjoint}\ba{ll}
\ns\ds Y(t)=-a_0(t)-\dbE^*c_0(t)+\int_t^T\(A_0(s,t)^TY(s)+A_1(s,t)^TZ(s,t)\\
\ns\ds\qq\qq+\dbE^*\[C_0(s,t)^TY(s)+C_1(s,t)^TZ(s,t)\]\)ds
-\int_t^TZ(t,s)dW(s).\ea\ee
Then
$$\ba{ll}
\ns\ds0\le\dbE\Big\{\h\f_0-\int_0^T\lan Y(t),\h\f(t)\ran dt\Big\}\\
\ns\ds\q=\dbE\Big\{\int_0^T\Big\{\lan b_0(t),u(t)-\bar u(t)\ran+\dbE'\[\lan d_0(t),u(t)-\bar u(t)\ran\]\Big\}dt\\
\ns\ds\qq-\int_0^T\lan Y(t),\int_0^t\(B_0(t,s)[u(s)-\bar
u(s)]+\dbE'\[D_0(t,s)[u(s)-\bar
u(s)]\]\)ds\\
\ns\ds\qq\qq+\int_0^t\(B_1(t,s)[u(s)-\bar
u(s)]+\dbE'\[D_1(t,s)[u(s)-\bar u(s)]\]\)dW(s)\ran dt\Big\}\\
\ns\ds\q=\dbE\Big\{\int_0^T\(\lan b_0(t)+[\dbE^*d_0(t)],u(t)-\bar u(t)\ran\)dt\\
\ns\ds\qq-\int_0^T\lan\int_t^T\(B_0(s,t)^TY(s)+\dbE^*[D_0(s,t)^TY(s)]\)ds,u(t)-\bar
u(t)\ran dt\\
\ns\ds\qq-\int_0^T\lan\int_t^T\(B_1(s,t)^TZ(s,t)+\dbE^*[D_1(s,t)^TZ(s,t)]\)ds,u(t)-\bar
u(t)\ran dt\Big\}.\ea$$
Hence, we must have the following variational inequality:
\bel{variational}\ba{ll}
\ns\ds\lan
b_0(t)+[\dbE^*d_0(t)]-\int_t^T\(B_0(s,t)^TY(s)+\dbE^*[D_0(s,t)^TY(s)]\\
\ns\ds\qq\qq\qq\qq+B_1(s,t)^TZ(s,t)+\dbE^*[D_1(s,t)^TZ(s,t)]\)ds,u-\bar
u(t)\ran\ge0,\\
\ns\ds\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\qq\forall u\in U,~\ae
t\in[0,T],~\as\ea\ee
We now summarize the above derivation.
\ms
\bf Theorem 5.1. \sl Let {\rm(H1)$''$--(H2)$''$} and
{\rm(H1)$'''$--(H2)$'''$} hold and let $(\bar X(\cd),\bar u(\cd))$
be an optimal pair of Problem {\rm(C)}. Then the adjoint equation
$(\ref{adjoint})$ admits a unique adapted M-solution
$(Y(\cd),Z(\cd\,,\cd))\in\cM^2[0,T]$ such that the variational
inequality $(\ref{variational})$ holds.
\ms
\rm
The purpose of presenting a simple optimal control problem for
MF-FSVIEs here is to illustrate a major motivation for studying
MF-BSVIEs. It is possible to treat cost functionals of Bolza type,
and some of the assumptions made in this section could be relaxed.
However, we do not intend to give a full exploration of general
optimal control problems for MF-FSVIEs in the current paper, since
such general problems (even for FSVIEs) are considerably more
involved and deserve a separate treatment. We will report further
results along this line in a forthcoming paper.
\section{Introduction}
\label{sec:Intro}
Probabilistic projections of mortality measures are important for many applications including population projection and pension and healthcare planning.
Until recently, most projections of mortality measures were deterministic; the UN has now started to base its official projections of mortality on probabilistic methods. Most projections, deterministic or probabilistic, do not incorporate cause-of-death information or other covariates.\par
\citet{leecarter:1992} developed the first method for projecting a mortality measure probabilistically. The Lee-Carter method projects age-specific mortality rates and is widely used today. It requires at least three time periods of age-specific death rates, an amount of data that is not available in many countries.
The method assumes that the logarithm of the age-specific death rates will increase linearly in the future, which may not be optimal for long term projections \citep{lee:2001:evaluatingLeeCarter}. \citet{girosi:2008:demforecasting} proposed a Bayesian method for smoothing age-specific death rates over both age and time. Though this method allows for the incorporation of covariates, it has been shown to perform well only for countries with good vital registration data. Like the Lee-Carter method, the \citet{girosi:2008:demforecasting} method assumes a constant rate of increase.
\citet{lutz:1998:expert} developed an expert-based method for probabilistic projections of population that incorporates subjective probabilistic projections for several demographic measures, including life expectancy.
\par \citet{raftery:2013:e0, raftery:2014:jointe0} presented a Bayesian hierarchical model (BHM) for projecting male and female life expectancy probabilistically for all countries of the world to 2100, and this method is now used by the UN as an input to its official population projections \citep{un:WPP2015}.
We extend the model in \citet{raftery:2013:e0} to include covariate information about generalized HIV epidemic prevalence and coverage of antiretroviral therapy (ART) in each country. A country is said to have a generalized HIV/AIDS epidemic when HIV prevalence is greater than 1\% in the general population and is not concentrated in at-risk subgroups. While there are many diseases that have a high impact on mortality in a given country, the generalized HIV epidemic is unusual in that it dramatically increases age-specific mortality rates at prime adult ages. Its demographic impact is therefore different from that of other diseases, which tend primarily to affect mortality rates for very young and/or older ages.
Figure \ref{fig:Botse0HIV} shows the rise of the HIV epidemic and the corresponding evolution of life expectancy at birth in Botswana. There was a sharp lowering of life expectancy with the rise of the epidemic, and then a rapid recovery to pre-epidemic levels following the widespread introduction of ART.
\begin{figure}
\centering
\includegraphics[scale=0.45]{Botswanae0HIVART.pdf}
\caption{Life expectancy at birth (black), HIV prevalence (red) and ART coverage (blue) for Botswana from 1950-1955 to 2005-2015.}
\label{fig:Botse0HIV}
\end{figure}
To incorporate covariate information into a probabilistic projection model, we must also have a method for projecting the covariate of interest into the future. UNAIDS developed the Spectrum/EPP methodology for projecting HIV prevalence and demographic measures, including life expectancy at birth, while accounting for HIV prevalence and ART coverage among other things
\citep{stover:2012:spectrumEPP, stanecki:2012:HIVproj, spectrum:2014:software}.
The method is quite complicated and requires fine-grained data on a number of demographic and health measures for each country. It is recommended for reconstructing the HIV epidemic, including the time of onset, in a particular country and for projecting the epidemic up to five years into the future. It is not designed for the longer term projections that are important for long-term population projections \citep[p. 9]{spectrum:2014:guide}.
We use a version of the EPP package for \texttt{R} for projections of HIV prevalence to 2100 \citep{brown:2010:EPP}. We develop a simpler model for projecting life expectancy at birth while accounting for HIV prevalence and ART coverage that is more practical for long term projections.
\section{Methodology}
\label{sec:Methods}
\subsection{Data}
\label{subsec:data}
We use estimates of female life expectancy at birth from the United Nations \textit{World Population Prospects} (WPP) 2015 Revision \citep{un:WPP2015} for 201 countries. The UN produces estimates of period life expectancy at birth and age-specific mortality rates by five-year periods and five-year age groups; these are updated every two years. There are estimates for each country of the world for each five year period from 1950 to 2015. We do not use life expectancy inputs for Cambodia and Rwanda from the time periods of the genocides in these countries. To fit the model, we use UNAIDS estimates of past HIV prevalence and ART coverage for 40 countries with generalized epidemics. We use 1000 trajectories of HIV prevalence, using the same assumptions as UNAIDS does in their projections. Additionally, we use a single deterministic trajectory of ART coverage from UNAIDS in the projection stage. We code HIV prevalence as zero for all countries not experiencing a generalized HIV epidemic.
\subsection{Review of joint probabilistic projections of male and female life expectancy}
\label{subsec:jointe0}
Our methodology builds on the Bayesian hierarchical model for
probabilistic projection of female and male life expectancy used by the UN
\citep{raftery:2013:e0,raftery:2014:jointe0}, which proceeds as follows.
First, the Bayesian hierarchical model for female life expectancy is estimated using Markov chain Monte Carlo (MCMC); then probabilistic projections of female life expectancy are made from the present day to 2100. Projections of male life expectancy are then made based on the projected values of female life expectancy \citep{raftery:2014:jointe0}.
The model provides a way for estimation and projection for one country to be improved using information from other countries.
At the lowest (observation) level,
the Bayesian hierarchical model for female life expectancy at birth is
\begin{eqnarray}
\Delta \ell_{c,t} \equiv \ell_{c,t+1} - \ell_{c,t} & = &
g(\ell_{c,t} \vert \theta^{(c)}) + \varepsilon_{c,t+1},
\label{eq:e0noHIV} \\
\varepsilon_{c,t} & \sim& N(0, (\omega f(\ell_{c,t}))^2),
\label{eq:epsct}
\end{eqnarray}
where $\ell_{c,t}$ is the female life expectancy at birth for country $c$ in time period $t$, $g( \cdot \vert \theta^{(c)})$ is the expected five-year gain in life expectancy, modeled as a double logistic function of current life expectancy and governed by country-specific parameters, $\theta^{(c)}$, $\varepsilon_{c,t+1}$ is a random perturbation around the expected gain, and $f(\ell_{c,t})$ is a smooth function of life expectancy.
The double logistic function for country $c$ is
\begin{equation} \label{eq:doublelogistic}
g(\ell_{c,t} \vert \theta^{(c)}) = \dfrac{k^c}{1+ \exp \left(-\frac{A_1}{\Delta_2^c}(\ell_{c,t} - \Delta_1^c - A_2\Delta_2^c)\right)}
+ \dfrac{z^c - k^c}{1 + \exp \left(-\frac{A_1}{\Delta_4^c}(\ell_{c,t} - \sum_{i=1}^3\Delta_i^c - A_2 \Delta_4^c)\right)},
\end{equation}
where $\theta^{(c)} = (\Delta_1^c, \Delta_2^c, \Delta_3^c, \Delta_4^c, k^c, z^c)$ and $A_1$ and $A_2$ are constants. The parameter $z^c$ is the expected country-specific asymptotic five-year gain in life expectancy. The other parameters govern the maximum value and the pace of rise and fall of expected five-year gains in life expectancy.
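For concreteness, the double logistic gain in \eqref{eq:doublelogistic} can be
evaluated as in the following minimal sketch; the constants $A_1$, $A_2$ and
all parameter values below are illustrative placeholders, not the values used
in this paper.
\begin{verbatim}
import numpy as np

def double_logistic(ell, D1, D2, D3, D4, k, z, A1=4.4, A2=0.5):
    """Expected five-year gain g(ell | theta); A1, A2 illustrative here."""
    term1 = k / (1.0 + np.exp(-(A1 / D2) * (ell - D1 - A2 * D2)))
    term2 = (z - k) / (1.0 + np.exp(-(A1 / D4) * (ell - (D1 + D2 + D3) - A2 * D4)))
    return term1 + term2

ell = np.linspace(30.0, 90.0, 7)
print(double_logistic(ell, D1=15.0, D2=20.0, D3=25.0, D4=15.0, k=4.0, z=0.5))
# gains rise with ell, peak, then decay toward the asymptotic gain z
\end{verbatim}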
At the second level of the model, the country-specific parameters
$\theta^{(c)}$ are assumed to be drawn from the following world distribution:
\begin{center}
$\begin{array}{lr}
\Delta_i^c \vert \sigma_{\Delta_i} \stackrel{\text{iid}}{\sim} \text{Normal}_{[0,100]}( \Delta_i, \sigma_{\Delta_i}^2), & i = 1,\dots,4,\\
k^c \vert \sigma_k \stackrel{\text{iid}}{\sim} \text{Normal}_{[0,10]}(k, \sigma_k^2) & \\
z^c \vert \sigma_z \stackrel{\text{iid}}{\sim} \text{Normal}_{[0,0.653]}(z,\sigma^2_z). & \\
\end{array}$
\end{center}
At the third, top level of the model, prior distributions are specified for
the world parameters $\theta = (\Delta_1, \Delta_2, \Delta_3, \Delta_4, k, z,
\omega)$.
\vspace{0.1in}
The Bayesian hierarchical model is estimated using MCMC via Metropolis-Hastings, Gibbs sampling and slice sampling steps, yielding a joint posterior distribution of all model parameters \citep{raftery:2013:e0}. The smooth function $f(\ell_{c,t})$ specifying the variance of the perturbations is estimated separately and is treated as known in the MCMC algorithm.
Once the model has been estimated, projections of life expectancy are made based on each posterior sample of $\theta^{(c)}$ and a random perturbation, $\varepsilon_{c,t+1}$, drawn from a $N(0, (\omega f(\ell_{c,t}))^2)$ distribution, where $\omega$ is drawn from the posterior distribution.
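Schematically, projected trajectories are generated by iterating the
observation level of the model forward, as in the following sketch; the
functions \texttt{g\_gain} and \texttt{f\_sd} and the value of $\omega$ are
simple stand-ins for posterior draws and for the fitted variance function, not
fitted quantities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def g_gain(ell):        # stand-in for a posterior draw of g(. | theta)
    return 4.0 * np.exp(-((ell - 55.0) / 15.0) ** 2) + 0.5

def f_sd(ell):          # stand-in for the fitted smooth function f
    return np.maximum(0.2, 2.0 - 0.02 * ell)

omega, n_draws, n_periods = 1.0, 1000, 17      # 5-year steps to 2100
ell = np.full(n_draws, 62.0)                   # current life expectancy
for _ in range(n_periods):
    ell = ell + g_gain(ell) + rng.normal(0.0, omega * f_sd(ell))
print(np.percentile(ell, [2.5, 50.0, 97.5]))   # 95% interval in 2100
\end{verbatim}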
After female projections of life expectancy are made, projections of male life expectancy, $\ell_{c,t}^m$, are made by modeling the gap between the two \citep{raftery:2014:jointe0}.
\subsection{Probabilistic Projections of Life Expectancy Accounting for HIV Prevalence}
\label{subsec:e0HIV}
We expand the BHM to account for generalized HIV/AIDS epidemics by adding
a covariate to the observation level of the model. The covariate is based on
$HIVnonART_{c,t} = HnA_{c,t}$, defined as follows.
Let $HIV_{c,t}$ and $ART_{c,t}$ be the HIV prevalence and ART coverage in percent of country $c$ at time period $t$, respectively. Then $HnA_{c,t} = HIV_{c,t} \times (100 - ART_{c,t})$; up to a factor of 100, it is the percentage of the population who are infected but do not receive ART.
The covariate we found to best predict change in life expectancy
was the change in this quantity, namely
$\Delta HnA_{c,t-1} = HnA_{c,t} - HnA_{c,t-1}$.
Our expanded observation equation is then
\begin{equation} \label{eq:e0HnA}
\Delta \ell_{c,t} = g(\ell_{c,t} \vert \theta^{(c)}) + \beta_{HnA} \Delta HnA_{c,t-1} + \varepsilon_{c,t+1}.
\end{equation}
The parameter $\beta_{HnA}$ is constant across countries and is estimated by MCMC along with the other parameters of the Bayesian hierarchical model. It has a diffuse prior distribution, chosen to be spread out enough that it has little impact on the final inference. Specifically, the prior distribution of $\beta_{HnA}$ is $N\left( 0, 0.25 \times \dfrac{Var(\Delta \ell_{c,t})}{Var(\Delta HnA_{c,t})}\right)$, where the prior variance is determined by the sample variances of observed changes in life expectancy and observed changes in $HnA$. The posterior distribution of $\beta_{HnA}$ is estimated with the other parameters in the MCMC via Gibbs sampling updates.
After estimation, we project female life expectancy in the same manner as outlined in Section~\ref{subsec:jointe0}. However, we make a projection based on each posterior sample of $(\theta^{(c)}, \beta_{HnA})$ and a random perturbation. We account for uncertainty in the HIV trajectories by using 1000 yearly trajectories of HIV projections from EPP \citep{brown:2010:EPP}. For each country $c$ and year $t$, we find the median, $z_{t,c}$, of projected adult HIV prevalence output from EPP. We use a single UNAIDS deterministic projection to 2100 as a baseline reference, and we construct 1000 trajectories from the single UNAIDS trajectory by using 1000 multipliers of the form $\dfrac{z^k_{t,c}}{z_{t,c}}$, at each time point $t$ for $k = 1, \dots, 1000$, where $z^k_{t,c}$ is the prevalence at time $t$ in country $c$ in the $k$th simulated trajectory.
Thus the UNAIDS deterministic trajectory serves as the median trajectory of HIV prevalence to 2100, and the EPP trajectories determine the uncertainty.
We construct 5-year averages from the yearly trajectories to be used in the projection stage. From these, we use a single deterministic trajectory of ART coverage to compute 1000 trajectories of $\Delta HnA_{c,t}$ for all countries. We sample one out of 1000 trajectories of $\Delta HnA_{c,t}$ with equal probability to be used in the projection stage. \par
For the country of Liberia, the projected prevalence is so low in the future (nearly 0) that the multipliers become unrealistically large. We therefore treat it slightly differently. For this country we calculate $z^k_{t,c}-z_{t,c}$ for each time point $t$ and add this difference to the UNAIDS trajectory, which yields 1000 trajectories with the UNAIDS trajectory as the median while borrowing the uncertainty from the EPP trajectories. \par
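The construction of the prevalence trajectories and of the covariate can be
summarised as in the sketch below; the baseline, the stand-in for the EPP
draws and the ART path are toy inputs chosen only to make the code
self-contained.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
Tn, K = 18, 1000                        # 5-year periods, trajectories
base = np.linspace(8.0, 2.0, Tn)        # UNAIDS deterministic prevalence (toy)
epp = base * rng.lognormal(0.0, 0.15, size=(K, Tn))  # stand-in for EPP draws
med = np.median(epp, axis=0)

ratio_traj = base * (epp / med)         # multiplier form: base is the median
additive_traj = base + (epp - med)      # additive form used for Liberia

art = np.linspace(40.0, 95.0, Tn)       # deterministic ART coverage in % (toy)
HnA = ratio_traj * (100.0 - art)        # HnA = HIV x (100 - ART)
dHnA = np.diff(HnA, axis=1)             # covariate Delta HnA per trajectory
print(dHnA.shape)                       # (1000, 17)
\end{verbatim}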
The methods of \citet{raftery:2013:e0} and \citet{raftery:2014:jointe0} did not use the generalized HIV epidemic countries in model estimation.
By contrast, our estimation of the BHM does include these countries.
The covariate values are set to zero for non-epidemic countries.
Thus, the estimation of country-specific parameters and the projection of life expectancy for non-epidemic countries changes negligibly; we are fitting the model in \eqref{eq:e0noHIV} for these countries. For epidemic countries, the model in \eqref{eq:e0HnA} allows us to adjust for the effects of HIV on life expectancy in the linear term and to interpret $g(\cdot \vert \theta^{(c)})$ as the expected five-year gain in life expectancy in the absence of the epidemic.
Though high AIDS prevalence takes a big toll on a country's life expectancy at birth in the absence of ART, ART extends an infected person's life substantially. Several epidemiological case studies show that patients have nearly normal life expectancy when treated with ART \citep{mills:2011:ARTe0, johnson:2013:ARTe0}. In a country where ART coverage is high, a generalized HIV epidemic affects life expectancy like a chronic disease \citep{deeks:2013:HIVchronic}. \par
In a manner similar to \citet{raftery:2013:e0}, the distribution of random perturbations in the projection stage is $\varepsilon_{c,t+1} \sim N(0, (\omega f(\ell_{c,t,i}))^2)$, where $\omega$ is a model parameter, $f(\ell_{c,t,i})$ is a smooth function and $i$ is an indicator for a generalized HIV epidemic. To estimate $f(\ell_{c,t,i})$, we fit the model in \eqref{eq:e0HnA} using the same function $f(\ell_{c,t})$ as used by \citet{raftery:2013:e0}. Then, using mean posterior estimates of $g(\ell_{c,t} \vert \theta^{(c)})$, we projected life expectancy forward from 1950-1955 to the 2010-2015 period using only the mean model in \eqref{eq:e0HnA} with no random perturbations. We then calculated absolute residuals for these projections.\par
We fit loess curves to the absolute residuals for non-epidemic countries and for epidemic countries separately. These curves can be seen in Figure~\ref{fig:loess}. The black dots represent the absolute residuals from HIV countries, and the red loess curve is fit to these points. The grey dots represent the absolute residuals from non-HIV countries, and the blue curve is fit to these points. Here one can see that the HIV countries have more variability than the non-HIV countries. This variability is propagated into the future for countries currently experiencing a generalized epidemic. For non-HIV countries, $f(\ell_{c,t,i=nonHIV})$ is the blue curve in Figure~\ref{fig:loess}. For HIV countries, $f(\ell_{c,t,i=HIV})$ is the maximum of the blue and red curves up to the highest observed life expectancy for an HIV country to date, namely 78.1 years. For projected female life expectancies above 78.1 years, we use the blue curve plus a constant that is the vertical difference between the red and blue curves at 78.1 years.
\begin{figure}[h]
\centering
\includegraphics[scale=0.55]{loessfinalpaperdraft.pdf}
\caption{The black dots represent absolute residuals for epidemic countries; the grey dots represent absolute residuals for non-epidemic countries. The blue line is the loess fit to the non-epidemic residuals. The red line is the loess fit to the epidemic residuals.}
\label{fig:loess}
\end{figure}
\par
\subsection{Model Validation}
\label{subsec:Validation}
\begin{table}[ht]
\centering
\caption{Predictive Validation Results for Female Life Expectancy.
The first column represents the set of countries used in the subsequent calibration calculations. The second column represents the time period of data used to fit the model, and the third column represents the time periods used in validation. The fourth column represents the number of countries used in validation. In the fifth column, ``No Covariates'' represents the model in \eqref{eq:e0noHIV} and ``$\Delta HnA$'' represents the model in \eqref{eq:e0HnA}. The sixth column contains the MAE as defined in Section~\ref{subsec:Validation}. The seventh and eighth columns contain coverage metrics for the 80\% and 95\% predictive intervals, respectively.
\vspace{0.2in}
\begin{tabular}{|c|c|c|c|c|c|cc|}
\hline
\multicolumn{1}{|c|}{Countries} & \multicolumn{1}{c|}{Training} &\multicolumn{1}{c|}{Test} &\multicolumn{1}{c|}{$n$} &\multicolumn{1}{c|}{Model} & \multicolumn{1}{c|}{MAE} & \multicolumn{2}{c|}{Coverage} \\ \cline{7-8}
& Period & Period & & & & 80\% & 95\% \\
\hline
\multirow{8}{*}{HIV} & \multirow{2}{*}{1950-2005} & \multirow{2}{*}{2005-2015} & \multirow{2}{*}{69} & No Covariates & 3.47 & 0.49 & 0.58 \\
& & & & $\Delta HnA$ & 2.29 & 0.71 & 0.87 \\ \cline{2-8}
& \multirow{2}{*}{1950-2005} & \multirow{2}{*}{2005-2010} & \multirow{2}{*}{40} & No Covariates & 2.40 & 0.53 & 0.63 \\
& & & & $\Delta HnA$ & 2.22 & 0.68 & 0.83 \\ \cline{2-8}
& \multirow{2}{*}{1950-2005} & \multirow{2}{*}{2010-2015} & \multirow{2}{*}{29} & No Covariates & 4.95 & 0.45 & 0.52 \\
& & & & $\Delta HnA$ & 2.39 & 0.76 & 0.93\\ \cline{2-8}
& \multirow{2}{*}{1950-2010} & \multirow{2}{*}{2010-2015} & \multirow{2}{*}{29} & No Covariates & 2.24 & 0.59 & 0.62\\
& & & & $\Delta HnA$ & 1.74 & 0.83 & 0.97 \\ \cline{2-8}
\hline
\multirow{8}{*}{All} & \multirow{2}{*}{1950-2005} & \multirow{2}{*}{2005-2015} & \multirow{2}{*}{390} & No Covariates & 1.15 & 0.83 & 0.91 \\
& & & & $\Delta HnA$ & 0.94 & 0.88 & 0.96 \\ \cline{2-8}
& \multirow{2}{*}{1950-2005} & \multirow{2}{*}{2005-2010} & \multirow{2}{*}{201} & No Covariates & 0.84 & 0.85 & 0.92 \\
& & & & $\Delta HnA$ & 0.81 & 0.90 & 0.96\\ \cline{2-8}
& \multirow{2}{*}{1950-2005} & \multirow{2}{*}{2010-2015} & \multirow{2}{*}{189} & No Covariates & 1.49 & 0.80 & 0.91 \\
& & & & $\Delta HnA$ & 1.09 & 0.86 & 0.97\\ \cline{2-8}
& \multirow{2}{*}{1950-2010} & \multirow{2}{*}{2010-2015} & \multirow{2}{*}{189} & No Covariates & 0.74 & 0.84 & 0.91 \\
& & & & $\Delta HnA$ & 0.66 & 0.89 & 0.97 \\ \cline{2-8}
\hline
\end{tabular}
\label{tab:modelvalid}
\end{table}
We performed predictive out of sample validation to assess our model. First, we fit the model in \eqref{eq:e0HnA} using data from 1950-1955 up to 2000-2005 and projected female life expectancy for the time periods 2005-2010 and 2010-2015.
We also fit the model in \eqref{eq:e0HnA} using data from 1950-1955 up to 2005-2010, and projected female life expectancy for the time period 2010-2015.
Table~\ref{tab:modelvalid} presents our results. The first column designates the set of countries for which the metrics have been calculated. Note that though the calibration of predictive intervals is designed to be nominal for ``All'' countries, we include the subset of HIV countries for more detail. The second and third columns reflect the period of data used to train the model and validate the model, respectively. The fourth column contains the number of countries for which the subsequent calibration metrics are calculated.
There were 12 countries for which no new data became available between the
publication of the 2012 UN estimates in WPP 2012 \citep{un:WPP2012} and the 2015
UN estimates in WPP 2015 \citep{un:WPP2015}.
Hence, the WPP 2015 life expectancy estimate for the time period 2010-2015 is actually the projection of life expectancy for this period from the WPP 2012. As such, there is no ``observed'' life expectancy for these countries for the time period 2010-2015, and so we excluded these countries in this time period from the validation exercise. In the fifth column, ``No Covariates'' refers to the model in \eqref{eq:e0noHIV} and ``$\Delta HnA$'' refers to the model in \eqref{eq:e0HnA}.
The last three columns contain our metrics.
The mean absolute error (MAE) is calculated as
\begin{equation}
\frac{1}{n} \sum_{c \in {\cal C}} \sum_{t \in {\cal T}}
\vert \hat{\ell}_{c,t} - \ell_{c,t} \vert,
\label{eq-MAE}
\end{equation}
where $\hat{\ell}_{c,t}$ is the median projection of female life expectancy for country $c$ in time period $t$.
In (\ref{eq-MAE}), ${\cal C}$ is the set of countries involved in calculating
the MAE (either the HIV countries or all countries),
${\cal T}$ is the set of five-year time periods involved as shown in the third
column, and $n$ is the number of country-time period combinations as
shown in the fourth column.
The last two columns show the proportion of countries whose 80\% and 95\% posterior predictive intervals contain the observed life expectancy in the validation period of interest.\par
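The point and calibration metrics reported in Table~\ref{tab:modelvalid} can
be computed as in the following sketch, with toy inputs in place of the actual
posterior trajectories and UN estimates.
\begin{verbatim}
import numpy as np

def mae_and_coverage(traj, obs, levels=(0.80, 0.95)):
    """traj: (samples, cases) projected e0; obs: (cases,) observed e0."""
    med = np.median(traj, axis=0)
    mae = np.mean(np.abs(med - obs))
    cov = {}
    for lv in levels:
        lo = np.quantile(traj, (1.0 - lv) / 2.0, axis=0)
        hi = np.quantile(traj, 1.0 - (1.0 - lv) / 2.0, axis=0)
        cov[lv] = np.mean((obs >= lo) & (obs <= hi))
    return mae, cov

rng = np.random.default_rng(3)
traj = 65.0 + rng.normal(0.0, 2.0, size=(1000, 40))  # toy posterior draws
obs = 65.0 + rng.normal(0.0, 2.0, size=40)           # toy "observed" values
print(mae_and_coverage(traj, obs))
\end{verbatim}
For a well-calibrated model the empirical coverage should be close to the
nominal levels, as in this toy example.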
In all the out of sample scenarios, we saw substantial improvements in coverage for HIV countries after accounting for HIV prevalence and ART coverage. We broke down the two-period out of sample exercise into the two projection periods to get more detailed information about the HIV countries. In 2005-2010, the model with no covariates missed 15 out of 40 HIV countries at the 95\% level. When we accounted for HIV prevalence and ART coverage, the number of HIV countries missed went down by over half to only 7 at the 95\% level.
For 2010-2015, the model with no covariates missed 14 out of 29 HIV countries at the 95\% level, and accounting for HIV prevalence and ART coverage
reduced this to only 2 HIV countries in the time period 2010-2015.
In the leave-two-time-periods-out validation exercise, we saw a decrease in MAE for HIV countries in every case, while the MAE remained unchanged for non-HIV countries. For the non-HIV countries, the addition of the covariate in the model in \eqref{eq:e0HnA} changed coverage negligibly.
When fitting the model using data from 1950-1955 up to 2005-2010 and projecting female life expectancy for 2010-2015, we also saw improvements. Our coverage was closer to nominal for HIV countries after accounting for HIV prevalence and ART coverage. We missed 11 HIV countries out of 29 in the model with no covariates, but only one HIV country after accounting for the HIV epidemic. The MAE also decreased when accounting for HIV prevalence and ART coverage.\par
Predictive validation results for male life expectancy are shown in
Table \ref{tab:modelvalidM} in Appendix B, and the conclusions are broadly
similar.
The current method used by the UN to project life expectancy in the presence of the HIV epidemic is the Spectrum/EPP package \citep{spectrum:2014:software, stanecki:2012:HIVproj, stover:2012:spectrumEPP}.
However, Spectrum is a complicated model with heavy data demands and is intended only for short-term projections up to five years into the future.
An important question is therefore whether our simpler method can produce short-term projections similar to those of the more complex Spectrum method.
To answer this, we fit our model in \eqref{eq:e0HnA} with WPP 2012 estimates of female life expectancy from 1950-1955 up to 2005-2010 \citep{un:WPP2012}. Then we projected female life expectancy to 2010-2015. We compared the projections from our simpler model designed to make long-term projections to the WPP 2012 projection for 2010-2015 made using Spectrum. In the left panel of Figure~\ref{fig:spectrumcompare}, we see that the five-year projections from our simpler method are similar to the projections from the more complicated Spectrum. In fact, the correlation between the projections from our proposed model and those published in WPP 2012 is 0.89.
The right panel of Figure~\ref{fig:spectrumcompare} shows the absolute deviation from the WPP 2015 estimate of female life expectancy in 2010-2015 for our projections and the projections published in WPP 2012. The WPP 2012 projections have a mean absolute error of 2.70 years using the WPP 2015 estimate for comparison. The projections produced with the model in \eqref{eq:e0HnA} have a mean absolute error of 2.17 years.
\begin{figure}
\centering
\includegraphics[scale=0.45]{projcomparednonARTwpp2015.pdf}
\includegraphics[scale=0.45]{absdevdnonARTwpp2015.pdf}
\caption{Comparison between short-term projections in WPP 2012 using Spectrum, and our simpler method.
On the $x$-axis are projections of female life expectancy in 2010-2015 produced using Spectrum and published in WPP 2012. On the $y$-axis are projections of female life expectancy in 2010-2015 produced by fitting the model in \eqref{eq:e0HnA} with WPP2012 data up to 2005-2010. The projections are mostly similar and remain close to the $y=x$ line.}
\label{fig:spectrumcompare}
\end{figure}
The large outlier in the right panel of Figure~\ref{fig:spectrumcompare} is the country of Botswana, and corresponds to the largest deviation seen in the left panel of Figure~\ref{fig:spectrumcompare}. Botswana has the highest HIV prevalence in the world, at 24.3\% in 2010-2015, and has had a recent scale up of ART coverage. The boost in ART coverage yielded a rapid recovery in life expectancy of nearly 20 years from the WPP 2012 estimate for the 2005-2010 time period. Our model captures this large jump in life expectancy. In summary, our model produces similar projections to the current methodology designed for short-term projections, but using a much simpler model with smaller data requirements.
\section{Case Studies}
\label{sec:CaseStudies}
We now give specific results for five countries that illustrate specific
aspects of the method. Results for all countries we consider as having
generalized epidemics are given for female life expectancy in Appendix A
and for male life expectancy in Appendix B.
\subsection{Nigeria}
Nigeria in West Africa is the most populous country in Africa. It has a relatively small epidemic, with prevalence of 3.6\% in 2010-2015. Figure~\ref{fig:Nigeria} shows a comparison between projections of life expectancy under the model \eqref{eq:e0noHIV} with no covariates in blue, and the model \eqref{eq:e0HnA} with the HIV covariate in red.
The median projections of female life expectancy are higher than those
projected when not accounting for the HIV/AIDS epidemic, and
accounting for the epidemic leads to more uncertainty about the future trajectory of female life expectancy in Nigeria.
\begin{figure}
\centering
\includegraphics[height=7cm]{Nigeriae0FUNEPP2.pdf}
\includegraphics[height=7cm]{NigeriaHIVUNEPP.pdf}
\caption{The left panel shows projections of female life expectancy in Nigeria under the model in equation \eqref{eq:e0noHIV} in blue and equation \eqref{eq:e0HnA} in red. The solid lines represent medians, and the dashed lines are the 95\% intervals. After accounting for HIV prevalence and ART coverage, we see slightly more uncertainty about the future life expectancy in Nigeria and slightly higher median projections. The right panel shows the single trajectory of past estimates of HIV we use in model fitting in black. In red we have the median, 80\% interval and 90\% interval of probabilistic trajectories of HIV prevalence from EPP we use in our projections.}
\label{fig:Nigeria}
\end{figure}
\subsection{Kenya}
Kenya in East Africa has a medium-sized epidemic with HIV prevalence 5.7\% in 2010-2015. Figure~\ref{fig:Kenya} shows that Kenya has already recovered to pre-epidemic life expectancy levels. After accounting for HIV prevalence and ART coverage, we project a slightly higher median female life expectancy to 2100 with more uncertainty at all time periods.
\begin{figure}
\centering
\includegraphics[height=7cm]{Kenyae0FUNEPP2.pdf}
\includegraphics[height=7cm]{KenyaHIVUNEPP.pdf}
\caption{The left panel shows projection of female life expectancy in Kenya under the model in equation \eqref{eq:e0noHIV} in blue and equation \eqref{eq:e0HnA} in red. The solid lines represent medians, and the dashed lines are the 95\% intervals. After accounting for HIV prevalence and ART coverage, we see a higher median projection of life expectancy with more uncertainty. The right panel shows the single trajectory of past estimates of HIV we use in model fitting in black. In red we have the median, 80\% interval and 90\% interval of probabilistic trajectories of HIV prevalence from EPP we use in our projections.}
\label{fig:Kenya}
\end{figure}
\subsection{South Africa}
South Africa has the largest HIV/AIDS epidemic in the world in absolute numbers.
The estimated prevalence in the 2010-2015 time period is 17.5\%. Figure~\ref{fig:SouthAfrica} shows a comparison between projections of life expectancy under the model in \eqref{eq:e0noHIV} in blue and the model in \eqref{eq:e0HnA} in red. Figure~\ref{fig:SouthAfrica} reflects the clear impact of ART coverage on recovery in life expectancy under a large epidemic. After accounting for HIV prevalence and ART coverage, we project an initial recovery to pre-epidemic life expectancy levels with a steady rise through the end of the century. When not accounting for the HIV epidemic and, particularly, ART coverage, the model projects median end of century life expectancy only slightly higher than South Africa's life expectancy before the HIV/AIDS epidemic.
This is contrary to the epidemiological literature referenced in Section~\ref{subsec:e0HIV} that shows life expectancy recovers quickly after a scale-up of ART coverage.\par
As mentioned in Section~\ref{subsec:Validation}, there are a number of countries for which the UN did not have up-to-date life expectancy data at the time of publication of the WPP 2015, including South Africa. The UN estimates the female life expectancy for the period 2010-2015 as 59.1 years \citep{un:WPP2015}.
Statistics South Africa has published mid-year estimates of life expectancy
for each calendar year (Statistics South Africa 2010, 2011, 2013, 2014, 2015),
\nocite{ssa:2010,ssa:2011,ssa:2013,ssa:2014,ssa:2015}
and averaging these gives an estimate for the five-year period 2010-2015
of 60.7 years.
When we fit our model with data up to 2005-2010 and project forward
five years to 2010-2015, our 95\% interval for 2010-2015 is (56.6, 65.1).
When we fit our model with data only up to 2000-2005 and project forward
ten years to 2010-2015, our 95\% interval for 2010-2015 is (61.3, 63.7).
In both cases, our interval captures the outcome, whether measured
by the UN or Statistics South Africa, and in particular the rapid increase
in life expectancy due to the widespread rollout of ART.
\begin{figure}
\centering
\includegraphics[height=7cm]{SouthAfricae0FUNEPP2.pdf}
\includegraphics[height=7cm]{SouthAfricaHIVUNEPP.pdf}
\caption{The left panel shows projection of female life expectancy in South Africa under the model in equation \eqref{eq:e0noHIV} in blue and equation \eqref{eq:e0HnA} in red. The solid lines represent medians, and the dashed lines are the 95\% intervals. After accounting for HIV prevalence and ART coverage, we see an initial recovery of life expectancy to pre-epidemic levels followed by a steady rise through the end of the century. The right panel shows the single trajectory of past estimates of HIV we use in model fitting in black. In red we have the median, 80\% interval and 90\% interval of probabilistic trajectories of HIV prevalence from EPP we use in our projections.}
\label{fig:SouthAfrica}
\end{figure}
\subsection{Botswana}
Due to the early and rapid rise of HIV prevalence in Botswana, the ART scale-up was also quick. This led to the rapid recovery in life expectancy in the 2005-2010 time period. After accounting for the epidemic and antiretroviral therapy, we project a slightly slower rise in median life expectancy with more certainty than the model that does not account for the epidemic. This is in agreement with the epidemiological literature cited in Section~\ref{subsec:e0HIV} suggesting that HIV/AIDS will affect life expectancy as a chronic disease would after ART has become pervasive.
\begin{figure}
\centering
\includegraphics[height=7cm]{Botswanae0FUNEPP2.pdf}
\includegraphics[height=7cm]{BotswanaHIVUNEPP.pdf}
\caption{The left panel shows projection of female life expectancy in Botswana under the model in equation \eqref{eq:e0noHIV} in blue and equation \eqref{eq:e0HnA} in red. The solid lines represent medians, and the dashed lines are the 95\% intervals. After accounting for HIV prevalence and ART coverage, we see a dampened, slow-rising median projection of life expectancy with less uncertainty.}
\label{fig:Botswana}
\end{figure}
\subsection{Germany}
Germany is an example of a country that does not have a generalized epidemic. As can be seen in Figure~\ref{fig:Germany}, projections of life expectancy under the model in \eqref{eq:e0noHIV} and the model in \eqref{eq:e0HnA} differ negligibly in both median and uncertainty.
\begin{figure}
\centering
\includegraphics[height=7cm]{Germanye0F2015.pdf}
\caption{This figure shows projection of female life expectancy in Germany under the model in equation \eqref{eq:e0noHIV} in black and equation \eqref{eq:e0HnA} in red. The medians are identical, and the predictive intervals differ negligibly.}
\label{fig:Germany}
\end{figure}
\section{Discussion}
We have developed a probabilistic method for projecting life expectancy while accounting for generalized HIV/AIDS prevalence and antiretroviral therapy coverage. Our method has relatively modest data requirements. Through predictive validation we have shown that our method improves upon life expectancy projections for HIV/AIDS countries using the method in \citet{raftery:2013:e0}, while leaving projections for non-epidemic countries essentially unchanged.
Our projections improve in terms of both the mean absolute error of point predictions and the calibration of predictive intervals. Our method produces similar short-term projections to the UNAIDS Spectrum/EPP package, with a simpler model that requires much less data. Moreover, the method can produce long-term projections out to 2100.\par
Our model reflects the literature consensus mentioned in Section~\ref{subsec:e0HIV} that HIV prevalence will have large impacts on life expectancy only in the absence of antiretroviral therapy. Once ART covers a large proportion of the infected population, there is a one-time gain in life expectancy to pre-epidemic levels and the effects will be modest afterwards.
One limitation of our method is the quality of the ART coverage data and projections. As ART coverage is relatively new and hard to measure, the data we have are noisy. Improvements in ART data quality would likely result in improvements in projections for the generalized HIV epidemic countries. Given good quality data, our method could also be extended to account for other covariates that explain changes in life expectancy. The data would need to be available for every country used in model fitting back to 1950. Methodology for projecting the covariates would also be required.\par
\section{Acknowledgements}
The project described was supported by grants R01 HD-054511 and R01 HD-070936 from the Eunice Kennedy Shriver National Institute of Child Health and Human
Development. The authors are grateful to Le Bao, Samuel Clark, Yanjun He and Hana \v{S}ev\v{c}\'{i}kov\'{a} for helpful discussions and sharing data and code.
\section{Introduction}
The nonlinear Schr\"odinger equation (NLS) is one of the few examples of completely integrable nonlinear partial differential equations. Its study has led to fundamental advances in
the theory and application of integrable systems possessing soliton solutions \cite{Ablowitz},\cite{Faddeev}. These soliton solutions are of importance for a large number of physical and mathematical problems ranging from
optical pulse propagation in nonlinear fibers to hydrodynamics, biophysics and condensed matter physics.
However, the application of continuum equations often disregards the inherent discrete lattice structure of the underlying system. The latter is naturally the case when dealing with sets of coupled units (oscillators) distributed, e.g., in space, if the resulting phenomena of the nonlinear dynamics evolve on spatial scales comparable with the typical inter-unit distance.
The issue of discretisation of the NLS was addressed early in \cite{AL}, where it was noticed that among a large number of possible discretisations, leading to nonlinear lattice systems, there is one that is also integrable, termed the Ablowitz-Ladik, or AL, equation.
In contrast to the completely integrable AL equation, the so-called standard
discrete nonlinear Schr\"odinger equation, or DNLS equation, is known to be nonintegrable exhibiting also chaotic dynamics \cite{DNLS}.
Discrete nonlinear Schr\"odinger equations have been widely used in various contexts ranging from models of light propagation in arrays of optical waveguides \cite{application0}, the dynamics of atomic Bose-Einstein condensates trapped in optical lattices \cite{Morsch}, the study of denaturation of DNA double helix strands \cite{Peyrard}, breathers in granular materials \cite{Chong} to the dynamics of protein loops \cite{application1}.
The AL equation, due to its complete integrability, has been utilised for the development of perturbation theory exploring, for example, the problem of collisions between solitary waves in (nonintegrable)
lattices \cite{Kevredikis1}. Notably, besides the soliton solutions another important class of solutions of the AL equation are rational solutions which are discrete versions of the Peregrine soliton and the Kuznetsov-Ma breather \cite{akhm_AL},\cite{akhm_AL2}. These rational solutions have become a field of intense research recently \cite{Kim2}-\cite{2DAL3}.
For a more realistic description of many applications, external forcing and dissipation need to be included in the underlying model.
Regarding the incorporation of dissipation (and external driving forces) into discrete systems, the attractors of the ensuing dissipative dynamics of infinite lattice dynamical systems have attracted considerable interest recently \cite{Bates}-\cite{Han}.
For their study the modern theory of infinite-dimensional dynamical systems provides powerful methods \cite{Hale}-\cite{Chueshov}.
We study the following two discrete nonlinear Schr\"odinger equations:
\begin{equation}
i\frac{d \psi_n}{dt}=\kappa(\psi_{n+1}-2\psi_n+\psi_{n-1})+\mu\,|\psi_{n}|^2(\psi_{n+1}+\psi_{n-1})-i\delta \psi_n+g_n,\,\,\,n\in {\mathbb{Z}}\label{eq:AL}
\end{equation}
with $\psi_n \in {\mathbb{C}}$ and initial conditions:
\begin{equation}
\psi_{n}(0)=\psi_{n,0},\,\,\,n \in {\mathbb{Z}},\label{eq:icsAL}
\end{equation}
and
\begin{equation}
i\frac{d \phi_n}{dt}=\kappa(\phi_{n+1}-2\phi_n+\phi_{n-1})+
\gamma|\phi_n|^2\phi_n-i\delta \phi_n+g_n,\,\,\,n\in {\mathbb{Z}}\label{eq:DNLS}
\end{equation}
with $\phi_n \in {\mathbb{C}}$ and initial conditions:
\begin{equation}
\phi_{n}(0)=\phi_{n,0},\,\,\,n \in {\mathbb{Z}}.\label{eq:icsDNLS}
\end{equation}
In what follows we refer to the damped and forced system (\ref{eq:AL}) and (\ref{eq:DNLS}) as dfAL and dfDNLS, respectively.
For the real parameter $\delta >0$, there is damping included in (\ref{eq:AL}),(\ref{eq:DNLS}), while $g=(g_n)_{n\in {\mathbb{Z}}}\neq 0$ serves in both systems as a general external force. The parameters $\mu$ and $\gamma$ determine the nonlinearity strength in (\ref{eq:AL}) and (\ref{eq:DNLS}), respectively. The value of the parameter $\kappa \in \mathbb{R}$
regulates the coupling strength and without loss of generality we will set $\kappa=1$ subsequently.
In the conservative limit, for $\delta=0$, and without external force, i.e. $g=0$, the integrable AL equation and its nonintegrable DNLS counterpart result, respectively.
Notice that in (\ref{eq:AL}) and (\ref{eq:DNLS}) the nonlinear terms are both of cubic order. However, they are markedly different in the sense that the nonlinear terms in (\ref{eq:AL}) are of nonlocal nature compared to the local terms in (\ref{eq:DNLS}).
System (\ref{eq:DNLS}),(\ref{eq:icsDNLS})
with a general local nonlinear term has been studied in \cite{Nikos}. For more details
concerning discrete nonlinear Schr\"odinger equations and their applications we refer to \cite{DNLS},\cite{Kevrekidis}.
With the present work we study an existence/closeness/congruence problem
in the sense of ``continuous dependence'' by investigating closeness of the solutions of the dfAL and dfDNLS for close enough initial data. In particular the following questions are tackled: {\em (i) assuming that the initial data of the dfAL \eqref{eq:AL} and the dfDNLS \eqref{eq:DNLS} are sufficiently close in $l^2$, do the associated solutions remain close for sufficiently long times?
(ii) While in the conservative limit the systems (\ref{eq:AL}) and (\ref{eq:DNLS}) exhibit distinct solution behaviour (integrability versus nonintegrability), the integrability of the AL equation gets destroyed by the inclusion of damping and external forcing. Then the question is, what do the two dissipative lattice systems have in common? Does their asymptotic dynamics possess a global attractor and if so, what is the limit behaviour of the latter? Do the two systems even share a global attractor in the end?}
Moreover, from a wider point of view the answers to these questions seem important because the closeness and congruence results are not only relevant for discrete nonlinear Schr\"odinger equations but also for further investigations of the limit behaviour of nonlinear (discrete) lattice systems in general. For instance, the
asymptotic features of different discrete versions of physically important systems such as forced and damped continuum Ginzburg-Landau (GL) equations
can be studied from the perspective described above (we refer to Section \ref{section:outlook} for more details).
We answer the above questions by analytically proving that \textit{at least under certain smallness conditions on the initial data of the dfAL and the dfDNLS, the corresponding solutions remain close for sufficiently long times.} Crucially, with our congruence results we establish that the different discrete nonlinear Schr\"odinger equations exhibit the same asymptotic behaviour, that is they possess a common global attractor.
This is all the more interesting as, without external forcing and damping, the two discrete nonlinear Schr\"odinger equations give rise to profoundly different dynamics.
The outline of the paper is as follows: We compare the solution properties of the two infinite lattice dynamical systems presented by the dfAL and its dfDNLS counterpart with particular attention to their asymptotic behaviour where we
use methods from the theory of infinite-dimensional dynamical systems \cite{Hale}-\cite{Chueshov}.
We establish the well-posedness by proving the global existence of a unique solution to the dfAL and dfDNLS. We demonstrate that, when the distance between the initial data for the dfAL and dfDNLS is sufficiently small in $l^2$, it remains small for sufficiently long times.
We provide a sufficient criterion for the existence of a restricted global attractor for system (\ref{eq:AL}),(\ref{eq:icsAL}) and demonstrate the existence of a global attractor for system (\ref{eq:DNLS}),(\ref{eq:icsDNLS}).
Finally, we prove the congruence of these two attractors, when the dynamics is initialised in an appropriate bounded subset in a Banach space.
For the initial value problem for the infinite system of ordinary differential equations in (\ref{eq:AL}),(\ref{eq:icsAL}) and (\ref{eq:DNLS}),(\ref{eq:icsDNLS}), we consider solutions $\varphi=(\varphi_n)_{n\in {\mathbb{Z}}}\in C^1([0,\infty);l^2)$, where
\begin{equation}
l^2=\left\{ \varphi=(\varphi_n)_{n \in {\mathbb{Z}}},\ \varphi_n\in {\mathbb{C}}\,\,\,\vert \, \parallel \varphi\parallel_{l^2}=\left(\sum_{n \in {\mathbb{Z}}}|\varphi_n|^2\right)^{1/2}<\infty\right\}.\nonumber
\end{equation}
In the following we use the notation
\begin{equation}
B_R:=\left\{ \varphi \in l^2\,|\,
\parallel \varphi\parallel_{l^2}<R\right\}\nonumber
\end{equation}
for the ball centered at $0$ of radius $R$ in $l^2$.
For any $\varphi \in l^2$ we define the linear operators $A,B,B^{*}:\,l^2 \rightarrow l^2$,
\begin{equation}
(A\varphi)_{n }=\varphi_{n+1}-2\varphi_n+\varphi_{n-1},
\end{equation}
\begin{equation}
(B\varphi)_{n}=\varphi_{n+1}-\varphi_n,\qquad (B^{*}\varphi)_{n}=\varphi_{n-1}-\varphi_n. \nonumber
\end{equation}
It holds that
\begin{equation}
(B\varphi,\theta)_{l^2}=(\varphi,B^{*}\theta)_{l^2},\,\,\,\forall \varphi,\theta\in l^2,\nonumber
\end{equation}
and $-A=BB^{*}=B^{*}B$, implying that
\begin{equation}
(A\varphi,\varphi)_{l^2}=-\parallel B\varphi\parallel_{l^2}^2\le 0,\,\,\,\forall \varphi \in l^2.\nonumber
\end{equation}
Furthermore, we observe that
\begin{equation}
\parallel A\varphi\parallel_{l^2}\le 4\parallel \varphi\parallel_{l^2}.\nonumber
\end{equation}
As $A=A^*$, the linear continuous operator $A$ is self-adjoint on $D(A)=l^2$ and $A\le 0$. Then the operator $A$ generates a uniformly continuous semigroup on $l^2$.
With the help of the transformation $\tilde{\varphi}(t)=\exp(-2it)\varphi(t)$ the linear operator $A$ will be replaced by
the linear operator $\Delta$ determined by $(\Delta \varphi)_n=\varphi_{n+1}+\varphi_{n-1}$ in subsequent work. The effect of this transformation is merely a shift of the continuous spectrum of $A$ so that it comes to lie in the interval $[-2,2]$ instead of $[-4,0]$, and one has
\begin{equation}
\parallel \Delta \varphi\parallel_{l^2}\le 2\parallel \varphi\parallel_{l^2}.\label{eq:boundDelta}
\end{equation}
The advantage of this transformation is that with $\Delta$ a more compact notation is achieved (see Eq.\,(\ref{eq:systemglobal})).
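These operator identities and bounds are easily checked numerically on a
periodic truncation of the lattice, as in the following minimal sketch (the
truncation, lattice size and random data are illustrative conveniences only).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n = 512
phi = rng.normal(size=n) + 1j * rng.normal(size=n)
theta = rng.normal(size=n) + 1j * rng.normal(size=n)

sp = lambda v: np.roll(v, -1)             # v_{n+1} (periodic truncation)
sm = lambda v: np.roll(v, 1)              # v_{n-1}
A = lambda v: sp(v) - 2.0 * v + sm(v)
B = lambda v: sp(v) - v
Bs = lambda v: sm(v) - v
Dl = lambda v: sp(v) + sm(v)              # the operator Delta
ip = lambda u, v: np.vdot(v, u)           # inner product (u, v)_{l^2}

print(np.isclose(ip(B(phi), theta), ip(phi, Bs(theta))))    # (B u,v)=(u,B* v)
print(np.allclose(-A(phi), Bs(B(phi))))                     # -A = B* B
print(np.linalg.norm(A(phi)) <= 4.0 * np.linalg.norm(phi))  # ||A u|| <= 4||u||
print(np.linalg.norm(Dl(phi)) <= 2.0 * np.linalg.norm(phi)) # ||Delta u||<=2||u||
\end{verbatim}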
\section{Global existence of a unique solution}\label{subsection:existence}
For the current study of existence and uniqueness of a global solution of the dfAL and dfDNLS we combine them in a single system
\begin{equation}
i\frac{d \varphi_n}{dt}=(1+\mu\,|\varphi_{n}|^2)(\Delta \varphi)_n+
\gamma|\varphi_n|^2\varphi_n-i\delta \varphi_n(t)+g_n,\,\,\,n\in {\mathbb{Z}}\label{eq:systemglobal}
\end{equation}
with $\varphi_n \in {\mathbb{C}}$ and initial conditions:
\begin{equation}
\varphi_{n}(0)=\varphi_{n,0},\,\,\,n \in {\mathbb{Z}}.\label{eq:icsglobal}
\end{equation}
Note that for $\gamma=0$ ($\mu=0$) the dfAL (dfDNLS) results from (\ref{eq:systemglobal}).
We formulate the infinite dimensional dynamical system (\ref{eq:systemglobal}),(\ref{eq:icsglobal}) as an initial value problem in the Hilbert space $l^2$:
\begin{eqnarray}
\dot{\varphi}&=&F(\varphi)\equiv -i(1+\mu\,|\varphi|^2)\Delta\varphi-i
\gamma|\varphi|^2\varphi- \delta \varphi-i g,\,\,\,t>0,\label{eq:Hilbertsystem}\\
\varphi(0)&=&\varphi_0.\label{eq:Hilbertic}
\end{eqnarray}
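For later reference, the vector field of (\ref{eq:Hilbertsystem}) on a
periodic truncation of the lattice, together with a classical fourth-order
Runge--Kutta step, can be sketched as follows; the truncation and the
integrator are numerical conveniences, not part of the analysis.
\begin{verbatim}
import numpy as np

def F(phi, mu, gamma, delta, g):
    """Right-hand side of the combined dfAL/dfDNLS system (periodic)."""
    lap = np.roll(phi, -1) + np.roll(phi, 1)            # (Delta phi)_n
    return (-1j * (1.0 + mu * np.abs(phi) ** 2) * lap
            - 1j * gamma * np.abs(phi) ** 2 * phi
            - delta * phi - 1j * g)

def rk4_step(phi, dt, *args):
    k1 = F(phi, *args)
    k2 = F(phi + 0.5 * dt * k1, *args)
    k3 = F(phi + 0.5 * dt * k2, *args)
    k4 = F(phi + dt * k3, *args)
    return phi + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x = np.linspace(-10.0, 10.0, 256)
phi0 = np.exp(-x ** 2).astype(complex)                  # sample initial data
g = np.zeros(256, dtype=complex)
print(np.linalg.norm(rk4_step(phi0, 1e-3, 1.0, 0.0, 0.5, g)))  # one dfAL step
\end{verbatim}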
We will use the following lemma in applying classical ODE theory:
\begin{lemma}
\label{Lemma:Lipschitz}{\it \,\,Let $g=(g_n)_{n\in {\mathbb{Z}}}\in l^2$. The operator $F:\,l^2\rightarrow l^2$, defined by
\begin{equation}
\left(F(\theta)\right)_n= -i(1+\mu\,|\theta_{n}|^2)(\Delta \theta)_n-i
\gamma|\theta_n|^2\theta_n-\delta \theta_n-ig_n\nonumber
\end{equation}
is Lipschitz continuous on bounded sets of $l^2$.}
\end{lemma}
\noindent{\bf Proof:} Let $\theta \in B_R$.
For the nonlinear operator $N:l^2\rightarrow l^2$, $(N(\theta))_n=-i\mu\, |\theta_{n}|^2(\theta_{n+1}+\theta_{n-1})-i
\gamma|\theta_n|^2\theta_n$ we have
\begin{eqnarray}
\parallel N(\theta) \parallel_{l^2}^2&=&\sum_n \left|\mu\,|\theta_{n}|^2(\theta_{n+1}+\theta_{n-1})+
\gamma|\theta_n|^2\theta_n\right|^2\nonumber\\
&\le& 2\sum_n \left(\mu^2\,|\theta_{n}|^4|\theta_{n+1}+\theta_{n-1}|^2+
\gamma^2|\theta_n|^4|\theta_n|^2\right)\nonumber\\
&\le&(8\mu^2+2\gamma^2)R^4\parallel \theta\parallel_{l^2}^2.\nonumber
Hence, $N:\,l^2\rightarrow l^2$ is bounded on bounded sets of $l^2$.
For $\varphi,\theta \in B_R$ we derive
\begin{eqnarray}
\parallel N(\theta)-N(\varphi)\parallel_{l^2}^2&=&
\sum_{n \in {\mathbb{Z}}}\left|\mu\,|\theta_{n}|^2(\theta_{n+1}+\theta_{n-1})+
\gamma|\theta_n|^2\theta_n-\mu\,|\varphi_{n}|^2(\varphi_{n+1}+\varphi_{n-1})-
\gamma|\varphi_n|^2\varphi_n\right|^2\nonumber\\
&=&\frac{\mu^2}{4} \sum_{n \in {\mathbb{Z}}}|
\left(|\theta_n|^2+|\varphi_n|^2\right)\left[(\theta_{n+1}-\varphi_{n+1})+(\theta_{n-1}-\varphi_{n-1})\right]\nonumber\\
&+&
\left(|\theta_n|^2-|\varphi_n|^2\right)\left[(\theta_{n+1}+\varphi_{n+1})+(\theta_{n-1}+\varphi_{n-1})\right]|^2\nonumber\\
&+&\frac{\gamma^2}{4}\sum_{n \in {\mathbb{Z}}}|
(|\theta_n|^2 +|\varphi_n|^2)(\theta_n-\varphi_n)
+(|\theta_n|^2 -|\varphi_n|^2)(\theta_n+\varphi_n)|^2
\nonumber\\
&\le& \left(\frac{\mu}{2}\right)^2\left\{
2\sup_{n \in {\mathbb{Z}}}(|\theta_n|^2+|\varphi_n|^2)^2\,\sum_{n \in {\mathbb{Z}}}|\theta_n-\varphi_n|^2\right.\nonumber\\
&+&\left.\sup_{n \in {\mathbb{Z}}}\left((|\theta_{n+1} +\varphi_{n+1}|^2+|\theta_{n-1} +\varphi_{n-1}|^2)
\cdot \left(|\theta_n|+|\varphi_n|\right)^2 \right)\,\sum_{n \in {\mathbb{Z}}}|\theta_n-\varphi_n|^2
\right\}\nonumber\\
&+&\left(\frac{\gamma}{2}\right)^2 \left\{
\sup_{n \in {\mathbb{Z}}}(|\theta_n|^2 +|\varphi_n|^2)^2
\sum_{n \in {\mathbb{Z}}}
|\theta_n-\varphi_n|^2
+\sup_{n \in {\mathbb{Z}}}(|\theta_n| +|\varphi_n|)^4
\sum_{n \in {\mathbb{Z}}}|\theta_n-\varphi_n|^2\right\}\nonumber\\
&\le& (6\mu^2+5\gamma^2)R^4\,\parallel \theta-\varphi\parallel^2_{l^2},\nonumber
\end{eqnarray}
verifying that the map $N:\,l^2\rightarrow l^2$ is Lipschitz continuous on bounded sets of $l^2$ with Lipschitz constant
$L(R)^{1/2}$, where $L(R)=(6\mu^2+5\gamma^2)R^4$. Furthermore, since, due to (\ref{eq:boundDelta}), $\Delta$ is a bounded linear operator on $l^2$, we conclude that $F$ is Lipschitz continuous on bounded sets of $l^2$.
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
\vspace*{0.5cm}
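The estimate of Lemma \ref{Lemma:Lipschitz} can be sanity-checked numerically;
the sketch below samples random pairs in $B_R$ on a periodic truncation of the
lattice and compares the increment of the nonlinear part $N$ with the bound
derived above (lattice size and sample count are toy choices).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
mu, gamma, R, n = 1.0, 1.0, 2.0, 128
Lip = np.sqrt(6.0 * mu**2 + 5.0 * gamma**2) * R**2   # L(R)^{1/2}

def Nop(v):   # nonlinear part N on a periodic truncation
    return (-1j * mu * np.abs(v)**2 * (np.roll(v, -1) + np.roll(v, 1))
            - 1j * gamma * np.abs(v)**2 * v)

for _ in range(5):
    a = rng.normal(size=n) + 1j * rng.normal(size=n)
    b = rng.normal(size=n) + 1j * rng.normal(size=n)
    a *= R * rng.random() / np.linalg.norm(a)        # place a, b inside B_R
    b *= R * rng.random() / np.linalg.norm(b)
    lhs = np.linalg.norm(Nop(a) - Nop(b))
    print(lhs <= Lip * np.linalg.norm(a - b))
\end{verbatim}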
For the proof of global existence of a unique solution to (\ref{eq:Hilbertsystem}),(\ref{eq:Hilbertic}) for $\mu \neq 0, \gamma=0$ we will use the following statement:
\begin{lemma}
{\it \,\,If $P=\sum_{n\in{\mathbb{Z}}}\ln(1+|\psi_n|^2)< \infty$, then $\parallel \psi\parallel_{l^2}<\infty$.}
\label{Lemma:Pmu}
\end{lemma}
\noindent{\bf Proof:} We have
\begin{equation}
P=\sum_{n\in{\mathbb{Z}}}\ln(1+|\psi_n|^2)< \infty\,\,\,\Leftrightarrow\,\,\, \forall \epsilon >0\,\,\,\exists N_{\epsilon} \in \mathbb{N}\,\,\,s.t.\,\,\,\sum_{|n|\ge N_{\epsilon}}\ln(1+|\psi_n|^2)< \epsilon.\nonumber
\end{equation}
Writing $\lambda_n=\ln(1+|\psi_n|^2)\ge 0$, so that $|\psi_n|^2=\exp(\lambda_n)-1$,
and taking $\epsilon <1/2$, we get
\begin{eqnarray}
\sum_{|n|\ge N_{\epsilon}}|\psi_n|^2&=&\sum_{|n|\ge N_{\epsilon}}\left(\exp(\lambda_n)-1\right)=\sum_{|n|\ge N_{\epsilon}}\,\left(\sum_{k=0}^{\infty}\frac{\lambda_n^{k}}{k!}-1\right)\nonumber\\
&=&\sum_{|n|\ge N_{\epsilon}}\,\sum_{k=1}^{\infty}\frac{\lambda_n^{k}}{k!}=\sum_{k=1}^{\infty}\sum_{|n|\ge N_{\epsilon}}\,\frac{\lambda_n^{k}}{k!}\nonumber\\
&\le&
\sum_{k=1}^{\infty}\sum_{|n|\ge N_{\epsilon}}\,\lambda_n^{k}
\le \sum_{k=1}^{\infty}\left(\sum_{|n|\ge N_{\epsilon}}\,\lambda_n\right)^k\nonumber\\
&\le&\sum_{k=1}^{\infty}\epsilon^{k}=\frac{1}{1-\epsilon}-1< 2\epsilon .\nonumber
\end{eqnarray}
In conclusion, for all $0< \epsilon<1/2$ there exists $N_{\epsilon}\in {\mathbb{N}}$ such that
\begin{equation}
\sum_{|n|\ge N_{\epsilon}}|\psi_n|^2<2\epsilon,\nonumber
\end{equation}
and hence, $\sum_{n\in{\mathbb{Z}}}|\psi_n|^2<\infty$.
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
\vspace{0.5cm}
Regarding the global existence of a unique solution to (\ref{eq:Hilbertsystem}),(\ref{eq:Hilbertic}) we have the following:
\begin{proposition}
{\it \,\,For every $\varphi_0\in l^2$
the system (\ref{eq:Hilbertsystem}) possesses a unique global solution $\varphi(t)$ on $[0,\infty)$ belonging to
$C^1([0,\infty),l^2)$.}\label{Proposition:unique}
\end{proposition}
\noindent{\bf Proof:}
With the proven Lipschitz continuity of the operator $F$ on bounded sets of $l^2$, for any given initial data $\varphi_0\in l^2$
the existence of a unique solution $\varphi(t)\in C^1([0,T_0),l^2)$ for some $T_0>0$ can be verified by standard methods from the theory of ODEs (see e.g. \cite{Zeidler}).
If $T_0<\infty$, then
$\lim_{t\rightarrow T_0^{-}}\parallel \varphi(t)\parallel_{l^2}=\infty$.
\vspace{0.5cm}
We now show the global existence of the solutions, that is, $T_0=\infty$.
The analysis is performed separately for the dfAL and the dfDNLS.
First we treat the dfAL for which for convenience we set $\mu=1$. (Note that by the transformation ${\tilde\varphi}(t)=\sqrt{\mu} \varphi(t)$ the amplitude can be accordingly rescaled.)
For the dfAL we consider
\begin{eqnarray}
\frac{d}{dt}{P}(t)&=&\frac{d}{dt}\sum_{n\in{\mathbb{Z}}}\ln(1+|\psi_n(t)|^2)= 2\sum_{n\in{\mathbb{Z}}}\frac{
{\rm Re} \psi_n(t) {\rm Im} g_n -{\rm Re} g_n {\rm Im} \psi_n(t) -\delta |\psi_n(t)|^2}{ 1+|\psi_n(t)|^2}\nonumber\\
&\le& -2\delta \sum_{n\in{\mathbb{Z}}} \frac{|\psi_n(t)|^2}{1+|\psi_n(t)|^2}+4\sum_{n\in{\mathbb{Z}}} \frac{|g_n||\psi_n(t)|}{ 1+|\psi_n(t)|^2}\nonumber\\
&\le& -2\delta \sum_{n\in{\mathbb{Z}}} \frac{|\psi_n(t)|^2}{1+|\psi_n(t)|^2}+
4\sum_{n\in{\mathbb{Z}}} \left[ \frac{\delta}{2}\frac{|\psi_n(t)|^2}{ (1+|\psi_n(t)|^2)^2}+\frac{1}{2\delta}|g_n|^2\right]\nonumber\\
&=&-2\delta \sum_{n\in{\mathbb{Z}}}\left( \frac{|\psi_n(t)|^2}{ 1+|\psi_n(t)|^2}\right)^2+\frac{2}{\delta}\parallel g\parallel_{l^2}^2 < \infty,\,\,\,\forall t\ge 0,\nonumber
\end{eqnarray}
where we used Young's inequality.
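Explicitly, the Young step used above is, for each $n$, the estimate
\begin{equation}
\frac{|g_n||\psi_n(t)|}{1+|\psi_n(t)|^2}\le \frac{\delta}{2}\,\frac{|\psi_n(t)|^2}{\left(1+|\psi_n(t)|^2\right)^2}+\frac{1}{2\delta}\,|g_n|^2,\nonumber
\end{equation}
and the two quadratic contributions combine through the identity $\frac{|\psi_n|^2}{1+|\psi_n|^2}-\frac{|\psi_n|^2}{(1+|\psi_n|^2)^2}=\left(\frac{|\psi_n|^2}{1+|\psi_n|^2}\right)^2$.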
Therefore, $P(t)\le P(0)+\frac{2}{\delta}\parallel g\parallel_{l^2}^2\,t< \infty$ for every finite $t\ge 0$.
Hence, by Lemma \ref{Lemma:Pmu} above we obtain that $\parallel \psi(t)\parallel_{l^2}
< \infty$ for every finite $t\ge 0$, which rules out finite-time blow-up and yields $T_0=\infty$.
To demonstrate global existence for the dfDNLS we consider
\begin{eqnarray}
\frac{d}{dt} \parallel\phi(t)\parallel_{l^2}^2 &=&2\sum_{n\in{\mathbb{Z}}}\left\{[{\rm Re} \phi_n(t) {\rm Im} g_n-
{\rm Re} g_n {\rm Im} \phi_n(t)]
-\delta |\phi_n(t)|^2\right\}\nonumber\\
&\le& -2\delta \sum_{n\in{\mathbb{Z}}} |\phi_n(t)|^2+4\sum_{n\in{\mathbb{Z}}}|g_n||\phi_n(t)|\nonumber\\
&\le& -\delta \sum_{n\in{\mathbb{Z}}} |\phi_n(t)|^2+ \frac{4}{\delta}\parallel g\parallel_{l^2}^2 < \infty,\,\,\,\forall t\ge 0.\nonumber
\end{eqnarray}
That is, by Gronwall's inequality, $\parallel \phi(t)\parallel_{l^2}^2
< \infty$ for all $t\ge 0$, so that again $T_0=\infty$.
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
\vspace*{0.5cm}
In conclusion, for the dfAL for any $\psi_0 \in l^2$, as well as for the dfDNLS for any
$\phi_0 \in l^2$,
the corresponding solution $\psi (t)$ of (\ref{eq:AL}),(\ref{eq:icsAL}) and $\phi (t)$ of (\ref{eq:DNLS}),(\ref{eq:icsDNLS}), respectively,
is bounded
for all $t \in [0,\infty)$. The solution operators determined by
\begin{equation}
S_{\mu}(t):\psi_0 \in l^2 \rightarrow \psi(t)=S_{\mu}(t)\psi_0\in l^2,\,\,\,t\ge 0,\nonumber
\end{equation}
and
\begin{equation}
S_{\gamma}(t):\phi_0 \in l^2 \rightarrow \phi(t)=S_{\gamma}(t)\phi_0\in l^2,\,\,\,t\ge 0,\nonumber
\end{equation}
generate continuous semigroups $\left\{ S_{\mu}(t)\right\}_{t\ge 0}$ and
$\left\{ S_{\gamma}(t)\right\}_{t\ge 0}$ on $l^2$, respectively.
\section{Closeness of the dfAL and dfDNLS solutions}
In this section we demonstrate that for sufficiently small initial conditions for the dfAL and its dfDNLS counterpart (\ref{eq:icsAL}) and (\ref{eq:icsDNLS}), respectively, and provided the $l^2-$distance between them is sufficiently small,
the distance between the associated solutions to (\ref{eq:AL}) and (\ref{eq:DNLS}) remains small for sufficiently long times $t>0$.
\begin{theorem}
{\it \,\,For every $t_0>0$ there exist a small $\epsilon_0>0$ and positive constants $C_0$ and $C$ such that, for every $\epsilon \in (0,\epsilon_0)$, if the initial conditions of the dfAL, $\psi(0)$, and the dfDNLS, $\phi(0)$, and $g\in l^2$ satisfy:
\begin{equation}
\parallel \phi(0)-\psi(0)\parallel_{l^2}\le C_0 \epsilon^3,\label{eq:distance0}
\end{equation}
and
\begin{equation}
\frac{2}{\delta}\parallel g\parallel_{l^2}\le \parallel \phi(0)\parallel_{l^2}\le K_{\phi}\epsilon,\label{eq:Cphi}
\end{equation}
with some constant $K_{\phi}>0$, then the corresponding solutions fulfill, for every $t\in [0,t_0]$,
\begin{equation}
\parallel \phi(t)-\psi(t)\parallel_{l^2}\le C \epsilon^3.\label{eq:boundy}
\end{equation}
}\label{Theorem:closeness}
\end{theorem}
\noindent{\bf Proof:}
Introducing for the (local) distance the variable
$y_n=\psi_n-\phi_n$ one derives using the Cauchy-Schwarz inequality
\begin{eqnarray}
\frac{d}{dt}\parallel y(t)\parallel_{l^2}^2&=&2\parallel y(t)\parallel_{l^2} \frac{d}{dt}\parallel y(t)\parallel_{l^2}\nonumber\\
&=& \sum_{n \in {\mathbb{Z}}}\left\{i[(\overline{y}_{n+1}+
\overline{y}_{n-1})y_n-({y}_{n+1}+
{y}_{n-1})\overline{y}_n]-2\delta |y_n|^2\right.\nonumber\\
&+&\left.i\mu |y_n+\phi_n|^2[(\overline{y}_{n+1}+\overline{\phi}_{n+1}
+\overline{y}_{n-1}+\overline{\phi}_{n-1})y_n-
({y}_{n+1}+{\phi}_{n+1}
+{y}_{n-1}+{\phi}_{n-1})\overline{y}_n]\right.\nonumber\\
&-&\left.2i\gamma |\phi_n|^2(\overline{\phi}_n y_n-\phi_n \overline{y}_n)\right\}\nonumber\\
&\le& (4-2\delta)\parallel y(t)\parallel_{l^2}^2+
4\mu\left(\parallel y(t)\parallel_{l^2}^3+\parallel \phi(t)\parallel_{l^2}^2 \parallel y(t)\parallel_{l^2}
+\parallel y(t)\parallel_{l^2}^2 \parallel \phi(t)\parallel_{l^2}+\parallel \phi(t)\parallel_{l^2}^3\right) \parallel y(t)\parallel_{l^2}
\nonumber\\
&+&2\gamma \parallel \phi(t)\parallel_{l^2}^3 \parallel y(t)\parallel_{l^2}\nonumber
\end{eqnarray}
where we exploited the continuous embeddings
$l^r\subset l^s,\,\,\,\parallel \phi\parallel_{l^s}\le \parallel\phi\parallel_{l^r},\,\,\,1 \le r\le s \le \infty$.
Let $t_0>0$ be given and let $K_y>0$ be a constant to be fixed below. Define
\begin{equation}
\overline{T}_0 =\sup \left\{\overline{t} \in [0,t_0]: \sup_{t\in [0,\overline{t}]}\parallel y(t)\parallel_{l^2} \le K_y\epsilon^3\right\}.\label{eq:timeinterval}
\end{equation}
In Lemma \ref{Lemma:asymboundgamma} below we establish that if $\frac{2}{\delta}\parallel g\parallel_{l^2}\le \parallel \phi(0)\parallel_{l^2}$ one has
$\parallel \phi(t)\parallel_{l^2}\le \parallel \phi(0)\parallel_{l^2}$ for all $t>0$.
Then we obtain
\begin{eqnarray}
\frac{d}{dt}\parallel y(t)\parallel_{l^2}&\le&
(2-\delta)\parallel y(t)\parallel_{l^2}\nonumber\\
&+&\left(2\mu \left(K_y^3\epsilon^6+K_{\phi}^2 K_y\epsilon^2+K_y^2 K_{\phi} \epsilon^4+K_{\phi}^3\right)+\gamma K_{\phi}^3\right)\epsilon^3.\nonumber
\end{eqnarray}
Furthermore, for every $t\in [0,\overline{T}_0]$ and sufficiently small $\epsilon >0$ one can find a positive constant $M_1$ independent of $\epsilon$ such that
\begin{equation}
2\mu \left(K_y^3\epsilon^6+K_{\phi}^2 K_y\epsilon^2+K_y^2 K_{\phi} \epsilon^4+K_{\phi}^3\right)+\gamma K_{\phi}^3\le M_1,\nonumber
\end{equation}
giving
\begin{equation}
\frac{d}{dt}\parallel y(t)\parallel_{l^2}\le \epsilon^3 M_1+M_2\parallel y(t)\parallel_{l^2},\nonumber
\end{equation}
with $M_2=2-\delta$.
Applying Gronwall's inequality one gets
\begin{equation}
\parallel y(t)\parallel_{l^2}\exp(-M_2 t)-\parallel y(0) \parallel_{l^2}\le \int_0^t \epsilon^3 M_1 \exp(-M_2 s) ds \le \epsilon^3 \frac{M_1}{M_2},\nonumber
\end{equation}
yielding with the assumption $\parallel y(0)\parallel_{l^2}\le C_0 \epsilon^3$ for every
$t\in [0,\overline{T}_0]$:
\begin{equation}
\parallel y(t)\parallel_{l^2}\le \left(C_0+\frac{M_1}{M_2}\right)\exp(M_2 t)\epsilon^3.\nonumber
\end{equation}
Thus one can set
\begin{equation}
K_y=\left(C_0+\frac{M_1}{M_2}\right)\exp(M_2 t_0),\nonumber
\end{equation}
and the time interval in (\ref{eq:timeinterval}) can be extended to the entire time range with $\overline{T}_0=t_0$ using an elementary continuation argument.
This concludes the proof.
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
\vspace*{0.5cm}
After having shown that the distance between the solutions of the
dfAL and the dfDNLS, measured in terms of the $l^2-$metric, remains small (bounded above by ${\cal{O}}(\epsilon^3)$) compared to the $l^2-$norm of the solutions themselves (bounded above by ${\cal{O}}(\epsilon)$), we demonstrate analogous features for the $l^{\infty}-$norm,
which determines the maximal distance between individual units.
\begin{theorem}
{\it \,\,Let the assumption of Theorem \ref{Theorem:closeness} hold. Assume further that
\begin{equation}
\parallel y(0)\parallel_{l^2} \le\parallel y(0)\parallel_{l^1}\le L_1\epsilon^3.\label{eq:yl1}
\end{equation}
Then for every $t_0>0$ there exist a small $\epsilon_0>0$ and a positive constant $C_{\infty}$ such that for every $\epsilon \in (0,\epsilon_0)$
the maximal distance satisfies
\begin{equation}
\parallel \phi(t)-\psi(t)\parallel_{l^\infty}\le C_{\infty}\, \epsilon^3,\,\,\,t\in[0,t_0].\label{eq:boundy3}
\end{equation}
}\end{theorem}
\noindent{\bf Proof:} For the time evolution of the distance variable $y_n=\psi_n-\phi_n$ we derive
\begin{eqnarray}
\frac{d}{dt}y_n&=&
-i({y}_{n+1}+{y}_{n-1})-\delta y_n-i\left[\mu|y_n+\phi_n|^2({y}_{n+1}+{y}_{n-1}+{\phi}_{n+1}+{\phi}_{n-1})-\gamma |\phi_n|^2\phi_n\right].\nonumber
\end{eqnarray}
Performing a spatial Fourier transform
\begin{equation}
y_n(t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\hat{y}_q(t)\exp(i qn) dq,\qquad\hat{y}_q(t)=\sum_n y_n(t)\exp(-i q n),\nonumber
\end{equation}
gives the system
\begin{equation}
\dot{\hat{y}}_q(t)=-(2i\cos q+\delta)\hat{y}_q(t)-i\hat{F}_q(t),\nonumber
\end{equation}
where $\hat{F}_q$ denotes the Fourier transform of the nonlinear part $F_n=\mu|y_n+\phi_n|^2({y}_{n+1}+{y}_{n-1}+{\phi}_{n+1}+{\phi}_{n-1})-\gamma |\phi_n|^2\phi_n$.
The formal solution of this system is given by
\begin{equation}
\hat{y}_q(t)=\hat{y}_q(0)\exp[-(2i\cos q+\delta)t]-i\int_0^t \hat{F}_q(\tau) \exp[-(2i\cos q+\delta)(t-\tau)]d\tau.\label{eq:FT}
\end{equation}
Applying the inverse Fourier transform to (\ref{eq:FT})
we obtain
\begin{equation}
y_n(t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\hat{y}_q(0)\exp[-(2i\cos q+\delta)t]\exp(iqn)dq
-\frac{i}{2\pi}\int_{-\pi}^{\pi}\int_0^t \hat{F}_q(\tau) \exp[-(2i\cos q+\delta)(t-\tau)]d\tau \exp(iqn)dq.\nonumber
\end{equation}
With the help of the Cauchy-Schwarz inequality and $\parallel y\parallel_{l^4}\le \parallel y\parallel_{l^2}$
we estimate as follows:
\begin{eqnarray}
|y_n(t)|&\le&\frac{1}{2\pi} \left|\int_{-\pi}^{\pi} \hat{y}_q(0) \exp[-(2i\cos q+\delta)t] \exp(iqn) dq \right|\nonumber\\
&+& \frac{1}{2\pi} \left|\int_{-\pi}^{\pi}\int_0^t \hat{F}_q(\tau) \exp[-(2i\cos q+\delta)(t-\tau)] d\tau \exp(iqn)dq \right|\nonumber\\
&\le& \frac{1}{2\pi} \int_{-\pi}^{\pi} \left|\hat{y}_q(0)\right| dq
+\frac{1}{2\pi} \int_{-\pi}^{\pi} \int_0^t \left|\hat{F}_q(\tau)\right|d\tau dq\nonumber\\
&=&\frac{1}{2\pi}\int_{-\pi}^{\pi} \left|\sum_n y_n(0)\exp(-iqn)\right| dq
+\frac{1}{2\pi}\int_{-\pi}^{\pi}\int_0^t \left|\sum_n F_n(\tau)\exp(-iqn)\right|d\tau dq\nonumber\\
&\le&
\frac{1}{2\pi}\int_{-\pi}^{\pi} \sum_n \left| y_n(0)\right| dq+
\frac{1}{2\pi}\int_{-\pi}^{\pi}\int_0^t \sum_n \left| F_n(\tau)\right|d\tau dq=\sum_n \left| y_n(0)\right| +
\int_0^t \sum_n \left| F_n(\tau)\right|d\tau\nonumber\\
&=&\parallel y(0)\parallel_{l^1}+
\int_0^t\sum_n \left|
\mu|y_n(\tau)+\phi_n(\tau)|^2({y}_{n+1}(\tau)+{y}_{n-1}(\tau)+{\phi}_{n+1}(\tau)+{\phi}_{n-1}(\tau))\right.\nonumber\\
&-&\left.\gamma |\phi_n(\tau)|^2\phi_n(\tau)\right|d\tau\nonumber\\
&\le&
\parallel y(0)\parallel_{l^1}+
\int_0^t\left[2\mu (\parallel y(\tau)\parallel_{l^4}^2 +\parallel \phi(\tau)\parallel_{l^4}^2)
(\parallel y(\tau)\parallel_{l^2}+\parallel \phi(\tau)\parallel_{l^2})+\gamma \parallel \phi(\tau)\parallel_{l^4}^2 \parallel \phi(\tau)\parallel_{l^2}\right]d\tau\nonumber\\
&\le&
\parallel y(0)\parallel_{l^1}+
\int_0^t\left[2\mu (\parallel y(\tau)\parallel_{l^2}^2 +\parallel \phi(\tau)\parallel_{l^2}^2)
(\parallel y(\tau)\parallel_{l^2}+\parallel \phi(\tau)\parallel_{l^2})+\gamma \parallel \phi(\tau)\parallel_{l^2}^2 \parallel \phi(\tau)\parallel_{l^2}\right]d\tau.\nonumber
\end{eqnarray}
Let $t_0>0$ be given and define
\begin{equation}
\overline{T}_0 =\sup \left\{\overline{t} \in [0,t_0]: \sup_{t\in [0,\overline{t}]}\parallel y(t)\parallel_{l^2} \le K_y\epsilon^3\right\}.\label{eq:timeinterval1}
\end{equation}
Then we derive the following upper bound:
\begin{eqnarray}
|y_n(t)|&\le&\parallel y(0)\parallel_{l^1}+\left[2\mu(K_y^2\epsilon^4+K_{\phi}^2)(K_y\epsilon^2+K_{\phi})+\gamma K_{\phi}^3\right]t\cdot \epsilon^3,\qquad \forall t\in [0,\overline{T}_0].\nonumber
\end{eqnarray}
For every $t\in [0,\overline{T}_0]$ and sufficiently small $\epsilon >0$ one can find a positive constant $M_1$ independent of $\epsilon$ such that
\begin{equation}
2\mu \left(K_y^3\epsilon^6+K_{\phi}^2 K_y\epsilon^2+K_y^2 K_{\phi} \epsilon^4+K_{\phi}^3\right)+\gamma K_{\phi}^3\le M_1,\nonumber
\end{equation}
giving with the hypothesis (\ref{eq:yl1}) for every
$t\in [0,\overline{T}_0]$:
\begin{equation}
|y_n(t)|\le (L_1+M_1 t)\,\epsilon^3.\nonumber
\end{equation}
Taking the supremum over $n \in {\mathbb{Z}}$ one gets
\begin{equation}
\sup_{n \in {\mathbb{Z}}}
|y_n(t)|=\parallel y(t)\parallel_{l^{\infty}}\le (L_1+M_1 t)\,\epsilon^3,\qquad \forall t\in [0,\overline{T}_0].\nonumber
\end{equation}
Thus one can set
\begin{equation}
K_{y,\infty}=L_1+M_1 t_0,\nonumber
\end{equation}
and the time interval
can be extended to the entire time range with $\overline{T}_0=t_0$ by an elementary continuation argument.
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
\vspace*{0.5cm}
We remark that it is desirable to obtain a growth rate of the
distance $||y(t)||_{l^2}$ that is uniformly bounded for any $\epsilon>0$ and any finite $t\in (0,\infty)$ as
\begin{eqnarray}
\label{gr}
\frac{d}{dt}||y(t)||_{l^2}\leq M\,\epsilon^3,\nonumber
\end{eqnarray}
where $M$ depends on the parameters and initial data but not on $t$, and consequently, the distance between solutions grows at most linearly
for any $t\in (0,\infty)$, as
\begin{eqnarray}
\label{gr1}
||y(t)||_{l^2}\leq M\, t\,\epsilon^3.\nonumber
\end{eqnarray}
\section{Existence of a restricted global attractor for the semigroup $\left\{S_{\mu}(t)\right\}_{t\ge 0}$ in $l^2$}
In order to distinguish in the following between a global attractor (existent for the dfDNLS) and a restricted global attractor (relevant for the dfAL), we recall their definitions:
\vspace{0.5cm}
\noindent{\bf Definition:}{\it \,\,A set ${\cal A}_{\gamma}\subset l^2$ is called a global attractor for the semigroup $\left\{S_{\gamma}(t)\right\}_{t\ge 0}$ associated with the system (\ref{eq:DNLS}),(\ref{eq:icsDNLS}) in $l^2$ if
(i) ${\cal A}_{\gamma}\neq \emptyset$ is a compact subset of $l^2$, (ii) an invariant set, that is,
$S_{\gamma}(t){\cal A}_{\gamma}={\cal A}_{\gamma}$ for all $t\ge 0$,
and, (iii) an attracting set for $\left\{ S_{\gamma}(t)\right\}_{t\ge 0}$ in $l^2$, that is, for all bounded $B\subset l^2$, it holds that
$\lim_{t\rightarrow \infty}\,dist(S_{\gamma}(t)B,{\cal A}_{\gamma})=0$, where the Hausdorff semi-distance between two nonempty subsets $U,V$ of $l^2$ is determined by
\begin{equation}
dist(U,V)=\sup_{u\in U}\,\inf_{v\in V}\,d(u,v)_{l^2}.\nonumber
\end{equation}}
\noindent{\bf Definition:}{\it \,\,Let $\left\{S_{\mu}(t)\right\}_{t\ge 0}$ be the semigroup associated with the system (\ref{eq:AL}),(\ref{eq:icsAL}). We say that ${\cal A}_{\mu}\subset l^2$ is a restricted global attractor for $\left\{S_{\mu}(t)\right\}_{t\ge 0}$
in $l^2$ if for some closed, nonempty subset $U\subset B_{R_{\mu}}$ of $l^2$, $S_{\mu}(t):U\rightarrow U$ ($t\ge 0$) is a semigroup on $U$ such that ${\cal A}_{\mu}$ is a global attractor for $\left\{S_{\mu}(t)\right\}_{t\ge 0}$
restricted to $U$, that is, (i) $S_{\mu}(t){\cal A}_{\mu}={\cal A}_{\mu}$ for $t\ge 0$, (ii) ${\cal A}_{\mu}$ is compact, and (iii) ${\cal A}_{\mu}$ attracts solutions of bounded subsets of $U$ \cite{Hale}.}
\vspace{0.5cm}
In this section we establish the existence of a restricted global attractor for the semigroup $\left\{S_{\mu}(t)\right\}_{t\ge 0}$ and in the next section the existence of a global attractor for $\left\{S_{\gamma}(t)\right\}_{t\ge 0}$ is shown.
\vspace{0.5cm}
\subsection{Existence of an absorbing set in $l^2$}
First we explore the existence of an absorbing set in $l^2$ for the dynamical system associated with the damped and forced dfAL (\ref{eq:AL}),(\ref{eq:icsAL}) in the asymptotic regime $t \rightarrow \infty$.
We need the following lemma:
\begin{lemma}
{\it \,\,Assume that $\delta$ and $g=(g_n)_{n\in {\mathbb{Z}}}$
satisfy
\begin{equation}
\delta^2 < 3 \sum_{n\in {\mathbb{Z}}} |g_n|^{4/3}.\label{eq:conditionglobal}
\end{equation}
Then for any $\psi_0\in l^2$ the solutions exist globally in time and are uniformly bounded, satisfying
\begin{equation}
\parallel \psi(t) \parallel_{l^2}\le \parallel \psi(0) \parallel_{l^2}.\label{eq:uniformbound}
\end{equation}
}
\end{lemma}
\noindent{\bf Proof:}
As in the proof of Proposition \ref{Proposition:unique} we get
\begin{eqnarray}
\frac{d}{dt}{P}(t)&=&
\frac{d}{dt}\sum_{n\in{\mathbb{Z}}}\ln(1+|\psi_n(t)|^2)\nonumber\\
&\le& -2\delta \sum_{n\in{\mathbb{Z}}} \frac{|\psi_n(t)|^2}{1+|\psi_n(t)|^2}+4\sum_{n\in{\mathbb{Z}}} \frac{|g_n||\psi_n(t)|}{ 1+|\psi_n(t)|^2}.\nonumber
\end{eqnarray}
The relation (\ref{eq:uniformbound}) is satisfied if $\dot{P}(t)\le 0$, which is true if
\begin{equation}
2\sum_{n\in{\mathbb{Z}}} \left(\frac{1}{ 1+|\psi_n(t)|^2}(-\delta |\psi_n|^2+2|\psi_n||g_n|)\right)\le 0,\nonumber
\end{equation}
leading to
\begin{equation}
-2\delta \sum_{n\in{\mathbb{Z}}} |\psi_n|^2+4\sum_{n\in{\mathbb{Z}}}|\psi_n||g_n|\le 0.\label{eq:suff}
\end{equation}
With the aid of Young's inequality and the continuous embedding $l^2\subset l^4$ one derives
\begin{equation}
\sum_{n\in{\mathbb{Z}}}|\psi_n||g_n|\le \frac{3}{4}\sum_{n\in{\mathbb{Z}}}|g_n|^{4/3}+\frac{1}{4}\sum_{n\in{\mathbb{Z}}}|\psi_n|^4\le \frac{3}{4}\sum_{n\in{\mathbb{Z}}}|g_n|^{4/3}+\frac{1}{4}\parallel \psi\parallel_{l^2}^4,\nonumber
\end{equation}
so that one obtains the following sufficient condition
\begin{equation}
-2\delta \parallel \psi\parallel_{l^2}^2+\parallel \psi\parallel_{l^2}^4+3\sum_{n\in{\mathbb{Z}}}|g_n|^{4/3}\le 0,\label{eq:ineqglobal}
\end{equation}
for the relation (\ref{eq:suff}) to hold.
If condition (\ref{eq:conditionglobal}) holds, then the inequality (\ref{eq:ineqglobal}) is satisfied for any $\parallel \psi\parallel_{l^2}^2 \in {\mathbb{R}_+}$, so that $\dot{P}(t)\le 0$, which yields the uniform bound (\ref{eq:uniformbound}).
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
Now we state the main assertion:
\begin{lemma}
{\it \,\,Assume that the hypothesis (\ref{eq:conditionglobal}) holds and that
\begin{equation}
0<\parallel \psi(0)\parallel_{l^2}^2\le R_{\mu}^2<\frac{\delta}{4\mu}.\label{eq:ALbound}
\end{equation}
Let $(g_n)_{n\in {\mathbb{Z}}}=g \in l^2$ and $(\psi_n(0))_{n\in {\mathbb{Z}}}=\psi_0$. For the dynamical system determined by (\ref{eq:AL}),(\ref{eq:icsAL})
\begin{equation}
S_{\mu}(t):\,\psi_0\in l^2\rightarrow \psi(t)\in l^2,
\end{equation}
there exists a bounded absorbing set $B_{r}$ in $l^2$, that is, for every set $B\subset B_{R_\mu}$ of $l^2$, there is a number $t_0(B,B_{r})>0$ such that
$S_{\mu}(t)B\subset B_{r}$ for all $t\ge t_0(B,B_{r})$.}
\label{Lemma:absorbingmu}
\end{lemma}
\noindent{\bf Proof:} With the assumption (\ref{eq:ALbound}) we estimate
\begin{eqnarray}
\frac{d}{dt}\sum_{n\in {\mathbb{Z}}}|\psi_n|^2&=&
\sum_{n\in {\mathbb{Z}}}\left(
i\,\mu |\psi_n|^2 [\left(\overline{\psi}_{n+1}+\overline{\psi}_{n-1}\right)\psi_{n}-
\left({\psi}_{n+1}+{\psi}_{n-1}\right)\overline{\psi}_{n}
] +i\,(\overline{g}_n\psi_n-g_n\overline{\psi}_n)-2\delta |\psi_n|^2 \right)\nonumber\\
&\le&4\mu \sup_{t\ge 0}\sup_{n\in{\mathbb{Z}}}|\psi_n(t)|^2
\parallel \psi(t)\parallel_{l^2}^2-2\delta \parallel \psi(t)\parallel_{l^2}^2+4\parallel g\parallel_{l^2}\parallel \psi(t)\parallel_{l^2}\nonumber\\
&\le& 4\mu \sup_{t\ge 0}\parallel\psi(t)\parallel_{l^\infty}^2
\parallel \psi(t)\parallel_{l^2}^2-2\delta \parallel \psi(t)\parallel_{l^2}^2+4\parallel g\parallel_{l^2}\parallel \psi(t)\parallel_{l^2}\nonumber\\
&\le& 4\mu \parallel\psi(0)\parallel_{l^2}^2
\parallel \psi(t)\parallel_{l^2}^2-2\delta \parallel \psi(t)\parallel_{l^2}^2+4\parallel g\parallel_{l^2}\parallel \psi(t)\parallel_{l^2}\nonumber\\
&\le& 4\mu R_{\mu}^2 \parallel \psi(t)\parallel_{l^2}^2-2\delta \parallel \psi(t)\parallel_{l^2}^2+4\parallel g\parallel_{l^2}\parallel \psi(t)\parallel_{l^2}.\label{eq:estimateattractorpsi}
\end{eqnarray}
Using Young's inequality for the last term on the right side of (\ref{eq:estimateattractorpsi})
\begin{equation}
4\parallel g\parallel_{l^2}\parallel \psi(t)\parallel_{l^2}\le
\frac{4}{\delta}\parallel g\parallel_{l^2}^2
+ \delta \parallel \psi(t)\parallel_{l^2}^2,
\end{equation}
we have
\begin{equation}
\frac{d}{dt}\parallel \psi(t)\parallel_{l^2}^2 +2\delta \parallel \psi(t)\parallel_{l^2}^2\le (4\mu R_{\mu}^2 +\delta)\parallel \psi(t)\parallel_{l^2}^2+\frac{4}{\delta}\parallel g\parallel_{l^2}^2,\nonumber
\end{equation}
so that
\begin{equation}
\frac{d}{dt}\parallel \psi(t)\parallel_{l^2}^2 +\left(\delta-4\mu R_{\mu}^2\right) \parallel \psi(t)\parallel_{l^2}^2\le \frac{4}{\delta}\parallel g\parallel_{l^2}^2,\label{eq:psiestimate}
\end{equation}
ensuring, for $0<R_{\mu}^2<\delta/(4\mu)$, that $\psi \in L^{\infty}([0,\infty),l^2)$.
Applying Gronwall's inequality gives:
\begin{equation}
\parallel \psi(t)\parallel_{l^2}^2\le \parallel \psi(0)\parallel_{l^2}^2\exp(-(\delta-4\mu R_{\mu}^2)t)+\frac{4}{\delta}\frac{\parallel g\parallel_{l^2}^2}{\delta-4\mu R_{\mu}^2}\left(1-\exp(-(\delta-4\mu R_{\mu}^2)t)\right).\label{eq:psiGronwall}
\end{equation}
In the asymptotic regime $t\rightarrow \infty$ this leads to
\begin{equation}
\limsup_{t \rightarrow \infty} \parallel \psi(t)\parallel_{l^2}^2\le \frac{4}{\delta}\frac{\parallel g\parallel_{l^2}^2}{\delta-4\mu R_{\mu}^2}.\nonumber
\end{equation}
Defining
\begin{equation}
\rho^2=\frac{4}{\delta}\frac{\parallel g\parallel_{l^2}^2}{\delta-4\mu R_{\mu}^2},\nonumber
\end{equation}
we observe that for any number $r$, satisfying $R_{\mu}>r>\rho$, the ball $B_{r}$ of $l^2$ is an absorbing set for the semigroup $S_{\mu}(t):$ That is, for a set
$B\subset B_{R_{\mu}}$
it follows that, for $t\ge t_0(B,B_{r})$, where
\begin{equation}
t_0=\frac{1}{\delta-4\mu R_{\mu}^2}\log\left(\frac{R_{\mu}^2-\rho^2}{{r}^2-\rho^2}\right),\nonumber
\end{equation}
one has $\parallel \psi(t)\parallel_{l^2}^2\le r^2$, i.e.
$S_{\mu}(t)B\subset B_{r}$.
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
\vspace{0.5cm}
Note that, although we ensured in section \ref{subsection:existence} the global existence of a unique solution to the dfAL in $l^2$, the nonlocal feature of the nonlinear term of the dfAL
allows us to show the existence of an absorbing set only when the sufficient condition (\ref{eq:ALbound}) is satisfied, with the effect that all solutions are contained
in the ball $B_{R_{\mu}}$ of $l^2$ for all $t\ge 0$ (cf. Eq. (\ref{eq:estimateattractorpsi})).
\subsection{Asymptotic compactness of the semigroup $\left\{S_{\mu}(t)\right\}_{t\ge 0}$}
Here we verify that the semigroup $\left\{S_{\mu}(t)\right\}_{t\ge 0}$ associated with the dfAL (\ref{eq:AL}),(\ref{eq:icsAL}) possesses the asymptotic tail end property.
\begin{lemma}
{\it \,\, } Let $(\psi_n(0))_{n\in {\mathbb{Z}}}=\psi_0 \in B$, where $B\subset B_{R_{\mu}}$ is a bounded set of $l^2$,
and $(g_n)_{n\in {\mathbb{Z}}}=g \in l^2$. For any $\xi>0$ there exist
$T(\xi)$ and $M(\xi)$ such that the solution $\psi(t)$ of (\ref{eq:AL}),(\ref{eq:icsAL}) satisfies for all $t\ge T(\xi)$:
\begin{equation}
\sum_{|n|> 2K}|\psi_n(t)|^2\le \xi\,\,\,\,{\rm for\, any\,\,\,\,} K>M(\xi).\label{eq:asymptotic}
\end{equation}
\label{Lemma:asymtailmu}
\end{lemma}
\noindent{\bf Proof:} For a contradiction let us suppose that this assertion is not true,
i.e. there is an $\epsilon_0>0$ and a subsequence $(n_k)_{k \in {\mathbb{Z}}}$ of $\mathbb{Z}$ such that
\begin{equation}
\sum_{|k|>m}|\psi_{n_k}(t)|^2 \ge \epsilon_0,\,\,\,\forall t\ge 0,\,\,\,{\rm for\,\, any}\,\, m\in {\mathbb{N}}.\nonumber
\end{equation}
Using (\ref{eq:psiestimate}) we have
\begin{equation}
\frac{d}{dt}\sum_{|k|>m}|\psi_{n_k}(t)|^2 +\left[\delta-4\mu R_{\mu}^2\right] \sum_{|k|>m}|\psi_{n_k}(t)|^2\le \frac{4}{\delta} \sum_{|k|>m}|g_{n_k}|^2,\nonumber
\end{equation}
and Gronwall's inequality gives
\begin{equation}
\sum_{|k|>m}|\psi_{n_k}(t)|^2 \le \sum_{|k|>m}|\psi_{n_k}(0)|^2 \exp(-(\delta-4\mu R_{\mu}^2)t)+\frac{4}{\delta}\frac{\sum_{|k|>m} |g_{n_k}|^2}{\delta-4\mu R_{\mu}^2}\left(1-\exp(-(\delta-4\mu R_{\mu}^2)t)\right).\label{eq:partialsum}
\end{equation}
Letting $t\rightarrow\infty$ in (\ref{eq:partialsum}) we infer
\begin{equation}
\epsilon_0\le \limsup_{t\rightarrow \infty}\sum_{|k|>m}|\psi_{n_k}(t)|^2 \le
\frac{4}{\delta}\frac{\sum_{|k|>m} |g_{n_k}|^2}{\delta-4\mu R_{\mu}^2},\,\,\,{\rm for\,\, any}\,\, m\in {\mathbb{N}}.\nonumber
\end{equation}
Therefore, for every $m\in {\mathbb{N}}$, $\sum_{|k|>m}|g_{n_k}|^2\ge C(\delta,\mu,R_{\mu})\cdot \epsilon_0>0$, which contradicts the fact that for $(g_n)_{n\in {\mathbb{Z}}}=g$ with $\parallel g \parallel_{l^2}<\infty$, the tail sums $\sum_{|k|>m}|g_{n_k}|^2$ must vanish as $m\rightarrow\infty$.
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
\vspace*{0.5cm}
\noindent{\bf Definition:} The semigroup $\left\{S_{\mu}(t)\right\}_{t\ge 0}$ is said to be asymptotically compact in $l^2$ if, for any bounded $B\subset B_{R_{\mu}} \subset l^2$, and any sequence $\left\{t_n\right\}$, $\left\{\phi_n\right\}$ with $t_n\ge 0$, $t_n \rightarrow \infty$ as $n\rightarrow \infty$, and $\phi_n \in B\subset B_{R_{\mu}}$, the sequence $\left\{S_{\mu}(t_n)\phi_n\right\} $ is relatively compact in $l^2$.
\vspace{0.5cm}
\begin{proposition}
{\it \,\,Under the same conditions of Lemma \ref{Lemma:asymtailmu} the semigroup $\left\{S_{\mu}(t)\right\}_{t\ge 0}$ is asymptotically compact.}
\label{Proposition:asymcompmu}
\end{proposition}
\noindent{\bf Proof:}
By contradiction: Suppose that every subsequence of $\psi^n(t_n)=S_{\mu}(t_n)\psi_0^n \in B$ diverges in $l^2$ as $t_n \rightarrow \infty$ (equivalently, no sequence $\psi^n(t_n)=S_{\mu}(t_n)\psi_0^n$ has a convergent subsequence in $l^2$ as $t_n \rightarrow \infty$).
Take any subsequence $\psi^{n_k}(t_{n_k})=S_{\mu}(t_{n_k})\psi_0^{n_k} \in B$, $k\in {\mathbb{N}}$. Suppose $\parallel \psi^{n_k}(t_{n_k}) \parallel_{l^2}^2 \rightarrow \infty$ as $t_{n_k}\rightarrow \infty$, that is, $k\rightarrow \infty$.
Then for any $M>0$ there are only finitely many $k$ such that
\begin{equation}
\parallel \psi^{n_k}(t_{n_k}) \parallel_{l^2}^2\le M.\label{eq:Mk}
\end{equation}
Denote all values of $k$ for which (\ref{eq:Mk}) is satisfied by $k_1,...,k_m$. Setting $N_M=\max\left\{k_1,...,k_m\right\}+1$, then for any $k>N_M$ it holds that
\begin{equation}
\parallel \psi^{n_k}(t_{n_k}) \parallel_{l^2}^2=\sum_{i\in {\mathbb{Z}}}|\psi^{n_k}_i(t_{n_{k}})|^2>M,\,\,\,\forall k>N_M.\label{eq:normpsiM}
\end{equation}
We split the infinite sum in (\ref{eq:normpsiM}) as
\begin{equation}
\sum_{i\in {\mathbb{Z}}}|\psi^{n_k}_i(t_{n_{k}})|^2=\sum_{|i|>2L}|\psi^{n_k}_i(t_{n_{k}})|^2+\sum_{|i|\le 2L}|\psi^{n_k}_i(t_{n_{k}})|^2
>M,\,\,\,\forall k>N_M,\label{eq:normpsiM1}
\end{equation}
for any fixed $0<L<\infty$. The finite sum satisfies $\sum_{|i|\le 2L}|\psi^{n_k}_i(t_{n_{k}})|^2<M_L$ for some $M_L>0$; choosing $M>M_L$ implies that
\begin{equation}
\sum_{|i|> 2L}|\psi^{n_k}_i(t_{n_{k}})|^2
>M-M_L>0,\,\,\,\forall k>N_M.\label{eq:normpsiM2}
\end{equation}
Since the relation (\ref{eq:normpsiM2}) holds for all
$t_{n_k}$ with $k> N_M$, for all $M>0$ and any $L>0$, it contradicts the asymptotic tail end property of $\left\{S_{\mu}(t)\right\}_{t\ge 0}$ as established by Lemma \ref{Lemma:asymtailmu}.
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
Finally, combining Proposition \ref{Proposition:asymcompmu} with Theorem 1.1 in \cite{Temam}, we are now able to state the main result of this section.
\begin{theorem}
{\it \,\,
The semigroup $\left\{S_{\mu}(t)\right\}_{t\ge 0}$ associated with the dfAL (\ref{eq:AL}),(\ref{eq:icsAL}) possesses a unique restricted global attractor ${\cal A}_{\mu}\subset B_{r}\subset l^2$.}\label{Theorem:attractor AL}
\end{theorem}
\vspace{0.5cm}
\section{Existence of a global attractor for the semigroup $\left\{S_{\gamma}(t)\right\}_{t\ge 0}$}
Here we recall some results regarding the existence of an absorbing set in $l^2$ for the dynamical system belonging to the dfDNLS (\ref{eq:DNLS}),(\ref{eq:icsDNLS}) in the asymptotic regime $t \rightarrow \infty$ (see also \cite{Nikos}).
\begin{lemma}
{\it \,\,
Let $(g_n)_{n\in {\mathbb{Z}}}=g \in l^2$ and $(\phi_n(0))_{n\in {\mathbb{Z}}}=\phi_0$. For the dynamical system determined by (\ref{eq:DNLS}),(\ref{eq:icsDNLS})
\begin{equation}
S_{\gamma}(t):\,\phi_0\in l^2\rightarrow \phi(t)\in l^2
\end{equation}
there exists a bounded absorbing set $B_{\tilde{r}}$ in $l^2$, that is, for every bounded set $B$ of $l^2$, there is a $t_0(B,B_{\tilde{r}})$ such that
$S_{\gamma}(t)B\subset B_{\tilde{r}}$ for all $t\ge t_0(B,B_{\tilde{r}})$.
Furthermore, if
\begin{equation}
\frac{2}{\delta}\parallel g\parallel_{l^2}\le \parallel \phi_0\parallel_{l^2},\label{eq:conddeltaphi0}
\end{equation}
then it holds
\begin{equation}
\parallel \phi(t)\parallel_{l^2}\le \parallel \phi_0\parallel_{l^2},\,\,\,\forall t\ge 0.\label{eq:phitbelow}
\end{equation}
\label{Lemma:asymboundgamma}}
\end{lemma}
\noindent{\bf Proof:} We derive for the dfDNLS
\begin{eqnarray}
\frac{d}{dt}\sum_{n\in {\mathbb{Z}}}|\phi_n|^2&=&
\sum_{n\in {\mathbb{Z}}}\left( i\,(\overline{g}_n\phi_n-g_n\overline{\phi}_n)-2\delta |\phi_n|^2 \right)\nonumber\\
&\le& -2\delta \parallel \phi(t)\parallel_{l^2}^2+4\parallel g\parallel_{l^2}\parallel \phi(t)\parallel_{l^2}.\label{eq:estimateattractorphi}
\end{eqnarray}
This gives
\begin{equation}
\frac{d}{dt}\parallel \phi(t)\parallel_{l^2}^2 +\delta \parallel \phi(t)\parallel_{l^2}^2\le \frac{4}{\delta}\parallel g\parallel_{l^2}^2,\nonumber
\end{equation}
from which with the use of Gronwall's inequality we obtain:
\begin{equation}
\parallel \phi(t)\parallel_{l^2}^2\le \parallel \phi(0)\parallel_{l^2}^2\exp(-\delta t)+\frac{4}{\delta^2}\parallel g\parallel_{l^2}^2\left(1-\exp(-\delta t)\right).\label{eq:phit}
\end{equation}
Asymptotically for $t\rightarrow \infty$ it results that
\begin{equation}
\limsup_{t \rightarrow \infty} \parallel \phi(t)\parallel_{l^2}^2\le \frac{4}{\delta^2}\parallel g\parallel_{l^2}^2.\nonumber
\end{equation}
Defining
\begin{equation}
\tilde{\rho}^2=\frac{4}{\delta^2}\parallel g\parallel_{l^2}^2,\nonumber
\end{equation}
we observe that for any number $\tilde{r}>\tilde{\rho}$ the ball $B_{\tilde{r}}\subset l^2$
is an absorbing set for the semigroup $S_{\gamma}(t).$ That is, if $B$ is a bounded set of $l^2$ included in a ball $B_R$
it follows that for $t\ge t_0(B,B_{\tilde{r}})$ where
\begin{equation}
t_0=\frac{1}{\delta}\log\left(\frac{R^2-\tilde{\rho}^2}{\tilde{r}^2-\tilde{\rho}^2}\right),\nonumber
\end{equation}
one has $\parallel \phi(t)\parallel_{l^2}^2\le \tilde{r}^2$, that is,
$S_{\gamma}(t)B\subset B_{\tilde{r}}$.
Finally, under condition (\ref{eq:conddeltaphi0}) one has $\frac{4}{\delta^2}\parallel g\parallel_{l^2}^2\le \parallel \phi(0)\parallel_{l^2}^2$, so that from the relations
\begin{equation}
\parallel \phi(t)\parallel_{l^2}^2\le \parallel \phi(0)\parallel_{l^2}^2\exp(-\delta t)+\frac{4}{\delta^2}\parallel g\parallel_{l^2}^2\left(1-\exp(-\delta t)\right)\le \parallel \phi(0)\parallel_{l^2}^2\nonumber
\end{equation}
one obtains (\ref{eq:phitbelow}).
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
\vspace*{0.5cm}
Concerning the asymptotic tail end property, for
the dfDNLS (\ref{eq:DNLS}),(\ref{eq:icsDNLS}) we have the following lemma.
\vspace*{0.5cm}
\begin{lemma}
{\it \,\, Let $(\phi_n(0))_{n\in {\mathbb{Z}}}=\phi_0 \in B$, where $B$ is a bounded set of $l^2$ and
$(g_n)_{n\in {\mathbb{Z}}}=g \in l^2$. For any $\xi>0$ there exist
$T(\xi)$ and $M(\xi)$ such that the solution $\phi(t)$ of (\ref{eq:DNLS}),(\ref{eq:icsDNLS}) satisfies for all $t\ge T(\xi)$:
\begin{equation}
\sum_{|n|\ge 2K}|\phi_n(t)|^2\le \xi\,\,\,\,{\rm for\, any\,\,\,\,} K>M(\xi).\label{eq:asymptoticDNLS}
\end{equation}}\label{Lemma:asymtailgamma}
\end{lemma}
\noindent{\bf Definition:} The semigroup $\left\{S_{\gamma}(t)\right\}_{t\ge 0}$ is said to be asymptotically compact in $l^2$ if, for any bounded $B \subset l^2$, and any sequence $\left\{t_n\right\}$, $\left\{\phi_n\right\}$ with $t_n\ge 0$, $t_n \rightarrow \infty$ as $n\rightarrow \infty$, and $\phi_n \in B$, the sequence $\left\{S_{\gamma}(t_n)\phi_n\right\} $ is relatively compact in $l^2$.
\begin{proposition}
{\it \,\,Under the same conditions of Lemma \ref{Lemma:asymtailgamma}, the semigroup $\left\{S_{\gamma}(t)\right\}_{t\ge 0}$ is asymptotically compact.}
\label{Proposition:asymcompgamma}
\end{proposition}
The proofs of Lemma \ref{Lemma:asymtailgamma} and Proposition \ref{Proposition:asymcompgamma} proceed in a similar manner to the corresponding proofs for the dfAL and are omitted here.
In conclusion, by virtue of Proposition \ref{Proposition:asymcompgamma} and Theorem 1.1 in \cite{Temam}, we have:
\begin{theorem}
{\it \,\,
The semigroup $\left\{S_{\gamma}(t)\right\}_{t\ge 0}$ attributed to the dfDNLS (\ref{eq:DNLS}),(\ref{eq:icsDNLS}) has a unique global attractor ${\cal A}_{\gamma}\subset B_{\tilde{r}}\subset l^2$.}\label{Theorem:attractorDNLS}
\end{theorem}
\vspace{0.5cm}
\section{Congruence of the attractors ${\cal A}_{\mu}$ and ${\cal A}_{\gamma}$}
Finally, we establish the congruence of the attractors ${\cal A}_{\mu}$ and ${\cal A}_{\gamma}$, where we assume the following:\\
(I) $\psi_0=\phi_0 \in B\subset B_{R_{\mu}}$, \\
(II) hypothesis (\ref{eq:Cphi}) of Theorem \ref{Theorem:closeness} holds with $K_{\phi}\epsilon \le R_{\mu}$,\\
(III) the conditions (\ref{eq:conditionglobal}) and (\ref{eq:ALbound}) are satisfied.
Notice that (I)-(II) confine not only $\psi(t)=S_{\mu}(t)\psi_0$, but also $\phi(t)=S_{\gamma}(t)\phi_0$ to $B_{R_{\mu}}$ for all $t\ge 0$.
\vspace*{0.5cm}
\begin{theorem}
{\it \,\,Let assumptions (I)-(III) above hold. Then the attractors
${\cal A}_{\mu}$ and ${\cal A}_{\gamma}$ coincide according to
\begin{equation}
{\rm dist}\left({\cal A}_{\mu},{\cal A}_{\gamma}\right)=0.
\end{equation}\label{Theorem:congruence}}
\end{theorem}
\noindent{\bf Proof:}
For any bounded subset $B\subset l^2$, it holds that
\begin{eqnarray}
{\rm dist}({\cal A}_{\mu},{\cal A}_{\gamma})&\le& {\rm dist}({\cal A}_{\mu},S_{\mu}(t)B)+ {\rm dist}(S_{\mu}(t)B,S_{\gamma}(t)B)+ {\rm dist}(S_{\gamma}(t)B,{\cal A}_{\gamma}).\nonumber
\end{eqnarray}
As ${\cal A}_{\mu}$ attracts any bounded set $B\subseteq B_{R_{\mu}}\subset l^2$, for
any $\xi >0$, there is some $T_{\mu}(\xi)>0$ such that
\begin{equation}
{\rm dist}\left({\cal A}_{\mu},S_{\mu}(t)B\right)=\sup_{a\in
{\cal A}_{\mu}}\,\inf_{\psi_0 \in B} {\rm dist}\left(a,S_{\mu}(t)\psi_0\right)_{l^2} <\frac{\xi}{3},\,\,\,\forall t\ge T_{\mu}. \label{eq:ineq1}
\end{equation}
Analogously, as ${\cal A}_{\gamma}$ attracts any bounded set $B\subseteq B_{R_{\mu}}\subset l^2$ (${\cal A}_{\gamma}$ actually attracts any bounded set in $l^2$ anyway),
for
any $\xi >0$, there is some $T_{\gamma}(\xi)>0$ such that
\begin{equation}
{\rm dist}\left(S_{\gamma}(t)B,{\cal A}_{\gamma}\right)
=\sup_{\phi_0 \in B}\,\inf_{a\in
{\cal A}_{\gamma}} {\rm dist}\left(S_{\gamma}(t)\phi_0,a\right)_{l^2}
<\frac{\xi}{3},\,\,\,\forall t\ge T_{\gamma}. \label{eq:ineq2}
\end{equation}
Let $\overline{T}=\max\{T_{\mu}(\xi),T_{\gamma}(\xi)\}$ and consider the time interval $[0,t_0]$ with $t_0\ge \overline{T}$.
In light of Theorem \ref{Theorem:closeness} we have that for every $t_0>0$, there exists a small $\epsilon_0>0$ and some $C>0$ such that
for every $\epsilon \in (0,\epsilon_0)$ and for all
$\psi_0=\phi_0 \in B \subseteq B_{R_{\mu}}$, fulfilling hypothesis (II), it holds that for every $t\in [0,t_0]$
\begin{equation}
{\rm dist}(S_{\mu}(t)B,S_{\gamma}(t)B)\le \sup_{\psi_0 \in B} {\rm dist}_{l^2}\left(S_{\mu}(t)\psi_0,S_{\gamma}(t)\psi_0\right)
<C\cdot {\epsilon^3}.\label{eq:ineq3}
\end{equation}
Combining (\ref{eq:ineq1}),(\ref{eq:ineq2}) and (\ref{eq:ineq3})
we get
\begin{eqnarray}
0\le {\rm dist}\left({\cal A}_{\mu},{\cal A}_{\gamma}\right)&\le&
\frac{\xi}{3}+C\epsilon^3+\frac{\xi}{3}=\frac{2\xi}{3}+C\epsilon^3,\,\,\,\forall \xi>0,\,\,\,\,\,\,\forall \epsilon\in (0,\epsilon_0).\nonumber
\end{eqnarray}
Choosing $\epsilon_0$ such that $C\epsilon_0^3\le \xi/3$ one gets
\begin{equation}
{\rm dist}\left({\cal A}_{\mu},{\cal A}_{\gamma}\right)< \xi,\nonumber
\end{equation}
from which by the arbitrariness of $\xi$ it follows that
${\rm dist}\left({\cal A}_{\mu},{\cal A}_{\gamma}\right)=0$,
and the proof is finished.
\vspace*{0.5cm}
\hspace{16.5cm} $\square$
\section{Outlook}\label{section:outlook}
Finally, as an outlook on future studies,
regarding the analytical closeness results
there remains the problem of obtaining estimates in our statements that hold uniformly for any finite time.
Moreover, extensions of the main closeness results to higher dimensional lattices $\mathbb{Z}^N$, for $N\geq 2$ and for generalized nonlinearities are of interest. As examples, we will consider the closeness of the solutions of higher dimensional discrete nonlinear Schr\"odinger lattices with generalized power and saturable nonlinearity, to those of the $N$-dimensional generalization of the AL lattice \cite{trio}.
Another aspect is the persistence of localised wave forms, supplied by the analytical solutions of the AL equation:\\
(i) under the impact of forcing and damping in the AL equation itself,\\
(ii) and in other (nonintegrable) discrete nonlinear Schr\"odinger equations in the conservative and unforced limit as well as with the inclusion of damping and forcing. The
corresponding closeness theorems can be formulated along the lines given in this paper. Especially in view of applications, the persistence of soliton solutions in (damped and forced) nonintegrable discrete nonlinear Schr\"odinger equations plays an important role \cite{trio}.
Furthermore, utilising the tools provided in this manuscript,
the
asymptotic features of different discrete versions of forced and damped continuum Ginzburg-Landau (GL) equations
represented in combined form by
\begin{equation}
\frac{d u_n}{dt}=u_n+(1+i\epsilon)
(u_{n+1}-2u_n+u_{n-1})-(1+i\epsilon)\,|u_{n}|^2(\gamma u_n+\mu (u_{n+1}+u_{n-1})),\,\,\,n\in {\mathbb{Z}}\label{eq:gGLE}
\end{equation}
can be explored. For $\gamma=0,\ \mu\ne 0$ ($\gamma \neq 0,\ \mu=0$), a discrete GL equation with a nonlocal (local) nonlinear term results.
Application of our analytical closeness and congruence methods rigorously links these discrete GL equations (\ref{eq:gGLE})
and their associated discrete nonlinear Schr\"odinger counterparts (the dfAL and dfDNLS with $\kappa=-1$ in (\ref{eq:AL}) and (\ref{eq:DNLS}), respectively) arising in the limit $\epsilon \rightarrow 0^+$ from (\ref{eq:gGLE}). In particular a continuity statement can be formulated proving that the solutions of the GL equations converge to those of the DNLSs.
Furthermore, with respect to the global attractor congruence results the limit behaviour of a global attractor of a discrete GL equation can be treated by proving its upper semicontinuity in the {\it inviscid} limit, that is
as $\epsilon \rightarrow 0^+$ \cite{duo}.
In addition, in a similar way as represented in this paper for the two discrete nonlinear Schr\"odinger equations the congruence of the global attractors for the nonlocal GL equation and its local GL counterpart can be demonstrated \cite{prepDirk}.
\vspace*{0.5cm}
\centerline{{\bf Acknowledgement}}
I am very grateful to Nikos I. Karachalios for many fruitful discussions.
|
2,869,038,155,049 | arxiv | \section{Motivation and significance}
System identification is the process through which models are built directly from input and output data. The first step in this process is to choose the mathematical representation that will fit the data. In this sense, the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) models \cite{leontaritis1985input} are of great interest due to their flexibility and representation capabilities. They are composed of lagged input, output and prediction errors. The key point in working with such models is the selection of the appropriate structure, which must be as simple as possible, but sufficiently complex to capture the dynamics underlying the data \cite{aguirre2009}.
The Forward Regression Orthogonal Estimator (FROE) \cite{billings1989identification} is a standard algorithm to perform the structure selection of NARX models. It is based on the Error Reduction Ratio (ERR) measure, which evaluates how good each single model term is in explaining the output data variance. There are some important disadvantages in the use of this technique: it may fail to select the correct structure for certain classes of input signals \cite{piroddi2003}; and it suffers from the \textit{curse of dimensionality} as the degree of nonlinearity and the maximum lags increase. To circumvent these problems, one can resort to Evolutionary Algorithms (EA), such as Genetic Algorithms (GA) \cite{holland1975,goldberg1988}, Genetic Programming (GP) \cite{koza1992} and Multi-Gene Genetic Programming (MGGP) \cite{hinchliffe1996,hinchliffe2001,hinchliffe2003}. Recently, several modeling and forecasting works have been developed with the use of MGGP (as in \cite{ghareeb2013,mehr2017,safari2018,madvar2019}). The algorithm has shown itself to be very flexible and to present good performance.
To the best of the authors' knowledge, the only available toolbox capable of performing MGGP optimization is GPTIPS \cite{searson2010gptips}, a machine learning platform for MATLAB focused on symbolic regression. Unfortunately, it is restricted to those who own the corresponding licenses, which hinders contributions from the community.
This paper presents the \textbf{mggp} package in \textbf{python}. In addition to the MGGP evolutionary algorithm framework, we encapsulate basic methods to work with NARMAX models, i.e. parameter estimation, model simulation and validation methods; so that the user can easily work on the evaluation function to be optimized.
\section{Software description}
\sloppypar{
The current toolbox is focused on the structure selection of NARX/NARMAX models using an evolutionary algorithm called MGGP. To structure the adequate framework that is representative of such models, we attempted to develop a well organized Object Oriented program that comprises all necessary tools to perform the task, i.e. model coding (representation), basic parameter estimation, simulation and validation methods, and the aimed structure selection algorithm.
}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth]{MGGP_representation}
\caption{MGGP individual representation. It is a sequence of genetic programs used as basis functions. The linear combination of these programs represents the mathematical model. A single gene, representing the mathematical function $2x_2 + x_1$, is highlighted.}
\label{fig:mggp_representation}
\end{figure}
\subsection{Representation}
In NARMAX models, the current output is obtained from past input-output and residual signals, as follows:
\begin{equation}
\small
\label{eq:narx}
\begin{split}
y[k]=\mathit{F}^l(&y[k-1],...,y[k-n_y],u[k-1],...,\\
&u[k-n_u],\xi[k-1]...\xi[k-n_\xi]) + \xi[k],
\end{split}
\end{equation}
where $F[\cdot]$ is a nonlinear function; $y[k]$, $u[k]$ and $\xi[k]$ are the output signal, input signal and residual vector, respectively; and $n_y$, $n_u$ and $n_\xi$ are their respective maximum lags. These models are extensions of NARX models in which residual terms are included to remove parameter bias. In the case of \textit{polynomial} models, the nonlinear function is a polynomial function of degree $l$ ($F^l$). The MGGP representation fits NARMAX models naturally, since it is the linear combination of separate basis functions:
\begin{equation}
\small
g(\varphi,\Theta)=\sum^m_{i=1}\theta_ig_i(\varphi),
\end{equation}
where $ m $ is the number of basis functions, $ g_i $ represents individual functions and $ \theta_i $ the model parameters. It can be seen as a GA individual \cite{holland1975,goldberg1988} in which each \textit{gene} contains one GP individual \citep{koza1992,eiben2003,poli2008} as the basis function (see more in \cite{hinchliffe2001} and \cite{hinchliffe2003}). Figure \ref{fig:mggp_representation} graphically presents an MGGP individual representation as a sequence of GP individuals, which are tree representations of mathematical functions. To codify this representation we use the GA and GP frameworks available in the \textit{Distributed Evolutionary Algorithms in Python} (DEAP) \cite{deap2012} package.
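To make the linear-in-the-parameters structure above concrete, consider the following minimal sketch (illustrative only, not the internal code of the toolbox; the gene callables and their signature are hypothetical), in which the regressor matrix is assembled from the genes and the parameters $\theta_i$ are estimated by ordinary least squares:
\begin{lstlisting}
import numpy as np

def fit_multigene(genes, y, u):
    # genes: list of callables g_i(y, u), each returning the time series
    # of one basis function (hypothetical interface, for illustration)
    Phi = np.column_stack([g(y, u) for g in genes])
    # linear parameters theta_i via ordinary least squares
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta, Phi @ theta  # parameters and model output
\end{lstlisting}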
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth]{MGGP_flowchart}
\caption{Mono-objective MGGP optimization algorithm flowchart.}
\label{fig:mggp_flowchart}
\end{figure}
\subsection{MGGP algorithm}
As an EA, the MGGP possesses a standard behavior. It begins with an initial population of random individuals (chromosomes) and at each generation (main loop) the best solutions are sought (\textit{selection}), then combined (\textit{recombination/reproduction/crossover} and \textit{mutation}) in order to generate even better individuals. The MGGP works with two levels of crossover: \textit{low-level crossover} and \textit{high-level crossover}. In the former, one gene is randomly selected from each parent individual and they exchange GP subtrees as genetic material. In the latter, the genetic materials are exchanged as entire basis functions in a way similar to GA one-point crossover. We include two kinds of mutation in the algorithm: an inner mutation, which occurs as a GP subtree mutation, and an outer mutation, which swaps a gene for a new one, with an entirely new basis function (see \cite{poli2008} for details on GP genetic operators).
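As an illustration of the gene-level exchange, the sketch below implements the high-level one-point crossover on individuals treated as plain lists of genes (a simplified, hypothetical version; the operators in the package act on DEAP objects):
\begin{lstlisting}
import random

def high_level_crossover(parent1, parent2, maxTerms):
    # pick one cut point in each parent's list of genes
    cut1 = random.randint(1, len(parent1))
    cut2 = random.randint(1, len(parent2))
    # exchange entire basis functions across the cut points
    child1 = parent1[:cut1] + parent2[cut2:]
    child2 = parent2[:cut2] + parent1[cut1:]
    # enforce the maximum number of genes per individual
    return child1[:maxTerms], child2[:maxTerms]
\end{lstlisting}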
Figure \ref{fig:mggp_flowchart} exhibits a flowchart of the toolbox's mono-objective MGGP algorithm. It begins with an initial population that is evaluated. Then, the generation loop starts: \textit{i)} the parent individuals are selected via tournament, \textit{ii)} each parent couple has a chance to be recombined (\textit{CXPB}), \textit{iii)} each individual that has not been recombined has a chance to be mutated (\textit{MTPB}), \textit{iv)} the individuals are evaluated, and \textit{v)} the elitism operator is applied. The evaluation function shall be customized by the user.
The MGGP multi-objective evolution uses the NSGA2 framework \cite{Deb2000}. It differs from the mono-objective optimization in the tournament and selection of the next generation (elitism) which must observe the Pareto Optimal Set.
\subsection{Computational implementation}
The \textbf{mggp} package, available on GitHub for pull requests\footnote{https://github.com/CastroHc/MGGP} and in the PyPI repository for installation\footnote{pip install mggp}, is composed of two classes: the \textit{mggpElement class} and the \textit{mggpEvolver class}.
An \textit{mggpElement} object is responsible for carrying the attributes and methods used to create, simulate and evaluate individuals from an MGGP population. This class is able to build single-input single-output (SISO) and multiple-input single-output (MISO) models and works with NARX and NARMAX representations. It comprises three simulation methods, i.e. one-step-ahead prediction, free-run simulation and multiple-shooting, and their respective mean-squared error (MSE) scores (see \cite{ribeiro_smoothness_2020} for comparison analysis); the Least Squares (LS) and Extended Least Squares (ELS) parameter estimation methods \cite{young1968}; and a standard FROE structure selection method \cite{chen1989orthogonal}. The NARMAX terms, which are GP individuals, are constructed from a set of mathematical functions that can be customized by the user. The \textit{mggpElement} class has a built-in back-shift operator to determine the lag of a term.
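The difference between the first two simulation modes can be summarized by the following sketch, in which the prediction function \textit{f} and its signature are placeholders for illustration and not the package API: one-step-ahead prediction always builds the regressors from measured outputs, whereas free-run simulation feeds back its own predictions.
\begin{lstlisting}
import numpy as np

def one_step_ahead(f, theta, y, u, n_lags):
    yhat = np.array(y, dtype=float)
    for k in range(n_lags, len(y)):
        # regressors built from *measured* past outputs
        yhat[k] = f(y[k-n_lags:k], u[k-n_lags:k], theta)
    return yhat

def free_run(f, theta, y0, u, n_lags):
    yhat = np.zeros(len(u))
    yhat[:n_lags] = y0
    for k in range(n_lags, len(u)):
        # regressors built from *predicted* past outputs
        yhat[k] = f(yhat[k-n_lags:k], u[k-n_lags:k], theta)
    return yhat
\end{lstlisting}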
An \textit{mggpEvolver} object is responsible for executing the evolution of a population. The individuals of this population are created according to the primitive set of mathematical functions defined in the \textit{mggpElement} object. It is able to perform mono- and multi-objective optimizations.
\section{Illustrative Example}
In this section we illustrate how to use the \textbf{mggp} toolbox through a simple example. We will show (i) how to set up an \textit{mggpElement} object, (ii) how to define a specific model and simulate it, (iii) how to build an evaluation function, and (iv) how to set up an \textit{mggpEvolver} object and run the optimization algorithm.
\subsection{Simulate a system using the \textit{mggpElement} class}
Use the \textit{setPset()} method to define the maximum delay of a single back-shift operator as one ($q1$) and the number of variables as two (one input and one output). This way the primitive set is able to define polynomial NARX models (it is already set to use the multiplication function).
\begin{lstlisting}
import numpy as np
from mggp import mggpElement
element = mggpElement()
element.setPset(maxDelay=1,numberOfVariables=2)
element.renameArguments({'ARG0':'y1','ARG1':'u1'})
\end{lstlisting}
Consider a stochastic system given by \cite{piroddi2003}:
\[\small y[k] = 0.75y[k-2] + 0.25u[k-1] - 0.20y[k-2]u[k-1],\]
where $u[k]$ is a White Gaussian Noise with mean of zero and variance one. Use the \textit{createModel()} method to build a model from a list of strings that is representative of the system and then compile it. The model parameters are known.
\begin{lstlisting}
listStrings = ['q1(y1)','u1','mul(q1(y1),u1)']
model = element.createModel(listStrings)
element.compile_model(model)
theta = np.array([0.75,0.25,-0.20])
\end{lstlisting}
Use the \textit{predict\_freeRun()} method to simulate the system from an initial condition and the input vector.
\begin{lstlisting}
u = np.random.normal(loc=0,scale=1,size=(500))
y0 = np.zeros(2)
y = element.predict_freeRun(model,theta,y0,u[:-1])
\end{lstlisting}
\subsection{Define the evaluation function and run the optimization}
The MGGP is an optimization algorithm that minimizes an evaluation (or cost) function. In this sense, the \textit{mggpEvolver} class is set to run an evaluation function that receives the individual to be evaluated as its \textit{only} argument. We present a very simple example in which we seek to minimize the one-step-ahead MSE, with the model parameters estimated via the LS method. Exception handling must be included to keep the program running when a singular-matrix error is raised.
\begin{lstlisting}
def evaluate(ind):
    try:
        element.compile_model(ind)
        theta = element.ls(ind, y, u)
        return element.score_osa(ind, theta, y, u),
    except np.linalg.LinAlgError:
        return np.inf,
\end{lstlisting}
Use the \textit{mggpEvolver} constructor to set up the MGGP parameters. One must define the population size (popSize), the crossover probability (CXPB), the mutation probability (MTPB), the number of generations to be run (n\_gen), the maximum height of the trees that represent model terms (maxHeight), the maximum number of terms an individual may possess (maxTerms), the percentage of individuals from a population to be kept after a generation (elite) and the \textit{mggpElement} object that carries information about individual creation. The evaluation function is sent as an argument of the \textit{run()} method, which returns a log of the fitness history and the resultant elite population (hall of fame - hof).
\begin{lstlisting}
from mggp import mggpEvolver
mggp = mggpEvolver(popSize=500, CXPB=0.9, MTPB=0.1, n_gen=50,
                   maxHeight=3, maxTerms=5, verbose=True, elite=10,
                   element=element)
hof, log = mggp.run(evaluate=evaluate)
model = hof[0]
for term in model:
    print(term)
\end{lstlisting}
\subsection{Multi-objective example}
For a multi-objective optimization, the \textit{mggpElement} must be set to create individuals with the \textit{weights} attribute, which indicates the number of objectives and the type of optimization (maximization/minimization). This is done in the constructor, as follows:
\begin{lstlisting}
from mggp import mggpElement
element = mggpElement(weights=(-1,-1))
element.setPset(maxDelay=1,numberOfVariables=2)
element.renameArguments({'ARG0':'y1','ARG1':'u1'})
\end{lstlisting}
The argument `\textit{weights=(-1,-1)}' defines an \textit{mggpElement} object that minimizes two objectives. Next, define the cost function to minimize the objectives: (i) the one-step-ahead prediction error and (ii) the number of model terms. The \textit{mggpEvolver} object may be initialized as in the previous example. Finally, use the \textit{runMO()} method to execute the optimization:
\begin{lstlisting}
def evaluate(ind):
    try:
        element.compile_model(ind)
        theta = element.ls(ind, y, u)
        return element.score_osa(ind, theta, y, u), len(ind)
    except np.linalg.LinAlgError:
        return np.inf, np.inf

hof, log = mggp.runMO(evaluate=evaluate, popPercent=0.8)
\end{lstlisting}
The argument \textit{popPercent} defines the number of individuals to be kept in the next generation. It is a fraction of \textit{popSize}; that many individuals are selected from the current population plus the resulting offspring, observing the Pareto optimal set.
\section{Impact}
The \textbf{mggp} is an open-source and easy-to-use toolbox that performs the \textit{nonlinear system identification} task via the MGGP algorithm. It allows the automatic construction of NARX/NARMAX models using any mathematical function included in the primitive set of an \textit{mggpElement} object (e.g. exponential, sine, tanh, greater than, etc.). The built-in back-shift operator relieves the user of the responsibility to predetermine maximum term lags and to build the whole set of candidate terms. Further, researchers can customize the evaluation function using their own algorithms that are not encapsulated in the \textit{mggpElement} class. This widens the toolbox's range of application, allowing the inclusion of prior knowledge to evaluate individuals and to set constraints on the parameter values or on the model structure (gray-box identification).
\section{Conclusion}
In the present work a new package in \textbf{python} is introduced that performs system identification using NARMAX models. The package encapsulates basic methods of black-box identification such as LS, ELS and FROE and performs mono- and multi-objective optimizations. Users are free to build their cost functions using any method not encapsulated in the package classes. Thus, a broad range of research in the area of system identification via MGGP is opened up. The project is available on GitHub so that users can contribute via forking and making pull requests; please refer to the repository indicated in the metadata.
\subsection{Future versions}
Some issues for future versions are listed in the following.
\textbf{MIMO models:} the extension to multiple outputs is essential to widen applications. In its current version, the \textbf{mggp} package is capable of building up to MISO models.
\textbf{ERR methods:} there are several algorithms of the FROE type for structure selection using ERR measures with different properties and capabilities.
\textbf{Human-machine interface:} we have received feedback from users about the use of the package with several inputs. It may become complicated for users to implement their cost functions and perform analyses. We intend to simplify the interface to ease its use.
\textbf{Optimization algorithms:} there are few built-in operators for the crossover and mutation processes. The algorithms for mono- and multi-objective optimization are rigid and do not accept modifications. We intend to widen the possibilities by taking advantage of the object-oriented structure of the package and creating an abstract interface for genetic operators. Thus, in future versions, the user may customize the optimization algorithm by including genetic operator objects in the \textit{mggpEvolver} object.
\bibliographystyle{unsrtnat}
|
2,869,038,155,050 | arxiv | \section{Introduction}
One of the main quests of contemporary astrophysics is the determination of the nature of dark energy, the mysterious component of the cosmic fluid responsible for the accelerated expansion of the Universe. While the evidence of its presence has became compelling in the past decade \citep{AS06.1,RI07.1,DU08.1,KO08.1,RU08.1,KI08.1}, its nature remains entirely unexplained. In particular, while virtually all the present observational evidence is in concordance with a cosmological-constant interpretation of the dark energy, its possible dynamical evolution is not well constrained. The detection of this time evolution would hint at dark energy being different from vacuum energy, and would call for some more general explanation, such as minimally coupled scalar fields \citep{RA88.1,WE88.2,BR00.1}. Since, in this standard generalization, dark energy is not supposed to clump on the scales of the largest cosmic structures, the only way its nature can be unveiled is by studying the expansion history of the Universe. The expansion rate of the Universe as a function of cosmic time in turn affects the process of structure formation, and consequently the many observable properties of cosmic structures that are accessible to observations.
The most immediate effect of cosmology on cosmic structures is on the number counts of objects, especially the most massive and extreme ones such as galaxy clusters. A generic dynamical dark-energy model that postulates a dark-energy density increasing with redshift will necessarily imply an earlier structure formation than in cosmological-constant models (if the linear amplitude of density fluctuations at present is fixed), and thus a higher abundance of objects at any time.
An alternative channel for the detection of possible effects of dynamical dark energy is the study of the clustering properties of cosmic structures. Models predicting a higher abundance of objects imply that high-mass clusters are less exceptional, and as a consequence the linear bias with respect to the underlying dark-matter density field should be reduced. The linear bias, the structure abundance, and the linear correlation function of density fluctuations all affect the angular and spatial correlation functions that are observed in cluster catalogs, and all depend on the behavior of dark energy. We are therefore justified in exploring the effect of different quintessence models on the clustering properties of massive galaxy clusters, and in understanding whether differences between them could be detected significantly in cluster catalogs produced by forthcoming experiments. This is the purpose of the present work.
We focus on blind cluster surveys based on the X-ray emission of the hot intra-cluster plasma and on the spectral distortion of the CMB radiation produced by the thermal Sunyaev-Zel'dovich (\citealt{SU72.1}, SZ henceforth) effect. The cosmological models that we address range from the concordance cosmological-constant $\Lambda$CDM cosmogony to early dark-energy models, to intermediate models with a constant equation-of-state parameter for dark energy $w_\mathrm{x} \ne -1$ or with a gentle evolution in $w_\mathrm{x}$ with time. \cite{SA07.1} performed a simple preliminary study of the two-point angular correlation function predicted to be observed by \emph{Planck} in models with early quintessence. As will become evident from the results shown in this paper, our findings are qualitatively consistent with theirs, while a quantitative comparison is not directly possible because of the different catalog definitions. Observational results about the clustering properties of galaxy clusters measured in optical and infrared catalogs can be found in \cite{BR07.1}, \cite{PA08.1} and \cite{ES08.1}.
This paper is structured as follows. In Sect.~\ref{sct:cosmo}, a brief overview of the different cosmological models employed in this paper is presented, with a description of their main features and a summary of the various cosmological parameters. In Sect.~\ref{sct:clus}, we review the formalism used to describe clustering of galaxy clusters in the past light cone of the observer. In Sect.~\ref{sct:survey}, we detail the properties of the X-ray and SZ surveys analyzed in the present work and summarize the scaling laws used in linking the mass and redshift of objects to their observable properties in Sect.~\ref{sct:scaling}. In Sect.~\ref{sct:cat}, we describe the properties of the cluster catalogs obtained therefrom. In Sect.~\ref{sct:res}, we present our results on the spatial and the angular correlation functions, and in Sect.~\ref{sct:sum}, we summarize our conclusions. We shall use throughout the \cite{SH02.1} prescription for the computation of both the cluster mass function and the linear bias.
\section{Cosmological models}\label{sct:cosmo}
We use seven different cosmological models. The first four of them are models with an early dark-energy (EDE henceforth) component, labeled from EDE1 to EDE4. In early dark-energy cosmologies, the dark-energy contribution is assumed to be represented by a quintessence scalar field whose evolution tracks that of the dominant component of the cosmic fluid at a given time \citep{WE88.1,WE88.2,WE95.1}. As a consequence, the density parameter for dark energy at very early times does not vanish as in more conventional models, but flattens to a finite value. To ensure dark-energy dominance at low redshift however, an \emph{ad hoc} mechanism for breaking the tracking behavior must be adopted, usually in the form of a non-standard kinetic term in the quintessence Lagrangian \citep{HE01.1,DO01.1,DO06.1} or a non-minimal coupling between quintessence and neutrinos with evolving mass \citep{WE07.1}.
An adequate parametrization of early dark-energy models consists of the dark-energy density parameter at present, $\Omega_{\mathrm{x},0}$, the dark-energy equation-of-state parameter at present, $w_{\mathrm{x},0}$ and a suitable average of the dark-energy density parameter at early times during the phase of linear structure formation,
\begin{equation}
\bar{\Omega}_\mathrm{x,sf} \equiv -\frac{1}{\ln a_\mathrm{eq}} \int_{\ln a_\mathrm{eq}}^0 \Omega_\mathrm{x} (a) d\ln a,
\end{equation}
where $a_\mathrm{eq}$ is the scale factor at matter-radiation equality. Observational constraints from Type Ia supernovae, large-scale structure, and CMB allow $\bar{\Omega}_\mathrm{x,sf}$ to be on the order of a few percent at most \citep{DO05.1,DO07.1}. Our models EDE1 and EDE2 were introduced and studied in \cite{BA06.1} (see also \citealt{FE07.1}), while EDE3 and EDE4 were investigated by \cite{WA08.1} and have cosmological parameters more closely adapted to the latest WMAP data releases \citep{DO06.1,DO07.1}. For a detailed analysis of how early dark-energy models compare with other dark-energy models on observational grounds, especially with respect to type-Ia supernovae data sets, we refer the reader to \cite{RU08.1}.
Apart from the EDE models, we also investigate a cosmological model with dynamical dark-energy parametrised as in \cite{KO08.1}, which we briefly describe below. In this case, the dark-energy equation-of-state parameter is assumed to evolve with the scale factor as
\begin{equation}\label{eqn:wx}
w_\mathrm{x}(a) = \frac{aw_0}{a+a_*} + \frac{a(1-a)w_1}{a+a_*} - \frac{a_*}{a+a_*},
\end{equation}
where $z_* = 1/a_*-1$ is a transition redshift that we set to be $z_* = 10$ in what follows. In any case, \cite{KO08.1} showed that the precise choice of the transition redshift is not extremely relevant to the inferred value of the parameters $w_0$ and $w_1$. In the low-redshift limit, $z \ll z_*$, Eq. (\ref{eqn:wx}) reduces to the more standard form
\begin{equation}
w_\mathrm{x}(a) = w_0 + (1-a)w_1
\end{equation}
\citep{CH01.1,LI03.1}. We shall assume in the following that $w_0 = -1$, while $w_1 = 0.5$, which is almost the highest possible value inferred at the $95.4\%$ confidence level by WMAP-5 year data \citep{KO08.1}. For brevity, we refer to this model as K08 in the remainder of the paper.
In addition to those described above, we analyzed a model with constant $w_\mathrm{x} = -0.8$ and a standard $\Lambda$CDM cosmological model with parameters given by the latest WMAP-5 year data release in combination with Type-Ia supernovae and baryon acoustic oscillations \citep{KO08.1}. The redshift evolution in the equation-of-state parameters for our seven dark-energy models is shown in Fig.~\ref{fig:wz}, while the values of the main cosmological parameters are summarized in Table~\ref{tab:par}. There, the Hubble constant is expressed as $H_0 = h\: 100$ km s$^{-1}$ Mpc$^{-1}$, and $\sigma_8$ represents the \emph{rms} of primordial density fluctuations smoothed on a scale of $8\,h^{-1}$ comoving Mpc.
\begin{figure}[t]
\includegraphics[width=\hsize]{Figures/wz}\hfill
\caption{The redshift evolution of the dark energy equation of state parameter $w_\mathrm{x}$ for the seven dark energy models adopted in this work, as labelled in the plot.}
\label{fig:wz}
\end{figure}
\begin{table}[t!]
\caption{Parameter values for the seven cosmological models investigated in this work}
\begin{center}
\label{tab:par}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Model & $\Omega_{\mathrm{m},0}$ & $\Omega_{\mathrm{x},0}$ & $\bar{\Omega}_{\mathrm{x,sf}}$
& $h$ & $w_{\mathrm{x},0}$ &
$\sigma_8$\\
\hline
\hline
EDE1 & $0.330$ & $0.670$ & $0.040$ & $0.670$ & $-0.928$ & $0.820$ \\
EDE2 & $0.360$ & $0.640$ & $0.040$ & $0.620$ & $-0.997$ & $0.780$ \\
EDE3 & $0.284$ & $0.716$ & $0.033$ & $0.686$ & $-0.942$ & $0.715$ \\
EDE4 & $0.282$ & $0.718$ & $0.048$ & $0.684$ & $-0.935$ & $0.655$ \\
K08 & $0.279$ & $0.721$ & $-$ & $0.701$ & $-1$ & $0.817$ \\
$w_\mathrm{x} = -0.8$ & $0.279$ & $0.721$ & $-$ & $0.701$ & $-0.800$ & $0.817$ \\
$\Lambda$CDM & $0.279$ & $0.721$ & $-$ & $0.701$ & $-1$ & $0.817$ \\
\hline
\end{tabular}
\end{center}
\end{table}
Note the highly non-trivial behavior of $w_\mathrm{x}(z)$ in the early dark-energy models, for which the equation-of-state parameter reaches positive values for relatively low redshift, especially for EDE3 and EDE4. We also note the extremely low normalization of the power spectrum for linear density fluctuations (parametrized by $\sigma_8$) in model EDE4, necessary to counteract the quite high dark-energy density at early times, which in turn determines a particularly low linear density-contrast threshold for spherical collapse. As can be seen from Fig.~\ref{fig:wz}, all our models approach the cosmological-constant behavior at low redshift, and they are all constructed to also be in agreement, beyond CMB data, with large-scale structure \citep{TE04.2,TE04.1} and Type-Ia supernovae data.
\section{Clustering formalism}\label{sct:clus}
We used the formalism developed by \cite{MA97.1} and further applied, among others, by \cite{MO98.1} in studying high-redshift galaxy clustering and by \cite{MO00.1,MO01.1,MO02.1} in describing the clustering of galaxy clusters in the past light cone of an observer given a survey selection function. In this section, we briefly summarize this formalism and refer to the quoted papers for additional detail.
The starting point of the formalism is the number of objects of mass $M$ per unit redshift around $z$ given a background cosmology, $\mathcal{N}(M,z) = 4\pi g(z) n(M,z)$, where $n(M,z)$ is the standard differential mass function \citep{PR74.1,BO91.1,SH02.1} and $g(z)$ is the Jacobian determinant
\begin{equation}\label{eqn:jacobian}
g(z) = r^2(z) \frac{dr}{dz}(z)\,.
\end{equation}
Evaluating the differential mass function for cosmological models with dynamical dark-energy, we set the linear-density threshold for collapse as computed in \cite{BA06.1}. We shall further comment on this choice in Sect.~\ref{sct:sum}. In Eq.~(\ref{eqn:jacobian}), $r(z)$ is the comoving radial distance to redshift $z$, and $g(z)$ thus represents the comoving volume per unit redshift around $z$. Equation~(\ref{eqn:jacobian}) is valid only for a spatially flat cosmological model, which we assume henceforth. Under the same assumption, the comoving radial distance $r(z)$ is
\begin{equation}
r(z) = \frac{c}{H_0} \int_0^z \frac{dz'}{E(z')}\,,
\end{equation}
where $E(z) \equiv H(z)/H_0$ is the normalised Hubble parameter.
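As a concrete illustration, the following minimal Python sketch evaluates $E(z)$ and $r(z)$ for a spatially flat model with the equation-of-state parametrization of Eq.~(\ref{eqn:wx}); the default parameter values correspond to the K08 model of Table~\ref{tab:par}, and other dark-energy models can be accommodated by replacing the function \texttt{w\_x}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KM_S = 2.998e5  # speed of light [km/s]

def w_x(a, w0=-1.0, w1=0.5, a_star=1.0/11.0):
    # equation-of-state parameter of the K08 parametrization
    return (a*w0 + a*(1.0 - a)*w1 - a_star) / (a + a_star)

def E(z, Om0=0.279, Ox0=0.721):
    # H(z)/H0 for a flat model; the dark-energy density scales as
    # exp(3 * int_a^1 [1 + w(a')]/a' da')
    a = 1.0 / (1.0 + z)
    I, _ = quad(lambda ap: (1.0 + w_x(ap))/ap, a, 1.0)
    return np.sqrt(Om0*(1.0 + z)**3 + Ox0*np.exp(3.0*I))

def r_comoving(z, H0=70.1):
    # comoving radial distance [Mpc], spatially flat cosmology
    I, _ = quad(lambda zp: 1.0/E(zp), 0.0, z)
    return C_KM_S/H0*I
\end{verbatim}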
We consider a galaxy-cluster catalog covering the mass range $\left[ M_1,M_2 \right]$, where $M_1$ and $M_2$ in general depend on redshift. Then, the true all-sky equivalent redshift distribution of objects in the catalog reads
\begin{equation}\label{eqn:dist}
\mathcal{N}(z) = \int_{M_1}^{M_2} \mathcal{N}(M,z) dM.
\end{equation}
For the realistic situations explored below, the observed redshift distribution equals Eq.~(\ref{eqn:dist}) multiplied by the fractional sky coverage $f_\mathrm{sky}$ of the survey. In the real world, cluster catalogs are restricted by a threshold value of some observable (for instance the X-ray flux or the SZ decrement), hence all objects with mass above $M_1$, the mass corresponding to this limiting observable at a given redshift through some scaling relation, will be included in the catalog. This amounts to setting $M_2 = + \infty$ in Eq.~(\ref{eqn:dist}).
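Schematically, once the differential mass function and the volume Jacobian of Eq.~(\ref{eqn:jacobian}) are available (both assumed here to be supplied by the user), Eq.~(\ref{eqn:dist}) reduces to a simple quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def N_of_z(z, M1_of_z, n_of_Mz, g_of_z, M2=np.inf):
    # all-sky redshift distribution; n_of_Mz(M, z) is the differential
    # mass function and g_of_z(z) the comoving volume per unit redshift
    integral, _ = quad(lambda M: n_of_Mz(M, z), M1_of_z(z), M2)
    return 4.0*np.pi*g_of_z(z)*integral
\end{verbatim}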
One important ingredient for predicting the observed clustering properties of galaxy clusters is a relation between the density contrast of collapsed objects and that of the underlying matter distribution with the correct time evolution, i.e.,~the effective bias. For the linear `monochromatic' bias, we adopt the expression
\begin{eqnarray}
b(M,z) &=& 1 + \frac{1}{\delta_\mathrm{c}} \left[ a \frac{\delta_\mathrm{c}^2}{D_+^2(z)S(M)} -1 \right] +
\nonumber
\\
&+& p\frac{2}{\delta_\mathrm{c}} \left[ \frac{1}{1 + \left[ \sqrt{a}\delta_\mathrm{c}/\left(D_+(z)\sqrt{S(M)} \right) \right]^{2p}} \right],
\end{eqnarray}
where $D_+(z)$ is the linear growth factor for density fluctuations, $S(M)$ is the variance in the primordial density field smoothed on a scale corresponding to the mass $M$, and $\delta_\mathrm{c}$ is the linearly extrapolated density contrast at collapse for a spherical density perturbation. We note that in general $\delta_\mathrm{c}$ depends on redshift (although mildly for models with constant $w_\mathrm{x}$), but for clarity we omitted this dependence in the previous equation.
The standard \cite{PR74.1} (see also \citealt{MO96.1}) relation is recovered by setting $a = 1$ and $p = 0$. The more precise relation proposed by \cite{SH99.1} (see also \citealt{SH01.1}) can be obtained by instead setting $a = 0.707$ and $p = 0.3$. We shall follow this second option for consistency with the mass-function prescription and because it has been shown to provide closer agreement with the bias measured in $\Lambda$CDM numerical simulations.
Given the above, the effective bias is defined as the linear bias weighted by the abundance of clusters in the catalog at hand,
\begin{equation}\label{eqn:beff}
b_\mathrm{eff}(z) \equiv \frac{1}{\mathcal{N}(z)} \int_{M_1}^{M_2} b(M,z) \mathcal{N}(M,z) dM\,.
\end{equation}
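As an illustration, Eq.~(\ref{eqn:beff}) together with the bias relation above can be evaluated as in the following sketch, where $\sigma_0(M) = S(M)^{1/2}$ at $z = 0$, the growth factor $D_+(z)$ (normalized to $D_+(0) = 1$) and $\mathcal{N}(M,z)$ are assumed to be supplied by the user; $\delta_\mathrm{c}$ is kept constant for simplicity, whereas in general it depends (mildly) on redshift.
\begin{verbatim}
from scipy.integrate import quad

def b_st(M, z, sigma0, D_plus, delta_c=1.686, a=0.707, p=0.3):
    # Sheth-Tormen linear bias; a*nu**2 with nu = delta_c/(D_+ sigma)
    anu2 = a*(delta_c/(D_plus(z)*sigma0(M)))**2
    return 1.0 + (anu2 - 1.0)/delta_c \
           + 2.0*p/(delta_c*(1.0 + anu2**p))

def b_eff(z, M1, M2, N_Mz, sigma0, D_plus):
    # abundance-weighted effective bias over the catalog mass range;
    # N_Mz(M, z) is the quantity N(M, z) defined in the text
    num, _ = quad(lambda M: b_st(M, z, sigma0, D_plus)*N_Mz(M, z),
                  M1, M2)
    den, _ = quad(lambda M: N_Mz(M, z), M1, M2)
    return num/den
\end{verbatim}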
We define $\xi(r,z_1,z_2)$ to be the two-point correlation function of the underlying density distribution, for density peaks placed at the two different redshifts $z_1$ and $z_2$. This is conveniently computed by Fourier-transforming the non-linear power spectrum of the density fluctuations, described e.g.,~using the fit of \cite{PE96.1}. More accurate prescriptions for evaluating the non-linear matter power spectrum exist \citep{SM03.1}, but their differences from that of \cite{PE96.1} are small and relevant only on scales $\lesssim 1$ Mpc $h^{-1}$, which are not pertinent here. The object correlation function can then be defined as
\begin{equation}
\xi_\mathrm{obj}(r,z_1,z_2) \equiv b_\mathrm{eff}(z_1)b_\mathrm{eff}(z_2)\xi(r,z_1,z_2)\,.
\end{equation}
The problem given by the presence of a double redshift dependence in the correlation function of the underlying density fluctuations is solved by considering a single, average redshift $\overline{z}$ defined as $D_+(\overline{z}) = \sqrt{D_+(z_1)D_+(z_2)}$ so that, effectively, $\xi(r,z_1,z_2) = \xi(r,\overline{z})$. The effect of redshift-space distortions is also taken into account by multiplying the correlation function $\xi(r,z_1,z_2)$ with the factor (\citealt{KA87.1}, see also \citealt{ZA96.1,MA00.1}) $1 + 2\beta(\overline{z})/3 + \beta^2(\overline{z})/5$, with $\beta(z) \equiv f(z)/b_\mathrm{eff}(z)$ and
\begin{equation}
f(z) \equiv -\frac{d \ln D_+(z)}{d \ln (1+z)}.
\end{equation}
In models with a cosmological constant, the function $f(z)$ can be well approximated by \citep{LA91.1}
\begin{equation}
f(z) \simeq \Omega_\mathrm{m}^{0.6}(z) + \frac{\Omega_\mathrm{x}(z)}{70} \left[ 1 + \frac{\Omega_\mathrm{m}(z)}{2} \right]\,.
\end{equation}
In more general models with a dynamical evolution in the dark-energy component, $f(z)$ must be evaluated numerically.
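For reference, a numerical evaluation of the full redshift-space boost factor could proceed as in the following sketch, where the growth factor $D_+(z)$ and the effective bias of Eq.~(\ref{eqn:beff}) are assumed to be supplied by the user:
\begin{verbatim}
import numpy as np

def kaiser_boost(z, D_plus, b_eff_of_z, dz=1e-4):
    # f(z) = -dln D_+/dln(1+z) from central finite differences,
    # then the Kaiser factor 1 + 2*beta/3 + beta**2/5
    f = -(np.log(D_plus(z + dz)) - np.log(D_plus(z - dz))) \
        / (np.log(1.0 + z + dz) - np.log(1.0 + z - dz))
    beta = f / b_eff_of_z(z)
    return 1.0 + 2.0*beta/3.0 + beta**2/5.0
\end{verbatim}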
Taking all past light cone effects into account, the observed spatial correlation function is given by
\begin{equation}
\xi_\mathrm{obs}(r) = \frac{1}{A^2} \int_{z_\mathrm{inf}}^{z_\mathrm{sup}}\int_{z_\mathrm{inf}}^{z_\mathrm{sup}} \frac{\mathcal{N}(z_1)}{r(z_1)} \frac{\mathcal{N}(z_2)}{r(z_2)} \xi_\mathrm{obj}(r,z_1,z_2) dz_1dz_2\,,
\end{equation}
with normalisation
\begin{equation}
A \equiv \int_{z_\mathrm{inf}}^{z_\mathrm{sup}} \frac{\mathcal{N}(z)}{r(z)} dz\,.
\end{equation}
In the previous two equations, $z_\mathrm{inf}$ and $z_\mathrm{sup}$ are the minimum and maximum redshifts, respectively, spanned by the cluster catalog at hand. In realistic situations, $z_\mathrm{inf} \simeq 0$, while $z_\mathrm{sup}$ depends on the sensitivity of the instrument used.
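In terms of the quantities introduced above (all assumed to be supplied by the user, with $\xi_\mathrm{obj}$ already including the effective bias and the redshift-space boost), the double redshift integral can be sketched as follows:
\begin{verbatim}
from scipy.integrate import quad, dblquad

def xi_obs(r, xi_obj, N_of_z, r_of_z, z_inf=0.0, z_sup=1.5):
    # light-cone averaged spatial correlation function
    A, _ = quad(lambda z: N_of_z(z)/r_of_z(z), z_inf, z_sup)
    I, _ = dblquad(lambda z2, z1: N_of_z(z1)/r_of_z(z1)
                   * N_of_z(z2)/r_of_z(z2)*xi_obj(r, z1, z2),
                   z_inf, z_sup,
                   lambda z1: z_inf, lambda z1: z_sup)
    return I/A**2
\end{verbatim}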
Accordingly, the observed angular correlation function is
\begin{equation}\label{eqn:angular}
\omega_\mathrm{obs}(\theta) = \frac{1}{B^2} \int_{z_\mathrm{inf}}^{z_\mathrm{sup}}\int_{z_\mathrm{inf}}^{z_\mathrm{sup}} \mathcal{N}(z_1) \mathcal{N}(z_2) \xi_\mathrm{obj}(\overline{r},z_1,z_2) dz_1dz_2,
\end{equation}
where
\begin{equation}
\overline{r} = \overline{r}(z_1,z_2,\theta) \equiv \sqrt{r^2(z_1) + r^2(z_2) - 2r(z_1)r(z_2)\cos(\theta)}
\end{equation}
and the normalisation in this case is the total number of objects included in the catalog at hand,
\begin{equation}
B \equiv \int_{z_\mathrm{inf}}^{z_\mathrm{sup}} \mathcal{N}(z) dz.
\end{equation}
In the small angle approximation (see for instance \citealt{PE80.1}), the relation in Eq.~(\ref{eqn:angular}) simplifies to
\begin{equation}
\omega_\mathrm{obs}(\theta) = \frac{1}{B^2} \int_{z_\mathrm{inf}}^{z_\mathrm{sup}} \frac{\mathcal{N}^2(z)}{dr(z)/dz}
\int_{-\infty}^{+\infty} \xi_\mathrm{obj}(r_*,z,z) du dz,
\end{equation}
with
\begin{equation}
r_* = r_*(u,\theta,z) \equiv \sqrt{u^2 + r^2(z)\theta^2}.
\end{equation}
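In practice, the small-angle integral can be sketched as follows, again in terms of user-supplied ingredients and with the line-of-sight integral truncated at a separation beyond which $\xi_\mathrm{obj}$ is negligible:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def omega_obs(theta, xi_obj, N_of_z, r_of_z, drdz,
              z_inf=0.0, z_sup=1.5, u_max=300.0):
    # Limber-type estimate of the angular correlation function;
    # theta in radians, distances in Mpc/h
    B, _ = quad(N_of_z, z_inf, z_sup)
    def integrand(z):
        los, _ = quad(lambda u: xi_obj(
            np.sqrt(u**2 + (r_of_z(z)*theta)**2), z, z),
            -u_max, u_max)
        return N_of_z(z)**2/drdz(z)*los
    I, _ = quad(integrand, z_inf, z_sup)
    return I/B**2
\end{verbatim}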
With the help of this formalism, it is possible to produce realistic theoretical expectations for the correlation properties of galaxy clusters in the past light cone, as a function of catalog properties and the cosmological model.
\section{Survey properties}\label{sct:survey}
We consider five ongoing and planned cluster surveys here, two of which are X-ray based, while the remaining three are SZ based.
\subsection{X-ray catalogs}
Two forthcoming X-ray surveys are addressed in this work. The first has the properties of the \emph{eRosita} wide survey, described in the mission definition document \footnote{available at \\\texttt{http://www.mpe.mpg.de/projects.html\#erosita}}. It is designed to have a sky coverage of $f_\mathrm{sky} \simeq 0.485$ and a limiting flux of $F_\mathrm{lim} = 3.3 \times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$ in the energy band $\left[ 0.5,2.0 \right]$ keV. We note that this planned survey fulfills almost exactly the requirements specified in the dark-energy task force paper of \cite{HA05.1}, where a survey of X-ray clusters optimal for constraining the evolution in the dark-energy equation-of-state is described. This proposed survey covers $\Omega \simeq 2 \times 10^4$ square degrees ($f_\mathrm{sky} \simeq 0.485$) like the \emph{eRosita} wide survey, and has a limiting flux in the $\left[ 0.5,2.0 \right]$ keV energy band that is only slightly lower, $F_\mathrm{lim} = 2.3 \times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$. We adopt however the exact parameters of the \emph{eRosita} survey.
The second is the XMM cluster survey (\citealt{SA08.1}, XCS henceforth), a serendipitous search for X-ray clusters in the existing exposures of the XMM satellite archive. The sky coverage estimated on the basis of the already surveyed area, the pointings still to be analysed, and the expected mission lifetime is $\Omega \simeq 500$ square degrees ($f_\mathrm{sky} \simeq 0.012$). As for the depth of the survey, since the XMM archive spans a range of different exposure times, and thus of limiting fluxes, a single limiting flux is inappropriate for describing the XCS catalog. If fluxes in the $\left[ 0.1,2.4 \right]$ keV energy band are considered, a single limiting flux of $F_\mathrm{lim} = 3.5 \times 10^{-13}$ erg s$^{-1}$ cm$^{-2}$ might be used, although the redshift-dependent cut reported in Fig.~9 of \cite{SA08.1} is more appropriate, being reasonably well fit by
\begin{equation}
\frac{F_\mathrm{lim}(z)}{10^{-13} \mbox{ erg s}^{-1}\mbox{ cm}^{-2}} = 2.8\: z^{-1/3}.
\end{equation}
We note that the XMM cluster survey has a brighter limiting flux at all cosmological redshifts and a much smaller covered area than the \emph{eRosita} survey, hence large differences are expected in the number of clusters, and the clustering signal-to-noise ratio detected in the former survey will be much lower than in the latter.
\subsection{SZ catalogs}
We consider three planned blind sub-mm surveys. The first is based on South Pole Telescope (SPT) observations, and was considered by \cite{MA03.1} in an attempt to predict possible constraints on cosmological parameters and dark energy. The proposed survey area amounts to $\Omega \simeq 4 \times 10^3$ square degrees ($f_\mathrm{sky} \simeq 0.097$), for a limiting SZ flux density (see the definition in Sect.~\ref{sct:scaling} below) of $S_{\nu_0,\mathrm{lim}} \simeq 5$ mJy at a frequency $\nu_0 \equiv 150$ GHz. We note that a blind survey of SZ clusters with SPT has indeed already started with the first successful detections \citep{ST08.1}, hence a comparison of our theoretical predictions with observations may be imminent.
The second SZ cluster survey is based on the portion of sky that the Atacama Cosmology Telescope (ACT) will observe. According to \cite{SE07.1}, this will consist of two stripes covering 4 degrees in declination and 360 degrees in right ascension, for a total of $\Omega \simeq 3.6 \times 10^3$ square degrees ($f_\mathrm{sky} \simeq 0.087$). The depth of the survey is defined in terms of the (frequency-independent) integrated Compton-$y$ parameter over the solid angle covered by the virial sphere of each individual cluster, $Y_{200}$. According to the simulations performed in \cite{SE07.1}, a limiting integrated Compton parameter of $Y_{200,\mathrm{lim}} \simeq 10^{-3}$ arcmin$^2$ produces a galaxy-cluster sample that is $\sim 90\%$ complete, even with the inclusion of noise from radio and infrared point sources.
The last survey that we consider will be carried out by the \emph{Planck} satellite. This is the largest sub-millimeter survey of the sky that is currently being developed. Even though the beam of the satellite will be quite large, it is predicted to detect several thousand clusters by their thermal SZ effect. As shown in \cite{SC07.1}, the sky coverage for the detection of galaxy clusters will be highly non-uniform, especially at low detection significance. On the other hand, we believe that a uniform sky coverage of $\Omega \simeq 3 \times 10^{4}$ square degrees ($f_\mathrm{sky} \simeq 0.727$) is realistic and sufficient for our purposes, and hence we adopt it here. As for the limiting SZ observable, the noise due to Planck's scanning path is highly structured on cluster scales and below. This means that assuming a simple flux-detection threshold is insufficient for our study. In \cite{SC07.1}, the minimum mass detected as a function of redshift is presented for a limiting integrated $y$-parameter of $Y_{200,\mathrm{lim}} = 10^{-3}$ arcmin$^2$, based on a numerical simulation in a $\Lambda$CDM world model. This minimum mass is virtually independent of the filtering scheme adopted, and is described reasonably well by
\begin{equation}
\log \left(\frac{M_\mathrm{lim}(z)}{10^{15} M_\odot h^{-1}}\right) = -1.200 + 1.469\arctan\left[ \left(z-0.10\right)^{0.440} \right]
\end{equation}
for $z \ge 0.11$, and by
\begin{equation}
\log \left(\frac{M_\mathrm{lim}(z)}{10^{15} M_\odot h^{-1}}\right) = -1.924 + 8.333 z
\end{equation}
if $z \le 0.11$.
This kind of fit may appear cumbersome, but we were unable to find a simpler functional form that adequately reproduces the results of \cite{SC07.1}, because of the steep increase in the limiting mass at $z \gtrsim 0.1$. Note that the two branches of the fit join smoothly at $z = 0.11$.
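For convenience, a vectorized Python implementation of this piecewise fit might look as follows; the argument of the high-redshift branch is clipped so that the (unused) branch remains real-valued below $z = 0.10$.
\begin{verbatim}
import numpy as np

def log_mlim_planck(z):
    # returns log10(M_lim / (1e15 Msun/h))
    z = np.asarray(z, dtype=float)
    high = -1.200 + 1.469*np.arctan(np.maximum(z - 0.10, 0.0)**0.440)
    low = -1.924 + 8.333*z
    return np.where(z >= 0.11, high, low)

# both branches give log10(M_lim) ~ -1.007 at the junction z = 0.11
\end{verbatim}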
The simulation by \cite{SC07.1} was performed for a single cosmological model, namely a WMAP-1 cosmology \citep{SP03.1} with $\Omega_{\mathrm{m},0} = 0.3$, $\Omega_{\Lambda,0} = 0.7$, $h = 0.7$ and $\sigma_8 = 0.9$. To correct for the different cosmologies used here, we proceed as follows. First, we convert the minimum mass from \cite{SC07.1} into a minimum integrated Compton $y$-parameter, according to the scaling relation described in Sect.~\ref{sct:scaling} below and by considering the appropriate WMAP-1 cosmology. We then convert this minimum integrated $y$-parameter back into a minimum mass using the same scaling relation but adopting the various cosmologies used here. This procedure accounts for the fact that clusters of the same mass produce different signals in different cosmologies, because of their different formation histories and the differences in the geometry of the universe between models. On the other hand, altering the cluster abundance may also change the amount of undetected objects, and thus the background noise. A proper treatment of this issue would require a far more detailed analysis and probably fully numerical simulations, which we deemed unnecessary for our purposes.
\section{Scaling relations}\label{sct:scaling}
To relate survey properties to the extent in mass and redshift space of the resulting cluster catalog, it is necessary to link the mass and redshift of an individual cluster to the relevant observable, namely X-ray flux for X-ray surveys and SZ flux density or integrated Compton-$y$ parameter for sub-mm surveys. We do this by means of realistic scaling laws. We note that not all features of these scaling relations are well established, especially concerning their redshift evolution. We describe in the following what we propose to be the most suitable way to proceed given our aims.
\subsection{X-ray scaling relations}
First of all, we used the conversion between the X-ray temperature and the virial mass adopted in \cite{FE07.2} (see also \citealt{BA03.1}), i.e.,~a virial relation with normalisation based on the simulations of \cite{MA01.1}
\begin{equation}\label{eqn:mt}
kT(M_{200},z) = 4.88 \mbox{ keV} \left[ \frac{M_{200}}{10^{15} M_\odot} h(z) \right]^{2/3},
\end{equation}
where the mass is measured in units of $M_\odot$. Additionally, a luminosity-temperature relation given by
\begin{equation}\label{eqn:lt}
L(T) = 2.5 \times 10^{43} \mbox{ erg s}^{-1} h^{-2} \left( \frac{kT}{1.66 \mbox{ keV}} \right)^{2.331}
\end{equation}
was used. This is based on observations by \cite{AL98.1}, and is assumed not to evolve with redshift according to the analyses of \cite{MU97.1}, \cite{RE99.1} and \cite{HA02.1}. Combining these two relations, we obtain the mass-luminosity scaling law
\begin{equation}\label{eqn:lm}
L(M_{200},z) = 3.087 \times 10^{44} \mbox{ erg s}^{-1} h^{-2} \left[ \frac{M_{200}}{10^{15} M_\odot} h(z)\right]^{1.554}.
\end{equation}
Choosing a reasonable value for the Hubble constant, $h = 0.7$, Eq.~(\ref{eqn:lm}) equals
\begin{equation}
L(M_{200},z) = 1.097 \times 10^{45} \mbox{ erg s}^{-1} \left[ \frac{M_{200}}{10^{15} M_\odot h^{-1}} E(z) \right]^{1.554},
\end{equation}
where the mass is now expressed in $M_\odot h^{-1}$. \cite{BA03.1} demonstrated that this mass-luminosity relation is a good fit to the X-ray cluster observations compiled by \cite{RE02.1}.
A possible steepening of the luminosity-temperature relation for low-mass clusters and groups of galaxies has also been advocated \citep{HE00.2,HE00.1,XU00.1}, which would require replacing Eq.~(\ref{eqn:lt}) with a broken power-law. However, it has been shown \citep{OS04.1,KH07.1} that the scaling relation for groups is consistent with that for clusters, although the scatter for groups is considerably larger. This could bias the estimate of the relation's slope; however, since this has not been definitively established, we prefer to adhere to Eq.~(\ref{eqn:lt}) in the following. A steepening of the luminosity-temperature relation for groups of galaxies would have the consequence of including fewer low-mass objects in the various catalogs. These would thus contain a higher fraction of high-mass clusters, whose clustering properties would enhance the differences between cosmological models. Our results may thus slightly underestimate the distinguishing power of the correlations.
To convert the bolometric X-ray luminosity provided by the scaling relations to the luminosity in a given band required to characterize the cluster catalogs of \emph{eRosita} and XCS, we adopt a Raymond-Smith \citep{RA77.1} plasma model implemented via the \texttt{xspec} software package \citep{AR96.1}, with metal abundance $Z = 0.3 Z_\odot$ \citep{FU98.1,SC99.1}. Once the mass-luminosity relation is obtained, the mass (and redshift)-flux relation trivially follows.
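As an illustration of how the limiting flux translates into a limiting mass, the following sketch inverts the mass-luminosity relation above; the band-correction factor \texttt{band\_frac} (bolometric to in-band, e.g.~obtained from the Raymond-Smith model as described) and the luminosity distance are assumed to be supplied by the user.
\begin{verbatim}
import numpy as np

MPC_CM = 3.0857e24  # 1 Mpc in cm

def min_mass_xray(z, F_lim, dL_mpc, E_of_z, band_frac=1.0):
    # limiting mass [Msun/h] at redshift z for an X-ray flux limit
    # F_lim [erg/s/cm^2]; dL_mpc(z) is the luminosity distance [Mpc]
    L_lim = 4.0*np.pi*(dL_mpc(z)*MPC_CM)**2*F_lim/band_frac
    return 1.0e15/E_of_z(z)*(L_lim/1.097e45)**(1.0/1.554)
\end{verbatim}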
\subsection{SZ scaling relations}
The SZ effect is a scattering process appearing in the CMB spectrum as absorption at frequencies below $\sim 218$ GHz and as emission above. Nonetheless, a flux density can be formally associated with the temperature distortion imprinted by the thermal SZ effect in the following way.
We consider the Compton-$y$ parameter observed in a given direction $\theta$ of the sky and integrate it over an arbitrary solid angle $\Omega$ to obtain
\begin{equation}
Y = \int_\Omega y(\theta) d^2\theta.
\end{equation}
The temperature distortion over the patch of the sky covered by $\Omega$ is proportional to $Y$ times the typical frequency pattern of the thermal SZ effect, hence the monochromatic SZ flux per unit frequency received from the solid angle $\Omega$ can be defined as $S_\nu \equiv j(\nu) Y$, with
\begin{equation}
j(\nu) = 2 \frac{(kT_\gamma)^3}{(hc)^2} \left| f(\nu) \right|.
\end{equation}
Here, $T_\gamma$ is the CMB temperature and $f(\nu)$ is the typical spectral signature of the thermal SZ effect,
\begin{equation}
f(\nu) = \frac{x^4 e^x}{(e^x-1)^2} \left[ x\frac{e^x+1}{e^x-1} -4 \right],
\end{equation}
where $x \equiv h\nu/k T_\gamma$, and relativistic corrections are ignored (see however \citealt{IT04.1} and references therein).
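Numerically, $f(\nu)$ is straightforward to evaluate; the following sketch reproduces the value $f(150\mbox{ GHz}) \simeq -3.83$ used below and the well-known null of the thermal SZ effect near $218$ GHz.
\begin{verbatim}
import numpy as np

H_P, K_B, T_CMB = 6.62607e-34, 1.38065e-23, 2.725  # SI units, K

def f_sz(nu_ghz):
    # thermal SZ spectral signature, no relativistic corrections
    x = H_P*nu_ghz*1e9/(K_B*T_CMB)
    return x**4*np.exp(x)/np.expm1(x)**2 \
           * (x*(np.exp(x) + 1.0)/np.expm1(x) - 4.0)

print(f_sz(150.0))   # ~ -3.83: absorption, since 150 GHz < 218 GHz
print(f_sz(218.0))   # ~ 0: close to the thermal SZ null
\end{verbatim}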
In \cite{SE07.1}, a prescription for linking the mass of a cluster to the Compton parameter integrated over the solid angle subtended by the virial sphere is proposed based on numerical simulations, according to
\begin{equation}\label{eqn:yint}
Y_{200}(M_{200},z) = \frac{2.504 \times 10^{-4}}{\left( D_\mathrm{A}(z)/1 \mbox{ Mpc} \right)^2} \left(\frac{M_{200}}{10^{15} M_\odot}\right)^{1.876}E(z)^{2/3}.
\end{equation}
If we wish to convert the relation in Eq.~(\ref{eqn:yint}) into a scaling law for the SZ equivalent flux density, the slope and redshift dependence will obviously remain unchanged. The normalization, however, must be converted into an SZ flux. We shall assume that the SZ monochromatic flux relates to $Y_{200}$, in the sense that the amount of SZ signal detected outside the virial radius is negligible. Keeping in mind that
\begin{equation}
2 \frac{(kT_\gamma)^3}{(hc)^2} = 2.701 \times 10^{-18} \mbox{ J m}^{-2} = 2.701 \times 10^{11} \mbox{ mJy},
\end{equation}
the scaling law of Eq.~(\ref{eqn:yint}) turns into
\begin{equation}
S_\nu(M_{200},z) = \frac{6.763 \times 10^{7} \mbox{ mJy}}{\left(D_\mathrm{A}(z)/1 \mbox{ Mpc}\right)^2} \left(\frac{M_{200}}{10^{15} M_\odot}\right)^{1.876} \left| f(\nu) \right| E(z)^{2/3}.
\end{equation}
As stated in Sect.~\ref{sct:survey}, for the SPT catalog construction we assumed $\nu = \nu_0 \equiv 150$ GHz, for which $f(\nu_0) = -3.833$. The negative value indicates absorption, as is to be expected since $\nu_0 < 218$ GHz. It follows that
\begin{equation}
S_{\nu_0}(M_{200},z) = \frac{2.592 \times 10^{8} \mbox{ mJy}}{\left(D_\mathrm{A}(z)/1 \mbox{ Mpc}\right)^2} \left(\frac{M_{200}}{10^{15} M_\odot}\right)^{1.876} E(z)^{2/3}.
\end{equation}
We use this scaling relation for the SPT cluster catalog.
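For illustration, inverting this scaling law for the limiting SPT flux density of $5$ mJy yields the minimum mass as a function of redshift; the angular-diameter distance and $E(z)$ are assumed to be supplied by the user.
\begin{verbatim}
def min_mass_spt(z, dA_mpc, E_of_z, S_lim_mjy=5.0):
    # limiting mass [Msun] for the SPT flux-density threshold;
    # dA_mpc(z) is the angular-diameter distance in Mpc
    amp = 2.592e8/dA_mpc(z)**2*E_of_z(z)**(2.0/3.0)
    return 1.0e15*(S_lim_mjy/amp)**(1.0/1.876)
\end{verbatim}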
\section{Catalog properties}\label{sct:cat}
\begin{figure}[t]
\includegraphics[width=\hsize]{Figures/minimumMass}\hfill
\caption{The minimum mass for clusters included in the catalogs constructed with the five different surveys investigated in this work, as labelled in the plot. Results for the seven different dark-energy models described in the text are shown, using the same color and line types as in Fig.~\ref{fig:wz}.}
\label{fig:min}
\end{figure}
Figure~\ref{fig:min} shows the minimum mass as a function of redshift for a cluster to enter each catalog, for all the dark-energy models employed in this work. To compute it, we simply converted the limiting flux or integrated Compton-$y$ parameter of the various surveys into a mass by means of the scaling relations described in Sect.~\ref{sct:scaling}. We notice that, as expected, the minimum mass for X-ray selected catalogs is a monotonically increasing function of redshift, while this is not the case for SZ catalogs, whose minimum mass slightly decreases at high redshift. This is because the intrinsic redshift-independence of the SZ decrement causes the SZ flux density or integrated Compton $y$-parameter to scale as the inverse of the angular-diameter distance squared, while the X-ray flux scales as the inverse of the luminosity distance squared, and the former tends to flatten at high redshift. An exception to this behavior occurs for \emph{Planck}, whose limiting mass behaves more similarly to the X-ray catalogs because \emph{Planck} has a very large beam that significantly smoothes the signal, especially when the angular size of the source is small.
The differences between cosmologies arise because the scaling relations used in this work include distances and the expansion history of the universe, and hence depend on cosmology. The physical reason is that in models with dynamical dark energy, and in early dark-energy models in particular, structure formation begins at earlier times than in more standard models with $w_\mathrm{x} = $ constant. As a consequence, clusters at a given redshift have more concentrated host dark-matter halos and more compact gas distributions, which enhances the SZ effect and X-ray emission. Consistently, the differences in the minimum mass between cosmologies are more pronounced for X-ray catalogs, because X-ray emission is proportional to the square of the gas density, while the SZ effect scales only linearly with the density.
Among the X-ray cluster catalogs, XCS has a systematically higher minimum mass compared to \emph{eRosita}, because of the higher minimum observed flux. Similarly, ACT has a higher minimum integrated Compton $y$-parameter than SPT, and hence the minimum mass included in the catalog is systematically higher. The minimum mass for the \emph{Planck} catalog is always much larger than that of SPT and ACT, thus we expect the number of clusters per unit area entering its catalog to be relatively small. However, this is in some way compensated by the large area of the \emph{Planck} survey.
\begin{figure}[t]
\includegraphics[width=\hsize]{Figures/redshiftDistribution}\hfill
\caption{The redshift distribution (all-sky equivalent) of clusters entering each of the five catalogs used in this work. Differences between different cosmologies are shown with the same color and line types as in previous figures, as labelled in the plot.}
\label{fig:dist}
\end{figure}
The redshift distribution of objects entering the various cluster catalogs for the seven different cosmologies used in this work is shown in Fig.~\ref{fig:dist}. As expected, large differences occur between different survey catalogs and different dark-energy models. The models EDE1 and EDE2 show very similar results, and also the largest number of objects included in a given catalog. This is due to the enhanced cluster abundance in these models \citep{FE07.1}. The model EDE3 shows results that are very similar to the model with constant $w_\mathrm{x} = -0.8$, except for a moderate difference at low redshifts, close to the peak of the distribution. This agrees with \cite{WA08.1}, where it was shown that the cluster number counts predicted to be obtained with \emph{Planck} in model EDE3 are almost identical to those for a standard $\Lambda$CDM model. The $\Lambda$CDM model always produces the smallest number of objects, for all catalogs and at all redshifts, with the K$08$ model being intermediate between the $\Lambda$CDM and the $w_\mathrm{x} = -0.8$ models.
As expected, the SZ surveys (except for \emph{Planck}, which has a minimum mass behavior more similar to the X-ray catalogs) have a far wider redshift distribution than the X-ray catalogs. This is because the minimum mass slightly decreases at high redshift, as opposed to a monotonic increase (see Fig.~\ref{fig:min}), and the sample may contain a high number of low-mass objects. For SPT we caution that the redshift distribution remains significantly above zero at the limiting redshift of our analysis, $z = 3$. However, as we verified, the number of objects included in this catalog with $z \ge 3$ is just $\sim 0.4\%$ of the total, and hence contributes negligibly to the redshift integrals needed for computing the observed spatial and angular correlation functions. Those objects might be significant when binning the catalog in redshift, but again, the number of clusters with $z \ge 3$ is at most a few per cent of the number with $z \ge 1.5$, hence we assume that those objects can be safely neglected. The scaling relation between the SZ flux density and the mass of the host dark-matter halo is indeed highly uncertain for low masses and high redshifts, and hence we prefer to cut our sample at $z = 3$.
\section{Results}\label{sct:res}
\subsection{Full catalogs}
\begin{figure}[t]
\includegraphics[width=\hsize]{Figures/bias}\hfill
\caption{Effective bias for the five cluster catalogs used in this work. Different colors and line styles refer to different cosmological models, as labelled in the plot.}
\label{fig:bias}
\end{figure}
In Fig.~\ref{fig:bias}, we show the effective bias computed by using all the clusters in the different samples investigated in this work. Results for the seven different cosmological models described in Sect.~\ref{sct:cosmo} are also shown. The effective bias for models EDE1 and EDE2 is always significantly smaller than for the other models, and this holds true for all cluster catalogs considered here. This is obviously because forming massive objects is easier in those models, and a much higher abundance of objects is present in the various catalogs than for the other cosmological models. As a consequence, large galaxy clusters are less exceptional objects, and are less biased with respect to the underlying dark-matter density field. Models EDE3 and EDE4 (especially the former) are more similar to the standard $\Lambda$CDM case with respect to bias. For EDE3, this is consistent with the previous discussion and also with the findings of \cite{WA08.1}, while for the EDE4 model this is unexpected, since the cluster abundance at a given redshift (see Fig.~\ref{fig:dist}) is lower than, but quite similar to, that of EDE1 and EDE2. However, model EDE4 has an extremely low normalisation for the power spectrum of linear density fluctuations, which is expected to produce a higher `monochromatic' bias. This is likely to play a major role in the computation of the effective bias.
In line with the previous discussion, the $\Lambda$CDM model produces the largest bias, while the K08 and $w_\mathrm{x} = -0.8$ models give slightly smaller results, very similar to those of EDE3 and EDE4. In particular, the bias for the EDE1 model is up to a factor of $\sim 2$ smaller than that for $\Lambda$CDM at high redshift. It is important to note that, even though this is not clearly visible in Fig.~\ref{fig:bias}, the ratio of the effective bias in models EDE3 and EDE4 to that in model $\Lambda$CDM actually \emph{decreases} quite steeply with redshift for $z < 0.2$, and increases again at higher redshift. Conversely, for the other EDE models this ratio continuously increases. This is due to the peculiar behavior of the dark-energy equation-of-state parameter in models EDE3 and EDE4, and will have important consequences on the effect of binning the cluster catalog in redshift, as discussed in the next subsection.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.75\hsize]{Figures/correlation}\hfill
\end{center}
\caption{Predicted correlation function for the five cluster catalogs used in this work. Different colors and linestyles refer to different cosmological models, as labelled in the plot. Errorbars are computed with the bootstrap method, and refer only to the $\Lambda$CDM model for clarity.}
\label{fig:correlation}
\end{figure*}
In Fig.~\ref{fig:correlation}, we show the observed correlation function for the five surveys and seven dark-energy models considered here. As expected, the spatial correlation function decreases with increasing radius, and starts oscillating for large separations, $r \gtrsim 40$ Mpc $h^{-1}$. The correlation function in the standard $\Lambda$CDM model is practically indistinguishable from that in the model with a constant dark-energy equation-of-state parameter $w_\mathrm{x} = -0.8$ and also in the dynamical-dark energy model K08. In both cases, this is due to the similar behavior of the dark-energy equation-of-state parameter (see Fig.~\ref{fig:wz}). In more detail, for the K08 case $w_\mathrm{x}$ differs from the concordance value only at very low redshift, but approaches $-1$ at $z \gtrsim 3$. In particular, the equation-of-state parameter is identical to that of the $\Lambda$CDM scenario during the linear stage of structure formation.
On the other hand, models with early dark-energy differ significantly. Because of their lower effective bias, the observed correlation functions are also lower than those for more standard models, by $\sim 50-60\%$ at small radii for \emph{eRosita} and SPT, and slightly less for the other surveys. These observed differences are caused by a combination of different effective bias, linear density-fluctuation correlation functions, and object redshift distributions.
The error bars in Fig.~\ref{fig:correlation} (shown only for the $\Lambda$CDM model for simplicity) were computed with the bootstrap method and imply that the difference between EDE and $\Lambda$CDM-like models would be detectable in the correlation functions observed with \emph{eRosita}, SPT and (to a lesser extent) \emph{Planck}. On the other hand, this difference would be completely lost in the noise for ACT and in particular for XCS. The large error bars visible in the XCS panel are due to the very small sky coverage of this survey, which is not adequately compensated by an increase in depth. This result qualitatively agrees with \cite{MO00.1}, who showed that a deep survey has larger errors in the observed spatial and angular correlation functions than a wide one. The same line of reasoning also applies to ACT, whose limiting integrated Compton-$y$ parameter is too high to allow the collection of a significant signal.
We note that in an attempt to reduce the size of the error bars, we increased the radial binning for XCS and ACT, so that the relative error, scaling $\propto 1/\sqrt{dr}$, would decrease. However, even then the size of the error bars remains much larger than the differences in correlation amplitudes between the EDE models and the concordance cosmological scenario. We also note that in principle the same procedure could be applied to the other catalogs, resulting in an even lower amplitude of the relative errors. This is obviously unnecessary in this case, hence in the following we perform this operation only for the XCS and ACT catalogs. However, we keep in mind that when the error bars for the other catalogs are only slightly larger than the difference between cosmological models, enlarging the radial binning would probably allow a significant detection of deviations from the concordance $\Lambda$CDM model.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.75\hsize]{Figures/angular}\hfill
\end{center}
\caption{Angular correlation function for the five cluster catalogs used in this work. Different colors and linestyles refer to different cosmological models, as labelled in the plot. Errorbars are computed with the bootstrap method, and refer only to the $\Lambda$CDM model for clarity.}
\label{fig:angular}
\end{figure*}
We note that while \emph{Planck} and XCS have a similar all-sky equivalent redshift distribution of objects, the sizes of the relative errors are very different. This is because of the differences in sky coverage of about three orders of magnitude between the two different surveys, that enter quadratically in the computation of the error bars (see also the comments presented in Sect.~\ref{sct:cat}).
A popular measure of the clustering amplitude from a given cluster catalog is the correlation length, defined as the spatial separation $r_0$ for which the correlation function equals unity, i.e.,~$\xi_\mathrm{obs}(r_0) = 1$. Among the five catalogs analyzed here, the one with the largest correlation length is ACT, having $r_0 \simeq 16$ Mpc $h^{-1}$ for the $\Lambda$CDM cosmology, while the one with the smallest correlation length is \emph{eRosita}, with $r_0 \simeq 10$ Mpc $h^{-1}$ for the same model. As a consequence of the smaller correlation amplitude, the correlation length is also smaller in models with an early dark-energy contribution, by $\sim 20\%$ compared to the concordance cosmology.
If the redshift information about clusters in a catalog is inaccessible or inadequate, i.e.,~if one has only projected information on the plane of the sky, then the only accessible clustering measure is the angular correlation function discussed in Sect.~\ref{sct:clus}. Figure~\ref{fig:angular} shows the angular correlation functions. The differences between different cosmological models and different surveys are enhanced, probably because the angular correlation functions are integrals over the spatial correlation functions along the line of sight. This results in a slight separation between the $\Lambda$CDM model and the K08 and $w_\mathrm{x} = -0.8$ models, which is absent from the observed three-dimensional correlation function. Accordingly, in the angular correlation function the ratio of the early dark-energy models to the $\Lambda$CDM-like models can be as high as a factor of $\sim 3$. Finally, differences between individual EDE models are also enhanced, showing that the cosmology producing the lowest angular-correlation amplitude at all separations and for all catalogs is EDE4.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.5\hsize]{Figures/correlation_z_lt_0_1}\hfill
\includegraphics[width=0.5\hsize]{Figures/correlation_z_gt_0_1}\hfill
\end{center}
\caption{Same as Figure \ref{fig:correlation} but with cluster catalogs restricted to $z \le 0.1$ (left panel) and $z > 0.1$ (right panel). For the SPT catalog, the right panel actually shows the restriction to $0.1 < z \le 0.3$. Only three cosmological models are shown here for simplicity.}
\label{fig:correlation_z_lt}
\end{figure*}
As for the observed spatial correlation function, bootstrap error bars are shown for the $\Lambda$CDM case only for clarity. It is apparent that the difference between $\Lambda$CDM cosmology and EDE models is detected significantly by \emph{eRosita}, while with SPT and \emph{Planck} this is likely only for the most extreme model EDE4, at least with the radial binning we used in the error computation. If this radial binning is increased, the differences between other EDE models and the concordance cosmology become detectable. Finally, as for the spatial correlation function, the error bars for XCS and ACT are too large to allow significant detection of any difference between cosmologies. In both cases, the survey is not deep enough to allow a noteworthy reduction in Poisson noise.
\subsection{Redshift selected catalogs}
In the best-case scenario, knowledge of the redshift of objects entering the catalogs can be assumed. If this is the case, the galaxy-cluster sample can be subdivided into a limited number of redshift bins, such that the number of observed (spatial and/or angular) pairs of objects is broadly the same in each bin. This has the effect of making the relative errors in the correlation functions approximately similar in each bin, and hence allows the coherent study of the redshift evolution in the correlation function. Measuring the redshift evolution in the clustering properties of galaxy clusters on the observational side allows us to determine more accurately the underlying cosmology and dark-energy evolution, since it provides an additional constraint. As a byproduct of this analysis, we can also check whether suitable redshift binning can increase the ratios of the correlation amplitudes in the different cosmological models.
We note that similar numbers of angular pairs in each redshift bin should correspond to similar absolute number of objects per bin, while this is not necessarily the case for spatial pairs.
We then assume perfect redshift knowledge of the observed sample. Tests showed that a redshift binning ensuring a suitably high total number of clusters per bin is $z \le 0.1$, $0.1 < z \le 0.3$, and $z > 0.3$ for all catalogs. This choice yields approximately equal signal-to-noise ratios in each bin for the angular correlation function. The same could be obtained for the observed spatial correlation function if each of the bins contained the same number of three-dimensional pairs. This is not the case because the highest-redshift bin contains too few spatial pairs compared to the other two, the SPT catalog being the only exception owing to its wide redshift distribution (see Fig.~\ref{fig:dist}). Thus, for the spatial correlation function we use this three-bin scheme only for SPT, and two bins, $z \le 0.1$ and $z > 0.1$, in all other cases.
Figure~\ref{fig:correlation_z_lt} shows the spatial correlation function for all cluster catalogs studied here, for $z \le 0.1$ in the left panel and $z > 0.1$ in the right. For the SPT catalog, the right panel shows results for $0.1< z \le 0.3$, while results for $z > 0.3$ are shown in Fig.~\ref{fig:correlation_z_gt}. Here, we consider only three cosmological models, namely the standard $\Lambda$CDM and the two early-dark energy models EDE1 and EDE4. This choice was made because the EDE4 and $\Lambda$CDM models exhibit the largest differences in terms of spatial and angular correlation functions. Additionally, model EDE1 was chosen as being representative of early-quintessence cosmologies with a $w_\mathrm{x}(z)$ evolution that completely differs from EDE4.
The spatial correlation functions computed in all catalogs and for the three different cosmological models increase with increasing redshift, in agreement with the findings of \cite{MO00.1}. The increment is significant for \emph{eRosita} and \emph{Planck}, and can become significant for SPT if the radial binning is increased, according to the previous discussion. For example, the correlation length for \emph{Planck} increases from $r_0 \simeq 9$ Mpc $h^{-1}$ to $r_0 \simeq 17$ Mpc $h^{-1}$ in going from $z \le 0.1$ to $z > 0.1$. As for the SPT catalog, for which we added a high-redshift bin, the correlation length increases from $r_0 \simeq 7$ Mpc $h^{-1}$ for $z \le 0.1$ up to $r_0 \simeq 14$ Mpc $h^{-1}$ for $z > 0.3$.
The ratio of the correlation amplitudes in different cosmological models generally increases when clusters are restricted to $z \le 0.1$, and the ratio of the $\Lambda$CDM and EDE4 models in terms of the spatial correlation function increases significantly. For example, in the \emph{eRosita} sample the ratio of the correlation functions increases from a factor of $\sim 60\%$ to $\sim 2$, and similar increments are seen in the other catalogs. On the other hand, the relative difference between EDE1 and $\Lambda$CDM is practically unchanged compared to the full samples. For high-redshift samples, the ratio of the correlation amplitudes in $\Lambda$CDM and EDE4 models is still larger than for the complete samples, but smaller than for low-redshift samples. The differences between EDE1 and the concordance $\Lambda$CDM models are again very similar to those between the low-redshift and the full samples.
The different behavior of EDE1 and EDE4 can be attributed to the different trend with redshift of the effective bias discussed above. Due to the behavior of the redshift distributions shown in Fig.~\ref{fig:dist}, the subsamples with $z \le 0.1$ and $z > 0.1$ are effectively dominated by objects at $z \sim 0.1$ and $z \sim 0.2$, respectively (because of a combination of a sharp decline in the mass function with redshift and volume effects). The ratios of the correlation functions of the underlying density fluctuations in the EDE and $\Lambda$CDM models are virtually unchanged when going from $z \sim 0.1$ to $z \sim 0.2$, and the same is true for the ratio of the effective bias in the $\Lambda$CDM and EDE1 models, although in the latter model it increases steadily, but slowly, with redshift. Instead, the ratio of the biases in $\Lambda$CDM and EDE4 decreases steeply between $z \sim 0$ and $z \sim 0.2$; the difference between the EDE4 and $\Lambda$CDM models is therefore expected to be more enhanced in low-redshift than in high-redshift catalogs, while the difference with EDE1 remains about the same.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\hsize]{Figures/correlation_z_gt_0_3}\hfill
\end{center}
\caption{As Fig.~\ref{fig:correlation}, but only for the SPT catalog restricted to $z > 0.3$. Only three cosmologies are shown here for simplicity.}
\label{fig:correlation_z_gt}
\end{figure}
The relative errors are still very small for \emph{eRosita} in all redshift-selected catalogs, thus the signal-to-noise ratio for differences in the correlation amplitudes between $\Lambda$CDM and EDE models is optimal for the $z \le 0.1$ catalog. The same is also true for \emph{Planck}, because the error bars tend to increase in the $z > 0.1$ sample. For SPT, the only redshift bin in which a difference between the $\Lambda$CDM and the EDE1 or EDE4 models can be reliably detected is the high-redshift bin, $z > 0.3$. For this bin, we obtain essentially the same results as for the full sample. The situation remains practically unchanged for XCS and ACT, where the relative errors are still very large and do not allow any significant detection of the differences between concordance and more exotic models.
Figures~\ref{fig:angular_z_lt} and \ref{fig:angular_z_gt} show the angular correlation functions for cluster catalogs binned in redshift according to the scheme $z \le 0.1$, $0.1 < z \le 0.3$ and $z > 0.3$, as discussed above. For convenience, we show results for all the catalogs considered here and for the cosmological models EDE1, EDE4 and $\Lambda$CDM. In contrast to the spatial correlation functions, the angular correlation functions decrease in amplitude with increasing redshift, a trend that is significant for the \emph{eRosita} and SPT samples, while the large relative errors for the \emph{Planck} catalog probably allow a detection of this decrement only at large angular separations and if the radial binning for the computation of error bars is enlarged.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.5\hsize]{Figures/angular_z_lt_0_1}\hfill
\includegraphics[width=0.5\hsize]{Figures/angular_z_gt_0_1}\hfill
\end{center}
\caption{As Fig.~\ref{fig:angular}, but with cluster catalogs restricted to $z \le 0.1$ (left panel) and $0.1< z \le 0.3$ (right panel). Only three cosmological models are shown here for simplicity.}
\label{fig:angular_z_lt}
\end{figure*}
The ratio of the correlation amplitudes in the $\Lambda$CDM model to those in the EDE models is slightly lower in the first two (low-redshift) bins than for the full samples. Since the relative error bars also increase in those bins, the signal-to-noise ratio for differences in the angular correlation functions is also lower. In contrast, the ratio increases slightly in the highest-redshift bin, $z > 0.3$, but there
the relative errors are also larger, especially for \emph{Planck}. As a consequence, differences between the concordance model and both EDE models can be significantly detected with \emph{eRosita} and, at least at large angular separations, with SPT, either for the full sample or for the selection in the intermediate redshift bin, $0.1 < z \le 0.3$. The situation remains again unchanged for XCS and ACT, with relative errors being too large to reach any significant conclusion.
According to the discussion above, distinguishing between different cosmologies by means of the angular correlation function does not particularly benefit from redshift selection. It is legitimate to ask if the situation is different when selection is performed based on the observable used for the cluster survey, instead of pure redshift. For instance, binning X-ray selected clusters according to their flux would correspond to a simultaneous (and highly non-linear) binning in mass and redshift. To investigate this issue, we performed a clustering analysis of the SPT cluster sample after binning it according to flux density as $5$ mJy $\le S_{\nu_0} < 10$ mJy, $10$ mJy $\le S_{\nu_0} < 25$ mJy and $S_{\nu_0} \ge 25$ mJy. This choice ensures a suitably high number of spatial and angular pairs in each bin. However, the outcome is that the error bars are always larger than the differences between the various models for all bins. In other words, this kind of binning does not improve the detection of deviations from the concordance $\Lambda$CDM cosmological model with respect to the angular correlation function, and hence we decided not to show these results here.
\section{Summary and discussion}\label{sct:sum}
We have studied the clustering properties of galaxy clusters in various cosmological models with dynamical and non-dynamical evolution in the dark-energy component. In addition to the concordance $\Lambda$CDM cosmology, we addressed a model with a constant equation-of-state parameter for dark energy, $w_\mathrm{x} = -0.8$, a dynamical-dark energy model with the $w_\mathrm{x}(z)$ parametrization proposed by \cite{KO08.1}, and four models with a non-negligible amount of early dark energy. Cosmological models with dynamical dark energy are generically expected to form structures earlier, affecting both the abundance of massive galaxy clusters and their spatial distribution and clustering properties, the latter being the focus of this work.
To predict forthcoming observations, we computed the effective bias as a function of redshift, and the observed spatial and angular correlation functions as a function of (physical or apparent) separation, that are expected to be measured in cluster catalogs produced by planned blind surveys both in the X-ray and in the sub-mm regimes by means of the thermal SZ effect. The X-ray surveys that we considered in this work are the \emph{eRosita} wide survey, which is described in detail in the related mission definition document, and the XMM Cluster Survey, based on existing pointings of the XMM satellite. For the SZ surveys, we focused on the South Pole Telescope, the Atacama Cosmology Telescope, and the \emph{Planck} all-sky survey of the Cosmic Microwave Background.
For computing the clustering properties of objects contained in each catalog as a function of cosmology, we employed a well-established formalism that takes past-light cone and selection effects on the cluster sample into account. To link the limiting flux of each survey to the minimum mass that enters the respective catalog at a given redshift, we adopted realistic scaling relations, based both on observations and numerical simulations, between the mass of a cluster and its X-ray luminosity or SZ flux density, or its integrated Compton-$y$ parameter. It turns out that the minimum mass entering a cluster catalog depends not only on the instrument considered, but also on the cosmological model adopted. This is so because the scaling relations mentioned above depend on the underlying world model through the expansion rate and cosmological distances.
As one could naively expect, the number of objects entering a given catalog at a given redshift depends heavily on the cosmology, the highest cluster abundances being present in models with EDE. This is caused by the well known higher mass function displayed by these models and the lower minimum mass entering the catalogs. The SPT catalog is the most extended in redshift due to the low limiting SZ flux density ($5$ mJy), while distributions for XCS and \emph{Planck} are the most limited in redshift, because of the quite shallow limiting X-ray flux of the former and large beam of the latter, which tends to dilute the signal.
For all catalogs, the first two models with early quintessence, EDE1 and EDE2, display the smallest effective bias at all redshifts. This is due to the high abundance of structures present in these models. On the other hand, the concordance $\Lambda$CDM model always displays the highest effective bias, for all catalogs and at all redshifts, while the other models lie somewhere in between the two. Because of the width of their extent in redshift, the SPT and ACT catalogs show the flattest trend for the effective bias, especially SPT, for which $b_\mathrm{eff}(z)$ is at most $\sim 10$ at $z = 3$. Other catalogs, particularly those based upon \emph{Planck} and XCS, have a much steeper trend, the effective bias reaching considerably higher values already at $z \sim 2$.
Concerning the spatial correlation function, all the EDE models almost coincide on all scales, having a correlation amplitude smaller than that for the models $\Lambda$CDM, K08 and constant $w_\mathrm{x} = -0.8$, which are also very similar to each other. The largest difference is displayed by the catalogs produced with \emph{eRosita} and SPT. In the former, the relative errors are also smallest, thus making \emph{eRosita} the most promising instrument for the detection of EDE through the spatial correlation function, if the full cluster catalog is to be used. Detection might also be possible for SPT and \emph{Planck} at intermediate scales $r \sim r_0$, while it is completely out of the question for XCS and ACT, whose error bars are too large. This is due to the shallow limiting fluxes for both of them, which is not adequately compensated for by the area covered (as is the case for \emph{Planck}). The situation is similar for the angular correlation function. There the smallest error bars are produced by \emph{eRosita} as well, while the EDE4 model produces the smallest correlation amplitude for all the catalogs.
To explore the redshift evolution in the spatial and angular correlation functions, we also performed different cuts in redshift of the various cluster catalogs, in a way that the number of pairs of objects in each bin would remain approximately the same. For the number of spatial pairs, we adopted the double binning $z \le 0.1$ and $z > 0.1$, with the exception of SPT which allows the inclusion of a third bin, $z > 0.3$, with the modification of the second bin to $0.1 < z \le 0.3$. This same binning was also employed for the number of angular pairs.
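As an illustrative aside, a pair-balanced bin edge of this kind can be found with a few lines of code. The following is a minimal sketch, assuming a plain array of cluster redshifts; the function name and the NumPy dependency are our own illustration, not part of the actual analysis pipeline:
\begin{verbatim}
import numpy as np

def pair_balanced_edge(z, candidate_edges):
    """Pick the bin edge that best equalizes the number of
    object pairs, n*(n-1)/2, between the two redshift bins."""
    z = np.asarray(z)
    n_pairs = lambda n: n * (n - 1) // 2
    return min(candidate_edges,
               key=lambda e: abs(n_pairs(int(np.sum(z <= e)))
                                 - n_pairs(int(np.sum(z > e)))))

rng = np.random.default_rng(0)
z = rng.exponential(0.2, size=1000)   # toy redshift catalog
print(pair_balanced_edge(z, np.linspace(0.05, 0.5, 46)))
\end{verbatim}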
The result of this analysis is that the spatial correlation function increases with increasing redshift, the correlation length for \emph{eRosita} growing by $\sim 80\%$ between the low- and high-redshift bins. The relative errors show that the increment in the correlation function is significant for \emph{eRosita}, SPT and \emph{Planck}, for all cosmological models considered in this work. Also, comparing the amplitude of the ratio between spatial correlation functions in EDE models and $\Lambda$CDM-like cosmologies with the size of the error bars indicates that it is better to focus on the low-redshift cluster subsample in order to maximize significant differences between the concordance model and the EDE models in the \emph{eRosita} and \emph{Planck} catalogs. The high-redshift bin, $z > 0.3$, is better for the SPT catalog, giving results compatible with those from the full catalog. As for the angular correlation function, it tends to decrease with increasing redshift, a trend that is significant for all models and catalogs except the usual XCS and ACT. In the absence of errors, the high-redshift bin would offer the highest chance of distinguishing a $\Lambda$CDM model from models with an early quintessence contribution, since there the ratios of correlation amplitudes are at their highest. However, the error bars are also large there, so that \emph{eRosita} is the only survey expected to permit significant detection of deviations from the concordance model at all redshifts. In general, if one is interested in optimizing the differences between angular correlation functions measured in different models, it does not pay to subdivide catalogs according to redshift.
We also found that the same is true when cluster catalogs are binned according to the observable used to define them, an approach more directly motivated from the observational point of view. Specifically, binning the SPT catalog according to the flux density always produces ratios of angular correlation functions in different cosmologies that are similar to those for the full catalog, while the error bars are always larger.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\hsize]{Figures/angular_z_gt_0_3}\hfill
\end{center}
\caption{As Fig.~\ref{fig:angular}, but only for catalogs restricted to $z > 0.3$. Only three cosmologies are shown here for simplicity.}
\label{fig:angular_z_gt}
\end{figure}
Before concluding, two notes of caution are in order. First of all, some discussion arose in the literature on whether the semi-analytic calculations performed in \cite{BA06.1} on the spherical collapse model in cosmologies with dynamical-dark energy are indeed correct, and how accurately they are reproduced by $N$-body simulations \citep{FR08.1,FR08.2,GR08.1}. While the discussion is not yet settled, \cite{SA07.1} performed the same calculations as \cite{BA06.1} using a different approach, and found results that are perfectly consistent with the latter. In addition, \cite{SC08.1} used the same approach as \cite{BA06.1} to successfully evaluate the spherical collapse behavior in a modified gravity scenario. Hence, we are at least reasonably confident that the approach followed by \cite{BA06.1}, also employed in this work, is fundamentally correct. Moreover, numerical simulations using the correct early-time behavior of the growth factor for scaling the initial conditions yield results different from those of \cite{GR08.1} and \cite{FR08.1}, tending towards the expectation from the analytic work of \cite{BA06.1}. Although precise direct integrations of the spherical collapse equations are difficult, further work avoiding approximations has so far confirmed our earlier results (Pace et al., in preparation). Even though definitive conclusions are not reached yet, these facts seem to justify our choice.
Secondly, estimates of the number counts of galaxy clusters detected by \emph{Planck} seem to fall substantially below the estimates of \cite{SC07.1}, and the redshift distribution is apparently shallower than that represented in Fig.~\ref{fig:dist} (\emph{Planck} SZ Challenge (in preparation), see also \citealt{LE08.1}). While definitive new limits for \emph{Planck} are not yet available, this part of the results should be read with caution. In particular, a decrease in the number of objects in the \emph{Planck} catalog would produce an increase in the relative errors, which scale as the inverse of the square root of the number of pairs of objects.
The present work shows that, while mild modifications to the redshift evolution in the equation-of-state parameter for dark energy $w_\mathrm{x}(z)$ have a negligible impact on the clustering properties of galaxy clusters, more exotic models such as early dark-energy cosmologies can change the effective bias of collapsed objects significantly, and thus also the spatial and angular correlation amplitudes of galaxy clusters. We have shown that at least some of the forthcoming blind surveys both in the X-ray and sub-mm regimes will be able to distinguish significantly between these models and more generally place constraints on the time evolution in the dark-energy density.
In general, we expect object clustering to be less effective than, e.g., direct abundance data in determining the cosmological model. For instance, in the \emph{eRosita} catalog the number of clusters at $z \gtrsim 1$ increases by about one order of magnitude between the $\Lambda$CDM and the EDE4 models. This variation is much larger than the corresponding variation in the correlation functions, while the size of the relative errors, assuming Poisson statistics, is comparable. Object counting at high redshift is also expected to be capable of distinguishing cosmological models with a gentle variation in the dark-energy density from the standard cosmology. The number of high-$z$ clusters in the \emph{eRosita} catalog is higher by a factor of $\sim 2.5$ in the model with constant $w_\mathrm{x} = -0.8$ than for $\Lambda$CDM.
Nevertheless, the results of this work show that clustering of massive clusters by itself remains a fundamental channel to unravel the effect of the expansion history of the Universe on the process of structure formation, although employing this information in addition to simple object number counts is certainly an interesting issue to be explored.
\section*{Acknowledgments}
We acknowledge financial contributions from contracts ASI-INAF I/023/05/0 and ASI-INAF I/088/06/0. We wish to thank the anonymous referee for useful remarks that allowed us to improve the presentation of our work.
\bibliographystyle{aa}
\section{Introduction}
The H$\rightarrow$ZZ$\rightarrow 4\ell $ decay channel ($\ell = e,\mu$) has a large signal-to-background ratio due to the complete reconstruction of the final state decay products and the excellent lepton momentum resolution. This makes it one of the most important channels for studies of the Higgs boson's properties. Measurements performed using this decay channel and the Run 1 data set include the determination of the mass and spin-parity of the new boson, its width and fiducial cross sections, as well as tests for anomalous HVV couplings \cite{Chatrchyan:2013mxa}.
We present results on measurements of the properties of the Higgs boson in the H$\rightarrow$ZZ$\rightarrow 4\ell$ decay channel at $\sqrt{s}=13$~TeV. Categories have been introduced targeting subleading production modes of the Higgs boson such as vector boson fusion (VBF) and associated production with a vector boson (WH, ZH) or a top quark pair (ttH). In addition, dedicated measurements of the boson's mass, width, and total and differential cross sections are presented.
\section{Event Selection}
The Z candidates are formed with pairs of leptons of the same flavor and opposite charge ($e^{+} e^{-}$, $\mu^{+}\mu^{-}$) and required to satisfy $12 < m_{\ell^{+}\ell^{-}} < 120$~GeV.
They are then combined into ZZ candidates, wherein we denote as Z$_{1}$ the Z candidate with an invariant mass closest to the nominal Z boson mass, and as Z$_{2}$ the other one. The flavors of involved leptons define three mutually exclusive subchannels: $4e$, $4\mu$ and $2e 2\mu$.
To be considered for the analysis, ZZ candidates have to pass a set of kinematic requirements that improve the sensitivity to Higgs boson decays.
The Z$_{1}$ invariant mass must be larger than $40$~GeV.
All leptons must be separated in angular space by at least $\Delta R(\ell_i, \ell_j) > 0.02$.
At least two leptons are required to have $p_{T} > 10$~GeV, and at least one is required to have $p_{T} > 20$~GeV.
To further suppress events with leptons originating from hadron decays in jet fragmentation or from the decay of low-mass hadronic resonances, all four opposite-charge lepton pairs that can be built with the four leptons (irrespective of flavor) are required to satisfy $m_{\ell^{+}\ell^{-}} > 4$~GeV, where selected FSR photons are disregarded in the invariant mass computation.
Finally, the four-lepton invariant mass $m_{4\ell}$ must be larger than $70$~GeV, which defines the mass range of interest for the subsequent steps of the analysis. The result of the event selection is shown in Fig.~\ref{fig:ESC} (left).
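For orientation only, the cascade of kinematic requirements above can be condensed into a schematic selection function. The following is a sketch in Python; the flat summary of the event (lepton $p_T$ list, minimum $\Delta R$, pair masses) and the omission of the FSR recovery are simplifying assumptions of ours, not the actual CMS implementation:
\begin{verbatim}
def passes_zz_selection(pts, min_delta_r, m_z1, m_z2,
                        os_pair_masses, m_4l):
    """Schematic version of the ZZ candidate requirements above."""
    # Z candidate mass windows, with Z1 also above 40 GeV.
    if not (12.0 < m_z1 < 120.0 and 12.0 < m_z2 < 120.0):
        return False
    if m_z1 <= 40.0:
        return False
    # Angular separation between any two selected leptons.
    if min_delta_r <= 0.02:
        return False
    # At least two leptons above 10 GeV, at least one above 20 GeV.
    pts = sorted(pts, reverse=True)
    if not (pts[0] > 20.0 and pts[1] > 10.0):
        return False
    # Low-mass resonance veto on all opposite-charge pairs.
    if any(m <= 4.0 for m in os_pair_masses):
        return False
    # Four-lepton mass range of interest.
    return m_4l > 70.0
\end{verbatim}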
\begin{figure}[ht]
\centering
\begin{tabular}{ccc}
& \includegraphics[height=2in]{CMS-PAS-HIG-16-041_Figure_006} & \\
\includegraphics[height=1.5in]{Figure_007-a} & \includegraphics[height=1.5in]{Figure_007-b} &
\includegraphics[height=1.5in]{Figure_007-c}\\
\end{tabular}
\caption{ \label{fig:disc} Top: distribution of $\ensuremath{\mathcal{D}^\text{kin}_\text{bkg}}$ versus $m_{4\ell}$; the gray scale represents the expected total number of ZZ background and SM Higgs boson signal events for $m_{H} = 125$~GeV. Bottom: distributions of the categorization discriminants: (left) $\mathcal{D}_{2jet}$, (middle) $\mathcal{D}_{1jet}$, (right) $\mathcal{D}_{VH} = \max(\mathcal{D}_{WH},\mathcal{D}_{ZH})$. From Ref.~\cite{Sirunyan:2017exp}.}
\end{figure}
\section{Event Categorization}
To improve the sensitivity to the Higgs boson production mechanisms, the selected events are classified into mutually exclusive categories. Seven categories are defined, using the following criteria applied in this exact order. Figure~\ref{fig:ESC} (right) shows the relative signal purity of the different production processes in each category.
The full kinematic information from each event, using either the Higgs boson decay products or associated particles in its production, is extracted using matrix element calculations and used to form several kinematic discriminants. The discriminant sensitive to $gg/q\bar{q} \rightarrow 4\ell $ kinematics is $\ensuremath{\mathcal{D}^\text{kin}_\text{bkg}}$, while the discriminants $\mathcal{D}_{1jet}$, $\mathcal{D}_{2jet}$ and $\mathcal{D}_{VH}$ = max($\mathcal{D}_{WH}$,$\mathcal{D}_{ZH}$) are used to target specific Higgs production modes. The full definitions of the observables can be found in Refs.~\cite{Khachatryan:2014kca,Spin,Spin1}. A schematic sketch of the resulting category cascade is given after the list below.
\begin{itemize}
\item {\bf VBF-2jet-tagged}: exactly 4 leptons. In addition there must be either 2 or 3 jets of which at most 1 is b-tagged, or at least 4 jets and no b-tagged jets. Finally, ${\cal D}_{\rm 2jet}>0.5$ is required.
\item {\bf VH-hadronic-tagged}: exactly 4 leptons. In addition there must be 2 or 3 jets, or at least 4 jets and no b-tagged jets. Finally, ${\cal D}_{\rm VH} \equiv {\rm max}({\cal D}_{\rm ZH},{\cal D}_{\rm WH})>0.5$ is required.
\item {\bf VH-leptonic-tagged}: no more than 3 jets and no b-tagged jets in the event,
and exactly 1 additional lepton or 1 additional pair of opposite sign same flavor leptons. This category also includes events with no jets and at least 1 additional lepton.
\item {\bf ttH-tagged}: at least 4 jets of which at least 1 is b-tagged, or at least 1 additional lepton.
\item {\bf VH-MET-tagged}: exactly 4 leptons, no more than 1 jet and $E_T^{\rm miss} > 100$~GeV.
\item {\bf VBF-1jet-tagged}: exactly 4 leptons, exactly 1 jet and ${\cal D}_{\rm 1jet}>0.5$.
\item {\bf Untagged}: consists of the remaining events.
\end{itemize}
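To make the ordering of the cascade explicit, the following sketch walks through the seven categories in the order listed above. It is written in Python, and the inputs are simplified summaries of the event; for example, the VH-leptonic condition on additional opposite-sign same-flavor pairs is collapsed into a single extra-lepton count, so this is an illustration rather than the exact CMS logic:
\begin{verbatim}
def categorize(n_extra_leptons, n_jets, n_btag, met,
               d_2jet, d_1jet, d_vh):
    """Apply the seven-category cascade in the order given above."""
    only_4l = (n_extra_leptons == 0)
    if only_4l and ((2 <= n_jets <= 3 and n_btag <= 1) or
                    (n_jets >= 4 and n_btag == 0)) and d_2jet > 0.5:
        return "VBF-2jet"
    if only_4l and ((2 <= n_jets <= 3) or
                    (n_jets >= 4 and n_btag == 0)) and d_vh > 0.5:
        return "VH-hadronic"
    if n_jets <= 3 and n_btag == 0 and n_extra_leptons >= 1:
        return "VH-leptonic"
    if (n_jets >= 4 and n_btag >= 1) or n_extra_leptons >= 1:
        return "ttH"
    if only_4l and n_jets <= 1 and met > 100.0:
        return "VH-MET"
    if only_4l and n_jets == 1 and d_1jet > 0.5:
        return "VBF-1jet"
    return "Untagged"
\end{verbatim}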
\begin{figure}[ht]
\centering
\begin{tabular}{cc}
\includegraphics[height=2in]{a_M4l_Full_4l_inclusive_} &
\includegraphics[height=2in]{Categories} \\
\end{tabular}
\caption{ \label{fig:ESC} Distribution of the four-lepton reconstructed invariant mass $m_{4\ell}$ in the full mass range (left) and the relative signal purity of the seven event categories in terms of Higgs boson production processes (right). From Ref.~\cite{Sirunyan:2017exp}.}
\end{figure}
\section{Results}
\subsection*{Signal Strength}
The signal strength is defined as the production cross section of the Higgs boson times its branching fraction to four leptons relative to the standard model expectation. To extract the signal strength modifier we perform a multi-dimensional fit that relies on two variables in all the analysis categories: the four-lepton invariant mass $m_{4\ell}$ and the $\mathcal{D}^{\text{kin}}_{\text{bkg}}$ discriminant. We define the two-dimensional likelihood function as:
\begin{equation}
\mathcal{L}_{2D}(m_{4\ell},\mathcal{D}^\text{kin}_\text{bkg}) = \mathcal{L}(m_{4\ell}) \mathcal{L}(\mathcal{D}^\text{kin}_\text{bkg}|m_{4\ell}) .
\end{equation}
Figure \ref{fig:figure3} shows the results.
\begin{figure}[ht]
\centering
\begin{tabular}{cc}
\includegraphics[height=2in]{Figure_008-a.pdf} &
\includegraphics[height=2in]{Figure_008-c.pdf} \\
\end{tabular}
\caption{ \label{fig:figure3} Observed values of the signal strength for the seven event categories, compared to the combined $\mu$ shown as a vertical line (left), and result of the 2D likelihood scan for the $\mu_{F}$ and $\mu_{V}$ signal strength modifiers, where the solid and dashed contours show the 68\% and 95\% CL regions, respectively (right). From Ref.~\cite{Sirunyan:2017exp}.}
\end{figure}
\subsection*{Fiducial Cross Section}
The measurement of the cross section for the production and decay pp $\rightarrow$ H $\rightarrow$ 4$\ell$ within a fiducial volume defined to match closely the reconstruction-level selection is presented. This measurement has minimal dependence on the assumptions about the relative fractions or kinematic distributions of the separate production modes. A maximum likelihood fit of the signal and background parameterizations to the observed $4\ell$ mass distribution is performed to extract the integrated fiducial cross section, without categorization or the use of discriminants. Figure~\ref{fig:figurexs} shows the results for the fiducial cross section as a function of the center-of-mass energy.
\begin{figure}[ht]
\centering
\includegraphics[height=2.4in]{Figure_010-a}
\caption{ \label{fig:figurexs} The measured fiducial cross section as a function of the center-of-mass energy. From Ref.~\cite{Sirunyan:2017exp}.}
\end{figure}
\subsection*{Mass and Width}
The measurement of the mass of the Higgs boson exploits additional information
from per-event relative mass uncertainties $\mathcal{D}_\text{mass}$, which are defined by propagating per-lepton momentum errors to the $4\ell$ candidate. Using this variable brings an expected improvement of about 8$\%$ to the uncertainty of the mass measurement. Figure~\ref{fig:figureM} (left) shows the results of the 1D, 2D and 3D likelihood scans.
A measurement of the width is performed using on-shell Higgs boson production. An unbinned maximum likelihood fit to the $m_{4\ell}$ distribution is performed over the range of selected events. The strengths of the fermion-induced and vector-boson-induced couplings are independent and are left unconstrained in the fit. For such a large width, interference between the signal and background production of the $m_{4\ell}$ final state becomes important and is taken into account in the analysis. Results are shown in Fig.~\ref{fig:figureM} (right).
\begin{figure}[ht]
\centering
\begin{tabular}{cc}
\includegraphics[height=2in]{Figure_011-a.pdf} &
\includegraphics[height=2in]{Figure_012-b.pdf} \\
\end{tabular}
\caption{ \label{fig:figureM} Left: likelihood scans as a function of mass for the 1D, 2D, and 3D measurements; the scans are shown for the mass measurement using the refitted mass distribution with the $m(\mathrm{Z}_1)$ constraint. Right: observed and expected likelihood scans of $\Gamma_{H}$ using the signal range $105 < m_{4\ell} < 140$~GeV, with $m_{H}$ floated. From Ref.~\cite{Sirunyan:2017exp}.}
\end{figure}
\section{Conclusions}
Several measurements of Higgs boson production in the four-lepton final state at $\sqrt{s} = 13$~TeV have been presented, using data samples corresponding to an integrated luminosity of 35.9~fb$^{-1}$. All results are consistent, within their uncertainties, with the expectations for the SM Higgs boson. The detailed analysis is described in Ref.~\cite{Sirunyan:2017exp}.
\section{Introduction}\label{Sec 1}
Throughout the paper, we use $(a_1,\ldots,a_k)$ to denote the greatest common divisor (gcd) of the integers $a_1,\ldots,a_k$, and write $\langle a_1,\ldots,a_k\rangle$ for an ordered $k$-tuple of integers. Let $a_1,\ldots,a_k,b,n\in \mathbb{Z}$, $n\geq 1$. A linear congruence in $k$ unknowns $x_1,\ldots,x_k$ is of the form
\begin{align} \label{cong form}
a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}.
\end{align}
By a solution of (\ref{cong form}), we mean an $\mathbf{x}=\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_n^k$ that satisfies (\ref{cong form}). The following result, proved by D. N. Lehmer \cite{LEH2}, gives the number of solutions of the above linear congruence:
\begin{proposition}\label{Prop: lin cong}
Let $a_1,\ldots,a_k,b,n\in \mathbb{Z}$, $n\geq 1$. The linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$ has a solution $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ if and only if $\ell \mid b$, where
$\ell=(a_1, \ldots, a_k, n)$. Furthermore, if this condition is satisfied, then there are $\ell n^{k-1}$ solutions.
\end{proposition}
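The proposition is easy to check numerically. The following brute-force sketch (in Python, with an example of our own choosing) compares the exhaustive count with the closed form $\ell n^{k-1}$:
\begin{verbatim}
from itertools import product
from functools import reduce
from math import gcd

def count_solutions(a, b, n):
    """Exhaustively count solutions of a1*x1+...+ak*xk = b (mod n)."""
    return sum((sum(ai * xi for ai, xi in zip(a, x)) - b) % n == 0
               for x in product(range(n), repeat=len(a)))

a, b, n = (2, 4, 6), 4, 8
l = reduce(gcd, a + (n,))             # gcd(a1, ..., ak, n) = 2
print(count_solutions(a, b, n),
      l * n ** (len(a) - 1) if b % l == 0 else 0)   # 128 128
\end{verbatim}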
Counting the number of solutions of the above congruence with some restrictions on the solutions is also a problem of great interest. As an important example, one can mention the restrictions $(x_i,n)=t_i$ ($1\leq i\leq k$), where $t_1,\ldots,t_k$ are given positive divisors of $n$. The number of solutions of the linear congruences with the above restrictions, which were called {\it restricted linear congruences} in \cite{BKSTT}, was first considered by Rademacher \cite{Rad1925} in 1925 and Brauer \cite{Bra1926} in 1926, in the special case of $a_i=t_i=1$ $(1\leq i \leq k)$, and they proved the following nice formula for the number $N_n(k,b)$ of such solutions:
\begin{align*}
N_n(k,b)= \frac{\varphi(n)^k}{n} \mathlarger{\prod}_{p\, \mid \, n, \,
p\, \mid\, b} \!\left(1-\frac{(-1)^{k-1}}{(p-1)^{k-1}}
\right)\mathlarger{\prod}_{p\, \mid\, n, \, p \, \nmid \, b}
\!\left(1-\frac{(-1)^k}{(p-1)^k}\right),
\end{align*}
where $\varphi(n)$ is Euler's totient function and the products are taken over all prime divisors $p$ of $n$. Since then, this problem has been studied, in several other special cases, in many papers (very recently, it was studied in its `most general case' in \cite{BKSTT}) and has found very interesting applications in number theory, combinatorics, geometry, computer science, cryptography etc.; see \cite{BKS2, BKSTT3, BKSTT, BKSTT2, COH0, JAWILL} for a detailed discussion about this problem and a comprehensive list of references. Another restriction of potential interest is imposing the condition that all $x_i$ are {\it distinct}. Unlike the first problem, there seems to be very little published on the second problem. Recently, Grynkiewicz et al. \cite{GPP}, using tools from additive combinatorics and group theory, proved necessary and sufficient conditions under which the linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$, where $a_1,\ldots,a_k,b,n$ ($n\geq 1$) are arbitrary integers, has a solution $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ with all $x_i$ distinct; see also \cite{ADP, GPP} for connections to zero-sum theory and \cite{BKS7} for connections to coding theory. So, it would be an interesting problem to give an explicit formula for the number of such solutions. Quite surprisingly, this problem was first considered, in a special case, by Sch\"{o}nemann \cite{SCH} almost two centuries ago(!) but his result seems to have been forgotten. Sch\"{o}nemann \cite{SCH} proved the following result:
\begin{theorem} \label{Schonemann thm}
Let $p$ be a prime, $a_1,\ldots,a_k$ be arbitrary integers, and $\sum_{i=1}^k a_i \equiv 0 \pmod{p}$ but $\sum_{i \in I} a_i \not\equiv 0 \pmod{p}$ for all $\emptyset \not= I\varsubsetneq \lbrace 1, \ldots, k\rbrace$. The number $N_p(k)$ of solutions $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{p}^k$ of the linear congruence $a_1x_1+\cdots +a_kx_k\equiv 0 \pmod{p}$, with all $x_i$ distinct, is independent of the coefficients $a_1,\ldots,a_k$ and is equal to
$$
N_p(k)=(-1)^{k-1}(k-1)!(p-1)+(p-1)\cdots(p-k+1).
$$
\end{theorem}
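As a sanity check (an example of our own, not from \cite{SCH}), take $p=7$ and $a=(1,2,4)$: the total sum is $7 \equiv 0 \pmod{7}$, while every proper nonempty subset sum ($1,2,3,4,5,6$) is nonzero modulo $7$, so the hypothesis holds and the theorem predicts $2!\cdot 6 + 6\cdot 5 = 42$ solutions. A brute-force sketch in Python confirms this:
\begin{verbatim}
from itertools import product

def distinct_count_mod_p(a, p):
    """Count solutions of a1*x1+...+ak*xk = 0 (mod p), all xi distinct."""
    k = len(a)
    return sum(sum(ai * xi for ai, xi in zip(a, x)) % p == 0
               for x in product(range(p), repeat=k)
               if len(set(x)) == k)

print(distinct_count_mod_p((1, 2, 4), 7))   # 42
\end{verbatim}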
In this paper, we generalize Sch\"{o}nemann's theorem using Proposition~\ref{Prop: lin cong} and a result on graph enumeration. This seems to be a rather uncommon method in the area; besides, our proof technique or its modifications may be useful for dealing with other cases of this problem (or even the general case) or other relevant problems. We state and prove our main result in the next section.
\section{Main Result}\label{Sec 2}
Our generalization of Sch\"{o}nemann's theorem is obtained via a graph theoretic method which may be also of independent interest. We need two formulas on graph enumeration (see Theorem~\ref{graph enum} below). These formulas are in terms of the {\it deformed exponential function} which is a special case of the {\it three variable Rogers-Ramanujan function} defined below. These functions have interesting applications in combinatorics, complex analysis, functional differential equations, and statistical mechanics (see \cite{ACH, KOSH, LANG, LIU, SCSO, SOKA} and the references therein).
\begin{definition}\label{Rog-Ram func}
The {\it three variable Rogers-Ramanujan function} is
$$
R(\alpha,\beta,q)=\sum_{m\geq0}\frac{\alpha^{m}\beta^{\binom{m}{2}}}{(1+q)(1+q+q^2)\cdots(1+q+\cdots+q^{m-1})}.
$$
Also, the {\it deformed exponential function} is
$$
F(\alpha,\beta)=R(\alpha,\beta,1)=\sum_{m\geq0}\frac{\alpha^{m}\beta^{\binom{m}{2}}}{m!}.
$$
\end{definition}
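For a quick numerical handle on these series (a truncated evaluation in Python; the truncation depth is arbitrary), note in particular the identity $F(z,0)=1+z$, which is used in the proof of the main theorem below: for $m\geq 2$ we have $\binom{m}{2}\geq 1$, so all terms beyond the first two vanish at $\beta=0$.
\begin{verbatim}
from math import comb, factorial

def F(alpha, beta, terms=25):
    """Truncated deformed exponential: sum over m of
    alpha^m * beta^binom(m,2) / m!, with 0^0 read as 1."""
    total = 0.0
    for m in range(terms):
        c = comb(m, 2)
        total += alpha ** m * (1.0 if c == 0 else beta ** c) / factorial(m)
    return total

print(F(0.3, 0.0))   # 1.3, i.e. F(z, 0) = 1 + z
\end{verbatim}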
Let $g(c,e,k)$ be the number of simple graphs with $c$ connected components, $e$ edges, and $k$ vertices labeled $1,\ldots,k$, and $g'(e,k)$ be the number of simple {\it connected} graphs with $e$ edges and $k$ labeled vertices. Suppose that
$$
G(t,y,z)=\sum_{c,e,k}g(c,e,k)t^{c}y^{e}\frac{z^k}{k!},
$$
and
$$
CG(y,z)=\sum_{e,k}g'(e,k)y^{e}\frac{z^k}{k!}.
$$
\begin{theorem} {\rm (\cite{ACH, STAN2})} \label{graph enum}
The generating functions for counting simple graphs and simple connected graphs satisfy, respectively,
$$
G(t,y,z)=F(z,1+y)^t,
$$
and
$$
CG(y,z)=\log F(z,1+y),
$$
where $F$ is the deformed exponential function defined above.
\end{theorem}
Now, we are ready to state and prove our main result:
\begin{theorem} \label{Gener Schonemann thm}
Let $a_1,\ldots,a_k,b,n$ $(n\geq 1)$ be arbitrary integers, and $(\sum_{i \in I} a_i, n)=1$ for all $\emptyset \not= I\varsubsetneq \lbrace 1, \ldots, k\rbrace$. The number $N_n(b;a_1,\ldots,a_k)$ of solutions $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ of the linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$, with all $x_i$ distinct, is
\begin{align*}
& N_n(b;a_1,\ldots,a_k)\\
&=\begin{dcases}
(-1)^{k}(k-1)!+(n-1)\cdots(n-k+1), & \text{if \ $(\sum_{i=1}^{k} a_i, n) \nmid b$}; \\
(-1)^{k-1}(k-1)!\left((\sum_{i=1}^{k} a_i, n)-1\right)+(n-1)\cdots(n-k+1), & \text{if \ $(\sum_{i=1}^{k} a_i, n) \mid b$}.
\end{dcases}
\end{align*}
\end{theorem}
\begin{proof}
Let $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ be a solution of the linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$. Note that our desired solutions are those for which none of the $\binom{k}{2}$ equalities $x_u = x_v$, $1\leq u<v \leq k$, holds. Let $T_k=\lbrace \lbrace u,v \rbrace : 1\leq u<v \leq k \rbrace$. By the inclusion-exclusion principle, the number of such solutions is
\begin{align} \label{iep}
N_n(b;a_1,\ldots,a_k)&=\sum_{e=0}^{\binom{k}{2}}(-1)^{e}\sum_{\substack{S \subseteq T_k \\ |S|=e}}N(S),
\end{align}
where $N(S)$ is the number of solutions of the linear congruence with $x_{\alpha} = x_{\beta}$ for $\lbrace \alpha,\beta \rbrace \in S$.
Now, we need to calculate
$$\sum_{\substack{S \subseteq T_k \\ |S|=e}}N(S).$$
In order to calculate $N(S)$, we construct the graph $G(S)$ on the vertices $1,\ldots,k$ with edge set $S$. In calculating $N(S)$ we note that all vertices $i$ in a connected component of $G(S)$ correspond to the same $x_i$ in the linear congruence (by the definition of $N(S)$), and so we can simplify the linear congruence by grouping the $x_i$ which are equal to each other. This procedure eventually gives a new linear congruence in which the coefficients are of the form $\sum_{i \in I} a_i$, where $\emptyset \not= I\subseteq \lbrace 1, \ldots, k\rbrace$, and the number of terms is equal to the number of connected components of $G(S)$. If $G(S)$ has $c>1$ connected components, then, since $(\sum_{i \in I} a_i, n)=1$ for all $\emptyset \not= I\varsubsetneq \lbrace 1, \ldots, k\rbrace$, by Proposition~\ref{Prop: lin cong} we have $N(S)=n^{c-1}$. Also, if $G(S)$ is connected, that is, $c=1$, then $N(S)$ is the number of solutions of the linear congruence $(\sum_{i=1}^{k}a_i)x\equiv b \pmod{n}$; so, by Proposition~\ref{Prop: lin cong}, $N(S)$ in this case, which we denote by $A$, is equal to $(\sum_{i=1}^{k} a_i, n)$ if $(\sum_{i=1}^{k} a_i, n) \mid b$, and is equal to zero otherwise. Recall that $g(c,e,k)$ denotes the number of simple graphs with $c$ connected components, $e$ edges, and $k$ vertices labeled $1,\ldots,k$, and $g'(e,k)$ the number of simple {\it connected} graphs with $e$ edges and $k$ labeled vertices. Now, recalling (\ref{iep}), we get
\begin{align*}
N_n(b;a_1,\ldots,a_k)&=\sum_{e=0}^{\binom{k}{2}}(-1)^{e}\left(Ag'(e,k)+\sum_{c=2}^{k}n^{c-1}g(c,e,k)\right)
\\
&=A\sum_{e=0}^{\binom{k}{2}}(-1)^{e}g'(e,k)+\frac{1}{n}\sum_{e=0}^{\binom{k}{2}}\sum_{c=2}^{k}(-1)^{e}n^{c}g(c,e,k)\\
&=(A-1)\sum_{e=0}^{\binom{k}{2}}(-1)^{e}g'(e,k)+\frac{1}{n}\sum_{e=0}^{\binom{k}{2}}\sum_{c=1}^{k}(-1)^{e}n^{c}g(c,e,k).
\end{align*}
Now, in order to evaluate the latter expression, we use the two formulas mentioned in Theorem~\ref{graph enum}. In fact, by Theorem~\ref{graph enum}, we have
$$
\sum_{e,k}(-1)^{e}g'(e,k)\frac{z^k}{k!}=\log F(z,0),
$$
and
$$
\sum_{c,e,k}(-1)^{e}n^{c}g(c,e,k)\frac{z^k}{k!}=F(z,0)^n,
$$
where $F$ is the deformed exponential function. Note that $F(z,0)=1+z$. Now, we have
$$
\sum_{e=0}^{\binom{k}{2}}(-1)^{e}g'(e,k)=\text{the coefficient of $\frac{z^k}{k!}$ in $\log(1+z)$, which is equal to $\frac{k!(-1)^{k+1}}{k}$},
$$
and
$$
\sum_{e=0}^{\binom{k}{2}}\sum_{c=1}^{k}(-1)^{e}n^{c}g(c,e,k)=\text{the coefficient of $\frac{z^k}{k!}$ in $(1+z)^{n}$, which is equal to $k!\binom{n}{k}$}.
$$
Consequently, the number $N_n(b;a_1,\ldots,a_k)$ of solutions $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ of the linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$, with all $x_i$ distinct, is
\begin{align*}
& N_n(b;a_1,\ldots,a_k)=\frac{(A-1)k!(-1)^{k+1}}{k}+\frac{k!\binom{n}{k}}{n}\\
&=\begin{dcases}
(-1)^{k}(k-1)!+(n-1)\cdots(n-k+1), & \text{if \ $(\sum_{i=1}^{k} a_i, n) \nmid b$}; \\
(-1)^{k-1}(k-1)!\left((\sum_{i=1}^{k} a_i, n)-1\right)+(n-1)\cdots(n-k+1), & \text{if \ $(\sum_{i=1}^{k} a_i, n) \mid b$}.
\end{dcases}
\end{align*}
\end{proof}
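The closed form can again be checked by exhaustive enumeration (a Python sketch with a toy example of our own): with $a=(1,1,1)$ and $n=9$, every proper nonempty subset sum is $1$ or $2$, both coprime to $9$, so the hypothesis of Theorem~\ref{Gener Schonemann thm} holds; here $(\sum_{i=1}^{k} a_i, n) = 3$.
\begin{verbatim}
from itertools import product
from math import gcd, factorial

def distinct_count(a, b, n):
    k = len(a)
    return sum(sum(ai * xi for ai, xi in zip(a, x)) % n == b % n
               for x in product(range(n), repeat=k)
               if len(set(x)) == k)

def closed_form(a, b, n):
    k, A = len(a), gcd(sum(a), n)
    falling = 1
    for j in range(1, k):
        falling *= n - j                     # (n-1)...(n-k+1)
    if b % A == 0:
        return (-1) ** (k - 1) * factorial(k - 1) * (A - 1) + falling
    return (-1) ** k * factorial(k - 1) + falling

for b in (0, 1):                             # prints 60 60 and 54 54
    print(distinct_count((1, 1, 1), b, 9), closed_form((1, 1, 1), b, 9))
\end{verbatim}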
\begin{rema}
Note that in Sch\"{o}nemann's theorem, $b$ is zero and $n$ is prime but in Theorem~\ref{Gener Schonemann thm}, both $b$ and $n$ are arbitrary.
\end{rema}
It would be an interesting problem to see if the technique presented in this paper can be modified so that it covers the problem in its full generality. So, we pose the following question.
\bigskip
\noindent{\textbf{Problem 1.}} Let $a_1,\ldots,a_k,b,n$ ($n\geq 1$) be arbitrary integers. Give an explicit formula for the number of solutions $\langle x_1,\ldots,x_k \rangle \in \mathbb{Z}_{n}^k$ of the linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$ with all $x_i$ distinct.
Such results would be interesting from several aspects. As we mentioned in the Introduction, the number of solutions of the linear congruence with the restrictions $(x_i,n)=t_i$ ($1\leq i\leq k$), where $t_1,\ldots,t_k$ are given positive divisors of $n$, has found very interesting applications in number theory, combinatorics, geometry, computer science, cryptography etc. Therefore, having an explicit formula for the number of solutions with all $x_i$ distinct may also lead to interesting applications in these or other directions. The problem may also have implications in zero-sum theory (see \cite{ADP, GPP}) and
in coding theory (see \cite{BKS7}).
\section*{Acknowledgements}
The authors are grateful to the anonymous referees for a careful reading of the paper and helpful comments.
\section{Introduction}
This document is a template for \LaTeXe. If you are reading a paper or
PDF version of this document, please download the electronic file
\texttt{ifacconf.tex}. You will also need the class file
\texttt{ifacconf.cls}. Both files are available on the IFAC web site.
Please stick to the format defined by the \texttt{ifacconf} class, and
do not change the margins or the general layout of the paper. It
is especially important that you do not put any running header/footer
or page number in the submitted paper.\footnote{
This is the default for the provided class file.}
Use \emph{italics} for emphasis; do not underline.
Page limits may vary from conference to conference. Please observe the
page limits of the event for which your paper is intended.
\section{Procedure for Paper Submission}
Next we see a few subsections.
\subsection{Review Stage}
For submission guidelines, follow instructions on paper submission
system as well as the event website.
Note that conferences impose strict page limits, so it will be better
for you to prepare your initial submission in the camera ready layout
so that you will have a good estimate for the paper
length. Additionally, the effort required for final submission will be
minimal.
\subsection{Equations}
Some words might be appropriate describing equation~(\ref{eq:sample}), if
we had but time and space enough.
\begin{equation} \label{eq:sample}
{{\partial F}\over {\partial t}} = D{{\partial^2 F}\over {\partial x^2}}.
\end{equation}
See \cite{Abl:56}, \cite{AbTaRu:54}, \cite{Keo:58} and \cite{Pow:85}.
\subsubsection{Example.} This equation goes far beyond the
celebrated theorem ascribed to the great Pythagoras by his followers.
\begin{thm}
The square of the length of the hypotenuse of a right triangle equals
the sum of the squares of the lengths of the other two sides.
\end{thm}
\begin{pf}
The square of the length of the hypotenuse of a right triangle equals the sum of the squares
of the lengths of the other two sides.
\end{pf}
Of course LaTeX manages equations through built-in macros. You may
wish to use the \texttt{amstex} package for enhanced math
capabilities.
\subsection{Figures}
To insert figures, use the \texttt{graphicx} package. Although other
graphics packages can also be used, \texttt{graphicx} is simpler to
use. See Fig.~\ref{fig:bifurcation} for an example.
\begin{figure}
\begin{center}
\includegraphics[width=8.4cm]{bifurcation}
\caption{Bifurcation: Plot of local maxima of $x$ with damping $a$ decreasing}
\label{fig:bifurcation}
\end{center}
\end{figure}
Figures must be centered, and have a caption at the bottom.
\subsection{Tables}
Tables must be centered and have a caption above them, numbered with
Arabic numerals. See table~\ref{tb:margins} for an example.
\begin{table}[hb]
\begin{center}
\caption{Margin settings}\label{tb:margins}
\begin{tabular}{cccc}
Page & Top & Bottom & Left/Right \\\hline
First & 3.5 & 2.5 & 1.5 \\
Rest & 2.5 & 2.5 & 1.5 \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Final Stage}
Authors are expected to mind the margins diligently. Papers need to
be stamped with event data and paginated for inclusion in the
proceedings. If your manuscript bleeds into margins, you will be
required to resubmit and delay the proceedings preparation in the
process.
\subsubsection{Page margins.} See table~\ref{tb:margins} for the
page margins specification. All dimensions are in \emph{centimeters}.
\subsection{PDF Creation}
All fonts must be embedded/subsetted in the PDF file. Use one of the
following tools to produce a good quality PDF file:
\subsubsection{PDFLaTeX} is a special version of LaTeX by Han The
Thanh which produces PDF output directly using Type-1 fonts instead of
the standard \texttt{dvi} file. It accepts figures in JPEG, PNG, and PDF
formats, but not PostScript. Encapsulated PostScript figures can be
converted to PDF with the \texttt{epstopdf} tool or with Adobe Acrobat
Distiller.
\subsubsection{Generating PDF from PostScript} is the classical way of
producing PDF files from LaTeX. The steps are:
\begin{enumerate}
\item Produce a \texttt{dvi} file by running \texttt{latex} twice.
\item Produce a PostScript (\texttt{ps}) file with \texttt{dvips}.
\item Produce a PDF file with \texttt{ps2pdf} or Adobe Acrobat
Distiller.
\end{enumerate}
\subsection{Copyright Form}
IFAC will put in place an electronic copyright transfer system in due
course. Please \emph{do not} send copyright forms by mail or fax. More
information on this will be made available on IFAC website.
\section{Units}
Use SI as primary units. Other units may be used as secondary units
(in parentheses). This applies to papers in data storage. For example,
write ``$15\,\mathrm{Gb}/\mathrm{cm}^2$ ($100\,\mathrm{Gb}/\mathrm{in}^2$)''.
An exception is when
English units are used as identifiers in trade, such as ``3.5 in
disk drive''. Avoid combining SI and other units, such as current in
amperes and magnetic field in oersteds. This often leads to confusion
because equations do not balance dimensionally. If you must use mixed
units, clearly state the units for each quantity in an equation. The
SI unit for magnetic field strength $\mathbf{H}$ is $\mathrm{A}/\mathrm{m}$. However, if you wish to
use units of $\mathrm{T}$, either refer to magnetic flux density $\mathbf{B}$ or
magnetic field strength symbolized as $\mu_0\,\mathbf{H}$. Use the center dot to
separate compound units, e.g., ``$\mathrm{A} \cdot \mathrm{m}^2$''.
\section{Helpful Hints}
\subsection{Figures and Tables}
Figure axis labels are often a source of confusion. Use words rather
than symbols. As an example, write the quantity ``Magnetization'', or
``Magnetization M'', not just ``M''. Put units in parentheses. Do not
label axes only with units. For example, write ``Magnetization
($\mathrm{A}/\mathrm{m}$)'' or ``Magnetization ($\mathrm{A} \mathrm{m}^{-1}$)'', not just
``$\mathrm{A}/\mathrm{m}$''. Do not
label axes with a ratio of quantities and units. For example, write
``Temperature ($\mathrm{K}$)'', not ``$\mbox{Temperature}/\mathrm{K}$''.
Multipliers can be especially confusing. Write ``Magnetization
($\mathrm{kA}/\mathrm{m}$)'' or ``Magnetization ($10^3 \mathrm{A}/\mathrm{m}$)''. Do not write
``Magnetization $(\mathrm{A}/\mathrm{m}) \times 1000$'' because the reader would not know
whether the axis label means $16000\,\mathrm{A}/\mathrm{m}$ or $0.016\,\mathrm{A}/\mathrm{m}$.
\subsection{References}
Use Harvard style references (see at the end of this document). With
\LaTeX, you can process an external bibliography database
using \texttt{bibtex},\footnote{In this case you will also need the \texttt{ifacconf.bst}
file, which is part of the \texttt{ifaconf} package.}
or insert it directly into the reference section. Footnotes should be avoided as
far as possible. Please note that the references at the end of this
document are in the preferred referencing style. Papers that have not
been published should be cited as ``unpublished''. Capitalize only the
first word in a paper title, except for proper nouns and element
symbols.
\subsection{Abbreviations and Acronyms}
Define abbreviations and acronyms the first time they are used in the
text, even after they have already been defined in the
abstract. Abbreviations such as IFAC, SI, ac, and dc do not have to be
defined. Abbreviations that incorporate periods should not have
spaces: write ``C.N.R.S.'', not ``C. N. R. S.'' Do not use abbreviations
in the title unless they are unavoidable (for example, ``IFAC'' in the
title of this article).
\subsection{Equations}
Number equations consecutively with equation numbers in parentheses
flush with the right margin, as in (\ref{eq:sample}). To make your equations more
compact, you may use the solidus ($/$), the $\exp$ function, or
appropriate exponents. Use parentheses to avoid ambiguities in
denominators. Punctuate equations when they are part of a sentence, as
in
\begin{equation} \label{eq:sample2}
\begin{array}{ll}
\int_0^{r_2} & F (r, \varphi ) dr d\varphi = [\sigma r_2 / (2 \mu_0 )] \\
& \cdot \int_0^{\infty} \exp(-\lambda |z_j - z_i |) \lambda^{-1} J_1 (\lambda r_2 ) J_0 (\lambda r_i ) d\lambda
\end{array}
\end{equation}
Be sure that the symbols in your equation have been defined before the
equation appears or immediately following. Italicize symbols ($T$
might refer to temperature, but T is the unit tesla). Refer to
``(\ref{eq:sample})'', not ``Eq. (\ref{eq:sample})'' or ``equation
(\ref{eq:sample})'', except at the beginning of a sentence: ``Equation
(\ref{eq:sample}) is \ldots''.
\subsection{Other Recommendations}
Use one space after periods and colons. Hyphenate complex modifiers:
``zero-field-cooled magnetization''. Avoid dangling participles, such
as, ``Using (1), the potential was calculated'' (it is not clear who or
what used (1)). Write instead: ``The potential was calculated by using
(1)'', or ``Using (1), we calculated the potential''.
A parenthetical statement at the end of a sentence is punctuated
outside of the closing parenthesis (like this). (A parenthetical
sentence is punctuated within the parentheses.) Avoid contractions;
for example, write ``do not'' instead of ``don't''. The serial comma
is preferred: ``A, B, and C'' instead of ``A, B and C''.
\section{Conclusion}
A conclusion section is not required. Although a conclusion may review
the main points of the paper, do not replicate the abstract as the
conclusion. A conclusion might elaborate on the importance of the work
or suggest applications and extensions.
\begin{ack}
Place acknowledgments here.
\end{ack}
\documentclass{ifacconf}
\usepackage{graphicx}
\usepackage{natbib}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{enumitem}
\usepackage{comment}
\usepackage{algorithmic}
\usepackage{textcomp}
\usepackage{xcolor}
\newcounter{example}[section]
\newenvironment{example}[1][]{\refstepcounter{example}\par\medskip
\noindent \textbf{Example~\theexample. #1} \rmfamily}{\medskip}
\DeclareMathOperator*{\argmaxA}{arg\,max}
\begin{document}
\begin{frontmatter}
\title{Neural Network Control for Spatio-Temporal Specifications \thanksref{footnoteinfo}}
\thanks[footnoteinfo]{This work was partially supported by the National Science Foundation under grants IIS-2024606 and IIS-1723995 at Boston University and by {\color{blue} Ezio fill this in} at TU Wien.}
\author[First]{Suhail Alsalehi}
\author[First]{Noushin Mehdipour}
\author[Second]{Ezio Bartocci}
\author[First]{Calin Belta}
\address[First]{Boston University,Boston, MA 02215 USA (e-mail: alsalehi, noushinm, [email protected]).}
\address[Second]{TU Wien, Vienna, Austria (e-mail: [email protected]).}
\begin{abstract}
We propose a framework to generate control sequences for dynamic networked and spatially distributed agents in order to satisfy a set of spatio-temporal requirements. Our approach uses the Spatio-Temporal Reach and Escape Logic (STREL) to specify the requirements. STREL extends the Signal Temporal Logic (STL) with spatial operators and is equipped with quantitative semantics (a robustness function) that allows us to transform the control problem into an optimization problem. However, the current quantitative semantics does not distinguish between different spatial configurations that satisfy the same property, which also limits the possibility of optimizing the spatial configuration.
To cope with this problem, we modify the original quantitative semantics to allow for smooth optimization and to maximize the robustness of satisfaction, as opposed to merely satisfying the minimum requirements. Furthermore, to obtain a real-time controller, we generate a dataset of satisfying trajectories and train a neural network model. The effectiveness of the framework is showcased with a simulated example.
\end{abstract}
\begin{keyword}
Networked Systems, Communication, Spatio-temporal Specifications, RNN
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{sec:intro}
From satellite constellations to smart cities and biological materials, networked multiagent systems are rising in popularity. We are often faced with the motion planning problem, which aims at steering the robotic agents in the system from an initial configuration to a final configuration while satisfying spatio-temporal requirements such as maintaining connectivity or avoiding obstacles. In robotic networked systems, connectivity between robots is often coupled with the motion performance that determines the spatial configuration of agents in the system. Commonly, connectivity between two robots is achieved within a communication range.
Previously, the problem of synthesizing control strategies for multi-agent systems that satisfy temporal specifications has been addressed in the literature (\cite{Belta2017-zs,Tabuada2009-yz,Raman2014,Sadraddini2015-fh}). Temporal logics such as Signal Temporal Logic (STL) \cite{STL2004} are equipped with quantitative semantics, known as robustness function, that measure how well the signal satisfies a given specification. The quantitative semantics allow for mapping the control synthesis problem to an optimization problem with the robustness as a cost function.
More recently, spatio-temporal logics have emerged as a formal way to capture spatial and temporal requirements. Some of these logics are equipped with quantitative semantics and, similar to STL, can be used to map control synthesis problems to optimization problems with the robustness as objective function. For example, \cite{Haghighi2016} and \cite{Liu2018} used the spatio-temporal logics SpaTeL (\cite{Haghighi2015}) and STREL (\cite{Bartocci2017}), respectively, to solve the problem of synthesizing control for robotic networked systems with spatio-temporal specifications.
In \cite{Haghighi2016}, SpaTeL was used to synthesize control for a swarm of robots by solving an optimization problem. SpaTeL evaluates satisfaction over the whole system and thus does not inform about satisfaction for individual agents. The TSSL logic used to capture the spatial requirements is based on quadtrees, and it becomes computationally expensive as the depth of the quadtrees increases. In addition, SpaTeL does not provide means to optimize the communication QoS between agents.
In \cite{Liu2018}, the quantitative semantics of STREL were used to solve the control problem while taking the communication QoS into account in the objective function. While STREL can capture the existence/nonexistence of a communication link between two agents, its robustness function defined in \cite{Bartocci2017} only considers agents within the communication range defined by the distance predicate and thus does not optimize the communication QoS. Moreover, if agents are beyond the communication range, then STREL cannot optimize the spatial configuration to improve connectivity.
In this work, we utilize the STREL logic to solve the control synthesis problem. We propose new STREL quantitative semantics that provide the means for optimizing the spatial configuration and improving the connectivity between agents. Our semantics use sigmoid functions that depend on the distance between agents and on the number of paths in the network that satisfy the specifications.
Several optimization approaches have been used in the literature to solve the control synthesis problem in the presence of spatial and/or temporal specifications. For example, Mixed Integer Programming (MIP) and Mixed Integer Linear Programming (MILP) encodings were used in \cite{Raman2014-qx,Haghighi2016,Liu2018}, and gradient-based methods in \cite{Pant2017,Haghighi2019-as,Varnai2020-eh,MehdipourAGM,Gilpin2021}. Formulating MIPs and MILPs is a complicated task, and their running times are unpredictable. Gradient-based methods are fast and simple but are prone to premature convergence to local optima.
The STREL semantics in \cite{Bartocci2017} are non-differentiable due to the min/max functions. Thus, gradient-based methods cannot be used directly for optimizing the robustness. Our proposed semantics use a smooth approximation of the min/max functions \cite{Pant2017,Li2018} to allow for the use of gradient-based methods. To avoid premature convergence, we propose optimizing the robustness in two stages. In the first stage, we explore the search space by means of a heuristic algorithm and find a good initialization point. In the second stage, starting from this initialization, we run a gradient-based algorithm, which has a fast convergence rate. We show that this approach provides better exploration and faster convergence compared to the use of gradient-based optimization alone.
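For concreteness, one standard smoothing of this kind is the log-sum-exp approximation sketched below (in Python; the exact smooth semantics we adopt are defined in Sec.~\ref{sec:newstrel}, and the scale parameter $k$ here is purely illustrative):
\begin{verbatim}
import numpy as np

def soft_max(x, k=10.0):
    """Smooth approximation of max(x): (1/k) log sum exp(k*x).
    Differentiable everywhere; tends to max(x) as k grows."""
    x = np.asarray(x, dtype=float)
    m = x.max()                     # shift for numerical stability
    return m + np.log(np.exp(k * (x - m)).sum()) / k

def soft_min(x, k=10.0):
    return -soft_max(-np.asarray(x, dtype=float), k)

print(soft_max([0.2, 1.0, -0.5]), soft_min([0.2, 1.0, -0.5]))
\end{verbatim}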
In general, optimizing the robustness can be computationally expensive. In the case of MIPs and MILPs, small changes in the specification can radically alter the running time. This makes the optimization unlikely to meet real-time requirements in practice. Moreover, the optimization may converge to local optima, which might not satisfy the specification. In this work, we outline our approach to real-time control using Recurrent Neural Networks (RNNs). The RNN learns a controller from a dataset of samples containing agent state trajectories and control sequences that satisfy the spatio-temporal requirements. The samples in the dataset are generated by solving the control synthesis problem using optimization as described above. This method has shown success in several studies before (cf. \cite{liu2020recurrent,yaghoubi2020training}). In \cite{liu2020recurrent}, the authors showed that, by training a feedforward RNN using a dataset of about 500 points, the model learns a feedback controller that satisfies STL formulae. The controller we need to learn for our control problem, which must satisfy a STREL formula, is more complex, and thus we use a more complex RNN architecture and a higher number of training points. We choose the RNN architecture and train it with system states as input and controls as output. Once trained, the RNN controller predicts in real time the control policy at each state based on the current state and the history of the system.
We demonstrate our approach to solving the control synthesis problem in a case study. The goal is to control robotic agents in a multiagent system to move from an initial configuration to a final configuration while satisfying a spatio-temporal specification given by a STREL formula.
\textbf{The main contributions of this work can be summarized as follows}:
\begin{enumerate}
\item We propose new sound quantitative semantics for the Spatial Temporal Reach and Escape Logic (STREL). Our proposed semantics allow for optimizing the spatial satisfaction and improving the connectivity in robotic networked systems.
\item Given a system of $N$ networked robotic agents, we propose a new optimization approach for solving the control synthesis problem. Our approach uses the Particle Swarm Optimization (PSO) algorithm to find a good initialization point for the gradient descent (GD) algorithm. This approach achieves a higher satisfaction rate compared to the use of gradient descent with random initialization.
\item We provide Recurrent Neural Network-based real-time controllers for multiagent networked systems with spatio-temporal requirements specified in STREL.
\end{enumerate}
The rest of this paper is organized as follows. In Sec.~\ref{sec:prelim} we provide the preliminaries. In Sec.~\ref{sec:problem}, we state the problem formulation. Then, in Sec.~\ref{sec:newstrel}, we introduce the proposed STREL quantitative semantics. We discuss our proposed optimization approach and compare it to the existing approaches in Sec.~\ref{sec:solOPT}. Choosing the architecture of the RNN and learning the real-time controller is covered in Sec.~\ref{sec:learning_control}. In Sec.~\ref{sec:CS}, we showcase the framework in a case study and discuss the results. Finally, we conclude by pointing out directions for future research.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Agent Dynamics and Communication Conditions}
\label{sec:prelimDynCom}
Consider a system of $N$ robotic agents labeled in the set $Sys = \{1,2,...,N\}$ in a bounded two dimensional Euclidean space $\mathcal{W}$. The state of an agent $i \in Sys$, at time $t$ is denoted by $x_i{(t)}$. We consider the following discrete-time dynamics for agent $i$:
\begin{equation}
\label{dyn}
x_{i}{(t_{k+1})} = f(x_{i}(t_k),u_i(t_k))
\end{equation}
where $x_i(t_k)\in \mathcal{W} \subset \mathbb{R}^n$ and $u_i(t_k) \in \mathcal{U} \subset \mathbb{R}^m$ and $\mathcal{U}$ is the set of admissible controls.
Given the space $\mathcal{W}$ as described above, two agents $i$ and $j$ are connected at time $t_k$ (we write $c{(t_k)}_{i,j} =1$) if the Euclidean distance between agents $i$ and $j$ is less than a fixed communication range $r$. Formally, $d(x_i{(t_k)}, x_j{(t_k)}) < r$.
Such conditions are common in mobile robotic networks, where there is a specific communication range and agents are restricted to their Voronoi neighbors to minimize the number of connections.
Now consider the state $x_i{(t_k)}$ and a control sequence $u_i \in \mathcal{U}^{\omega}$. A run $\mathbf{x}_i = x_i{(t_k)}x_i{(t_{k+1})}x_i{(t_{k+2})} \ldots \in (\mathbb{R}^2 \times \mathbb{B}^{2N})^{\omega}$ generated by agent $i$ with dynamics (\ref{dyn}) is an infinite sequence obtained from the state trajectory of agent $i$. Given a finite planning horizon $T \in \mathbb{N}$, the state $x_i{(t_k)}$ of agent $i$, the control sequence $u_{i,T}=u_i{(t_k)},u_i{(t_{k+1})},\ldots,u_i{(t_{k+T})}$ and the dynamics (\ref{dyn}), we can produce the unique horizon-$T$ run of agent $i$ as $\mathbf{x}_{i,T} (x_i{(t_k)},u_{i,T}) = x_i{(t_k)}x_i{(t_{k+1})} \ldots x_i{(t_{k+T})}$. In addition, $\lambda_T$ denotes the set of all communication graphs over the planning horizon $T$.
We denote the state of the system (all agents) at time $t$ by $X{(t)}$, and the state of the system over the planning horizon $T$ by $X$. We use the same notational approach for the positions, communication link information and controls of all agents at time $t$, denoted $Q{(t)},C{(t)},U{(t)}$, and over the planning horizon $T$, denoted $Q_T,C_T,U_T$, respectively. We assume that the types of the agents do not change over time (i.e., $ p_i{(t)} = p_i \; \forall t \in [0,T]$). Thus, the matrix storing the types of all agents satisfies $P(t)=P$.
\subsection{Spatio-Temporal Reach and Escape Logic (STREL)}
\label{sec:prelim-strel}
STREL was first introduced in \cite{Bartocci2017}. It is a logic capable of describing complex behaviors in mobile, spatially distributed multi-agent systems. STREL can capture and monitor spatio-temporal specifications of spatially distributed multiagent systems. Its formulae are interpreted for individual agents, as opposed to other spatio-temporal logics such as SpaTeL \cite{Haghighi2015}. In other words, a formula might be satisfied for agent $i$ but violated for agent $j$. The satisfaction of the formula for the whole system can simply be set to the minimum of the robustness scores of the individual agents. STREL is equipped with qualitative and quantitative semantics ranging over Boolean and real values, respectively.
STREL has a spatial model that is captured using a graph $G = (V, E)$. The position of agent $i \in Sys$ is treated as a vertex $v_i \in V$. The connection between agents $i$ and $j$ is represented by the weighted edge $e_{i,j} \in E$. The weight of the edge $e_{i,j}$ maps the connection between agents $i,j$ into a certain domain. For instance, $E:V\times V \rightarrow \mathbb{B}$ (on-off connection) or $E:V\times V \rightarrow \mathbb{R}$ (Euclidean distance).
Let $V$ be the set of all nodes. A \emph{spatial temporal trace} $X$ is a function defined for all $v_i \in V \text{ and } t \in \mathbb{T} = [0,T]$ as follows:
$$ X : V \rightarrow \mathbb{T} \rightarrow D^N$$
where $D^N$ represents the domain of the trace (for example Boolean $\mathbb{B}$ or real $\mathbb{R}$). To obtain the spatial topology over time, a \emph{location service} is defined as a function $\lambda : \mathbb{T} \rightarrow G$, which returns a spatial model $G$ for each $t \in [0, T]$. The example below illustrates the concepts of the spatial model $G$, the spatial temporal trace $X$ and the location service function $\lambda$.
\begin{example} \label{ex:example1}
Consider the team of 7 networked robotic agents as depicted in Fig.~\ref{fig:example1}. Given the positions of all nodes $Q{(t_k)}= [q_1{(t_k)}^T q_2{(t_k)}^T \ldots q_7{(t_k)}^T], \: q_i(t_k) \in \mathbb{R}^2$, at time $t_k$ and the communication conditions, the location service function $\lambda$ generates the graph $G$ shown in Fig.~\ref{fig:example1}. The edges of graph $G$ can be captured with the adjacency matrix $C{(t_k)}$ with $c_{1,3}=c_{3,4}=c_{3,2}=c_{3,6}=c_{2,5}=c_{6,5}=1$ and all other elements equal to zero. There are three types of agents, namely $red$, $black$ and $blue$. The type information can be one-hot encoded as follows:
\begin{align}
p_i{(t_k)}=&
\begin{cases}
[0 \; 0 \; 1] & \text{if } p_i \text{ is } red \\
[0 \; 1 \; 0] & \text{if } p_i \text{ is } black\\
[1 \; 0 \; 0] & \text{if } p_i \text{ is } blue\\
\end{cases}
\end{align}
The spatial temporal trace of agent $i \in \{1,\ldots,7\}$ is given by
\begin{equation}
x_i{(t_k)} = [q_{i}{(t_k)}^T, c_i{(t_k)}^T, p_i{(t_k)}^T ]^T
\end{equation}
\end{example}
\begin{figure}[hbt!]
\centering
\includegraphics[width=.5\linewidth]{figures/STREL_example.png}
\caption{A team of 7 networked robotic agents}
\label{fig:example1}
\end{figure}
Given the graph based spatial model and the definition of spatial temporal trace mentioned above, STREL is defined by extending STL with the spatial operators, $reach$ and
$escape$. The syntax of STREL is given by:
$$
\psi := \mu \; |\;\neg \phi \; | \; \phi_1 \vee \phi_2 \; | \; \phi_1 U_{[a,b]} \phi_2 \; | \; \phi_1 \mathcal{R}_{\leq d}^f \phi_2 \; | \; \mathcal{E}^f_{>d} \phi
$$
where $\mu$ is an Atomic Predicate (AP), and negation $\neg$ and disjunction $\vee$ are the classical Boolean operators. Additionally, the Boolean operator conjunction $\wedge$ can be derived as $a \wedge b = \neg (\neg a \vee \neg b)$. The $until$ temporal operator is denoted by $\mathcal{U}_{[a,b]}$, with $[a,b]$ a positive closed time interval. Additional temporal operators can be derived as well. For example, the $eventually$ operator $F$ can be defined as $ F_{[a,b]} \phi = true \: U_{[a,b]} \phi$. Operator $always$ $\mathcal{G}$ can be derived from $eventually$ as $ \mathcal{G}_{[a,b]} \phi = \neg \: F_{[a,b]} \neg \phi$. The spatial operators "reach" and "escape" are represented by $\phi_1 \mathcal{R}_{\leq d}^f \phi_2$ and $\mathcal{E}^f_{>d} \phi$ respectively, where $f$ is a distance function such as the Euclidean distance or the number of hops/edges. $f\leq d$ and $f> d$ are called \emph{distance predicates}.
Next, we present the \emph{STREL qualitative semantics} with respect to the spatio-temporal trace $x_l{(t_k)} \in D_x $. If the formula $\phi$ holds true for the spatio-temporal trace of agent $l$ at time $t_k$, we write $x_l{(t_k)} \models \phi $. We will also use the \emph{signal interpretation function} $\iota$, which maps an atomic proposition (AP) and a spatio-temporal trace $x_l(t_k) \in D_x$ to the Boolean or real domain $D$. Formally,
$$\iota : AP \times D_x \rightarrow D$$
The STREL formula $\phi$ with respect to the signal $x_l{(t_k)}$ of agent $l$ at time $t_k$ is defined as follows:
\begin{align*}
&\begin{aligned}
x_l{(t_k)} &\models \mu \Leftrightarrow \iota (\mu,x_l{(t_k)})>0 \\
x_l{(t_k)} &\models \neg \phi \Leftrightarrow \neg(x_l{(t_k)} \models \phi) \\
x_l{(t_k)} &\models \phi_1 \vee \phi_2 \Leftrightarrow x_l{(t_k)} \models \phi_1 \vee x_l{(t_k)} \models \phi_2 \\
\end{aligned}\\
&\begin{aligned}
x_l{(t_k)} &\models \phi_1 U_{[a,b]} \phi_2 \Leftrightarrow \exists_{ t_{k'}\in [t_k+a,t_k +b]} \\
\text{ s.t. } & x_l{(t_{k'})} \models \phi_2 \text{ and }\forall_{ t_{k''} \in [t_k,t_{k'}]} x_l{(t_{k''})} \models \phi_1 \\
\end{aligned}\\
&\begin{aligned}
x_l{(t_k)} &\models \phi_1 \mathcal{R}^f_{\leq d} \phi_2 \Leftrightarrow \exists_{ \tau \in Routes(\lambda{(t_k)},l)} \\
&\exists_{ l' \in \tau : f(l,l') \leq d} \text{ s.t. } x_{l'}{(t_k)} \models \phi_2 \text{ and } \\
&\bigwedge_{ j< \tau(l')} x_{\tau[j]}{(t_k)} \models \phi_1 \\
\end{aligned}\\
&\begin{aligned}
x_l{(t_k)} &\models \mathcal{E}^f_{> d} \phi \Leftrightarrow \exists_{ \tau \in Routes(\lambda{(t_k)},l)} \exists_{ l'\in\tau:f(l,l')>d} \\
& \text{ s.t } \bigwedge_{j < \tau(l')} x_{\tau[j]}{(t_k)} \models \phi
\end{aligned}\\
\end{align*}
where $\lambda{(t_k)}$ is the service location function at time $t_k$, $Routes(\lambda{(t_k)}, l)$ denotes an indexed sequence on the graph generated by the service function $\lambda{(t_k)}$ starting at node $l$, and $f(l,l')$ is the distance function between the nodes $l$ and $l'$.
To explain the intuition behind the spatial operators, consider the system of $7$ networked robotic agents in Example \ref{ex:example1}, and consider the distance function $hops(l,l')$, which is the length of the shortest path (number of edges) between agents $l,l'$. We describe the STREL spatial operators as follows:
\begin{itemize}
\item{$reach$:} $\phi_1 \mathcal{R}^{f}_{\leq d} \phi_2$ is satisfied by $(\lambda{(t_k)},x_l{(t_k)})$ iff $\phi_2$ is satisfied at an agent $l'$ reachable from $l$ through a continuous route $\tau$, the length of $\tau$ satisfies the distance predicate $f(l,l')\leq d$, and $\phi_1$ is satisfied at $l$ and all other agents in $\tau$. For instance, $black \: \mathcal{R}^{hops}_{\leq 1} red$ is satisfied at agent 2 because agent 3 is $red$ with a distance of at most 1 hop from agent 2, and agent 2 is $black$. However, it is violated at agent 5 because agent 5 is not $black$.
\item{$escape$:} $\mathcal{E}^{f}_{> d} \phi$ is satisfied by $(\lambda{(t_k)},x_l{(t_k)})$ iff there exists a continuous route $\tau$ from agent $l$ with a length that satisfies the distance predicate $f(l,l')>d$ in which $\phi$ is satisfied at all agents preceding $l'$. For instance, $\mathcal{E}^{hops}_{> 2} \neg red$ is satisfied at agent 5 because the agents preceding agent 3 in the route $\{5,6,3\}$ are not $red$. However, it is violated at agent $6$ because there is no route with $f(6,l')>2$ that satisfies the property.
\item{$surround$:} $\phi_1 \odot^{dist}_{\leq r} \phi_2 := \phi_1 \wedge \neg \left(\phi_1 \mathcal{R}^{dist}_{\leq r} \neg (\phi_1 \lor \phi_2)\right) \wedge \mathcal{E}^{dist}_{> r} \phi_1 \wedge \phi_1 \mathcal{R}^{dist}_{\leq r} \phi_2$ expresses the topological notion of being surrounded by a $\phi_2$-region, while being in a $\phi_1$-region, with an additional metric constraint. For example, $blue \odot^{dist}_{\leq 2} red$ is satisfied at agent 1.
\end{itemize}
We note that for the operator $surround$, we added the term ($\wedge \: \phi_1 \mathcal{R}^{dist}_{\leq r} \phi_2$), which was not in the original definition in \cite{Bartocci2017}, to avoid false satisfaction in the case of isolated agents. For example, the $surround$ defined in \cite{Bartocci2017} gives satisfaction at agent 7 for the property $blue \odot^{dist}_{\leq 2} red$, which is not correct. Thus, we additionally demand that the agent satisfy $blue \: \mathcal{R}^{dist}_{\leq r} red$. Other spatial operators such as $everywhere$ and $somewhere$ can also be derived from $reach$ and $escape$ and are not presented here for brevity.
In addition to the STREL syntax and qualitative semantics, quantitative semantics (a robustness function) for a STREL formula $\phi$ can be defined by assigning a real-valued measure $\rho (\lambda,X,\phi,t,l)$ to the spatial temporal trace $x_l{(t_k)}$ at time $t_k$ such that $x_l{(t_k)} \models \phi \Leftrightarrow \rho (\lambda,X,\phi,t,l) >0 $. The robustness of a STREL formula is provided by the \emph{robustness functions} below.
\begin{align*}
&\begin{aligned}
\rho (\lambda,X,\mu,t,l) & = \iota(\mu,x_l{(t)})\\
\rho (\lambda,X,\neg \phi,t,l) & = - \rho (\lambda,X,\phi,t,l)\\
\rho (\lambda,X,\phi_1 \vee \phi_2 ,t,l) & = \max (\rho (\lambda,X,\phi_1,t,l),\rho (\lambda,X,\phi_2,t,l))\\
\end{aligned}\\
&\begin{aligned}
\rho (\lambda,X,\phi_1 U_{[a,b]} \phi_2,t,l) & = \qquad \\
\max_{t'\in [t+a,t+b]} \min\big( \rho (\lambda,X,&\phi_2,t',l), \min_{t''\in [t,t']} \rho (\lambda,X,\phi_1,t'',l)\big)\\
\end{aligned}\\
&\begin{aligned}
\rho (\lambda,X,\phi_1 \mathcal{R}_{\leq d}^f \phi_2 ,t,l) & =\max_{\tau \in Routes}\max_{l'\in \tau:f(l,l')\leq d} \\
\bigg[\min(\rho(\lambda,X,&\phi_2,t,l');\min_{j<\tau(l')}\rho(\lambda,X,\phi_1,t,\tau[j]))\bigg]\\
\end{aligned}\\
&\begin{aligned}
\rho (\lambda,X,\mathcal{E}^f_{>d} \phi,t,l) & =\\
\max_{\tau \in Routes}&\max_{l'\in \tau:f(l,l')> d} \min_{j<\tau\left(l'\right)}\rho\left(\lambda,X,\phi,t,\tau[j]\right)\\
\end{aligned}
\end{align*}
\subsection{Smooth approximation of the max/min functions}
\label{sec:prelim-smooth}
STREL robustness functions are nondifferentiable. This stems from the presence of the max and min functions in the semantics. The following smooth approximations can be used to replace those terms with differentiable ones \cite{Pant2017}.
The smooth approximations of the maximum and minimum functions are given by:
\begin{equation}
\begin{array}{c}
{\max_{soft}}(a_1,\ldots,a_n) \approx
\log \big (\sum_{i=1}^n \exp(a_i) \big),\\
{\min_{soft}}(a_1,\ldots,a_n) \approx -{\max_{soft}}(-a_1,\ldots,-a_n)
\end{array}
\end{equation}
We will use a similar approximation in the proposed robustness functions for STREL in Sec.\ref{sec:newstrel}.
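For concreteness, a minimal NumPy sketch of these approximations is given below; the scaling factor \texttt{k}, which trades smoothness against approximation error, is our illustrative addition and not part of the equations above.
\begin{verbatim}
import numpy as np

def soft_max(a, k=1.0):
    # log-sum-exp approximation of max; larger k tightens the bound
    a = np.asarray(a, dtype=float)
    return np.log(np.sum(np.exp(k * a))) / k

def soft_min(a, k=1.0):
    # soft min via the identity min(a) = -max(-a)
    return -soft_max(-np.asarray(a, dtype=float), k)

# soft_max([1.0, 2.0, 3.0], k=10.0) is close to 3.0
\end{verbatim}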
\section{Problem formulation}
\label{sec:problem}
\emph{Given} a system of $N$ robotic agents $Sys =\{1,\ldots,N\}$ with dynamics (\ref{dyn}), a 2D Euclidean space $\mathcal{W}$, a STREL property $\Psi$, communication conditions, an initial communication graph $\lambda (t_0)$, the initial state of the system $X{(t_0)}$ and a planning horizon $T \in \mathbb{N}$, we want to \emph{determine} the optimal control sequence $U^*_{t_0:t_T} = \{u_l{(t_k)}\,|\,l = 1,\ldots,N;\,t_k \in [t_0,t_T]\}$ such that the resulting trajectory $Q= \{q_l(t_k)\,|\,l = 1,\ldots,N;\,t_k \in [t_0,t_T]\}$ satisfies the STREL formula $\Psi$.
This problem can be formulated as an optimization problem with the robustness of the STREL formula as the objective function. The problem is subject to the dynamics of the agents (\ref{dyn}) and the admissible controls $\mathcal{U}$. Formally, we present \emph{Problem 1 (Pb1)}:
\begin{align*}\label{problem1}
&\begin{aligned}
U^*_{t_0:t_T} & = \argmaxA_{U_{t_0:t_T}} {\rho}\left(\lambda,X(q{(t_0)},U_{t_0:t_T}), \Psi, t_k,l\right)\\
s.t. \quad & u_l{(t_k)} \in \mathcal{U} ,\quad \forall l \in Sys; t_k \in [t_0,t_T]\\
& q_{l}{(t_{k+1})} = A_{l,d} q_{l}{(t_k)} + B_{l,d} u_{l}{(t_k)},\\
& \qquad \qquad \forall l \in Sys; t_k \in [t_0,t_T]\\
\end{aligned}\\
\end{align*}
Our approach to solving the control synthesis problem is as follows. First, we propose new quantitative semantics for STREL (Sec.\ref{sec:newstrel}). The proposed semantics are smooth, sound and allow for optimizing the communication QoS and spatial satisfaction. Next, we propose a novel approach to solving \emph{Pb1} using both heuristic and gradient-based optimization algorithms (Sec.\ref{sec:solOPT}). The result of solving \emph{Pb1} is a control sequence that maximizes the robustness of STREL specification. Finally, we choose a recurrent neural network architecture to learn real-time controllers from a dataset that contains samples of satisfying state-control trajectories above a specific robustness threshold (Sec.\ref{sec:learning_control}).
\section{New quantitative semantics for STREL}
\label{sec:newstrel}
In this section we present the proposed STREL quantitative semantics. The proposed semantics differ from the semantics provided in \cite{Bartocci2017} in three ways. First, we provide a smooth approximation to allow for gradient-based optimization. Second, a sigmoid function that depends on the distance between agents is added to allow for optimizing the communication QoS. Third, a sigmoid function that depends on the number of satisfying/violating routes is added to maximize the spatial robustness.
\subsection{Smooth STREL}
The quantitative semantics of STREL are nondifferentiable due to the max and min functions. We showed in Sec.~\ref{sec:prelim-smooth} that smooth approximations with arbitrarily small errors exist. We denote the smooth approximation of the robustness function $\rho(.)$ by $\Tilde{\rho}(.)$. Such an approximation allows for the use of gradient-based algorithms to solve \emph{Pb1}. For example, the update $x^{(k+1)} = x^{(k)} + \gamma_k \nabla \Tilde{\rho}(\lambda,X,\Psi)$ can be iterated to find a local maximum, where $\gamma_k$ is the gradient coefficient at iteration $k$.
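As an illustration, the following sketch iterates such an update with a finite-difference gradient standing in for the analytical one; \texttt{rho} is assumed to be any smooth robustness callable over a flat decision vector.
\begin{verbatim}
import numpy as np

def gradient_ascent(rho, x0, gamma=0.01, iters=200, eps=1e-6):
    # maximize a smooth robustness by (numerical) gradient ascent
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        grad = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            grad[i] = (rho(x + d) - rho(x - d)) / (2.0 * eps)
        x += gamma * grad  # ascent step with coefficient gamma
    return x
\end{verbatim}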
\subsection{Optimizing the Communication QoS}
In practice, it is often the case that the communication QoS varies over space: the closer the agents are, the higher the communication QoS. The quantitative semantics as defined in \cite{Bartocci2017} consider a constant communication QoS within a radius given by the distance predicate. To take the distance variability into account, we introduce the functions $\sigma_{dist}^{\leq}$ and $\sigma_{dist}^{>}$, which depend on the Euclidean distance $f(l,l')$ and on $d \in \mathbb{R}$. $\sigma_{dist}^{\leq}$ and $\sigma_{dist}^{>}$ take values in $[-1,1]$ depending on the ratio $d_{norm}$ and are defined as follows:
\begin{align}
\sigma_{dist}^{\leq}(d_{norm}) &= -\tanh(k_{d}(d_{norm}-1)) \label{eq:sigma_dist1}\\
\sigma_{dist}^{>}(d_{norm}) & = \tanh(k_{d}(d_{norm}-1)) \label{eq:sigma_dist2}
\end{align}
where $d_{norm}= \frac{f(l,l')}{r}$ is the Euclidean distance between agents $l,l'$ divided by $r$, and $k_{d}$ is a hyperparameter that determines how fast the function changes its value (i.e., the steepness of the graph). The behavior of (\ref{eq:sigma_dist1}) and (\ref{eq:sigma_dist2}) is depicted in Fig.~\ref{fig:sigma1}.\\
Now, consider the distance predicates for the spatial operators $reach$ and $escape$. We have $f(l,l')\leq r$ and $f(l,l')> r$, with $f$ the Euclidean distance and $r \in \mathbb{R} $. We report three interesting cases (for sufficiently large $k_d$):
\begin{align*}
d_{norm} \rightarrow 0 &\implies \sigma_{dist}^{\leq} \rightarrow 1, \: \quad \sigma_{dist}^{>} \rightarrow -1 \\
d_{norm} \rightarrow 1 &\implies \sigma_{dist}^{\leq} \rightarrow 0, \: \quad \sigma_{dist}^{>} \rightarrow 0 \\
d_{norm} \rightarrow \infty &\implies \sigma_{dist}^{\leq} \rightarrow -1, \quad \sigma_{dist}^{>} \rightarrow 1
\end{align*}
We note that $d_{norm} = 1$ iff $f(l,l')= r$, and that $d_{norm}$ is bounded below by zero. In addition to optimizing the communication QoS, $\sigma_{dist}^{\leq}(d_{norm})$ and $\sigma_{dist}^{>}(d_{norm})$ allow the robustness to be optimized beyond the communication range $r$ defined by the distance predicate.
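A minimal NumPy sketch of these distance sigmoids follows; the default steepness $k_d=5$ is illustrative, not a value used in our experiments.
\begin{verbatim}
import numpy as np

def sigma_dist_leq(dist, r, k_d=5.0):
    # close to +1 well inside the range, 0 at dist == r,
    # and close to -1 beyond the range (Eq. eq:sigma_dist1)
    return -np.tanh(k_d * (dist / r - 1.0))

def sigma_dist_gt(dist, r, k_d=5.0):
    # mirror image of sigma_dist_leq (Eq. eq:sigma_dist2)
    return np.tanh(k_d * (dist / r - 1.0))
\end{verbatim}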
\begin{figure}[hbt!]
\centering
\begin{center}
\includegraphics[width=0.35\textwidth]{figures/sigma_loc.png}
\caption{Behavior of the functions $\sigma_{dist}^{\leq}$ and $\sigma_{dist}^{>}$}
\label{fig:sigma1}
\end{center}
\end{figure}
\subsection{Maximizing the Number of Routes}
When considering the spatial satisfaction of STREL, it is beneficial to maximize the number of routes that satisfy a formula. This means that we maximize the number of agents for the operator $surround$ and the number of routes that satisfy $reach$ or $escape$. For this purpose, we introduce $\sigma_{Routes}(R)$ into the robustness function.
\begin{equation} \label{eq:sigma_routes}
\sigma_{Routes}(R) = \frac{1}{1+\exp(R)}
\end{equation}
We note that:
\begin{itemize}
\item $R = k_{R} (1-\frac{R^{+}}{R^{+}+\delta}) R^{-}- k_{R} \frac{R^{+}}{R^{+}+\delta} (R^{+}-R^{-})$
\item $R^+, R^-$ - the numbers of routes that satisfy (violate) the spatial operator.
\item $k_{R}$ - hyperparameter that determines how fast the function changes its value
\item $\delta$ is a small positive number used to prevent division by zero.
\end{itemize}
The behavior of $\sigma_{Routes}(R)$ can be seen in Fig.~\ref{fig:sigma_routes}.
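A sketch of $\sigma_{Routes}$ following the definition above; the default $k_R$ and $\delta$ are illustrative.
\begin{verbatim}
import numpy as np

def sigma_routes(r_pos, r_neg, k_r=1.0, delta=1e-6):
    # r_pos / r_neg: numbers of satisfying / violating routes
    w = r_pos / (r_pos + delta)
    R = k_r * (1.0 - w) * r_neg - k_r * w * (r_pos - r_neg)
    return 1.0 / (1.0 + np.exp(R))
\end{verbatim}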
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width= 0.37\textwidth]{figures/sigma_routes.png}
\caption{Behavior of the function $\sigma_{Routes}(R)$ }
\label{fig:sigma_routes}
\end{center}
\end{figure}
\subsection{Robustness Function for Operators $reach$ and $escape$}
Now, we put all the pieces from the discussion above together and present the robustness function for the spatial operators $ reach$ and $escape$:
\begin{multline}\label{eq:new_reach}
\Tilde{\rho}\left(\lambda,X,\phi_1 \mathcal{R}^{f}_{\leq d} \phi_2, t,l\right) =\\ \min_{soft}\bigg[\sigma_{dist}^{\leq}\left(d_{norm}\right), \sigma_{Routes}(R) \max_{soft}^{\tau \in Routes} \max_{soft}^{l'\in \tau:f(l,l')\leq d} \\\min_{soft}\left(\rho\left(\lambda,X,\phi_2,t,l'\right); \min_{soft}^{j<\tau\left(l'\right)}\rho\left(\lambda,X,\phi_1,t,\tau[j]\right)\right)\bigg]
\end{multline}
\begin{multline}\label{eq:new_escape}
\Tilde{\rho}\left(\lambda,X, \mathcal{E}^{f}_{> d} \phi, t,l\right) = \min_{soft}\bigg[\sigma_{dist}^{>}\left(d_{norm}\right); \sigma_{Routes}(R)\\ \times\max_{soft}^{\tau \in Routes} \max_{soft}^{l'\in \tau:f(l,l')> d}\\\min_{soft}^{j<\tau\left(l'\right)}\rho\left(\lambda,X,\phi,t,\tau[j]\right)\bigg]
\end{multline}
The spatial operators $surround$, $somewhere$ and $everywhere$ can be derived directly from $reach$ and $escape$ and are not provided here for brevity.
\subsection{Robustness of a Multiagent System}
Finally, we emphasize that the STREL semantics we presented are defined at individual agents. The easiest way to compute the robustness of the system (over all agents) is to take the minimum of the robustness values of the individual agents.
\begin{equation} \label{eq:rob_all_old}
\Tilde{\rho}(\lambda,X,\Psi,t) = \min_{soft}^{i\in \{1,2,..,N\}}(\Tilde{\rho}(\lambda,X,\Psi,i,t))
\end{equation}
However, the robustness function above reflects the worst robustness score among the individual agents without accounting for how many agents satisfy/violate the formula $\Psi$. We take an approach similar to counting the number of satisfying/violating routes in the previous subsection and introduce an additional function $\sigma_{Agents}(Ag)$ that modifies (\ref{eq:rob_all_old}).
\begin{equation} \label{eq:sigma_agents}
\sigma_{Agents}(Ag) = \frac{1}{1+\exp(Ag)}
\end{equation}
We note that:
\begin{itemize}
\item $Ag = -k_{ag}\frac{Ag^{-}}{Ag^{-}+\delta}\times Ag^{-}+ k_{ag}(1-\frac{Ag^{-}}{Ag^{-}+\delta})\times(Ag^{-}-Ag^{+})$
\item $Ag^{+},Ag^{-}$ - the numbers of agents that satisfy (violate) the property in the system.
\item the behavior of $\sigma_{Agents}$ mirrors that of $\sigma_{Routes}$ (see the sketch below).
\end{itemize}
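A sketch of $\sigma_{Agents}$ per the definition above; as for $\sigma_{Routes}$, the default $k_{ag}$ and $\delta$ are illustrative.
\begin{verbatim}
import numpy as np

def sigma_agents(ag_pos, ag_neg, k_ag=1.0, delta=1e-6):
    # ag_pos / ag_neg: numbers of satisfying / violating agents
    w = ag_neg / (ag_neg + delta)
    Ag = -k_ag * w * ag_neg + k_ag * (1.0 - w) * (ag_neg - ag_pos)
    return 1.0 / (1.0 + np.exp(Ag))
\end{verbatim}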
The robustness of the spatial temporal trace $X$ against the STREL property $\Psi$ is:
\begin{equation}
\Tilde{\rho}(X,\lambda,\Psi,t) = \sigma_{Agents}(Ag) \min_{soft}^{i\in \{1,2,..,N\}}(\Tilde{\rho}(X,\lambda,\Psi,i,t))
\end{equation}
\subsection{Soundness of the Proposed Semantics}
We now argue that the proposed formulation is \textit{sound}, i.e., a strictly positive robustness indicates satisfaction of the specification, and a strictly negative robustness indicates violation. For that, we show that the three functions we introduced, $\sigma_{dist}(.),\sigma_{Routes}(.),\sigma_{Agents}(.)$, do not affect the sign of the original robustness function in \cite{Bartocci2017}. This is enough to prove the soundness of our formulation, as the soundness of the original robustness function was proven in \cite{Bartocci2017}.
First, $\sigma_{Routes}(.)$ and $\sigma_{Agents}(.)$ are strictly positive and multiply the original robustness function, so they preserve its sign.
Second, $\sigma_{dist}(.)$ ranges over $[-1,1]$ and is negative only when the distance predicate is not satisfied (for example, when $f\leq d$ is violated for the operator $reach$). Since we take the minimum of the robustness function and $\sigma_{dist}(.)$, the robustness function still gives positive values for satisfaction and negative values for violation, as before.
Thus, we have shown that the soundness of the proposed formulation follows from the soundness of the original formulation in \cite{Bartocci2017}.
\section{Optimization Approach to Problem 1}
\label{sec:solOPT}
To make the proposed framework generalizable to various settings, we discuss the pros and cons of different optimization approaches to solving \emph{Pb1}, namely one gradient-based method, one heuristic method and two hybrid methods. We ran various simulations in which we varied the parameters of the system, such as the number of agents, the complexity of the agents' dynamics, the execution horizon and the satisfaction requirements. We also varied the hyperparameters of the different optimization algorithms. We note that a concrete case study is provided in Sec.~\ref{sec:CS}; the goal of this section is to provide intuition on the various optimization approaches and their behavior when solving \emph{Pb1}.\\
The proposed semantics are smooth, which allows for using a simple gradient-based method such as gradient descent. Such a method can be fast, especially if the analytical gradient can be provided explicitly. However, gradient-based methods are prone to premature convergence. Our simulations used the built-in MATLAB function \emph{fmin}. Generally, as the number of agents increases, the number of minima/maxima of the robustness function increases. It is worth noting that the gradient-based method produces a high satisfaction score when it does not converge prematurely. A typical approach in the literature to avoid premature convergence is the use of heuristic algorithms. Our experiments showed a higher satisfaction frequency using the particle swarm optimization algorithm. However, heuristic methods are generally more computationally intensive. To get the best of both worlds (gradient-based and heuristic methods), we explored two hybrid approaches: particle swarm optimization with gradient descent updates, and particle swarm optimization followed by gradient descent optimization. The first approach showed the best satisfaction frequency overall as well as higher robustness for the satisfying control sequences. However, this approach was very computationally intensive. The latter approach (particle swarm optimization followed by gradient descent optimization) is much faster and achieves both a high satisfaction frequency and a high robustness score for satisfying examples.
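To make the two-stage idea concrete, the following is a schematic sketch (not our MATLAB implementation): a vanilla PSO stage whose best particle seeds a gradient-ascent stage. All hyperparameters shown (inertia, cognitive/social weights, step size) are illustrative.
\begin{verbatim}
import numpy as np

def pso_then_gd(rho, dim, n_particles=30, pso_iters=50,
                gd_iters=100, gamma=0.01, lo=-1.0, hi=1.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                              # velocities
    p_best, p_val = x.copy(), np.array([rho(p) for p in x])
    for _ in range(pso_iters):                        # stage 1: PSO
        g_best = p_best[np.argmax(p_val)]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7*v + 1.5*r1*(p_best - x) + 1.5*r2*(g_best - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([rho(p) for p in x])
        better = vals > p_val
        p_best[better], p_val[better] = x[better], vals[better]
    z, eps = p_best[np.argmax(p_val)].copy(), 1e-6
    for _ in range(gd_iters):                         # stage 2: GD
        grad = np.zeros_like(z)
        for i in range(z.size):
            d = np.zeros_like(z)
            d[i] = eps
            grad[i] = (rho(z + d) - rho(z - d)) / (2.0 * eps)
        z = np.clip(z + gamma * grad, lo, hi)
    return z
\end{verbatim}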
Tab.~\ref{tb:opt} provides a qualitative summary of the run time complexity, success rate (satisfaction frequency of the generated control sequences) and magnitude of the robustness score (for satisfying control sequences) of four algorithms: gradient descent (GD), Particle Swarm Optimization (PSO), Gradient-based Particle Swarm Optimization (GPSO) and Particle Swarm Optimization followed by gradient descent (PSO+GD).
\begin{table}[tbh!]
\begin{tabular}{cccc}
\textbf{algorithm} & \textbf{\begin{tabular}[c]{@{}c@{}}Run time \\ complexity\end{tabular}} & \textbf{Success rate} & \textbf{Robustness score} \\
\textbf{GD} & Small & Small & High \\
\textbf{PSO} & Medium & High & Small \\
\textbf{GPSO} & Very high & Very High & High \\
\textbf{PSO + GD} & Medium & High & High
\end{tabular}
\caption{Comparison between optimization algorithms for STREL control synthesis}\label{tb:opt}
\end{table}
\section{Learning controllers using neural networks}
\label{sec:learning_control}
\subsection{Generating a data set for training a neural net}
\label{sec:Data}
We generate a dataset $\mathcal{D}$ of $m$ satisfying control-trace pairs (with positive robustness) $\mathcal{D} =\{X^{[j]},U^{[j]}\,|\,\Tilde{\rho}^{[j]}\geq \delta \}$, where $\delta$ is the robustness margin. The robustness margin is used to absorb the error of the soft min/max approximation and to increase the resulting robustness of satisfaction for the trained RNN. The matrix $X^{[j]}, j =1,\ldots,m$, represents the trace generated from solving \emph{Pb1} with initialization $X^{[j](0)}$. Matrix $X^{[j]}$ has $n = N \times 2 + N \times N + |P|$ rows (positions, communication adjacency matrix and type information), with one column per time step. In other words, the input of the neural net is a matrix whose columns represent times and whose rows contain state information. The output of the neural net is the control sequence that satisfies the spatio-temporal property.
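A schematic of how $\mathcal{D}$ is assembled; \texttt{solve\_pb1} and \texttt{sample\_initial\_state} are hypothetical stand-ins for the optimizer of Sec.~\ref{sec:solOPT} and an initial-state sampler.
\begin{verbatim}
def build_dataset(solve_pb1, sample_initial_state, rho, delta, m):
    dataset = []
    while len(dataset) < m:
        x0 = sample_initial_state()   # random initial configuration
        X, U = solve_pb1(x0)          # trace and controls from Pb1
        if rho(X) >= delta:           # keep samples above the margin
            dataset.append((X, U))
    return dataset
\end{verbatim}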
\subsection{Choosing and training deep neural net model for real-time control}
\label{sec:solNN}
Deep neural networks have been used as controllers in the presence of formal specifications in the literature (cf. \cite{liu2020recurrent}). Our assumption here is that we can choose a neural network structure that can learn controllers that satisfy STREL properties. STREL has history dependence due to its temporal operators: the control at each time step depends on the current state and the history of the trajectory. In other words, at each time step $t$, $U^{(t)} = g(q_0, U^{(0)},\ldots,U^{(t-1)})$. This time dependency can be problematic if we are to use vanilla recurrent neural networks, as they can suffer from vanishing/exploding gradients. That is why we utilize the Long Short-Term Memory (LSTM) structure. Unlike vanilla recurrent neural networks, LSTMs have feedback channels and cell memory. This makes them capable of processing entire sequences of data (such as speech or video) and has made LSTMs a popular choice for prediction, processing and classification of time series data.
\section{Case study: Networked robotic agents}
\label{sec:CS}
Consider a network of 7 robotic agents in a two-dimensional workspace $\mathcal{W}$ with dimensions $L \times L, \: L=5$ units (see Fig.~\ref{fig:CSRNN}). Each agent $i$ at time $t$ has a state $X_i^{(t)}$ that consists of position information $q_i^{(t)} \in \mathbb{R}^2$ and type information $p^i \in P = \{red(R),blue(B),black(K)\}$. We define a communication protocol in a manner similar to that in Sec.~\ref{sec:problem}. In particular, two agents $i$ and $j$ are connected if both of the following conditions hold:
\begin{itemize}
\item The Euclidean distance $d(.)$ between agents $i$ and $j$ is less than a fixed communication range: $d(q^{(t)}_i, q^{(t)}_j) < r$. We set the communication range to $r=2$ \textit{units}.
\item In the corresponding Voronoi diagram, agents $i$ and $j$ are neighbors.
\end{itemize}
The red agents are controllable, while the blue and black agents are not. The dynamics of the agents are as follows:
\begin{align} \label{dyn_CS}
q^{(t+1)}_{R} &=q^{(t)}_{R}+u^{(t)}_{R}\\
q^{(t+1)}_{\neg d} &= q^{(t)}_{\neg d}\\
u &\in \mathcal{U}
\end{align}
where the index $R$ ranges over the controllable agents $R_1,R_2$, the index $\neg d$ ranges over the uncontrollable agents $B_1,B_2,K_1,K_{2},K_3$, and $t=0,\ldots,T-1$.
Our goal is to move the red agents from the initial positions $A,B$ (top right corner and bottom left corner) to the final position in the center after time $T_2 = 13$, while staying connected to the black agents after $T_1 =3$ time units from the initial moment. In addition, we require all agents not to collide with one another during the execution time. The execution horizon is $T=15$ time steps.
\begin{figure}[!tbp]
\centering
\begin{minipage}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/16cropped.jpg}
\caption{Trajectory generated by RNN}
\label{fig:CSRNN}
\end{minipage}
\hfill
\begin{minipage}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/66cropped.jpg}
\caption{Trajectory generated by PSO+GD}
\label{fig:CSOPT}
\end{minipage}
\end{figure}
The above requirements can be translated to the following STREL Formula:
\begin{multline}\label{CS1_dynamics}
\Psi_1(X) = G_{[2,T]}(dist_{i\neq j}(q_i^{(t)},q_j^{(t)})>r_1) \\
\bigwedge G_{[T_1,T]} Red \mathcal{R}^{\Delta}_{\leq d} Black \\
\bigwedge F_{[T_2,T]}(dist(q_{i,Red}^{(t)},C)\leq r_2)
\end{multline}
\subsection{Problem Formulation and Approach}
As we showed in Sec.\ref{sec:problem}, the control problem can be formulated as an optimization problem. The formulation can be seen below.
\begin{align*}\label{CS1_problem}
U^{*(k:T-1)} = &\\
\argmaxA_{U^{(k:T-1)}}& \Tilde{\rho}\left(\lambda,X(q^{(0)},\ldots,q^{(k-1)},q^{(k)},U^{(k:T-1)}), \Psi_1, t,l\right)\\
s.t. \: (\ref{dyn_CS})
\end{align*}
\subsection{Solution}
\subsubsection{Writing the objective function} We simplify the optimization problem by adding the constraints to the objective function with a sufficiently large penalty. The resulting unconstrained problem is the long formula provided below.
\begin{multline}
\argmaxA_{U^{(0:T-1)}}
\min_{soft}\bigg[
\min_{soft}^{t \in [2,T]}\big(dist_{i\neq j}(q_i^{(t)},q_j^{(t)})-r_1\big); \\
\min_{soft}^{t \in [T_1,T]} \Big[ \min_{soft}\big[\sigma_{dist}^{\leq}\left(d_{norm}\right), \\ \sigma_{Routes}(R) \times \max_{soft}^{\tau \in Routes} \max_{soft}^{l'\in \tau:f(l,l')\leq d} \\ \min_{soft}\big(\rho\left(\lambda,X,Black,t,l'\right),
\min_{soft}^{j<\tau\left(l'\right)}\rho\left(\lambda,X,Red,t,\tau[j]\right)\big)\big] \Big]; \\
\max_{soft}^{t \in [T_2,T]}\big(r_2 - dist(q_{i,Red}^{(t)}, C)\big)
\bigg]\\ - \gamma g(u)
\end{multline}
where $g(u)$ is a function that penalizes violations of the control constraints.
\subsubsection{Choosing the optimization algorithm:} Our experiments have shown that the objective function above has multiple local optima. We tried different methods to understand the nature of the objective function and its convergence. Tab.~\ref{tb:opt_CS} shows a comparison between the use of a gradient-based method (quasi-Newton), a heuristic method (particle swarm optimization), gradient-based PSO (GPSO) and PSO followed by gradient descent (PSO+GD). The results reflect the average performance given random initializations for the controls and agent positions.
\begin{table}[tbh!]
\begin{center}
\begin{tabular}{cccc}
\textbf{algorithm} & \textbf{\begin{tabular}[c]{@{}c@{}}satisfaction\\ frequency\\ (per 100 runs)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Average \\ robustness\\ ($\times 10^{-4}$)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}execution \\ time\\ (per run)\end{tabular}} \\
\textbf{GD} & 24.7 & 48 & 23.3 \\
\textbf{PSO} & 61.7 & 37 & 30 \\
\textbf{GPSO} & 83.4 & 55 & 82 \\
\textbf{PSO + GD} & 73.8 & 50 & 32.9
\end{tabular}
\end{center}
\caption{Comparison between optimization algorithms for STREL control synthesis}\label{tb:opt_CS}
\end{table}
As can be seen from Tab.~\ref{tb:opt_CS}, the gradient-based method is prone to premature convergence, and in many cases the algorithm results in a control sequence that does not satisfy the STREL property. The heuristic method achieved a higher satisfaction frequency but a low robustness score. This can be attributed to the nature of PSO, as we need a large number of iterations to achieve higher robustness. The GPSO achieved a higher robustness score and satisfaction frequency but was computationally expensive. This is due to the fact that we use a numerical method to compute the gradient at each iteration and for each particle; the computational complexity of the GPSO is $\mathcal{O}()$. To avoid premature convergence to local optima and reduce the computational complexity of the optimization, we utilize PSO (heuristic) at the beginning of the optimization process, followed by a gradient-based method. The heuristic algorithm ensures that we explore the whole space before proceeding to the gradient-based method.\\
\subsubsection{Generating the examples}
We used a Windows 10 machine with a Core i7 @3.50GHz and 80 GB RAM. Using PSO+GD, we generated 1058 positive examples with average robustness $\Tilde{\rho}_{Avg} = 50 \times 10^{-4}$. \\
\subsubsection{Choosing the RNN model architecture:} We started by choosing the RNN architecture. Given the time dependence of STREL, we decided to utilize a Long Short-Term Memory (LSTM) network with the following architecture: four LSTM layers with $64$ hidden units each, separated by dropout layers with a dropout probability of 0.6 to prevent overfitting. We implemented the net using the PyTorch package in Python. \\
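A PyTorch sketch of this architecture is given below; the state and control dimensions \texttt{n\_state} and \texttt{n\_ctrl} are problem-dependent placeholders.
\begin{verbatim}
import torch.nn as nn

class STRELController(nn.Module):
    def __init__(self, n_state, n_ctrl, hidden=64, layers=4, p=0.6):
        super().__init__()
        # stacked LSTM; dropout is applied between the stacked layers
        self.lstm = nn.LSTM(n_state, hidden, num_layers=layers,
                            dropout=p, batch_first=True)
        self.head = nn.Linear(hidden, n_ctrl)

    def forward(self, x):        # x: (batch, time, n_state)
        h, _ = self.lstm(x)
        return self.head(h)      # control at every time step
\end{verbatim}
The net can then be trained with a mean-squared-error loss between its output and the optimizer-generated control sequences.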
\subsubsection{Training the net:} For the training, we used a Windows 10 machine with a Core i7 @3.50GHz and 80 GB RAM. We trained the net for $1000$ epochs using $1000$ examples. We split the set into 900 training points and 100 testing points. The training time was below 6 minutes.\\
\subsubsection{Testing the trained controller}
We tested the learned controller on 100 new examples. The performance of the net is captured in Tab.~\ref{tb:lstm}. It can be seen that the controller successfully generates satisfying control sequences in real time. A sample of the trajectories generated from the control sequences can be seen in Fig.~\ref{fig:CSRNN} and Fig.~\ref{fig:CSOPT}.
\begin{table}[tbh!]
\begin{tabular}{lll}
\textbf{Method} & \textbf{\begin{tabular}[c]{@{}c@{}}Avg robustness \\ $\times 10^{-4}$\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}run time \\ (per trajectory) \end{tabular}} \\
\begin{tabular}[c]{@{}l@{}}PSO+GD \end{tabular} & 50 & 67.4 \\
\begin{tabular}[c]{@{}l@{}}RNN controller \end{tabular} & 37 & 0.002
\end{tabular}
\caption{Testing the learned controller}\label{tb:lstm}
\end{table}
\section{Conclusion and future work}
In this work, we proposed modifications to the quantitative semantics of the STREL spatial and temporal operators. The modifications allow for optimizing the communication quality of service (QoS) and achieve satisfaction in a more robust sense than the traditional semantics. We showed the soundness of the proposed semantics and utilized them to formulate the control synthesis problem as an optimization problem. Finally, we proposed a framework that relies on generating a dataset of satisfying examples by solving the optimization problem and training an RNN to generate control sequences for a networked system that satisfy STREL specifications. Our framework is demonstrated on a communication-aware networked system in a case study, in which we showed that the RNN controller can be used for real-time control. This project has two directions for future research. The first is investigating the scaling of the model to a large number of agents and improving the time performance by utilizing an explicit gradient for the objective function. The second is considering an online learning approach for the controller; to achieve online learning, we are considering the use of Reinforcement Learning.
\section{Introduction}
\label{sec:intro}
From satellite constellations to smart cities and synthetic biology, networked multiagent systems are rising in popularity. Often we are faced with the control synthesis problem which aims at steering robotic agents in the system from an initial configuration to a final configuration while satisfying spatio-temporal requirements such as maintaining connectivity or avoiding obstacles. In robotic networked systems, connectivity between robots is often coupled with the motion performance that determines the spatial configuration of agents in the system. Thus, optimizing the spatial configuration can improve the connectivity between agents in the system.
The problem of control synthesis in presence of temporal specifications has been addressed in the literature (\cite{Belta2017-zs,Tabuada2009-yz,Raman2014,Sadraddini2015-fh}). Temporal logics such as Signal Temporal Logic (STL) (\cite{STL2004}) are equipped with quantitative semantics, known as a robustness function, that measure how strong the signal satisfies a given specification. The robustness allows for mapping the control synthesis problem to an optimization problem with robustness as a cost function.
More recently, spatio-temporal logics have emerged as a formal way to capture spatial and temporal requirements. Some examples are SpaTeL \cite{Haghighi2015}, STREL \cite{Bartocci2017,Bartocci2020} and SaSTL \cite{SaSTL}. Some of these logics are equipped with quantitative semantics and, similar to STL, can be used to map the control synthesis problem into an optimization problem. For example, \cite{Haghighi2016,Liu2018} and \cite{Liu2020} used the spatio-temporal logics SpaTeL (\cite{Haghighi2015}) and STREL (\cite{Bartocci2017}) to solve the control synthesis problem for robotic networked systems with spatio-temporal specifications.
In \cite{Haghighi2016}, SpaTeL was used to synthesize control of a swarm of robots by solving an optimization problem. SpaTeL looks at satisfaction of the specification over the whole system and thus does not provide satisfaction information for individual agents. Furthermore, the Tree Spatial Superposition Logic (TSSL) (\cite{TSSL}) used to capture the spatial requirements is based on quadtrees, and it becomes computationally expensive as the depth of the quadtrees increases. In \cite{Liu2020}, the quantitative semantics of STREL were employed to solve the control problem.
STREL semantics as defined in \cite{Bartocci2017} have no means to optimize the spatial configuration, and there is no spatial counting in their definition. In other words, the robustness does not account for how many routes in the spatial configuration and how many agents satisfy a given formula. In this work, we propose new quantitative STREL semantics and use them to solve the control synthesis problem. The proposed semantics vary the robustness score depending on the distance between agents and the number of routes that satisfy the specifications in the network. This allows for optimizing the spatial configuration and improving the connectivity between agents.
Several optimization methods have been used in the literature to solve the control synthesis problem in the presence of spatial and/or temporal specifications. For example, Mixed Integer Programming (MIP) and Mixed Integer Linear Programming (MILP) encodings were used in \cite{Raman2014-qx,Haghighi2016,Liu2018}. Formulating MIPs and MILPs is a complicated task and their running times are unpredictable. Another example is the use of gradient-based methods in \cite{Pant2017,Haghighi2019-as,varnai2020robustness,MehdipourAGM,Gilpin2021}. In general, gradient-based methods have fast convergence rates and are simple to implement. However, they are prone to premature convergence to local optima. In this work we focus on gradient-based methods and show how to overcome their drawbacks.
The STREL semantics in \cite{Bartocci2017} are nondifferentiable due to the existence of the minimum and maximum functions. Thus, gradient-based methods cannot be used for optimizing the robustness. Our proposed semantics uses a smooth approximation of the minimum and maximum functions (\cite{Pant2017,Li2018}) to allow for the use of gradient-based methods.
To avoid premature convergence, we propose optimizing the robustness in two stages. In the first stage, we conduct exploration using a heuristic algorithm to find a good initialization point. In the second stage, using the initialization from the first stage, we run a gradient-based algorithm that has a fast convergence rate. This approach provides better exploration of the search space and fast convergence compared to the use of pure heuristic or pure gradient-based methods.
In general, optimizing the robustness can be computationally expensive. This makes the optimization less likely to meet real-time requirements in practice. In this work, we outline our approach to real-time control using Recurrent Neural Networks (RNN). The RNN learns a controller from a dataset of samples containing agent state trajectories and control sequences that satisfy the spatio-temporal requirements. The samples in the dataset are generated by solving the control synthesis problem using optimization as described above. This method has shown success in several studies before (cf. \cite{liu2020recurrent,yaghoubi2020training}). In \cite{liu2020recurrent}, the authors showed that by training a feedforward RNN using a dataset of about 500 points, the model learns a feedback controller that satisfies an STL formula. In this work, we choose an RNN structure and train it with system states as input and controls as output. Once trained, the RNN controller predicts in real-time the control policy at each time step based on the current state and the history of the system.
We demonstrate the efficiency of our approach to control synthesis in a case study. The goal of the case study is to control robotic agents in a system to move from an initial configuration to a final configuration while satisfying spatio-temporal specifications given by a STREL formula.
The main contributions of this work can be summarized as follows:
\begin{enumerate}
\item We propose new smooth quantitative semantics for the Spatial Temporal Reach and Escape Logic (STREL) that allow for optimizing spatial configuration of the system.
\item Given a system of $N$ networked robotic agents, we propose a new optimization approach for solving the control synthesis problem. Our approach uses a heuristic algorithm to find an initialization point for a gradient-based algorithm. This approach achieves better overall performance compared to the optimization approaches used in the literature.
\item We provide Recurrent Neural Network-based real-time controllers for multiagent networked systems with spatio-temporal requirements specified in STREL.
\end{enumerate}
The rest of this paper is organized as follows. The preliminaries are provided in Sec.~\ref{sec:prelim}. The control synthesis problem in the presence of spatio-temporal specifications is formulated in Sec.~\ref{sec:problem}. Then, in Sec.~\ref{sec:newstrel}, we propose new STREL quantitative semantics. The optimization approach to the control problem is discussed in Sec.~\ref{sec:solOPT}. Choosing the structure of the RNN and learning a real-time controller is covered in Sec.~\ref{sec:learning_control}. In Sec.~\ref{sec:CS}, we showcase the framework in a case study and discuss the results. Finally, we conclude by pointing out future research directions.
\section{Preliminaries}
\label{sec:prelim}
\subsection{System Dynamics and Connectivity Condition}
\label{sec:prelim_Dyn}
Consider a system of $N$ autonomous agents labeled from the set $Sys = \{1,2,\ldots,N\}$ in a Euclidean space $\mathcal{W} \subset \mathbb{R}^n$. Each agent $i \in Sys$ has a state $x_i{(k)}$ at time step $k$, and we consider the following dynamics
\begin{equation}
\label{dyn}
x_{i}{({k+1})} = f(x_{i}(k),u_i(k))
\end{equation}
where $x_i(k)= [q_i \: a_i] \in D$ is the state of agent $i$, with $q_i\in \mathbb{R}^n$ the position of the agent and $a_i\in \mathbb{B}^p$, $p \in \mathbb{N}$, a vector that encodes the type of the agent; the domain of the state is denoted by $D$. Moreover, $u_i(k) \in \mathcal{U} \subset \mathbb{R}^m$, where $\mathcal{U}$ is the set of admissible controls.
For example, consider the team in Fig.~\ref{fig:example1}. The state of agent $3$ can be encoded as $x_3= [q_3 \: a_3]$ with $q_3 \in \mathbb{R}^2$ and $a_3= red = [1 \: 0 \: 0] \in \mathbb{B}^3$, as we have only three types, which can be encoded in a three-dimensional binary vector.
Now consider a planning horizon $H \in \mathbb{N}$. The state run for the agent $x_i{(0)}x_i{({1})} \ldots x_i{({H})}$ generated by $u_i(0) \ldots u_i({H-1})$ starting at $x_i{(0)}$ is denoted by $\mathbf{x_i}$. We denote the state of the team of agents at time $k$ by $X{(k)}$, and the state run of the team over the planning horizon $H$ by $\mathbf{X}$. Throughout the paper, we use the notation $[0,H]$ to denote the discrete interval of time steps $[0,H] \cap \mathbb{Z}$.
Given space $\mathcal{W}$ as described above, two agents $i$ and $j$ are connected at time $k$ if the Euclidean distance $dist$ between them is less than a fixed communication range $d$.
\begin{equation}\label{com_condition}
dist(x_i{(k)}, x_j{(k)}) < d
\end{equation}
Additional conditions can also be added (see the case study for an example).
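As an illustration, a minimal NumPy sketch of evaluating condition (\ref{com_condition}) over all agent pairs:
\begin{verbatim}
import numpy as np

def adjacency(Q, d):
    # Q: (N, n) array of agent positions; returns a boolean N x N matrix
    diff = Q[:, None, :] - Q[None, :, :]
    A = np.linalg.norm(diff, axis=-1) < d
    np.fill_diagonal(A, False)    # no self-links
    return A
\end{verbatim}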
The spatial model of the system at a fixed time step is represented as a graph $G = (V,E)$. The location of the agent $i$ is treated as a vertex $v_i \in V$. The connection between two agents $i$ and $j$ is represented by the weighted edge $e_{i,j} \in E$. The weight of the edge $e_{i,j}$ maps the connection between agents $i,j$ into a certain domain. For instance, $E:V\times V \rightarrow \mathbb{B}$ (on-off connection) or $E:V \times V \rightarrow \mathbb{R}$ (connection subject to Euclidean distance).
\subsection{Spatio-Temporal Reach and Escape Logic (STREL)}\label{sec:prelim-strel}
STREL is a logic capable of describing complex behaviors in mobile spatially distributed multi-agent systems. It can capture and monitor the spatio-temporal specification of spatially distributed multiagent systems.
A \emph{spatial-temporal trace} $\overrightarrow{X}$ is a function for all $v_i \in V \text{ and } k \in \mathbb{T} = [0,H] $ defined as
$$\overrightarrow{X}: V \times \mathbb{T} \rightarrow D$$
where $D$ represents the domain of the trace such as Boolean $\mathbb{B}$ or real $\mathbb{R}$. Any $v_i \in V$ yields a vector of temporal signals $\overrightarrow{X}(v_i)$ and we denote the signal at $v_i$ and time $k$ by $\overrightarrow{X}(v_i, k)$.
We work with spatial models that change over time. Thus, to obtain the spatial model at each time step, \emph{location service} is defined as a function
$$\lambda : \mathbb{T} \rightarrow \mathbf{G}$$
which returns a spatial model $G$ for each time step $k \in \mathbb{T}$.
The syntax of STREL is given by
$$
\psi := \mu \; |\;\neg \phi \; | \; \phi_1 \vee \phi_2 \; | \; \phi_1 U_{[a,b]} \phi_2 \; | \; \phi_1 \mathcal{R}_{\leq d}^f \phi_2 \; | \; \mathcal{E}^f_{>d} \phi
$$
where $\mu: \mathbb{R}^n \rightarrow \mathbb{R}$ is an Atomic Predicate (AP), and negation $\neg$ and disjunction $\vee$ are the classical Boolean operators. The Boolean operator conjunction $\wedge$ can be derived as $\phi_1 \wedge \phi_2 = \neg (\neg \phi_1 \vee \neg \phi_2)$.
$until$ $\mathcal{U}_{[a,b]}$ is a temporal operator, with $[a,b]$ a positive closed time interval ($a,b \in \mathbb{R}_{\geq 0}$). The operator $eventually$ $F$ can be derived as $ F_{[a,b]} \phi = true \: U_{[a,b]} \phi$. In addition, the operator $always$ $\mathcal{G}$ can be derived from $eventually$ as $ \mathcal{G}_{[a,b]} \phi = \neg \: F_{[a,b]} \neg \phi$.
The spatial operators "reach" and "escape" are represented by $\phi_1 \mathcal{R}_{\leq d}^f \phi_2$ and $\mathcal{E}^f_{>d} \phi$ respectively. $f$ is a distance function and $f\leq d$ and $f> d$ are called \emph{distance predicates}.
Next, we present the \emph{STREL qualitative semantics} with respect to the spatio-temporal trace $\overrightarrow{X}(l,k) \in D_x $ at agent $l$ and time step $k$, where $D_x$ is the domain of the spatio-temporal trace. If the formula $\phi$ holds true for the spatio-temporal trace of agent $l$ at time $k$, we write $\overrightarrow{X}(l,k) \models \phi $. We will also use the \emph{signal interpretation function} $\iota$, which maps an atomic proposition (AP) and a spatio-temporal trace $\overrightarrow{X}(l,k)$ to the Boolean or real domain $D$. Formally,
$$\iota : AP \times D_x \rightarrow D$$
The STREL formula $\phi$ with respect to the spatial-temporal trace $\overrightarrow{X}(l,k)$ is defined as follows:
\begin{align*}
&\begin{aligned}
\overrightarrow{X}(l,k) &\models \mu \Leftrightarrow \iota (\mu,\overrightarrow{X}(l,k))>0 \\
\overrightarrow{X}(l,k) &\models \neg \phi \Leftrightarrow \neg(\overrightarrow{X}(l,k) \models \phi) \\
\overrightarrow{X}(l,k) &\models \phi_1 \vee \phi_2 \Leftrightarrow \overrightarrow{X}(l,k) \models \phi_1 \vee \overrightarrow{X}(l,k) \models \phi_2 \\
\end{aligned}\\
&\begin{aligned}
\overrightarrow{X}(l,k) &\models \phi_1 U_{[a,b]} \phi_2 \Leftrightarrow \exists_{ {k'}\in [k+a,k +b]} \\
\text{ s.t. } & \overrightarrow{X}(l,{k}') \models \phi_2 \text{ and }\forall_{ {k''} \in [k,{k'}]} \overrightarrow{X}(l,{k}'') \models \phi_1 \\
\end{aligned}\\
&\begin{aligned}
\overrightarrow{X}(l,k) &\models \phi_1 \mathcal{R}^f_{\leq d} \phi_2 \Leftrightarrow \exists_{ \tau \in Routes(\lambda{(k)},l)} \\
&\exists_{ l' \in \tau : f(l,l') \leq d} \text{ s.t }\overrightarrow{X}(l',k) \models \phi_2 \text{ and } \\
&\bigwedge_{ j< \tau(l')} \overrightarrow{X}(\tau[j],k) \models \phi_1 \\
\end{aligned}\\
&\begin{aligned}
\overrightarrow{X}(l,k) &\models \mathcal{E}^f_{> d} \phi \Leftrightarrow \exists_{ \tau \in Routes(\lambda{(k)},l)} \exists_{ l'\in\tau:f(l,l')>d} \\
& \text{ s.t } \bigwedge_{j < \tau(l')} \overrightarrow{X}(\tau[j],k) \models \phi
\end{aligned}\\
\end{align*}
where $\lambda{(k)}$ is the service location function at time $k$, $Routes(\lambda{(k)}, l)$ denotes an indexed sequence on the graph generated by the service function $\lambda{(k)}$ starting at node $l$, and $f(l,l')$ is a distance function.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=.5\linewidth]{figures/STREL_example.png}
\caption{A team of 8 networked robotic agents}
\label{fig:example1}
\end{center}
\end{figure}
To explain the intuition behind the spatial operators, we present the following example
\begin{example} \label{ex:example1}
Consider a system of $8$ robotic agents in Fig.~\ref{fig:example1}, and a distance function $hops(l,l')$ which returns the shortest route (number of edges) between agents $l,l'$. The STREL spatial operator can be described as follows:
\begin{itemize}
\item{$reach$:} $\phi_1 \mathcal{R}^{f}_{\leq d} \phi_2$ is satisfied by $(\lambda{(k)},\overrightarrow{X}(l,k))$ iff $\phi_2$ is satisfied at an agent $l'$ reachable from $l$ through a continuous route $\tau$, the length of $\tau$ satisfies the distance predicate $f(l,l')\leq d$, and $\phi_1$ is satisfied at $l$ and all other agents in $\tau$. For instance, $black \: \mathcal{R}^{hops}_{\leq 1} red$ is satisfied at agent $2$ because agent $3$ is $red$ with a distance of at most 1 hop from agent $2$, and agent $2$ is $black$. However, it is violated at agent 5 because agent 5 is not $black$.
\item{$escape$:} $\mathcal{E}^{f}_{> d} \phi$ is satisfied by $(\lambda{(k)},\overrightarrow{X}(l,k))$ iff there exists a continuous route $\tau$ from agent $l$ with a length that satisfies the distance predicate $f(l,l')>d$ in which $\phi$ is satisfied at all agents preceding $l'$. For instance, $\mathcal{E}^{hops}_{> 2} \neg red$ is satisfied at agent 5 because the agents preceding agent 3 in the route $\{5,6,3\}$ are not $red$. However, it is violated at agent $6$ because there is no route with $f(6,l')>2$ that satisfies $\mathcal{E}^{hops}_{> 2} \neg red$.
\item{$surround$:} $\phi_1 \odot^{f}_{\leq d} \phi_2 := \phi_1 \wedge \neg \left(\phi_1 \mathcal{R}^{f}_{\leq d} \neg (\phi_1 \lor \phi_2)\right) \wedge \mathcal{E}^{f}_{ > d} \phi_1 \wedge \phi_1 \mathcal{R}^{f}_{ \leq d} \phi_2$ expresses the topological notion of being surrounded by a $\phi_2$-region, while being in a $\phi_1$-region, with an additional metric constraint. For example, $blue \odot^{hops}_{\leq 2} red$ is satisfied at agent 1.
\end{itemize}
\end{example}
We note that for the operator $surround$, the term ($\wedge \: \phi_1 \mathcal{R}^{f}_{\leq d} \phi_2$) was added to avoid false satisfaction in the case of isolated agents. For example, the $surround$ defined in \cite{Bartocci2017} gives satisfaction at agent 7 for the property $blue \odot^{f}_{\leq 2} red$, despite it not being surrounded by $red$ agents. Thus, we additionally demand that the agent satisfy $blue \: \mathcal{R}^{f}_{\leq d} red$. Other spatial operators such as $everywhere$ and $somewhere$ can also be derived from $reach$ and $escape$ and are not presented here for brevity.
In addition to the syntax and qualitative semantics provided above, quantitative semantics (robustness) for a STREL formula $\phi$ can be defined by assigning a real-valued measure $\rho (\lambda,\overrightarrow{X},\phi,k,l)$ for the spatial-temporal trace $\overrightarrow{X}(l,k)$ at time $k$ such that $\overrightarrow{X}(l,k) \models \phi \Leftrightarrow \rho (\lambda,\overrightarrow{X},\phi,k,l) >0 $. The quantitative semantics of a STREL formula is provided by the following \emph{robustness function}
\begin{align*}
&\begin{aligned}
\rho (\lambda,\overrightarrow{X},\mu,k,l) & = \iota(\mu,\overrightarrow{X}(l,k))\\
\rho (\lambda,\overrightarrow{X},\neg \phi,k,l) & = - \rho (\lambda,\overrightarrow{X},\phi,k,l)\\
\rho (\lambda,\overrightarrow{X},\phi_1 \vee \phi_2 ,k,l) & = \max (\rho (\lambda,\overrightarrow{X},\phi_1,k,l),\rho (\lambda,\overrightarrow{X},\phi_2,k,l))\\
\end{aligned}\\
&\begin{aligned}
\rho (\lambda,\overrightarrow{X},\phi_1 U_{[a,b]} \phi_2,k,l) & = \qquad \\
\max_{k'\in [a,b]} \min\big( \rho (\lambda,\overrightarrow{X},&\phi_2,k',l), \min_{k''\in [k,k']} \rho (\lambda,\overrightarrow{X},\phi_1,k'',l)\big)\\
\end{aligned}\\
&\begin{aligned}
\rho (\lambda,\overrightarrow{X},\phi_1 \mathcal{R}_{\leq d}^f \phi_2 ,k,l) & =\max_{\tau \in Routes}\max_{l'\in \tau:f(l,l')\leq d} \\
\bigg[\min(\rho(\lambda,\overrightarrow{X},&\phi_2,k,l');\min_{j<\tau(l')}\rho(\lambda,\overrightarrow{X},\phi_1,k,\tau[j]))\bigg]\\
\end{aligned}\\
&\begin{aligned}
\rho (\lambda,\overrightarrow{X},\mathcal{E}^f_{>d} \phi,k,l) & =\\
\max_{\tau \in Routes}&\max_{l'\in \tau:f(l,l')> d} \min_{j<\tau\left(l'\right)}\rho\left(\lambda,\overrightarrow{X},\phi,k,\tau[j]\right)
\end{aligned}
\end{align*}
The time horizon of a STREL formula $\phi$ is denoted by $hrz(\phi)$ and is defined as the smallest future time bound such that the states of the agents up to that time suffice to compute the robustness at the current time.
\section{Problem Formulation and Approach}
\label{sec:problem}
Consider a system of $N$ robotic agents labeled from the set $Sys = \{1,\ldots,N\}$ with dynamics (\ref{dyn}) and a differentiable cost function $J(U(k),X({k+1}))$ which is the cost of steering the system from state $X({k})$ to state $X({k+1})$ using the control $U(k)$. Assume that spatio-temporal requirements are given by a STREL formula $\Psi$ over the state run of the system $X({0}) \ldots X({H})$, and $H \geq hrz(\Psi)$ is the planning horizon. Formally:
\textbf{Problem 1.} \emph{Given} a system with dynamics (\ref{dyn}), a STREL formula $\Psi$, an initial communication graph $\lambda (0)$, the initial state of the system $X{(0)}$, a planning horizon $H \in \mathbb{N}$, and a cost function $J$; \emph{find} an optimal control sequence $U^*_{0:{H-1}} = \{u_l{(k)}|l = 1,\ldots,N;k = 0,\ldots,H-1\}$ such that the resulting spatio-temporal trace $\overrightarrow{X}$ maximizes the robustness of the STREL formula $\Psi$ and minimizes the cost function $J$.
\begin{align}\label{eq:problem1}
\begin{aligned}
U^{*}_{0:{H-1}} & = \argmaxA_{U_{0:{H-1}}} \rho\left(\lambda,\overrightarrow{X}(X{(0)},U_{0:{H-1}}), \Psi\right) \\
& \qquad - \gamma \sum_{k=0}^{{H-1}}J\left(U(k),X({k+1})\right) \\
s.t. \quad & u_l{(k)} \in \mathcal{U} ,\quad \forall l \in Sys;\forall k \in [0,{H-1}]\\
& x_{l}{({k+1})} = f(x_{l}(k),u_l(k)),\\
& \qquad \qquad \forall l \in Sys;\forall k \in [0,{H-1}]
\end{aligned}
\end{align}
where $\gamma$ is a trade-off coefficient for the cost function.
Our approach to solving the control synthesis problem is as follows. First, we propose new STREL quantitative semantics that are smooth, sound, and allow for optimizing the spatial configuration of agents in the system (Sec.~\ref{sec:newstrel}). Next, we solve Problem 1 (\emph{Pb1}) by employing both heuristic and gradient-based optimization algorithms (Sec.~\ref{sec:solOPT}). The result of solving \emph{Pb1} is a control sequence that maximizes the robustness of the STREL formula. Finally, we train an RNN to learn real-time controllers from a dataset that contains samples of state-control trajectories generated by solving the optimization problem (Sec.~\ref{sec:learning_control}).
\section{New quantitative semantics for STREL}
\label{sec:newstrel}
We present here a novel quantitative semantics for STREL differing from the original in~\cite{Bartocci2017} in three ways. First, we provide a smooth approximation that allows for gradient-based optimization. Second, a sigmoid function that depends on the distance between agents is added to allow for optimizing the spatial configuration. Third, a sigmoid function that depends on the number of satisfying/violating routes is added to maximize the spatial robustness. In addition, we show how the robustness of the system can be computed to account for the number of agents that satisfy or violate a given STREL formula.
\subsection{Smooth Robustness Approximation}
\label{sec:prelim-smooth}
The STREL robustness function is non-differentiable due to the max and min functions in the semantics. It has been shown that smooth approximations with arbitrarily small errors exist \cite{Pant2017,Li2018}.
The smooth approximation of the maximum and minimum functions is given by
\begin{equation}
\begin{array}{c}
\max(a_1,..,a_n)\approx {\max_{soft}}(a_1,..,a_n) \coloneqq \frac{1}{\beta}
\ln \big (\sum_{i=1}^n \exp(\beta a_i) \big),\\
\min(a_1,..,a_n) \approx {\min_{soft}}(a_1,..,a_n) \coloneqq -{\max_{soft}}(-a_1,..,-a_n)
\end{array}
\end{equation}
It is shown in \cite{Pant2017} that the approximation error is bounded:
\begin{equation}
\begin{array}{c}
0\leq {\max_{soft}}(a_1,..,a_n) - \max(a_1,..,a_n) \leq \frac{\ln(n)}{\beta}= \epsilon_{\beta} \\
\end{array}
\end{equation}
with the maximum error attained when $a_1= \ldots =a_n$. The approximation error therefore approaches $0$ as $\beta$ goes to $\infty$.
We use the approximation to replace the non-differentiable terms with differentiable ones. The smooth approximation of the robustness function $\rho(.)$ is denoted by $\Tilde{\rho}(.)$. This approximation allows for the use of simple and fast gradient-based algorithms to solve Pb1.
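As an illustration, the smooth $\max$/$\min$ pair and the error bound above can be checked numerically with a few lines of Python; the shift by the maximum inside the log-sum-exp is a standard numerical-stability trick and does not change the value.
\begin{verbatim}
import numpy as np

def softmax_val(a, beta=10.0):
    # (1/beta) * ln(sum exp(beta*a_i)), stabilized by shifting by max(a)
    a = np.asarray(a, dtype=float)
    m = a.max()
    return m + np.log(np.exp(beta * (a - m)).sum()) / beta

def softmin_val(a, beta=10.0):
    # min_soft(a) = -max_soft(-a)
    return -softmax_val(-np.asarray(a, dtype=float), beta)

a = [0.3, -0.1, 0.25]
err = softmax_val(a) - max(a)   # overestimates by at most ln(n)/beta
assert 0.0 <= err <= np.log(len(a)) / 10.0
\end{verbatim}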
\subsection{Optimizing the Spatial Configuration for connectivity}
In practice, it is often the case that connectivity depends on the spatial configuration; for example, the closer the agents, the better the connectivity. The quantitative semantics as defined in \cite{Bartocci2017} consider constant connectivity links between agents; formally, $E:V\times V \rightarrow \mathbb{B}$. To take the distance variability into account, we introduce the functions $\sigma_{dist}^{\leq}$ and $\sigma_{dist}^{>}$, which depend on the Euclidean distance between agents $dist(l,l')$ and a constant $d \in \mathbb{R}$. $\sigma_{dist}^{\leq}$ and $\sigma_{dist}^{>}$ take values in $[-1,1]$ depending on the ratio $d_{norm}$ and are defined as follows:
\begin{align}
\sigma_{dist}^{\leq}(d_{norm}) &= -tanh(k_{d}(d_{norm}-1)) \label{eq:sigma_dist1}\\
\sigma_{dist}^{>}(d_{norm}) &= tanh(k_{d}(d_{norm}-1)) \label{eq:sigma_dist2}
\end{align}
where $d_{norm}= \frac{dist(l,l')}{d}$ is the Euclidean distance between agents $l,l'$ normalized by $d$, and $k_{d}$ is a hyperparameter that determines how fast the function changes its value (i.e., the steepness of the graph). Note that $\sigma_{dist}$ allows the robustness to change beyond $d$. The behavior of (\ref{eq:sigma_dist1}) and (\ref{eq:sigma_dist2}) is shown for different values of $k_{d}$ in Fig.~\ref{fig:sigma1}.
Now, consider the distance predicates $dist(l,l')\leq d$ and $dist(l,l') > d$; we note three limiting cases:
\begin{align*}
d_{norm} \rightarrow 0 &\implies \sigma_{dist}^{\leq} \rightarrow \tanh(k_{d}), \quad \sigma_{dist}^{>} \rightarrow -\tanh(k_{d}) \\
d_{norm} \rightarrow 1 &\implies \sigma_{dist}^{\leq} \rightarrow 0, \qquad \quad \: \sigma_{dist}^{>} \rightarrow 0 \\
d_{norm} \rightarrow \infty &\implies \sigma_{dist}^{\leq} \rightarrow -1, \qquad \;\; \sigma_{dist}^{>} \rightarrow 1
\end{align*}
We note that $d_{norm} = 1$ iff $dist(l,l')= d$, that $d_{norm}$ is bounded below by zero, and that $\tanh(k_{d}) \approx 1$ for moderately large $k_{d}$.
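A minimal Python sketch of (\ref{eq:sigma_dist1}) and (\ref{eq:sigma_dist2}) makes the three cases above easy to verify; the value $k_d=5$ is an illustrative choice of the steepness hyperparameter.
\begin{verbatim}
import numpy as np

def sigma_dist_leq(dist, d, k_d=5.0):
    # Score in [-1,1] for the predicate dist(l,l') <= d: positive
    # inside the range, zero at the boundary, negative beyond.
    return -np.tanh(k_d * (dist / d - 1.0))

def sigma_dist_gt(dist, d, k_d=5.0):
    # Mirror image for the predicate dist(l,l') > d.
    return np.tanh(k_d * (dist / d - 1.0))

assert abs(sigma_dist_leq(2.0, 2.0)) < 1e-9   # d_norm = 1
assert sigma_dist_leq(0.5, 2.0) > 0.99        # well inside the range
assert sigma_dist_gt(10.0, 2.0) > 0.99        # far outside the range
\end{verbatim}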
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.35\textwidth]{figures/sigma_loc.png}
\caption{Behavior of the functions $\sigma_{dist}^{\leq}$ and $\sigma_{dist}^{>}$}
\label{fig:sigma1}
\end{center}
\end{figure}
\subsection{Counting for Routes}\label{sec:sigma_routes}
Consider the robustness of the spatial operator $escape$
\begin{multline*}
\rho (\lambda,\overrightarrow{X},\mathcal{E}^f_{>d} \phi,k,l) =\\
\max_{\tau \in Routes}\max_{l'\in \tau:f(l,l')> d} \min_{j<\tau\left(l'\right)}\rho\left(\lambda,\overrightarrow{X},\phi,k,\tau[j]\right)
\end{multline*}
Let $\rho_{\tau} = \max_{l'\in \tau:f(l,l')> d} \min_{j<\tau\left(l'\right)}\rho\left(\lambda,\overrightarrow{X},\phi,k,\tau[j]\right)$ for $\tau \in Routes$.
Notice that $\rho = \max_{\tau \in Routes} \rho_{\tau}$. In words, it is enough for one route $\tau \in Routes$ with $\rho_{\tau}>0$ to achieve satisfaction of the specification. The robustness as defined in \cite{Bartocci2017} therefore does not vary the robustness score with the number of satisfying/violating routes. In practice, it is beneficial to maximize the number of \textit{routes} that satisfy a formula. For example, for $surround$ ($\phi_1 \odot^f_{\leq d} \phi_2$), this maps to maximizing the number of agents satisfying $\phi_2$ that surround agents satisfying $\phi_1$. Similarly, for operators $reach$ and $escape$ this translates to maximizing the number of routes that satisfy the formula.
Let $R^+ \in \mathbb{N}$ and $ R^- \in \mathbb{N}$ be the number of routes that satisfy and violate a spatial operator, respectively. We introduce the function $\sigma_{Routes}(R^+,R^-)$ to allow for route counting as follows.
\begin{multline} \label{eq:sigma_routes}
\sigma_{Routes}(R^+,R^-) = \max_{soft}\big(\frac{1}{1+\exp(k_{R} R^{-})},\\ \frac{1}{1+\exp(-k_{R} (R^{+}-R^{-}))} \big)
\end{multline}
where $k_{R}$ is a hyperparameter that determines how fast the function changes its value.
The behavior of $\sigma_{Routes}(R^+,R^-)$ for $R^+ = 0$ and $R^+ >0$ can be seen in Fig. \ref{fig:sigma_routes}.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width= 0.33\textwidth]{figures/sigma_routes.png}
\caption{Behavior of the function $\sigma_{Routes}(R^+,R^-)$ }
\label{fig:sigma_routes}
\end{center}
\end{figure}
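To make the behavior in Fig.~\ref{fig:sigma_routes} concrete, the following sketch implements (\ref{eq:sigma_routes}); the soft maximum is the one from Sec.~\ref{sec:prelim-smooth}, and $k_R=1$ is an illustrative choice.
\begin{verbatim}
import numpy as np

def softmax_val(a, beta=10.0):
    a = np.asarray(a, dtype=float)
    m = a.max()
    return m + np.log(np.exp(beta * (a - m)).sum()) / beta

def sigma_routes(r_pos, r_neg, k_r=1.0):
    # Positive factor that grows with the number of satisfying
    # routes r_pos and shrinks with violating routes r_neg.
    return softmax_val([1.0 / (1.0 + np.exp(k_r * r_neg)),
                        1.0 / (1.0 + np.exp(-k_r * (r_pos - r_neg)))])

# More satisfying routes yield a larger multiplier:
assert sigma_routes(5, 1) > sigma_routes(1, 5) > 0.0
\end{verbatim}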
\subsection{The Proposed Robustness for STREL}
Now, we put all the pieces from the discussion above together and present the proposed robustness function for a STREL formula:
\begin{align*}
&\begin{aligned}
\Tilde{\rho} (\lambda,\overrightarrow{X},\mu,k,l) & = \iota(\mu,\overrightarrow{X}(l,k))\\
\Tilde{\rho}(\lambda,\overrightarrow{X},\neg \phi,k,l) & = - \Tilde{\rho}(\lambda,\overrightarrow{X},\phi,k,l)\\
\Tilde{\rho}(\lambda,\overrightarrow{X},\phi_1 \vee \phi_2 ,k,l) & = \max_{soft} (\Tilde{\rho}(\lambda,\overrightarrow{X},\phi_1,k,l),\Tilde{\rho}(\lambda,\overrightarrow{X},\phi_2,k,l))\\
\end{aligned}\\
&\begin{aligned}
\Tilde{\rho}(\lambda,\overrightarrow{X},\phi_1 U_{[a,b]} \phi_2,k,l) & = \max_{soft}^{k'\in [a,b]} \\
\min_{soft}\big( \Tilde{\rho}(\lambda,\overrightarrow{X},&\phi_2,k',l), \min_{soft}^{k''\in [k,k']} \Tilde{\rho}(\lambda,\overrightarrow{X},\phi_1,k'',l)\big)\\
\end{aligned}\\
\end{align*}
\begin{multline*}\label{eq:new_reach}
\Tilde{\rho}\left(\lambda,\overrightarrow{X},\phi_1 \mathcal{R}^{f}_{\leq d} \phi_2, k,l\right) =\\ \min_{soft}\bigg[\sigma_{dist}^{\leq}\left(d_{norm}\right), \sigma_{Routes}(R^+,R^-) \max_{soft}^{\tau \in Routes} \max_{soft}^{l'\in \tau: f(l,l')\leq d} \\\min_{soft}\left(\Tilde{\rho}\left(\lambda,\overrightarrow{X},\phi_2,k,l'\right); \min_{soft}^{j<\tau\left(l'\right)}\Tilde{\rho}\left(\lambda,\overrightarrow{X},\phi_1,k,\tau[j]\right)\right)\bigg]
\end{multline*}
\begin{multline*}\label{eq:new_escape}
\Tilde{\rho}\left(\lambda,\overrightarrow{X}, \mathcal{E}^{f}_{> d} \phi, k,l\right) = \min_{soft}\bigg[\sigma_{dist}^{>}\left(d_{norm}\right); \\ \sigma_{Routes}(R^+,R^-)\times\max_{soft}^{\tau \in Routes} \max_{soft}^{l'\in \tau: f(l,l')> d}\\\min_{soft}^{j<\tau\left(l'\right)}\Tilde{\rho}\left(\lambda,\overrightarrow{X},\phi,k,\tau[j]\right)\bigg]
\end{multline*}
The spatial operators $surround$, $somewhere$ and $everywhere$ can be derived directly from $reach$ and $escape$ and are omitted for space purposes.
\subsection{Counting for Agents}
The STREL semantics are defined at individual agents. The easiest way to compute the robustness of the system (at all agents) is to take the minimum of the robustness of individual agents:
\begin{equation} \label{eq:rob_all_old}
\Tilde{\rho}(\lambda,\overrightarrow{X},\Psi,k) = \min_{soft}^{l\in \{1,\ldots,N\}}(\Tilde{\rho}(\lambda,\overrightarrow{X},\Psi,k,l))
\end{equation}
where $N$ is the number of agents in the system.
However, the robustness function above reflects the worst robustness score among individual agents and does not vary with the number of agents that satisfy/violate the formula $\Psi$. Using a similar approach to route counting in Sec.~\ref{sec:sigma_routes}, we introduce $\sigma_{Agents}(Ag^+,Ag^-)$, which allows for varying the robustness score depending on agent counting. Let $Ag^{+},Ag^{-}$ be the number of agents that satisfy and violate the specification, respectively. We define $\sigma_{Agents}(Ag^{+},Ag^{-})$ as
\begin{multline}\label{eq:sigma_agents}
\sigma_{Agents}(Ag^+,Ag^-) = \max_{soft}\big( \frac{1}{1+\exp(-k_{ag}Ag^{-})},\\
\frac{1}{1+\exp(k_{ag}(Ag^{-}-Ag^{+}))} \big)
\end{multline}
The robustness of the spatial-temporal trace $\overrightarrow{X}$ at time step $k$ against the STREL property $\Psi$ is:
\begin{multline}
\Tilde{\rho}(\lambda,\overrightarrow{X},\Psi,k) = \sigma_{Agents}(Ag^+,Ag^-) \\ \min_{soft}^{l\in \{1,\ldots,N\}}(\Tilde{\rho}(\lambda,\overrightarrow{X},\Psi,k,l))
\end{multline}
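Putting agent counting together with the smooth minimum gives the team-level score. The sketch below is illustrative only; testing satisfaction against a threshold \texttt{eps} stands in for checking $\Tilde{\rho}>0$ at each agent.
\begin{verbatim}
import numpy as np

def softmin_val(a, beta=10.0):
    b = -np.asarray(a, dtype=float)
    m = b.max()
    return -(m + np.log(np.exp(beta * (b - m)).sum()) / beta)

def sigma_agents(ag_pos, ag_neg, k_ag=1.0):
    # Positive agent-counting factor from the sigma_Agents
    # equation above.
    return max(1.0 / (1.0 + np.exp(-k_ag * ag_neg)),
               1.0 / (1.0 + np.exp(k_ag * (ag_neg - ag_pos))))

def team_robustness(agent_rho, eps=0.0):
    # Scale the smooth worst-case agent score by the counting factor.
    rho = np.asarray(agent_rho, dtype=float)
    ag_pos = int((rho > eps).sum())
    return sigma_agents(ag_pos, len(rho) - ag_pos) * softmin_val(rho)

print(team_robustness([0.4, 0.2, 0.3]))   # all agents satisfy
print(team_robustness([0.4, -0.2, 0.3]))  # one violating agent
\end{verbatim}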
\subsection{Soundness of the Proposed Semantics}
In this section, we show that the proposed formulation is \textit{sound}, i.e., a strictly positive robustness score indicates satisfaction of the specification, and a non-positive score indicates violation of the specification. For that, we show that the three functions we introduced, $\sigma_{dist}(.),\sigma_{Routes}(.),\sigma_{Agents}(.)$, do not affect the sign of the original robustness function in \cite{Bartocci2017}. Once this is done, the soundness of our formulation follows from the soundness of the original robustness function in \cite{Bartocci2017}.
First, $\sigma_{Routes}(.)$ and $\sigma_{Agents}(.)$ are strictly positive and multiply the original robustness function, so they preserve its sign.
Second, $\sigma_{dist}(.)$ takes values in $[-1,1]$ and is negative only when the distance predicate is not satisfied (for example, when $f\leq d$ is violated for operator $reach$). Since we take the minimum of the robustness function and $\sigma_{dist}(.)$, the combined score still gives positive values for satisfaction and negative values for violation, as before.
Thus, we have shown that the soundness of the proposed formulation follows from the soundness of the formulation in \cite{Bartocci2017}.
\section{Optimization Approach to Problem 1}
\label{sec:solOPT}
The control problem described in Sec.~\ref{sec:problem} and given by (\ref{eq:problem1}) in Pb1 is solved by replacing the robustness $\rho(.)$ with the proposed robustness $\Tilde{\rho}(.)$ and setting $\epsilon_{min}$ equal to the value of the min/max approximation error. This yields a new optimization problem, Pb2.
\textbf{Solver.} Pb2 is a constrained non-linear optimization problem. Given proper conditions on function $f$ of the dynamics (\ref{dyn}), it can be solved using various gradient-based algorithms. In this work, we will focus on the use of Sequential Quadratic Programming (SQP). SQP solves constrained non-linear optimization problems by creating a sequence of quadratic approximations to the problem and solving these approximate problems.
For SQP to converge to a local optimum, it suffices that \cite{polak2012optimization}: a) all constraint functions are twice Lipschitz continuously differentiable; in Pb2, this maps to requiring the dynamics (\ref{dyn}) of the problems we solve to be twice Lipschitz continuously differentiable; and b) at points on the boundary of the inequality feasible set there exists a search direction towards the interior of the feasible set that does not violate the equality constraints; this holds for $\Tilde{\rho}(.)$ since the equality constraints come from the dynamics and are always enforced for any control $u$.
\textbf{Initialization.} The search space can have many local maxima and we propose exploring the search space using the Particle Swarm Optimization (PSO) algorithm (\cite{kennedy1995particle}). PSO is a heuristic algorithm that optimizes a problem by iteratively improving a candidate solution with regard to a given measure of quality. It starts with a population of candidate solutions (particles), and moves these particles around in the search space according to simple mathematical formulae to improve the best value of quality among particles. PSO is computationally expensive and we run it only for a few iterations to find a good initialization for SQP.
\textbf{Comparison to optimization approaches in the literature.} The approach described above differs from the common control synthesis approach in the literature. Commonly, pure heuristic methods (such as PSO) or pure gradient-based methods (such as SQP) are implemented for solving the problem, with the initialization set to a point in the feasible set. However, depending on the optimized function, heuristic methods are computationally expensive and gradient-based methods suffer from premature convergence quite often. Our approach (PSO+SQP) provides better initialization points and has a good convergence rate. A quantitative comparison is provided in Tab.~\ref{tb:opt_CS}.
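The two-stage scheme can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not our MATLAB implementation: a bare-bones PSO loop supplies the initialization, SciPy's SLSQP (an SQP-type solver) stands in for the SQP refinement, and the toy objective is hypothetical.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def pso_then_sqp(neg_obj, bounds, n_particles=30, n_iters=20, seed=0):
    # Stage 1: a few PSO iterations to explore the search space.
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_val = np.array([neg_obj(xi) for xi in x])
    for _ in range(n_iters):
        g_best = p_best[p_val.argmin()]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([neg_obj(xi) for xi in x])
        better = vals < p_val
        p_best[better], p_val[better] = x[better], vals[better]
    # Stage 2: refine the best particle with an SQP-type solver.
    res = minimize(neg_obj, p_best[p_val.argmin()],
                   method="SLSQP", bounds=bounds)
    return res.x, -res.fun

# Toy usage: maximize a smooth multimodal surrogate "robustness".
f = lambda u: -(np.sin(3*u[0]) * np.cos(2*u[1]) - 0.1 * u @ u)
u_star, rho_star = pso_then_sqp(f, [(-1.0, 1.0), (-1.0, 1.0)])
\end{verbatim}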
\section{Learning RNN controllers}
\label{sec:learning_control}
\textbf{Dataset generation.} Given a system and $M$ initialization points, we generate a dataset $\mathcal{D}$ of $m$ satisfying state-control pairs. Formally, $\mathcal{D} =\{X^{[j]},U^{[j]}|\Tilde{\rho}^{[j]}\geq \epsilon_{min} \}$, where $\epsilon_{min} \geq \epsilon_{\beta}$ is the robustness margin. The robustness margin is used to compensate for the error of the soft min/max approximation and to increase the resulting robustness of satisfaction for the trained RNN. The matrix $\overrightarrow{X}^{[j]}, j =1,\ldots,m$ denotes the $j^{th}$ spatial-temporal trace generated from solving Problem \emph{Pb1} with initialization $X^{[j]}(0)$.
\textbf{RNN implementation.} STREL has history-dependence due to the existence of temporal operators. This means that the control at each time step is dependent on the current state and the history trace, $u_i(k)=g(x_i(0),..,x_i(k))$. To address this issue, the function $g$ can be approximated as follows:
\begin{equation}\label{eq:RNN}
\begin{aligned}
h{(k)} &= \mathcal{R}(x(k),h({k-1}),W_1)\\
\hat{U}(k) &= \mathcal{N}(h(k), W_2)
\end{aligned}
\end{equation}
where $W_1,W_2$ are the weight matrices of the RNN, $h(k)$ is the hidden state at time step $k$, and $\hat{U}(k)$ is the predicted control at $k$. The network is trained to minimize the error between the predicted control and the optimized control given in the dataset:
\begin{equation}
\min_{W_1,W_2} \sum_{\mathcal{D}} \sum_{k=0}^{H-1}\norm{ U (k)-\hat{U}(k)}^2
\end{equation}
The hidden state works as a means of passing the history information, and so the RNN resolves the history-dependence issue. Depending on the length of the history, many RNN implementations can lose the history information; in our case, this can happen if the planning horizon $H$ is large. This is the problem known in the literature as the vanishing/exploding gradient. That is why, similar to \cite{liu2020recurrent}, we utilize the Long Short-Term Memory (LSTM) network. LSTM has feedback channels and memory cells that can manage long-term dependencies.
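A minimal PyTorch sketch of the controller in (\ref{eq:RNN}) is given below. The LSTM width and depth mirror those reported in the case study of Sec.~\ref{sec:CS}; the state/control dimensions and the random batch are placeholders for the real dataset $\mathcal{D}$.
\begin{verbatim}
import torch
import torch.nn as nn

class RNNController(nn.Module):
    # The LSTM produces the hidden state h(k) from the state history;
    # a linear head maps h(k) to the predicted control u_hat(k).
    def __init__(self, state_dim, control_dim, hidden=128, layers=4):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden,
                            num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, control_dim)

    def forward(self, x_hist):          # (batch, time, state_dim)
        h, _ = self.lstm(x_hist)
        return self.head(h)             # (batch, time, control_dim)

model = RNNController(state_dim=4, control_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 15, 4)   # placeholder batch; real data comes from D
u = torch.randn(8, 15, 2)
for _ in range(5):          # squared control-error loss
    loss = ((model(x) - u) ** 2).sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}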
\section{Case study: Networked robotic agents}
\label{sec:CS}
This section demonstrates the efficacy of our proposed framework. We compare the results obtained using the RNN-based controller with those obtained by solving an optimization problem. We also discuss the performance of the optimization methods described in Sec.~\ref{sec:solOPT}.
The scripts for the case study were run on a PC with a Core i7 CPU @3.50GHz and 80 GB RAM. For optimization, we used a customized Particle Swarm Optimization (PSO) implementation and a MATLAB built-in implementation of SQP (\textit{fmin}). To implement the RNN, we used the \textit{Python} package \textit{PyTorch}.
\subsection{Control synthesis for a multiagent networked system}
\textbf{System dynamics. } Consider a network of 7 robotic agents in a Euclidean space $\mathcal{W} \subset \mathbb{R}^2$ with dimensions $L \times L, \: L=5$ units (see Fig.~\ref{fig:CSRNN}). Each agent $i$ at time $k$ has a state $x_i{(k)}$.
There are $2$ controllable red agents (subscript $R$); the $2$ blue and $3$ black agents are not controllable (subscript $\neg R$). The dynamics of the agents are as follows:
\begin{align} \label{dyn_CS}
\begin{aligned}
x_R{({k+1})} &=x_R{(k)}+u_R{(k)};\\
x_{\neg R}{({k+1})} &= x_{\neg R}{(k)} ;\\
\end{aligned}
\end{align}
where $u \in [-0.2,0.2]$, ${R} = \{R_1,R_2\}$, ${\neg R} = \{ B_1,B_2,K_1,K_{2},K_3\}$, and $k \in [0,H]$.
\textbf{Connectivity conditions.} We assume that agents $i$ and $j$ are connected if both of the following conditions hold:
\begin{itemize}
\item The Euclidean distance $d(q_i{(k)}, q_j{(k)})$ between agents $i$ and $j$ is less than a fixed communication range $r=2$.
\item In the corresponding Voronoi diagram, agents $i$ and $j$ are neighbors. This condition was added to emphasize the flexibility of our approach to the control problem.
\end{itemize}
\textbf{STREL Specification.} Our goal is to move the red agents from the initial positions $A,B$ (top right corner and bottom left corner) to the area in the center after time $T_2 = 13$, while staying connected to the black agents in the time interval $[3,15]$ from the initial moment. In addition, we require all agents not to collide with one another in the time interval $[0,15]$.
Let $dist(.)$ be the Euclidean distance. The above requirements can be translated to the following STREL formula: \begin{multline}\label{CS1_dynamics}
\Psi_1(X) = G_{[0,15]}(dist_{i\neq j}(q_i{(k)},q_j{(k)})>0.15) \\
\bigwedge G_{[3,15]} Red \mathcal{R}^{dist}_{\leq 2} Black \\
\bigwedge F_{[13,15]}(dist(q_{i,Red}{(k)},C)\leq 0.4)
\end{multline}
Note that $hrz(\Psi) =15$.
\textbf{The control problem. } Following the outline provided in Sec.~\ref{sec:problem}, we solve (\ref{eq:problem1}) with $\Tilde{\rho}$ as the objective function instead of $\rho$. We set $H=hrz(\Psi) =15$. The cost function is set to $J\left(U(k),X({k+1})\right)= \sum_{i=1}^{7} \norm{u_i(k)}^2$ and $\gamma = 0.1$.
To compare the old robustness $\rho$ with the new robustness $\Tilde{\rho}$, we use PSO to solve the control problem. We used PSO because our STREL formula $\Psi$ contains real-valued predicates and cannot be formulated as a MILP; we also cannot use gradient-based algorithms, as $\rho$ is non-differentiable. We ran the optimization 100 times with $\rho$ and with $\Tilde{\rho}$ in the objective, and we obtained 0 state-control trajectories with $\rho > 0$ and 69 state-control trajectories with $\Tilde{\rho} > 0$. We explain the results by noting that $\Tilde{\rho}$ has a smoothly varying search space as opposed to $\rho$. This encourages the particles to move around and explore the search space for $\Tilde{\rho}$, which is not the case for $\rho$, where the particles of PSO get trapped at the first local optimum.
\textbf{Dataset generation. } To generate the dataset for training the RNN, we need to solve the control problem to obtain a large enough number $M$ of state-control trajectories with robustness above a given threshold $\Tilde{\rho}^{*}\geq \epsilon_{min}=0.001$. Choosing a good number of training points $M$ is often done experimentally and highly depends on the complexity of the model we need to approximate. In our case, we found that the RNN performs best for $M \geq 800$. We solved the control problem and generated $1400$ state-control pairs (data points) from random state initializations within fixed regions in $\mathcal{W}$. It took $ $ hours to execute the script and generate all the data points. We create a dataset $\mathcal{D}$ which consists of the $1050$ training points that have robustness $\Tilde{\rho}^{*}\geq \epsilon_{min}=0.001$. We define the success rate as the percentage of data points with robustness higher than $\epsilon_{min}$. The success rate, average normalized robustness, and computation times for SQP, PSO and PSO+SQP can be seen in Tab.~\ref{tb:opt_CS}.
\begin{table}[tbh!]
\begin{center}
\begin{tabular}{clcc}
\textbf{Algorithm} & \textbf{\begin{tabular}[c]{@{}c@{}}Success\\ rate (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Average \\ robustness\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Time\\ (s per run)\end{tabular}} \\
\textbf{SQP} & 44.7 & $0.0048$ & $23.3$ \\
\textbf{PSO} & 71.7 & $0.0037$ & $30$ \\
\textbf{PSO + SQP} & 93.8 & $0.0050$ & $32.9$
\end{tabular}
\end{center}
\caption{Comparison of optimization methods}\label{tb:opt_CS}
\end{table}
The $6.2 \% $ failure rate of PSO+SQP can be attributed to: a) the robustness threshold $\epsilon_{min}=0.001$ we use to eliminate the approximation error and achieve high-robustness approximation by the RNN; b) stopping the optimization after a certain number of evaluations to avoid long run times; c) the initial configurations being sampled randomly (within given bounded regions in $\mathcal{W}$), some of which cannot satisfy the formula; and d) although premature convergence is less likely to happen, PSO+SQP can still converge to local optima. Overall, PSO+SQP performs better than SQP and PSO in terms of premature convergence.
\textbf{Training the RNN. }
The RNN structure we use to learn the controller consists of an LSTM network with 4 hidden layers. Each hidden layer has 128 nodes. We train the RNN using 850 training points and 200 testing points from the dataset $\mathcal{D}$ for 1000 epochs. The training process takes about 6 minutes.
\textbf{Results. } For the trained RNN, the average normalized robustness over 200 test points from $\mathcal{D}$ is $0.0037$, compared to $0.0050$ using PSO+SQP. The time for executing one run using the learned RNN is $0.002$ seconds, compared to $32.9$ seconds for PSO+SQP. The success rate for the RNN is $93\%$. This shows that our proposed control synthesis approach is much faster and has a high success rate. Sample trajectories generated by the RNN can be seen in Fig.~\ref{fig:CSRNN}: the red agents move from the initial configuration to the final configuration while maintaining the spatio-temporal specifications. Fig.~\ref{fig:CSOPT} shows the trajectories generated by solving the optimization problem.
\begin{figure}[!hbt]
\begin{center}
\begin{minipage}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/16cropped.jpg}
\caption{Trajectory generated by RNN}
\label{fig:CSRNN}
\end{minipage}
\hfill
\begin{minipage}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/66cropped.jpg}
\caption{Trajectory generated by PSO+SQP}
\label{fig:CSOPT}
\end{minipage}
\end{center}
\end{figure}
\section{Conclusion and future research}
In this work, we proposed a new approach to control synthesis for multiagent systems in the presence of spatio-temporal specifications. We introduced new quantitative semantics for the Spatial Temporal Reach Escape Logic (STREL). The proposed semantics are sound, smooth, and allow for optimizing the spatial configuration of the system. We solve the control synthesis problem as an optimization problem using Particle Swarm Optimization (PSO) and Sequential Quadratic Programming (SQP). To provide real-time control, we train a recurrent neural network to learn the controller from a dataset generated by solving the optimization problem. An interesting direction for future research is the use of reinforcement learning to learn controllers online. Another research direction is addressing scalability to more complex systems with a large number of agents.
\section{Introduction}
\label{sec:intro}
From satellite constellations to smart cities and synthetic biology, multi-agent systems are rising in popularity. We are often faced with the control synthesis problem which aims at steering robotic agents in the system from an initial state to a final state while satisfying spatio-temporal requirements such as maintaining connectivity or avoiding obstacles.
The problem of control synthesis in the presence of temporal specifications has been addressed in the literature \cite{Belta2017-zs,Tabuada2009-yz,Raman2014,Sadraddini2015-fh}. Temporal logics such as Signal Temporal Logic (STL) \cite{STL2004} are equipped with quantitative semantics, known as a robustness function, that measure how strongly a signal satisfies a given specification. The robustness allows for mapping the control synthesis problem to an optimization problem with the robustness as the objective function.
More recently, spatio-temporal logics have emerged as a formal way to capture spatial and temporal requirements. Some examples are SpaTeL~\cite{Haghighi2015}, SaSTL~\cite{SaSTL} and STREL~\cite{Bartocci2017,Bartocci2020}. Some of these logics are equipped with quantitative semantics and, similar to STL, can be used to map the control synthesis problem into an optimization problem \cite{Haghighi2016,Liu2020}.
In this work, we employ the Spatial Temporal Reach Escape Logic (STREL) to capture the spatial and temporal requirements. The STREL semantics as defined in \cite{Bartocci2017} have no spatial counting of the number of routes and agents that (dis)satisfy a given specification. Spatial counting can be useful in multi-agent robotic scenarios (see, for example, \cite{Yunus2020}): in many real-world tasks, the completion of a multi-agent system task depends on the number of (dis)satisfying routes and agents. For instance, one might want to maximize the number of robots that surround a given target. We propose new counting quantitative STREL semantics and use them to solve the control synthesis problem. The proposed semantics vary the robustness score depending on the distance between agents, the number of routes, and the number of agents that (dis)satisfy given specifications.
Several optimization methods have been used in the literature to solve the control synthesis problem once it is mapped into an optimization problem. For example, Mixed Integer Linear Program (MILP) encodings were used in \cite{Raman2014,Haghighi2016,Liu2018}. Although this method has shown some promising results, formulating MILP problems is a complicated task and their performance times are unpredictable. Another example is the use of gradient-based methods \cite{Pant2017,Haghighi2019-as,varnai2020robustness,MehdipourAGM,Gilpin2021}. In general, gradient-based methods have fast convergence rates and are simple to implement. However, they are prone to premature convergence to local optima if the initial candidate solution is far from the global optimum. In this work, we aim to use a gradient-based method to solve the control synthesis problem.
The STREL semantics in \cite{Bartocci2017} are non-differentiable due to the existence of the $min/max$ functions. Thus, gradient-based methods cannot be used directly for optimizing the robustness. Our framework utilizes a smooth approximation of the $min/max$ functions \cite{Pant2017,Li2018} which will allow us to use gradient-based methods for optimization.
In this work, we propose using a hybrid optimization method that combines heuristic and gradient-based algorithms (see, for example, \cite{mavrovouniotis2017survey,VICTOIRE200451}). In this class of methods, the optimization is carried out in two stages. In the first stage, the search space is explored using a heuristic algorithm to find a good candidate solution, ideally near the global optimum. In the second stage, the best candidate solution is improved by means of a gradient-based algorithm. This approach provides good exploration of the search space and fast convergence times.
In general, optimization can be infeasible for real-time control. Therefore, we outline our approach to real-time control using Recurrent Neural Networks (RNN) \cite{liu2020recurrent,yaghoubi2020training}. The RNN learns a controller from a dataset of state-control trajectory samples (state trajectories as input and control sequences as output) that satisfy the spatio-temporal specifications. The samples in the dataset are generated by solving the control synthesis problem as described above. Once trained, the RNN controller will predict in real-time the control policy at each time point based on the current state and the history of the system.
The main contributions of this work can be summarized as follows:
\begin{enumerate}
\item We propose new smooth counting quantitative semantics for the Spatial Temporal Reach and Escape Logic (STREL) that allow for optimizing the spatial configuration of the system;
\item Given a multi-agent networked system and spatio-temporal requirements captured by a STREL formula, we propose a hybrid optimization approach for solving the control synthesis problem;
\item We provide recurrent neural network-based real-time controllers for multi-agent networked systems with spatio-temporal requirements specified in STREL.
\end{enumerate}
The rest of this paper is organized as follows. The preliminaries are provided in Sec.~\ref{sec:prelim}. The control synthesis problem in the presence of spatio-temporal specifications is formulated in Sec.~\ref{sec:problem}. In Sec.~\ref{sec:newstrel}, we propose the new STREL quantitative semantics. The optimization approach to the control problem is discussed in Sec.~\ref{sec:solOPT}. Choosing the structure of the RNN and learning a real-time controller are covered in Sec.~\ref{sec:learning_control}. In Sec.~\ref{sec:CS}, we demonstrate the effectiveness of the framework in a case study and discuss the results. Finally, we conclude by pointing out future research directions.
\section{Preliminaries}
\label{sec:prelim}
\subsection{System Dynamics and Connectivity Conditions}
\label{sec:prelim_Dyn}
Consider a system of $N$ robotic agents labeled from the set $S = \{1,2,\ldots,N\}$. Each agent $l \in S$ has a state $X(l)[k]= (q_l[k] , a_l)$ at (discrete) time $k$, where $q_l\in \mathcal{Q} \subset \mathbb{R}^n$
is its dynamical state (e.g., position in plane or space) and $a_l \in \mathcal{A}$ is an attribute of agent $l$ (e.g., agent type), where $\mathcal{A}$ is the set of the attribute labels.
We assume that the dynamics of each agent $l\in S$ are given by
\begin{equation}
\label{dyn}
q_{l}{[k+1]} = f(q_{l}[k],U(l)[k]),
\end{equation}
where $U(l)[k] \in \mathcal{U} \subset \mathbb{R}^m$ is the control input for agent $l$ at time $k$, and $\mathcal{U}$ is the set of admissible controls.
\begin{example} \label{ex:example_state}
Consider a team of $N=8$ agents moving in a planar environment (see Fig.~\ref{fig:example1}). There are $3$ types of agents and the attributes set is $\mathcal{A}=\{red, blue,black\}$. The state of agent $l$ at time $k$ is $X(l) [k]= (q_l[k], a_l)$, where $q_l \in \mathbb{R}^2$ is the position of the agent, and $a_l\in \mathcal{A}$ is the type of the agent, i.e., $a_3=a_5=a_6 =red$, $a_1 =a_7=a_8 = blue$ and $a_4=a_2 = black$. \demo
\end{example}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=.45\linewidth]{figures/STREL_example.png}
\caption{A system of $N=8$ robotic agents}
\label{fig:example1}
\end{center}
\end{figure}
For $H \in \mathbb{N}$, we use $\mathbf{X}_H(l)$ to denote the \emph{state run} $X(l){[0]}X(l){[1]} \ldots X(l){[H]}$ of agent $l$, which is given by the \emph{dynamical state} $q_l{[0]}q_l{[1]} \ldots q_l{[H]}$ generated from controls $U(l)[0] \ldots U(l)[H-1]$ starting at $q_l{[0]}$ along with the agent attribute $a_l$. We denote the state of the system (team of agents) at time $k$ by $X{[k]}$, i.e., $X{[k]}=(X(1)[k],\ldots,X(N)[k])$,
and the \emph{system state run} $X{[0]} X{[1]} \ldots X{[H]}$ by $\mathbf{X}_H$. Similarly, we denote the control of all agents at time $k$ by $U[k]=(u_1[k],\ldots,u_N[k])$ and the control sequence $U[0] \ldots U[H-1]$ by $\mathbf{U}_H$.
Two agents $l$ and $l'$ can communicate at time $k$ if they are connected. For instance, agents are connected if the Euclidean distance between them is less than a fixed communication range. Alternatively, agents may be connected according to a Delaunay triangulation or a Voronoi diagram (see Sec.~\ref{sec:CS}). We model inter-agent connectivity (communication) using an undirected graph $\lambda [k] = (S,E[k])$, where $E[k]\subseteq S\times S$. Specifically, $(l,l')\in E[k]$ means that $l$ and $l'$ are connected.
We use $\lambda$ to denote the sequence of connection graphs over time, i.e., $\mathbf{\lambda} = \lambda[0]\lambda[1] \ldots \lambda[H]$.
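As a small illustration of how $\lambda[k]$ can be built, the sketch below constructs the edge set from agent positions using the fixed-range rule; the positions and range are hypothetical.
\begin{verbatim}
import numpy as np

def range_graph(q, r=2.0):
    # Edge set E[k]: agents l, l' are connected when their
    # Euclidean distance is below the communication range r.
    E = set()
    for l in range(len(q)):
        for lp in range(l + 1, len(q)):
            if np.linalg.norm(q[l] - q[lp]) < r:
                E.add((l, lp))
    return E

q = np.array([[0.0, 0.0], [1.0, 0.5], [4.0, 4.0]])
print(range_graph(q))   # {(0, 1)}: only the first two are in range
\end{verbatim}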
\subsection{Spatio-Temporal Reach and Escape Logic (STREL)} \label{sec:prelim-strel}
STREL is a logic capable of describing complex behaviors in mobile spatially distributed multi-agent systems. In this section, we present the syntax and semantics of STREL.
The \emph{syntax} of STREL is given by
$$
\varphi := \mu \; |\;\neg \varphi \; | \; \varphi_1 \vee \varphi_2 \; | \; \varphi_1 U_{[a,b]} \varphi_2 \; | \; \varphi_1 \mathcal{R}_{\leq d}^f \varphi_2 \; | \; \mathcal{E}^f_{>d} \varphi
$$
where $\mu$ is an atomic proposition; $\neg$ and $\vee$ are the standard negation and disjunction Boolean operators; $U_{[a,b]}$ is the until temporal operator, where $[a,b]$ is a time interval with $a,b \in \mathbb{N}_{\geq 0}$; $ \mathcal{R}_{\leq d}^f $ and $\mathcal{E}^f_{>d}$ represent the spatial operators $reach$ and $escape$, respectively, and $f$ is a distance function. Atomic propositions can be Boolean-valued, $\mu \in P = \{p_1,\ldots,p_{n_{a}}\}$, or real-valued, $\mu = (g(X(l)[k]) \geq 0)$.
We note that additional operators can be derived as follows:
\begin{itemize}
\item Boolean $conjunction$ $\varphi_1 \wedge \varphi_2 = \neg (\neg \varphi_1 \vee \neg \varphi_2)$
\item Temporal $eventually$ $ F_{[a,b]} \varphi = true \: U_{[a,b]} \varphi$
\item Temporal $always$ $ \mathcal{G}_{[a,b]} \varphi = \neg \: F_{[a,b]} \neg \varphi$
\item Spatial $surround$ $\varphi_1 \odot^{f}_{\leq d} \varphi_2 = \varphi_1 \wedge $ \\ $\neg \left(\varphi_1 \mathcal{R}^{f}_{\leq d} \neg (\varphi_1 \vee \varphi_2)\right) \wedge \mathcal{E}^{f}_{ > d} \varphi_1 \wedge \varphi_1 \mathcal{R}^{f}_{ \leq d} \varphi_2$
\end{itemize}
The qualitative semantics of the Boolean and temporal operators coincides with that of STL; we refer the reader to \cite{donze2013efficient,Bartocci2017} for details and only explain the intuition behind the spatial operators $reach$ and $escape$ in the following example.
\begin{example} \label{ex:reach_escape}
Consider a system of eight robotic agents in Fig.~\ref{fig:example1}, and a distance function $hops(l,l')$ which returns the shortest route (number of edges) between agents $l,l'$. If the formula $\varphi$ holds for the signal $X(l)[k]$, we write $X(l)[k] \models \varphi $. The validity of a STREL formula $\varphi$ (satisfaction/violation of the specification) with respect to $X(l)[k]$ for the STREL spatial operator can be described as follows:
$\bullet ~\mathbf{reach}$: $\varphi_1 \mathcal{R}^{f}_{\leq d} \varphi_2$ is satisfied by $X(l)[k]$ iff $\varphi_2$ is satisfied at an agent $l'$ reachable from $l$ through a continuous route $\tau$, the length of $\tau$ satisfies the \textit{distance predicate} $f(l,l')\leq d$, and $\varphi_1$ is satisfied at $l$ and all other agents in $\tau$. For instance, $black \: \mathcal{R}^{hops}_{\leq 1} red$ is satisfied at agent $2$ because agent $3$ is $red$ with a distance of at most 1 hop from agent $2$, and agent $2$ is $black$. However, it is violated at agent 5 because agent 5 is not $black$.
$\bullet ~ \mathbf{escape}$: $\mathcal{E}^{f}_{> d} \varphi$ is satisfied by $X(l)[k]$ iff there exists a continuous route $\tau$ from agent $l$ with a length that satisfies the \textit{distance predicate} $f(l,l')>d$ in which $\varphi$ is satisfied at all agents. For instance, $\mathcal{E}^{hops}_{> 2} red$ is satisfied at agent 5 because all of the agents in the route $\{5,6,3\}$ are $red$. However, it is violated at agent $6$ because there is no route from agent $6$ with $f(6,l')>2$ along which all agents are $red$.
We note that for operator $surround$, the term ($\wedge \: \varphi_1 \mathcal{R}^{f}_{\leq d} \varphi_2$) was added to avoid false satisfaction in the case of an isolated cluster of agents that satisfy $\varphi_1$. For example, $surround$ as defined in \cite{Bartocci2017} gives satisfaction at agent $7$ for the property $blue \odot^{f}_{\leq 2} red$, despite it not being surrounded by $red$ agents. Thus, we additionally require the agent to satisfy $blue \: \mathcal{R}^{f}_{\leq d} \: red$.
\demo
\end{example}
In addition to the \textit{qualitative semantics}, we present the STREL \emph{quantitative semantics} (robustness) with respect to the signal $X(l)[k]$ at time $k$.
The robustness of a given formula $\varphi$ is defined by assigning a real-valued measure, known as robustness function, $\rho_{_{}} (\lambda[k],X(l)[k],\varphi)$
for the signal $X(l)[k]$ at time $k$ such that $X(l)[k] \models \varphi \Leftrightarrow \rho_{_{}} (\lambda[k],X(l)[k],\varphi) >0 $.
\begin{equation}
\label{eq:strel-org}
\begin{aligned}
&\begin{aligned}
\rho_{_{}} (\lambda[k],X(l)[k],\mu) & = \iota(\mu,X(l)[k])\\
\rho_{_{}} (\lambda[k],X(l)[k],\neg \varphi) & = - \rho_{_{}} (\lambda[k],X(l)[k],\varphi)\\
\rho_{_{}} (\lambda[k],X(l)[k],\varphi_1 \vee \varphi_2 ) & = \max (\rho_{_{}} (\lambda[k],X(l)[k],\varphi_1), \\
& \qquad \rho_{_{}} (\lambda[k],X(l)[k],\varphi_2))\\
\end{aligned}\\
&\begin{aligned}
\rho_{_{}} (\lambda[k],X(l)[k],\varphi_1 U_{[a,b]} \varphi_2) & = \qquad \\
\max_{k'\in [a,b]} \min\big( \rho_{_{}} (\lambda[k],X(l)[k'],&\varphi_2), \\ \min_{k''\in [k,k']} \rho_{_{}} (&\lambda[k],X(l)[k''],\varphi_1)\big)\\
\end{aligned}\\
&\begin{aligned}
\rho_{_{}} (\lambda[k],X(l)[k],\varphi_1 \mathcal{R}_{\leq d}^f \varphi_2 ) & =\max_{\tau \in Routes}\max_{l'\in \tau:f(l,l')\leq d} \\
\bigg[\min(\rho_{_{}}(\lambda[k],X(l')[k],&\varphi_2);\\
\min_{j<\tau(l')}\rho_{_{}}(&\lambda[k],X(\tau[j])[k],\varphi_1))\bigg]\\
\end{aligned}\\
&\begin{aligned}
\rho_{_{}} (\lambda[k],X(l)[k],\mathcal{E}^f_{>d} \varphi) & =\\
\max_{\tau \in Routes}&\max_{l'\in \tau:f(l,l')> d} \min_{j<\tau\left(l'\right)}\rho_{_{}}\left(\lambda[k],X(\tau[j])[k],\varphi\right)
\end{aligned}
\end{aligned}
\end{equation}
where $\iota$ is the \emph{signal interpretation function} $\iota : AP \times (\mathbb{R}^n \times \mathcal{A}) \rightarrow D_{\iota}$; $\lambda{[k]}$ is the communication graph at time $k$; $Routes(\lambda{[k]}, l)$ denotes the set of indexed sequences of nodes on the communication graph $\lambda{[k]}$ starting at node $l$; and $f(l,l')$ is a distance function. The domain of the signal interpretation function can be Boolean ($D_{\iota}=\mathbb{B}$) or real ($D_{\iota}=\mathbb{R}$) depending on the atomic proposition.
For example, if $AP=\{p_1,\ldots,p_n\}$ consists of Boolean propositions, such as the type of an agent, then $\iota(p_i,X(l)[k])= \top$ iff agent $l$ satisfies $p_i$ at time $k$. Another example is the case of real-valued signals $\mu = (g(X(l)[k])\geq 0)$, for which $\iota(\mu,X(l)[k])=g(X(l)[k])$.
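The dispatch of the signal interpretation function can be sketched as follows; representing a Boolean proposition by its attribute label and a real-valued predicate by a callable is an illustrative encoding choice, not the paper's implementation.
\begin{verbatim}
def iota(mu, state):
    # state is the pair (q_l, a_l); Boolean propositions compare
    # the attribute, real-valued ones evaluate g on the state.
    q, a = state
    if isinstance(mu, str):        # e.g. "red": D_iota = B
        return a == mu
    return mu(q)                   # mu = g(.): D_iota = R

assert iota("red", ((0.0, 1.0), "red")) is True
assert iota(lambda q: 1.0 - q[0], ((0.5, 0.0), "blue")) == 0.5
\end{verbatim}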
The \textit{time horizon} of a STREL formula $\varphi$ is denoted by $hrz(\varphi)$ and is defined as the smallest future time bound such that the states of the agents up to that time suffice to compute the robustness at the current time.
\begin{theorem}\cite{Bartocci2017}
The STREL robustness defined by \eqref{eq:strel-org} is sound, i.e., positive robustness indicates satisfaction of the specification, and negative robustness indicates dissatisfaction of the specification.
\end{theorem}
\section{Problem Formulation and Approach}
\label{sec:problem}
Consider a system of $N$ robotic agents labeled from the set $S = \{1,\ldots,N\}$ with dynamics (\ref{dyn}) and a differentiable cost function $J(U[k],X[k])$. Assume that spatio-temporal requirements are given by a STREL formula $\Psi$ over the state run of the system $X({0}) \ldots X({H})$, and $H \geq hrz(\Psi)$ is the planning horizon.
\begin{problem}[Control Synthesis] Given a multi-agent system with dynamics (\ref{dyn}), a STREL formula $\Psi$, the initial communication graph $\lambda [0]$, the initial state of the system $X{[0]}$, a planning horizon $H \geq hrz(\Psi)$, and a differentiable cost function $J$; \emph{find} an optimal control sequence $\mathbf{U}^*_H = \{U(l){[k]}\,|\,l = 1,\ldots,N;\,k = 0,\ldots,H-1\}$ such that the resulting state run of the system $\mathbf{X}_H$ maximizes the robustness of the formula $\Psi$ and minimizes the cost function $J$.
\begin{align}\label{eq:problem1}
\begin{aligned}
\mathbf{U}_H^{*} & = \argmaxA_{\mathbf{U}_H} \rho_{_{}}\left(\mathbf{\lambda},\mathbf{X}_H, \Psi\right) \\
& \qquad - \gamma \sum_{k=0}^{{H-1}}J\left(U[k],X[k]\right) \\
s.t. \quad & U(l){[k]} \in \mathcal{U} ,\quad \forall l \in S;\forall k \in [0,{H-1}]\\
& q_{l}{[k+1]} = f(q_{l}[k],U(l)[k]),\\
& \qquad \qquad \forall l \in S;\forall k \in \{0,\ldots, H-1\}
\end{aligned}
\end{align}
where $\gamma$ is a trade-off coefficient for the cost function.
\end{problem}
Our approach to solving the control synthesis problem is as follows. First, we propose new smooth and sound \emph{counting STREL quantitative semantics} (Sec.~\ref{sec:newstrel}). Next, we solve \eqref{eq:problem1} by employing a combination of heuristic and gradient-based optimization algorithms (Sec.~\ref{sec:solOPT}). The result of solving \eqref{eq:problem1} is a control sequence that maximizes the robustness of the STREL formula and minimizes the cost function. As the performance time for optimization might not meet real-time control requirements, we propose training an RNN to learn controllers from a dataset that contains samples of state-control trajectories generated by solving the optimization problem \eqref{eq:problem1} with different initializations (Sec.~\ref{sec:learning_control}).
\section{STREL Counting Quantitative Semantics}
\label{sec:newstrel}
In this section, we introduce the counting STREL quantitative semantics, which differ from the original semantics defined in~\cite{Bartocci2017} in three ways. First, the proposed semantics vary depending on the distance between agents. Second, we propose spatial counting to capture the spatial satisfaction for individual agents by introducing a sigmoid function that depends on the number of satisfying/violating routes at individual agents. Third, we propose spatial counting for satisfying/violating agents to capture the satisfaction of the specification for the whole system.
\subsection{Optimizing the Spatial Configuration for Connectivity}
In practice, connectivity between robotic agents in a networked system often depends on the spatial configuration. For example, it is often the case that the smaller the distance between agents, the better the connectivity. To take the distance variability into account, we introduce the function $\sigma_{dist}$ (Fig.~\ref{fig:sigma_dist}), which depends on the distance between agents $f(l,l')$ and a scalar $d$ as defined by the distance predicate of the STREL spatial operators. $\sigma_{dist}$ takes values in $[-1,1]$ depending on the ratio $d_{norm}= \frac{f(l,l')}{d}$ and is defined for the cases $f(l,l')\leq d$ and $f(l,l')>d$ as follows:
\begin{align}
\sigma_{dist}^{\leq}(d_{norm}) &= -tanh(k_{d}(d_{norm}-1)) \label{eq:sigma_dist1}\\
\sigma_{dist}^{>}(d_{norm}) &= tanh(k_{d}(d_{norm}-1)) \label{eq:sigma_dist2}
\end{align}
where $k_{d}$ is a hyperparameter that determines how fast the function changes its value (i.e., the steepness of the graph).
We combine $\sigma_{dist}$ with the robustness $\rho_{_{}}$ by taking $\min(\sigma_{dist},\rho_{_{}})$ (see the proposed robustness functions below). Notice that $\sigma_{dist}$ allows the robustness score to change beyond the distance constant $d$ as defined by the distance predicate.
\subsection{Counting for Routes}\label{sec:sigma_routes}
Consider the original robustness function of the spatial operator $escape$
\begin{multline*}
\rho_{_{}} (\lambda[k],X(l)[k],\mathcal{E}^f_{>d} \varphi) =\\
\max_{\tau \in Routes}\max_{l'\in \tau:f(l,l')> d} \min_{j<\tau\left(l'\right)}\rho_{_{}}\left(\lambda[k],X(\tau[j])[k],\varphi\right)
\end{multline*}
Set the robustness of a given route $\tau$ to be $\rho_{\tau} = \max_{l'\in \tau:f(l,l')> d} \min_{j<\tau\left(l'\right)}\rho_{_{}}\left(\lambda[k],X(\tau[j])[k],\varphi\right)$. Notice that $\rho_{_{}} = \max_{\tau \in Routes} \rho_{\tau}$. This means that it is enough to have one route $\tau \in Routes$ with $\rho_{\tau}>0$ to achieve satisfaction of formula $\varphi$; thus the robustness score does not vary depending on the number of (dis)satisfying routes. In practice, it is beneficial to maximize the number of \emph{routes} that satisfy a given formula. For instance, for operator $surround$ ($\varphi_1 \odot^f_{\leq d} \varphi_2$) this maps into maximizing the number of agents satisfying $\varphi_2$ that surround agents satisfying $\varphi_1$.
Let $R^+ \in \mathbb{N}$ and $R^- \in \mathbb{N}$ be the number of routes that satisfy and violate a given spatial operator, respectively. We introduce the function $\sigma_{routes}$ (Fig.~\ref{fig:sigma_routes}) as follows:
\begin{multline} \label{eq:sigma_routes}
\sigma_{routes}(R^+,R^-) = \max\big(\frac{1}{1+e^{k_{R} R^{-}}},\\ \frac{1}{1+e^{-k_{R} (R^{+}-R^{-}) }} \big)
\end{multline}
where $k_{R}$ is a hyperparameter that determines how fast the function changes its value.
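The following Python sketch implements \eqref{eq:sigma_routes}; the steepness $k_R=1$ is an illustrative choice. The assertion illustrates that, all else being equal, more satisfying routes yield a larger factor.
\begin{verbatim}
import numpy as np

def sigma_routes(R_plus, R_minus, k_R=1.0):
    # First term decays with the number of violating routes; the second
    # grows with the surplus of satisfying over violating routes.
    t1 = 1.0 / (1.0 + np.exp(k_R * R_minus))
    t2 = 1.0 / (1.0 + np.exp(-k_R * (R_plus - R_minus)))
    return max(t1, t2)

assert sigma_routes(3, 1) > sigma_routes(1, 1)  # more satisfying routes
\end{verbatim}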
\begin{figure}
\centering
\begin{subfigure}[b]{0.235\textwidth}
\includegraphics[width=\textwidth]{figures/sigma_loc.jpg}
\caption{$\sigma_{dist}^{\leq}$ and $\sigma_{dist}^{>}$}
\label{fig:sigma_dist}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.235\textwidth}
\includegraphics[width=\textwidth]{figures/sigma_routes.jpg}
\caption{$\sigma_{routes}(R^+,R^-)$}
\label{fig:sigma_routes}
\end{subfigure}
\caption{Behavior of the functions $\sigma_{dist}$ and $\sigma_{routes}$}
\label{fig:sigma}
\end{figure}
\subsection{The Proposed Robustness for STREL}
Now, we integrate $\sigma_{dist}$ and $\sigma_{routes}$ and introduce the proposed robustness function for the spatial operators. The robustness for the Boolean and temporal operators does not change.
\begin{multline*}
\rho_c\left(\lambda[k],X(l)[k],\varphi_1 \mathcal{R}^{f}_{\leq d} \varphi_2\right) =\\ \min\bigg[\sigma_{dist}^{\leq}\left(d_{norm}\right), \sigma_{routes}(R^+,R^-) \max_{\tau \in Routes} \max_{l'\in \tau:f(l,l')\leq d} \\\min\left(\rho_c\left(\lambda[k],X(l')[k],\varphi_2\right); \min_{j<\tau\left(l'\right)}\rho_c\left(\lambda[k],X(\tau[j])[k],\varphi_1\right)\right)\bigg]
\end{multline*}
\begin{multline}\label{eq: strel-ro}
\rho_c\left(\lambda[k],X(l)[k], \mathcal{E}^{f}_{> d} \varphi\right) = \min\bigg[\sigma_{dist}^{>}\left(d_{norm}\right); \\ \sigma_{routes}(R^+,R^-)\times\max_{\tau \in Routes} \max_{l'\in \tau:f(l,l')> d}\\\min_{j<\tau\left(l'\right)}\rho_c\left(\lambda[k],X(\tau[j])[k],\varphi\right)\bigg]
\end{multline}
\subsection{Counting for Agents}
The STREL semantics is defined at the level of individual agents.
A na\"{i}ve method to compute the robustness of the team of $N$ agents is to consider the minimum of the robustness of individual agents.
However, the robustness function will reflect the worst robustness score among agents and will not depend on the number of agents that satisfy/violate the formula $\varphi$. Following a similar approach to the route counting in Sec.~\ref{sec:sigma_routes}, we introduce $\sigma_{ag}(Ag^+,Ag^-)$ which allows for varying the robustness score depending on the number of agents that satisfy/violate the specification.
Let $Ag^{+},Ag^{-}$ be the number of agents that satisfy and violate the specification, respectively. Then $\sigma_{ag}$ is given by:
\begin{multline}\label{eq:sigma_agents}
\sigma_{ag}(Ag^+,Ag^-) = \\
\max\big( \frac{1}{1+e^{-k_{ag}Ag^{-}}}, \frac{1}{1+e^{k_{ag}(Ag^{-}-Ag^{+})}} \big)
\end{multline}
The resulting robustness function (for the team of agents) is given by:
\begin{multline} \label{eq: strel-ro-all}
\rho_c(\lambda[k],X[k],\varphi) = \sigma_{ag}(Ag^+,Ag^-) \\ \min_{l\in \{1,\ldots,N\}}(\rho_c(\lambda[k],X(l)[k],\varphi))
\end{multline}
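A minimal Python sketch of \eqref{eq:sigma_agents} and of the team robustness \eqref{eq: strel-ro-all} follows; the steepness $k_{ag}=1$ and the convention that an agent with robustness exactly $0$ counts as violating are assumptions made for the sketch.
\begin{verbatim}
import numpy as np

def sigma_ag(ag_plus, ag_minus, k_ag=1.0):
    t1 = 1.0 / (1.0 + np.exp(-k_ag * ag_minus))
    t2 = 1.0 / (1.0 + np.exp(k_ag * (ag_minus - ag_plus)))
    return max(t1, t2)

def team_robustness(agent_rhos, k_ag=1.0):
    # sigma_ag scales the worst robustness score among the agents.
    ag_plus = sum(r > 0 for r in agent_rhos)
    ag_minus = len(agent_rhos) - ag_plus
    return sigma_ag(ag_plus, ag_minus, k_ag) * min(agent_rhos)

print(team_robustness([0.4, 0.7, 0.2]))  # positive: every agent satisfies
\end{verbatim}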
\begin{theorem}
The counting robustness of STREL defined by \eqref{eq: strel-ro} and \eqref{eq: strel-ro-all} is sound.
\end{theorem}
\begin{proof}[Sketch]
A formal proof is omitted due to space constraints.
Informally, soundness can be viewed as sign consistency between the counting robustness and the robustness formulation in \cite{Bartocci2017}. We show that the three introduced functions $\sigma_{dist}(.)$, $\sigma_{routes}(.)$, $\sigma_{ag}(.)$ do not affect the sign of the robustness function in \cite{Bartocci2017}.
First, $\sigma_{routes}(.)$ and $\sigma_{ag}(.)$ are positive and are multiplied by the original robustness function, so they do not change the sign of the robustness score.
Second, $\sigma_{dist}(.)$ takes values in the range $[-1,1]$ and is negative only when the distance predicate is violated (for example, when $f\leq d$ does not hold for the operator $reach$). Since we take the minimum between the robustness function and $\sigma_{dist}(.)$, the robustness function still gives positive values for satisfaction and negative values for violation, as before. Thus, the soundness of the proposed formulation follows from the soundness of the formulation in \cite{Bartocci2017}.
\end{proof}
\subsection{Smooth Robustness Approximation}
\label{sec:prelim-smooth}
The STREL robustness function is non-differentiable. This stems from the $\max$ and $\min$ functions in the semantics. It has been shown that smooth approximations of $\min$ and $\max$ with arbitrarily small errors exist \cite{Pant2017,Li2018}:
\begin{equation*}
\begin{array}{c}
\max(a_1,..,a_n)\approx {\widetilde{\max}}(a_1,..,a_n) = \frac{1}{\beta}
\ln \big (\sum_{i=1}^n e^{\beta a_i} \big),\\
\min(a_1,..,a_n) \approx {\widetilde{\min}}(a_1,..,a_n)= -{\widetilde{\max}}(-a_1,..,-a_n)
\end{array}
\end{equation*}
In~\cite{Pant2017}, the authors show that the approximation error is bounded.
\begin{equation}\label{eq:approx_error}
\begin{array}{c}
0\leq \max(a_1,..,a_n) - {\widetilde{\max}}(a_1,..,a_n)\leq \frac{\ln(n)}{\beta}= \epsilon_{\beta} \\
\end{array}
\end{equation}
We replace the non-differentiable terms with their smooth approximation and denote the smooth approximation of the robustness function by $\Tilde{\rho_c}(.)$.
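The smooth approximations above are straightforward to implement; the following Python sketch uses the standard max-shift trick to avoid overflow of the exponentials, and $\beta=10$ is an illustrative value.
\begin{verbatim}
import numpy as np

def smooth_max(a, beta=10.0):
    # (1/beta) * log(sum exp(beta * a_i)); subtracting the true max
    # first keeps the exponentials from overflowing.
    a = np.asarray(a, dtype=float)
    m = a.max()
    return m + np.log(np.exp(beta * (a - m)).sum()) / beta

def smooth_min(a, beta=10.0):
    return -smooth_max(-np.asarray(a, dtype=float), beta)

vals = [0.3, -0.1, 0.25]
print(max(vals), smooth_max(vals))  # gap is at most ln(3)/beta
\end{verbatim}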
\section{Control Synthesis for STREL Specifications}
\label{sec:solOPT}
The control synthesis problem described in Sec.~\ref{sec:problem} is a constrained non-linear optimization problem. We modify the problem by replacing the original robustness $\rho_{_{}}$ with the proposed smooth robustness $\Tilde{\rho_c}$ and adding a constraint $\Tilde{\rho_c} \geq \epsilon_{\beta}$, where $ \epsilon_{\beta}$ is the approximation error from \eqref{eq:approx_error}. Given the robustness function $\Tilde{\rho_c}(.)$ and dynamics (\ref{dyn}), we solve the problem by employing a hybrid optimization method PSO+SQP that utilizes a combination of heuristic and gradient-based optimization algorithms.
Although many gradient-based algorithms can be utilized, this discussion focuses on Sequential Quadratic Programming (SQP), which creates and solves a sequence of quadratic approximations to the problem \cite{polak2012optimization}. The feasible space of optimization variables can have many local maxima, so we propose to first explore the search space using the Particle Swarm Optimization (PSO) algorithm~\cite{kennedy1995particle}.
We then initialize the SQP by the candidate solution found using PSO. We note that PSO is computationally expensive and we run it only for a few iterations to find a good SQP initialization.
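To illustrate the two-stage scheme, here is a self-contained Python sketch on a stand-in objective (minimizing the negated objective of \eqref{eq:problem1} would take the same form); the PSO coefficients and \texttt{scipy}'s SLSQP solver are illustrative choices, not the exact implementations used in our experiments.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def objective(u):
    # Stand-in for -(smooth robustness) + gamma * cost; any smooth,
    # multimodal function will do for the illustration.
    return np.sum((u - 0.1) ** 2) + 0.5 * np.sin(5.0 * u).sum()

def pso(obj, dim, n_particles=64, iters=20, lo=-0.2, hi=0.2, seed=0):
    # Stage I: a few iterations of plain PSO, only to find a good start.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([obj(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g

dim = 8                          # e.g., stacked controls over the horizon
u0 = pso(objective, dim)         # stage I: global exploration
res = minimize(objective, u0, method="SLSQP",
               bounds=[(-0.2, 0.2)] * dim)  # stage II: local refinement
print(res.fun)
\end{verbatim}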
\textbf{Comparison to optimization approaches in the literature}.
The authors in~\cite{Liu2020,Haghighi2016,Raman2014} use MILP encodings to solve the control synthesis problem in the presence of temporal or spatio-temporal specifications. In this approach, integer variables are introduced for every atomic proposition in the specification. Additional integer variables and constraints are then recursively defined for larger sub-formulas. The satisfaction of the specification becomes equivalent to the satisfaction of the integer constraint corresponding to the entire specification, reducing the control problem to a MILP.
This approach has certain setbacks. First, MILP problems are computationally complex and the run times of MILP-based solutions are unpredictable. Even in scenarios where a MILP solver finds a solution in an acceptable time, a small change in the specification and/or initial conditions can radically alter the run time. Second, MILP-based solutions are generally centralized, since the mixed-integer constraints are interconnected and cannot be solved in a distributed manner.
Alternatively, gradient-based methods have been used in the literature when the robustness is smooth~\cite{Pant2017,Haghighi2019-as,varnai2020robustness,MehdipourAGM,Gilpin2021}. In this approach, the initialization is set to be a point in the feasible search space. However, this approach often suffers from premature convergence and requires a good initialization point to generate a solution that satisfies the given specification. Pure heuristic methods are less popular due to their expensive computational costs. Our approach (PSO+SQP) provides good initialization points and has a good convergence rate. A quantitative comparison of the performance of SQP, PSO and PSO+SQP can be found in Sec.~\ref{sec:CS} (Tab.~\ref{tb:opt_CS}).
\section{Training RNN Controllers}
\label{sec:learning_control}
Solving the control synthesis problem by optimization can be infeasible for real-time applications. Therefore, we propose to use neural networks to predict the controls that satisfy a given STREL formula. We first create a dataset by solving the control synthesis problem using PSO+SQP optimization. To deal with the history dependence of temporal operators, we train RNNs using samples from the generated dataset to learn the controllers.
\textbf{Dataset generation.} Given a team of agents as described in Sec.~\ref{sec:prelim_Dyn}, a STREL formula $\varphi$, a planning horizon $H \geq hrz(\varphi)$, a set of $M$ initial states $\{ X[0]^{[1]}, \ldots, X[0]^{[M]} \}$ of the system and their corresponding initial communication graphs $\{ \lambda[0]^{[1]}, \ldots, \lambda[0]^{[M]} \}$, we create a dataset $D$ by solving the control synthesis problem described in Sec.~\ref{sec:solOPT}. The dataset $D$ consists of $m \leq M$ satisfying state-control trajectories, i.e., $D =\{\mathbf{X}_H^{[j]},\mathbf{U}_H^{[j]}\,|\,\Tilde{\rho_c}^{[j]}\geq \epsilon_{min} \}$, where $\epsilon_{min} \geq \epsilon_{\beta}$ is the robustness margin. The robustness margin is used to compensate for the approximation error and to increase the resulting robustness of satisfaction for the trained RNN.
\textbf{RNN implementation.} STREL has history-dependence due to the existence of temporal operators. This means that the control at each time point is dependent on the current state and the history trajectory, $U[k]=g(X[0],..,X[k])$. To address this issue, the function $g$ can be approximated as follows:
\begin{equation}\label{eq:RNN}
\begin{aligned}
h{[k]} &= \mathcal{R}(X[k],h[k-1],W_1)\\
\hat{U}[k] &= \mathcal{N}(h[k], W_2)
\end{aligned}
\end{equation}
where $W_1,W_2$ are the weight matrices of the RNN, $h[k]$ is the hidden state at time point $k$ and $\hat{U}[k]$ is the predicted control at $k$. The network is trained to minimize the error between the predicted control and the optimized control given in the dataset:
\begin{equation}
\min_{W_1,W_2} \sum_{D} \sum_{k=0}^{H-1}\norm{ U[k]-\hat{U}[k]}^2
\end{equation}
The hidden state works as a means to pass the history information, and thus the RNN addresses the history-dependence issue. Depending on the length of the planning horizon $H$, many RNN implementations can lose the history information (the problem of vanishing/exploding gradients). That is why we utilize a Long Short-Term Memory (LSTM) network. LSTMs have shown success at learning control under temporal specifications (see~\cite{liu2020recurrent} for example). An LSTM has feedback channels and memory cells that can manage long-term dependencies.
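A minimal PyTorch sketch of the controller \eqref{eq:RNN} and its training objective follows; the layer sizes, the batch of random placeholder trajectories, and the optimizer settings are illustrative assumptions, not the configuration used in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class LSTMController(nn.Module):
    # h[k] = R(X[k], h[k-1], W1);  U_hat[k] = N(h[k], W2)
    def __init__(self, state_dim, control_dim, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, num_layers=layers,
                            batch_first=True)
        self.head = nn.Linear(hidden, control_dim)

    def forward(self, x):        # x: (batch, H, state_dim)
        h, _ = self.lstm(x)      # hidden states for all time steps
        return self.head(h)      # (batch, H, control_dim)

model = LSTMController(state_dim=14, control_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X = torch.randn(32, 13, 14)      # placeholder state trajectories
U = torch.randn(32, 13, 4)       # corresponding optimized controls
for _ in range(10):              # minimize sum_k ||U[k] - U_hat[k]||^2
    loss = ((model(X) - U) ** 2).sum(-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
\end{verbatim}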
\begin{figure*}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn1.png}
\caption{$t=1$}
\label{fig:rnn_1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn7.png}
\caption{$t=7$}
\label{fig:rnn_2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn10.png}
\caption{$t=10$}
\label{fig:rnn3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn13.png}
\caption{$t=13$}
\label{fig:rnn4}
\end{subfigure}
\caption{Snapshots of the trajectory generated by the RNN-based controller at different times}
\label{fig:three graphs}
\end{figure*}
\section{Case Study: Networked Robotic Agents}
\label{sec:CS}
This section demonstrates the efficacy of our proposed framework in a case study. We compare the results obtained using the RNN-based controller with those obtained by solving an optimization problem. We also compare the performance of the optimization methods discussed in Sec.~\ref{sec:solOPT}, as well as the performance under the original semantics and under our proposed semantics.
The scripts for the case study were run on a PC with a Core i7 CPU @3.50GHz with 80 GB RAM. For optimization, we used a customized Particle Swarm Optimization (PSO) with $64$ particles and a MATLAB built-in implementation of SQP. To implement the RNN, we used the \emph{Python} package \emph{PyTorch}.
\begin{figure}[]
\begin{center}
\begin{minipage}[b]{0.40\textwidth}
\includegraphics[width=\textwidth]{figures/RNNBIG}
\caption{Trajectories generated using RNN ({\color{red}-}) and PSO+SQP ({\color{red}-~-}) for three different initializations}
\label{fig:CS}
\end{minipage}
\end{center}
\end{figure}
\textbf{System dynamics. } Consider a network of 7 robotic agents in a two-dimensional Euclidean space with dimensions $L \times L$, $L=5$ units (see Fig.~\ref{fig:CS}). Each agent $l$ has a state $X(l){[k]}=(q_l,a_l)$ at time $k$.
There are 2 $red$, 2 $blue$ and 3 $black$ agents. The blue and black agents are not controllable, while the red agents are controllable and their dynamics are captured by the equation
\begin{align} \label{dyn_CS}
q{[k+1]} &=q{[k]}+u{[k]},
\end{align}
where $q\in \mathbb{R}^2$ is the position of the agent and $u \in [-0.2,0.2]$ is the control input.
\textbf{Connectivity conditions.} Two agents $l$ and $l'$ are connected at time $k$ if both of the following conditions hold:
\begin{itemize}
\item The Euclidean distance between agents $l$ and $l'$ is less than a fixed communication range $d=2$.
\item In the corresponding Voronoi diagram, agents $l$ and $l'$ are neighbors at time $k$.
\end{itemize}
\textbf{STREL specification.} We aim to move the red agents from the initial positions (at the upper right corner and lower left corner) to the area in the center (within 0.4 units from the origin $C=(0,0)$) after time $T_2 = 13$, while staying connected to the black agents in the time interval $[3,15]$ time units from initial moment. In addition, we require all agents not to collide with one another in the time interval $[0,15]$.
The specifications above are captured by the formula
\begin{multline}\label{CS1_dynamics}
\Psi_1(X) = G_{[0,15]}(dist_{i\neq j}(q_i{[k]},q_j{[k]})>0.15) \\
\bigwedge G_{[3,15]} Red \mathcal{R}^{dist}_{\leq 2} Black \\
\bigwedge F_{[13,15]}(dist(q_{i,Red}{[k]},C)\leq 0.4)
\end{multline}
Note that $hrz(\Psi_1) =15$.
\textbf{The control problem.} Following the solution outline provided in Sec.~\ref{sec:problem}, we solve (\ref{eq:problem1}) with $\Tilde{\rho_c}$ as the objective function instead of $\rho_{_{}}$. We set $H=hrz(\Psi_1) =15$, the cost function $J\left(U[k],X[k]\right)= \sum_{i=1}^{7} \sum_{k = 0}^{H-1} \norm{u_i[k]}^2$ and $\gamma = 0.1$.
To compare the original robustness $\rho_{_{}}$ with our proposed robustness $\rho_c$, we used PSO to solve the control problem. We executed the optimization using $\rho_{_{}}$ and $\rho_c$ in the objective function, $100$ times each. We obtained no state-control trajectories with $\rho_{_{}} > 0$ and 69 state-control trajectories with $\rho_c> 0$. The results are intuitive, as the proposed robustness has a varying search space (as opposed to the original robustness) due to the introduced functions $\sigma_{dist},\sigma_{routes},\sigma_{ag}$. This encourages the particles of PSO to move around and explore the search space.
\textbf{Dataset generation. }To generate the dataset for training the RNN, we need to solve the control problem to get $m$ state-control trajectories with a robustness score above a given threshold $\epsilon_{min}=0.001$. Choosing a good number of training points $m$ is often done experimentally and highly depends on the complexity of the model we need to approximate. In our case, we found that the RNN performs best for $m \geq 800$. We solved the control problem and generated $1400$ state-control pairs (data points) from random state initializations within fixed regions (see Fig.~\ref{fig:CS}). It took six hours to execute the script and generate all the trajectories. We created a dataset $D$, which consists of the $1050$ training points that have robustness $\Tilde{\rho_c}\geq \epsilon_{min}=0.001$. We define the success rate as the percentage of data points with robustness higher than $\epsilon_{min}$. The success rate, average normalized robustness and computation times for SQP, PSO and PSO+SQP are reported in Tab.~\ref{tb:opt_CS}.
\begin{table}[tbh!]
\begin{center}
\begin{tabular}{clcc}
\textbf{Algorithm} & \textbf{Success rate (\%)} & \textbf{\begin{tabular}[c]{@{}c@{}}Average \\ robustness\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Time\\ (seconds/run)\end{tabular}} \\
\textbf{SQP} & 44.7 & $0.0048$ & $23.3$ \\
\textbf{PSO} & 71.7 & $0.0037$ & $30$ \\
\textbf{PSO + SQP} & 93.8 & $0.0050$ & $32.9$
\end{tabular}
\end{center}
\caption{Comparison of optimization methods}\label{tb:opt_CS}
\end{table}
\textbf{Training the RNN.}
The RNN structure we use to learn the controller consists of an LSTM network with four hidden layers. Each hidden layer has $64$ nodes. We split the dataset $D$ into $850$ training points and $200$ testing points and train the network for $700$ epochs. The training process takes about six minutes.
\textbf{Results. } For the trained RNN, the average normalized robustness for the 200 test points from $D$ is $0.0037$, compared to $0.0050$ using PSO+SQP. The time of executing one run using the learned RNN is $0.002$ seconds, compared to $32.9$ seconds for PSO+SQP. The success rate of the RNN-based controller for new initializations is $93\%$. Fig.~\ref{fig:CS} shows sample trajectories generated from the same initializations by the RNN-based controller and by PSO+SQP. This demonstrates that our proposed control synthesis approach meets real-time control requirements while satisfying the given specification.
\section{Conclusion and Future Research}\label{sec:conclusion}
In this work, we proposed a control synthesis framework for multi-agent networked systems in the presence of spatio-temporal specifications. We introduced new counting quantitative semantics for the Spatial Temporal Reach Escape Logic (STREL). The proposed semantics are sound, smooth and allow for optimizing the spatial configuration of the system. We solved the control synthesis problem as an optimization problem using Particle Swarm Optimization (PSO) and Sequential Quadratic Programming (SQP). To provide real-time control, we trained a recurrent neural network to learn the controller from a dataset generated by solving the optimization problem. An interesting direction for future research is the use of reinforcement learning to learn controllers online. Another research direction is to address scalability to more complex systems with a large number of agents.
\section{Introduction}
\label{sec:intro}
Multi-agent systems are used as models in many applications, ranging from robotics to power networks, smart cities, and synthetic biology. Planning and controlling the motions of multi-agent systems are difficult problems, which have received a lot of attention in recent years. One of the main challenges is specifying their motions. Existing approaches include consensus algorithms \cite{bullo2009distributed,mesbahi2010graph}, in which the specification is to reach a desired global state (e.g., minimum / maximum inter-robot separation, specified centroid, heading alignment, etc.) and abstractions, in which a team is parameterized by a set of features (e.g., mean, variance, orientation), \cite{Belta-TRO04,Lynch-swarm}.
Recently, spatio-temporal logics have emerged as formal ways to specify both spatial and temporal logic requirements for spatially distributed systems \cite{Haghighi2016,Liu2020}. Examples include Spatial Temporal Logic (SpaTeL)~\cite{Haghighi2015},
Spatial Aggregation Signal Temporal Logic
(SaSTL)~\cite{SaSTL}, and Spatio-Temporal Reach and Escape Logic (STREL)~\cite{Bartocci2017,Bartocci2020}.
SpaTeL is the unification of Signal Temporal Logic (STL)~\cite{STL2004} and the spatial logic Tree Spatial Superposition Logic (TSSL) \cite{TSSL}. Even though it was used as a specification language for a multi-robot system \cite{Haghighi2016}, SpaTeL cannot capture the satisfaction for individual agents. Furthermore, TSSL is computationally very expensive.
SaSTL extends STL with two operators for expressing spatial aggregation and spatial counting, and it was used to monitor safety and performance requirements of smart cities.
STREL extends STL with the spatial operators \emph{reach} and \emph{escape}, from which it is possible to derive the spatial modalities \emph{everywhere}, \emph{somewhere}, and \emph{surround}. SaSTL and STREL allow for specifying the requirements for individual agents and can inform about the satisfaction of the specifications locally, as opposed to SpaTeL.
In this work, we employ STREL to specify complex spatio-temporal requirements for individual agents in multi-agent teams. STREL formulas use Boolean operators, such as conjunction ($\wedge$), disjunction ($\vee$), and negation ($\neg$); temporal operators, such as \emph{eventually} ($F_{[a,b]}$) and \emph{always} ($G_{[a,b]}$), where $[a,b]$ is a time interval; and spatial operators, such as \emph{reach} ($\mathcal{R}$), \emph{escape} ($\mathcal{E}$), and \emph{surround} ($\odot$). For example, the requirement \say{the agents must always surround the target in the time interval $[a,b]$} can be specified using the STREL formula $\varphi = G_{[a,b]} (agents \odot target)$. The original STREL semantics, as defined in \cite{Bartocci2017}, has no notion of spatial counting, which can be critical in some multi-agent robotic scenarios \cite{Yunus2020}. For instance, one might want to maximize the number of agents that surround a given target.
Furthermore, the original semantics does not account for the distance between agents, which can be critical to connectivity among agents in the team. To overcome these setbacks, we propose new counting quantitative STREL semantics that allows for spatial counting and depends on the distances among agents.
Similar to STL, SpaTeL and STREL are equipped with quantitative semantics, or robustness functions, which quantify the degree of satisfaction of a formula by a (temporal and spatial) trajectory of a system, and allow for mapping control problems into optimization problems. Mixed Integer Linear Program (MILP) encodings were proposed in \cite{Raman2014,Haghighi2016,Liu2018} to solve such problems. Although this method has shown some promising results, MILP encodings are complicated, and the performance times of the MILP solvers are difficult to predict. Gradient-based methods were proposed in \cite{Pant2017,Haghighi2019-as,varnai2020robustness,MehdipourAGM,Gilpin2021}. These
have fast convergence rates and are simple to implement. However, they are prone to premature convergence and need to be initialized carefully.
To overcome the limitations of the two classes of optimization approaches mentioned above, we propose using a hybrid optimization method that combines heuristic and gradient-based algorithms \cite{mavrovouniotis2017survey,VICTOIRE200451}. The optimization is carried out in two stages. First, the search space is explored using a heuristic algorithm to find a good candidate solution, ideally near the global optima. In the second stage, the best candidate solution from the previous stage is improved by means of a gradient-based algorithm.
The STREL quantitative semantics is based on
$min/max$ functions and the robustness function is not differentiable. We use a smooth approximation \cite{Pant2017}, which allows us to employ gradient-based methods for optimization. We show that our approach provides better exploration of the search space and fast convergence times compared to the optimization approaches used in the literature.
Generating control inputs by solving optimization problems as described above is still computationally expensive, and not amenable for real-time control. In this paper, we propose an approach to real-time control using Recurrent Neural Networks (RNN) \cite{liu2020recurrent,yaghoubi2020training}. The RNN learns controllers from state-control trajectories generated off-line by solving the control synthesis problem with different initializations. Once trained, the RNN-based controller gives the control inputs based on the current state and the history states.
The main contributions of this work can be summarized as follows. First, we propose novel, smooth counting quantitative semantics for STREL, which allows for optimizing spatial configurations. Second, we propose a hybrid optimization approach for solving the control synthesis problem for a multi-agent networked system from spatio-temporal specifications given as STREL formulas.
Third, we provide fast, RNN-based real-time controllers for the control problem stated above.
Fourth, while hybrid algorithms have been used before, to the best of our knowledge, this is the first time they have been employed for control synthesis.
The rest of the paper is organized as follows. Notation and preliminaries are provided in Sec.~\ref{sec:prelim}. The control synthesis problem from STREL specifications is formulated in Sec.~\ref{sec:problem}.
The new STREL quantitative semantics is described
in Sec.~\ref{sec:newstrel}, and the corresponding optimization approach is discussed in Sec.~\ref{sec:solOPT}. We describe how the RNN-based controllers are learnt from data in Sec.~\ref{sec:learning_control}. The effectiveness of the overall proposed framework is demonstrated in a case study in Sec.~\ref{sec:CS}. We conclude with final remarks and directions for future work in Sec. \ref{sec:conclusion}.
\section{Preliminaries}
\label{sec:prelim}
\subsection{System Dynamics and Connectivity Conditions}
\label{sec:prelim_Dyn}
Consider a team of $N$ robotic agents labeled from the set $S = \{1,2,\ldots,N\}$. Each agent $l \in S$ has a state $x_l[k]= (q_l[k] , a_l)$ at (discrete) time $k$, where $q_l[k]\in \mathcal{Q} \subset \mathbb{R}^n$
is its dynamical state (e.g., position in space), and $a_l \in \mathcal{A}$ is an attribute that does not change over time (e.g., agent type), where $\mathcal{A}$ is a set of labels. The state of the team at time $k$ is denoted by $x[k]= [x_1[k]^T, \ldots, x_N[k]^T]^T$. We assume that the dynamics of each agent $l\in S$ is given by:
\begin{equation}
\label{dyn}
q_{l}{[k+1]} = f(q_{l}[k],u_l[k]),
\end{equation}
where $u_l[k] \in \mathcal{U} \subset \mathbb{R}^m$ is the control input for agent $l$ at time $k$, $\mathcal{U}$ is the set of admissible controls, and $f:\mathbb{R}^n\times \mathbb{R}^m\rightarrow\mathbb{R}^n$. The control inputs of the team at time $k$ are denoted by $u[k]=[u_1[k]^T,\ldots,u_N[k]^T]^T$.
\begin{example} \label{ex:example_state}
Consider a team of $N=8$ robotic agents (MANET\footnotemark ) moving in a $2D$ Euclidean space (see Fig.\ref{fig:example1}). The state of agent $l\in S=\{1,..,8\}$ at time $k$ is $x_l [k]= (q_l[k], a_l)$, where $q_l \in \mathbb{R}^2$ is the position of agent $l$, and $a_l\in \mathcal{A}=\{ {endDevice}, {coordinator},{router}\}$. Specifically, $a_3=a_5=a_6 =endDevice$, $a_1 =a_7=a_8 = coordinator$ and $a_4=a_2 = router$.
\demo
\end{example}
\footnotetext{A Mobile Ad-hoc sensor NETwork (MANET) is a team of robotic agents connected wirelessly. The agents are usually deployed to monitor environmental changes such as pollution, humidity, light and temperature. Each agent can be equipped with sensors, processors, radio transceivers, and batteries. It can move independently in any direction and change its connection links to the other agents. Moreover, the agents can be of different types and their behaviour and communication can depend on their types.}
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=.50\linewidth]{figures/STREL_example.png}
\caption{A team of $N=8$ robotic agents with types {\color{red} $endDevice$ (red)}, {\color{black} $router$ (black)}, and {\color{blue} $coordinator$ (blue)}. The lines are edges in the connection graph.}
\label{fig:example1}
\end{center}
\end{figure}
For $H \in \mathbb{N}$, we use $\mathbf{x}_l^H$ to denote the \emph{state run} $x_l{[0]}x_l{[1]} \ldots x_l{[H]}$ of agent $l$, which is given by the dynamical state sequence $q_l{[0]}q_l{[1]} \ldots q_l{[H]}$ generated by the control sequence $u_l[0] \ldots u_l[H-1]$ starting at $q_l{[0]}$, along with the agent attribute $a_l$. We denote the state run of the team $x{[0]} x{[1]} \ldots x{[H]}$ by $\mathbf{x}^H$. Similarly, we use $\mathbf{u}^{H-1}$ to denote the control sequence of the team $u[0] \ldots u[H-1]$.
Two agents $l$ and $l'$ can communicate at time $k$ if they are connected according to given conditions. For instance, agents may be connected if the Euclidean distance between them is less than a fixed communication range. Alternatively, agents may be connected according to a Delaunay triangulation or a Voronoi diagram (see Sec.~\ref{sec:CS}). We model the time-dependent inter-agent connectivity using an undirected graph (connection graph) $\lambda [k] = (S,E[k])$, where $E[k]\subseteq S\times S$. Specifically, $(l,l')\in E[k]$ means that $l$ and $l'$ are connected. We use ${\bm \lambda}^H$ to denote the sequence of connection graphs over time horizon $H$, i.e. $ {\bm \lambda}^H = \lambda[0]\lambda[1] \ldots \lambda[H]$. A \emph{route} $\tau =\tau_1 \tau_2 \ldots $ is a path on a connection graph with $\tau_i$ representing the label of the agent at position $i \in \mathbb{Z}_{\geq 0}$ on route $\tau$. The function ${\bm\tau}(l):S \rightarrow \mathbb{Z}_{>0} \cup \varnothing $ returns the position of agent $l$ on route $\tau$ if $l$ is on $\tau$ and returns $\varnothing$ otherwise. Finally, the set of routes on the connection graph $\lambda{[k]}$ that start at agent $l$ is denoted by $Routes(\lambda{[k]},l)$ where $\forall \tau \in Routes(\lambda{[k]},l),\tau_1=l$.
For notational simplicity, whenever there is no risk for confusion, we will drop the superscript $H$ from ${\bm \lambda}^{H}$, ${\bm x}^{H}$ and $x_l^{H}$ and use ${\bm \lambda}$, ${\bm x}$ and $x_l$, respectively. The horizon $H$ will be clear from context. We will use $[a,b]$ to denote the discrete time interval $[a,b] \cap \mathbb{Z}$ with $a,b \in \mathbb{Z}_{\geq 0}$.
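As an illustration of the route machinery, the following Python sketch enumerates the routes in $Routes(\lambda[k],l)$ on a connection graph given as an adjacency map; restricting the enumeration to simple paths is an assumption made here to keep the set of routes finite.
\begin{verbatim}
def routes_from(adj, l):
    # Enumerate routes (here: simple paths) on lambda[k] starting at l.
    # adj maps each agent label to the set of its neighbors.
    stack = [[l]]
    while stack:
        tau = stack.pop()
        yield tau
        for nxt in adj[tau[-1]]:
            if nxt not in tau:
                stack.append(tau + [nxt])

adj = {1: {2}, 2: {1, 3, 5}, 3: {2}, 5: {2}}  # a toy lambda[k]
print(list(routes_from(adj, 1)))  # [[1], [1, 2], [1, 2, 5], [1, 2, 3]]
\end{verbatim}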
\subsection{Spatio-Temporal Reach and Escape Logic (STREL)} \label{sec:prelim-strel}
STREL is a logic capable of describing complex behaviors in multi-agent teams. Formal definitions of the STREL syntax and semantics can be found in \cite{Bartocci2017}. Here, we give informal definitions for the syntax and the qualitative semantics. We provide, however, a formal definition of the quantitative semantics because we will refer to it later in the paper. Informally, STREL formulas are formed using atomic propositions $p$, which in this paper are attributes from the set $\mathcal{A}$, or predicates $\mu_{g(q_l[k])\sim r}$ defined over the dynamical states of the agents, where $g:\mathcal{Q}\rightarrow \mathbb{R}$, $r \in \mathbb{R}$, and
$\sim \in \{\leq,>\}$; logical operators ($\neg,\vee, \wedge $); temporal operators: \emph{eventually} ($F_{[a,b]}$) and \emph{always} ($G_{[a,b]}$) where $[a,b]$ is a discrete time interval with $a,b \in \mathbb{Z}_{\geq 0}$; and spatial operators: \emph{reach} ($\mathcal{R}_{\leq d}^f$), \emph{escape} ($\mathcal{E}^f_{>d}$) and \emph{surround} ($\odot^{f}_{\leq d}$), where $f$ is a distance function such as Euclidean distance $dist(l,l')$ or $hops(l,l')$, which returns the number of edges for the shortest route between $l,l'$, and $d$ is a scalar.
The qualitative semantics define the satisfaction of a given STREL formula for $({\bm \lambda}, x_l)$ at time $k$. For instance, $({\bm \lambda}, x_l[k]) \models p$ ($\models$ reads ``satisfies") if the attribute of agent $l$ is $a_l = p$; $({\bm \lambda}, x_l[k]) \models \mu_{g(q_l[k])\sim r}$ if $g(q_l[k])\sim r$; $({\bm \lambda}, x_l[k]) \models F_{[a,b]} \varphi$ if $\exists k'\in [k+a,k+b]$ such that $({\bm \lambda}, x_l[k']) \models \varphi$; and $({\bm \lambda}, x_l[k]) \models G_{[a,b]} \varphi$ if $\forall k'\in [k+a,k+b]:$ $({\bm \lambda}, x_l[k']) \models \varphi$. For the spatial operator \emph{reach}, $({\bm \lambda}, x_l[k]) \models \varphi_1 \mathcal{R}^{f}_{\leq d} \varphi_2$ if $\varphi_2$ is satisfied at agent $l'$ reachable from $l$ through a route $\tau$ with $\tau_1=l$ such that $f(l,l')\leq d$, and $\varphi_1$ is satisfied at $l$ and all the other agents between $l,l'$ on $\tau$ and we call such a route $\tau$ a \emph{satisfying route}. Similarly, $({\bm \lambda}, x_l[k]) \models \mathcal{E}^{f}_{> d} \varphi$ if there exists a route $\tau$ with $\tau_1=l$; and an agent $l'\in \tau$ such that $f(l,l')>d$, and $\varphi$ is satisfied at all agents $\tau_1\tau_2\ldots\tau_{{\bm\tau}(l')-1}$. The operator \emph{surround} expresses the notion of an agent with state that satisfies $\varphi_1$ being surrounded by agents with states satisfying $\varphi_2$ within a distance $f\leq d$. It can be derived from operators \emph{reach} and \emph{escape} as $\varphi_1 \odot^{f}_{\leq d} \varphi_2 = \varphi_1 \wedge \neg \left(\varphi_1 \mathcal{R}^{f}_{\leq d} \neg (\varphi_1 \vee \varphi_2)\right) \wedge \mathcal{E}^{f}_{ > d} \varphi_1 \wedge \varphi_1 \mathcal{R}^{f}_{ \leq d} \varphi_2$.
We note that we added the term ($\wedge\, \varphi_1 \mathcal{R}^{f}_{\leq d} \varphi_2$) to the definition provided in \cite{Bartocci2017} to avoid false satisfaction in the case of isolated agents that satisfy $\varphi_1$ while not being surrounded by agents satisfying $\varphi_2$ (see Ex.~\ref{ex:reach_escape}). Additional operators can be defined or derived from the operators above and are not introduced here for brevity.
The \emph{time horizon} of a STREL formula $\varphi$ is denoted by $hrz(\varphi)$ and is defined as the smallest time point in the future for which the states of the agents are needed to determine the satisfaction of the formula. Hereafter, we assume that the horizon $H$ of $x_l^{H}$, ${\bm x}^{H}$ and ${\bm \lambda}^{H}$ is at least equal to $hrz(\varphi)$.
\begin{example} \label{ex:reach_escape}
Consider the team of robotic agents from Ex.~\ref{ex:example_state} (Fig.~\ref{fig:example1}), the distance function $hops(l,l')$ and atomic propositions from the set of attributes $\mathcal{A}=\{endDevice,router,coordinator\}$. The formula $router \: \mathcal{R}^{hops}_{\leq 1} \: endDevice$ is satisfied at agent $2$ because there is a (satisfying) route $\tau = \tau_1 \tau_2$ with $\tau_1 = 2$ of type $router$ and $\tau_2 = 3$ of type $endDevice$ with a distance of at most $1$ \emph{hop} from agent $2$. The formula $\mathcal{E}^{hops}_{> 2} \: endDevice$ is satisfied at agent 5 because all of the agents in the route $\tau = \tau_1 \tau_2 \tau_3 = 5, 6, 3$ are $endDevices$. However, it is violated at agent $6$ because there are no satisfying routes with $hops(6,l')>2$. The formula $coordinator \odot^{hops}_{\leq 2} endDevice$ is satisfied at agent $1$ but violated at agents $7$ and $8$. Note that the operator \emph{surround} as defined in \cite{Bartocci2017} would suggest that the same formula is satisfied at agents $7$ and $8$ despite them not being surrounded by agents of type $endDevice$.\demo
\end{example}
The quantitative valuation of a given STREL formula $\varphi$ is defined by a real-valued \emph{robustness function} $\rho_{_{}}$. The robustness $\rho_{_{}}$ of a STREL formula $\varphi$ with respect to $({\bm \lambda},x_l)$ at time $k$ is calculated recursively by \cite{Bartocci2017}:
\begin{subequations}\label{eq:strel-org}
\begin{gather}
\rho_{_{}} ({\bm \lambda},x_l,p,k) = \iota(p,x_l[k])\\
\rho_{_{}} ({\bm \lambda},x_l,\mu,k) = \iota(\mu,x_l[k])\\
\rho_{_{}} ({\bm \lambda},x_l,\neg \varphi,k) = - \rho_{_{}} ({\bm \lambda},x_l,\varphi,k)\\
\begin{aligned}
\rho_{_{}} ({\bm \lambda},x_l,\varphi_1 \vee \varphi_2 , k) = & \\ \max (\rho_{_{}} ({\bm \lambda},x_l,\varphi_1,&k), \rho_{_{}} ({\bm \lambda},x_l,\varphi_2,k))
\end{aligned}\\
\begin{aligned}
\rho_{_{}} ({\bm \lambda},x_l,F_{[a,b]} \varphi,k) & = \\ \max_{k'\in [k+a,k+b]} \rho_{_{}}&({\bm \lambda},x_l,\varphi,k')
\end{aligned}\\
\begin{aligned}
\rho_{_{}} ({\bm \lambda},x_l,G_{[a,b]} \varphi, k) & =\\ \min_{k'\in [k+a,k+b]} &\rho_{_{}} ({\bm \lambda},x_l,\varphi,k')
\end{aligned}\\
\begin{aligned}
\rho_{_{}} ({\bm \lambda},x_l,\varphi_1 \mathcal{R}_{\leq d}^f \varphi_2,k) & =\max_{\tau \in Routes(\lambda[k],l)}\max_{l'\in \tau:f(l,l')\leq d} \\
\bigg[\min(\rho_{_{}}({\bm \lambda},x_{l'},\varphi_2,k)&;
\min_{j<{\bm\tau}(l')}\rho_{_{}}({\bm \lambda},x_{\tau_j},\varphi_1,k))\bigg]
\end{aligned}\\
\begin{aligned}
\rho_{_{}} ({\bm \lambda},x_l,\mathcal{E}^f_{>d} \varphi,k) & =
\max_{\tau \in Routes(\lambda[k],l)}\max_{l'\in \tau:f(l,l')> d}\\
& \qquad \min_{j<{\bm\tau}(l')}\rho_{_{}}\left({\bm \lambda},x_{\tau_j},\varphi,k\right)
\end{aligned} \label{eq:strel-org-escape}
\end{gather}
\end{subequations}
where $\iota$ is the \emph{signal interpretation function} defined for atomic propositions and predicates by
\begin{equation}
\iota(p,x_l[k]) =\begin{cases}
\rho_{max}, & \text{if $a_l=p$}\\
-\rho_{max}, & \text{otherwise}
\end{cases}\\
\end{equation}
\begin{equation}
\iota(\mu_{g(q_l[k])\sim r},x_l[k]) = \begin{cases}
g(q_l[k])-r, & \text{if $\sim =\: >$}\\
r-g(q_l[k]), & \text{if $\sim =\: \leq$}
\end{cases}
\end{equation}
\begin{theorem}\cite{Bartocci2017}
The STREL robustness defined by \eqref{eq:strel-org} is sound, i.e., positive robustness indicates satisfaction of the specification, and negative robustness indicates violation of the specification.
\end{theorem}
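To make the recursion in \eqref{eq:strel-org} concrete, here is a minimal Python evaluator for its Boolean/temporal fragment; the tuple-based formula encoding is an assumption of the sketch, and the spatial operators are omitted since they additionally require the connection graph.
\begin{verbatim}
# Formulas as nested tuples; predicates receive the state at time k.
def rho(x_l, phi, k):
    op = phi[0]
    if op == "mu":                    # ("mu", g, r, "<=") means g(q) <= r
        _, g, r, cmp = phi
        return r - g(x_l[k]) if cmp == "<=" else g(x_l[k]) - r
    if op == "not":
        return -rho(x_l, phi[1], k)
    if op == "or":
        return max(rho(x_l, phi[1], k), rho(x_l, phi[2], k))
    if op == "F":                     # ("F", a, b, subformula)
        _, a, b, sub = phi
        return max(rho(x_l, sub, kp) for kp in range(k + a, k + b + 1))
    if op == "G":
        _, a, b, sub = phi
        return min(rho(x_l, sub, kp) for kp in range(k + a, k + b + 1))
    raise ValueError(op)

traj = [1.0, 3.5, 2.0]               # a scalar state run
phi = ("G", 0, 2, ("mu", lambda q: q, 4.0, "<="))
print(rho(traj, phi, 0))             # 0.5 = min(3.0, 0.5, 2.0)
\end{verbatim}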
\subsection{Smooth Robustness Approximation}
\label{sec:prelim-smooth}
The $\max$ and $\min$ functions can be approximated by \cite{Pant2017}:
\begin{equation*}
\begin{array}{l}
\max(a_1,...,a_n)\approx {\widetilde{\max}}(a_1,...,a_n) = \frac{1}{\beta}
\ln \big (\sum_{i=1}^n e^{\beta a_i} \big),\\
\min(a_1,...,a_n) \approx {\widetilde{\min}}(a_1,...,a_n)= -{\widetilde{\max}}(-a_1,...,-a_n)
\end{array}
\end{equation*}
and the approximation error is bounded by $\epsilon_{\beta}$:
\begin{equation}\label{eq:approx_error}
\begin{array}{l}
0\leq \max(a_1,..,a_n) - {\widetilde{\max}}(a_1,..,a_n)\leq \frac{\ln(n)}{\beta}= \epsilon_{\beta} \\
\end{array}
\end{equation}
with the maximum error attained when $a_1=a_2=\ldots =a_n$. The approximation error approaches $0$ as $\beta \rightarrow\infty$.
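In practice, this bound can be inverted to pick $\beta$ for a desired accuracy, as in the following short Python sketch:
\begin{verbatim}
import math

def beta_for(n, eps):
    # Smallest beta guaranteeing ln(n)/beta <= eps for n arguments.
    return math.log(n) / eps

print(beta_for(n=7, eps=0.001))  # about 1945.9
\end{verbatim}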
\section{Problem Formulation and Approach} \label{sec:problem}
Consider a team of $N$ agents labeled from the set $S = \{1,\ldots,N\}$ with dynamics \eqref{dyn} and a cost function $J(u[k],x[k+1])$, which is the cost of ending in state $x[k+1]$ by applying the control inputs $u[k]$ at time $k$. Assume that spatio-temporal requirements are specified by a STREL formula $\varphi$ interpreted over the state run of the team $\mathbf{x}^H$, and $H = hrz(\varphi)$ is the planning horizon. The goal is to find a control sequence for the multi-agent team that maximizes the robustness of the STREL formula and minimizes the cost function.
\begin{problem}[Control Synthesis]\label{problem:main} Given a multi-agent team with dynamics (\ref{dyn}), STREL formula $\varphi$, initial connection graph $\lambda [0]$, initial state of the system $x[0]$, planning horizon $H = hrz(\varphi)$, and cost function $J$; find an optimal control sequence $\mathbf{u}^{*H-1}$ that maximizes the robustness score and minimizes the cost, i.e.:
\begin{align}\label{eq:problem1}
\begin{aligned}
\mathbf{u}^{*H-1} & = \argmaxA_{\mathbf{u}^{H-1}} \rho_c\left({\bm \lambda},{\bm x},\varphi,0 \right)
- \gamma \sum_{k=0}^{{H-1}}J\left(u[k],x[k+1]\right) \\
& s.t. \\
& q_{l}{[k+1]} = f(q_l[k],u_l[k]),\quad \forall l \in S, \forall k \in [0,H-1]\\
& u_l{[k]} \in \mathcal{U},\qquad \qquad \qquad \forall l \in S, \forall k \in [0,H-1]\\
\end{aligned}
\end{align}
where $\gamma >0$ is a trade-off coefficient and $\rho_c\left({\bm \lambda},{\bm x},\varphi,0 \right)$ is the robustness function for the team at time $0$.
\end{problem}
To solve Problem \ref{problem:main}, we first develop sound \emph{counting STREL quantitative semantics} with robustness $\rho_c$, which performs spatial counting and allows us to optimize the spatial configuration for connectivity (Sec.~\ref{sec:newstrel}). We use a smooth version $\Tilde{\rho}_c$ of the robustness in the objective function, and employ a combination of heuristic and gradient-based optimization algorithms (Sec.~\ref{sec:solOPT}) to find a control sequence that maximizes the robustness of the STREL formula and minimizes the cost function. In addition, as the execution time of the optimization might not meet real-time control requirements, we propose training an RNN to learn controllers from state-control trajectories generated by solving the optimization problem with different initializations (Sec.~\ref{sec:learning_control}). The trained RNN is then used to predict control inputs at each time step.
\section{STREL Counting Quantitative Semantics}
\label{sec:newstrel}
To motivate the new proposed STREL quantitative semantics, we start by discussing the limitations of the original semantics defined by the robustness function in \eqref{eq:strel-org}.
First, the robustness does not depend on the distance between the agents. Consider the multi-agent team from Ex.\ref{ex:reach_escape}. The STREL formula $endDevice \: \mathcal{R}^{hops}_{\leq 2} router$ will have the same robustness score for agents $3,5$ and $6$ even though their distances to the nearest $router$ (agent $2$) are not the same: $hops(3,2)=hops(5,2)=1<hops(6,2)=2$. In practice, connectivity between robotic agents in a networked system depends on the spatial configuration and it is often the case that the smaller the distance between agents, the better the connectivity.
Second, the robustness from \eqref{eq:strel-org} does not have a notion of spatial counting. Consider again the team from Ex.~\ref{ex:reach_escape} and note that the formula $router \: \mathcal{R}^{hops}_{\leq 1} endDevice$ will have the same robustness score for agents $2$ and $4$, even though agent $2$ has two satisfying routes $2,3$ and $2,5$, compared to one satisfying route $4,3$ at agent $4$. In practice, it is beneficial to maximize the number of \emph{satisfying routes} for a given formula at a given agent. Take, for instance, the formula $robot \odot^f_{\leq d} target$: maximizing the number of \emph{satisfying routes} results in maximizing the number of agents of type $robot$ surrounding the $target$.
Third, the STREL robustness from \eqref{eq:strel-org} is defined only at the level of individual agents. We need a way to compute the robustness score for the team, which takes into account the number of agents that satisfy/violate a given formula.
The new STREL quantitative semantics addresses these limitations and differs from the original semantics defined in~\cite{Bartocci2017} in three ways. First, the proposed semantics depends on the distances among agents. Second, it performs spatial counting of satisfying/violating routes at individual agents. Third, by defining spatial counting for satisfying/violating agents,
it captures the satisfaction of the specification by the team.
\subsection{Optimizing the Spatial Configuration for Connectivity}
To optimize the spatial configuration of a multi-agent team for connectivity, we require the robustness score to depend on the distance between agents. To this end, we introduce a function $\sigma_{dist}$, which depends on the distance between agents $f(l,l')$ and a scalar $d$. Specifically, $\sigma_{dist}$ is a sigmoid-like function that takes values in $[-1,1]$ depending on the ratio $d_{norm}= \frac{f(l,l')}{d}$ (Fig.~\ref{fig:sigma_dist}) and is defined for $f(l,l')\leq d$ and $f(l,l')>d$ as follows:
\begin{align}
\sigma_{dist}^{\leq}(d_{norm}) &= -\tanh(k_{d}(d_{norm}-1)) \label{eq:sigma_dist1}\\
\sigma_{dist}^{>}(d_{norm}) &= \tanh(k_{d}(d_{norm}-1)) \label{eq:sigma_dist2}
\end{align}
where $k_{d}$ is a hyperparameter that determines how fast $\sigma_{dist}$ changes its value.
When computing the robustness, we take $\min(\sigma_{dist},\rho_c)$ (see Sec.~\ref{sec:newstrel}). Notice that $\sigma_{dist}$ allows the robustness score to vary beyond the distance constant $d$ defined by $f(l,l') \sim d$, $\sim \in \{\leq,>\}$, as opposed to the original definition in \eqref{eq:strel-org}.
\subsection{Spatial Counting for Routes}\label{sec:sigma_routes}
Consider the robustness function of the spatial operator \emph{escape} defined by \eqref{eq:strel-org-escape} and define the robustness of a given route $\tau$ as $\rho_{\tau} := \max_{l'\in \tau:f(l,l')> d} \min_{j<{\bm\tau}(l')}\rho_{_{}}\left({\bm \lambda},x_{\tau_j},\varphi,k\right)$. Notice that $\rho_{_{}}\left({\bm \lambda},x_l,\varphi,k\right) = \max_{\tau \in Routes(\lambda[k],l)} \rho_{\tau}$, which means that it is enough to have one satisfying route $\tau \in Routes(\lambda[k],l)$ (with $\rho_{\tau}>0$) to satisfy formula $\varphi$, and the robustness score does not vary with the number of routes that satisfy/violate $\varphi$. We address this limitation by introducing an additional function $\sigma_{routes}$, which depends on the number of routes that satisfy/violate a given formula.
Formally, let $R^+, R^- \in \mathbb{N}$ be the number of satisfying/violating routes for a given spatial operator, respectively. We define the function $\sigma_{routes}$ (Fig.~\ref{fig:sigma_routes}) as follows:
\begin{multline} \label{eq:sigma_routes}
\sigma_{routes}(R^+,R^-) = \max\big(\frac{1}{1+e^{k_{R} R^{-}}},\\ \frac{1}{1+e^{-k_{R} (R^{+}-R^{-}) }} \big)
\end{multline}
where $k_{R}$ is a hyperparameter that determines how fast $\sigma_{routes}$ changes its value.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{figures/sigma_loc.jpg}
\caption{$\sigma_{dist}^{\leq}$ and $\sigma_{dist}^{>}$}
\label{fig:sigma_dist}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{figures/sigma_routes.jpg}
\caption{$\sigma_{routes}(R^+,R^-)$}
\label{fig:sigma_routes}
\end{subfigure}
\caption{Behavior of the functions $\sigma_{dist}$ and $\sigma_{routes}$}
\label{fig:sigma}
\end{figure*}
\subsection{Spatial Counting for Agents}
As mentioned above, the STREL semantics is defined at the level of individual agents.
A na\"{i}ve method to compute the robustness of the team of $N$ agents is to consider the minimum of the robustness of individual agents.
\begin{equation}\label{eq:rho_all_naive}
\rho_{_{}}\left({\bm \lambda},{\bm x}, \varphi,k\right) = \min_{l \in S} \rho_{_{}}\left({\bm \lambda},x_l, \varphi,k\right)
\end{equation}
In this case, the robustness score for the team will reflect the worst robustness score among individual agents and will not depend on the number of agents that satisfy/violate the formula $\varphi$. Following a similar approach to route counting in Sec.~\ref{sec:sigma_routes}, we introduce $\sigma_{ag}(Ag^+,Ag^-)$, which allows for varying the robustness score depending on the number of agents that satisfy/violate the specification.
Let $Ag^{+},Ag^{-}$ be the number of agents that satisfy and violate the specification, respectively. Then $\sigma_{ag}$ is given by:
\begin{multline}\label{eq:sigma_agents}
\sigma_{ag}(Ag^+,Ag^-) = \\
\max\big( \frac{1}{1+e^{-k_{ag}Ag^{-}}}, \frac{1}{1+e^{k_{ag}(Ag^{-}-Ag^{+})}} \big)
\end{multline}
\subsection{Counting Robustness for STREL}
We are now ready to define the proposed counting robustness for the STREL spatial operators.
\begin{multline*}
\rho_c\left({\bm \lambda},x_l,\varphi_1 \mathcal{R}^{f}_{\leq d} \varphi_2,k\right) =\\ \min\bigg[\sigma_{routes}(R^+,R^-) \max_{\tau \in Routes} \max_{l'\in \tau:f(l,l')\leq d} \\\min\left(\rho_c\left({\bm \lambda},x_{l'},\varphi_2,k\right); \min_{j<{\bm\tau}(l')}\rho_c\left({\bm \lambda},x_{\tau_j},\varphi_1,k\right)\right)\\ ;\sigma_{dist}^{\leq}\left(d_{norm}\right)\bigg]
\end{multline*}
\begin{multline}\label{eq: strel-ro}
\rho_c\left({\bm \lambda},x_l, \mathcal{E}^{f}_{> d} \varphi,k\right) = \min\bigg[ \sigma_{routes}(R^+,R^-) \\ \max_{\tau \in Routes} \max_{l'\in \tau:f(l,l')> d}\min_{j<{\bm\tau}(l')}\rho_c\left({\bm \lambda},x_{\tau_j},\varphi,k\right)\\ ; \sigma_{dist}^{>}\left(d_{norm}\right)\bigg]
\end{multline}
The robustness for the logical and temporal operators is the same as the one from \eqref{eq:strel-org}.
The robustness for the team at time $k$ is given by:
\begin{multline} \label{eq: strel-ro-all}
\rho_c({\bm \lambda},{\bm x},\varphi,k) = \sigma_{ag}(Ag^+,Ag^-) \\ \min_{l\in \{1,\ldots,N\}}(\rho_c({\bm \lambda},x_l,\varphi,k)).
\end{multline}
\begin{theorem}
The counting robustness of STREL defined by \eqref{eq: strel-ro} and \eqref{eq: strel-ro-all} is sound.
\end{theorem}
\begin{proof}[Sketch]
A formal proof is omitted due to space constraints.
Informally, soundness can be viewed as sign consistency between the counting robustness $\rho_c$ and the original robustness $\rho$ in \cite{Bartocci2017}. We show that the three functions $\sigma_{dist}$, $\sigma_{routes}$, $\sigma_{ag}$ introduced to $\rho$ do not affect the sign of the robustness, and thus that $\rho_c$ is sound.
First, $\sigma_{routes}$ and $\sigma_{ag}$ are positive and are multiplied by the robustness function provided by the original semantics and, thus, do not change the sign of the robustness score.
Second, $\sigma_{dist}$ takes values in the range $[-1,1]$ and is negative only when the distance predicate $f(l,l') \sim d$ is violated. Since we take the minimum between the robustness function and $\sigma_{dist}$, the robustness function will still give positive values for satisfaction and negative values for violation, as before. Thus, the soundness of the proposed counting robustness follows from the soundness of the original robustness.
\end{proof}
\section{Control Synthesis for STREL Specifications}
\label{sec:solOPT}
Problem \ref{problem:main} is a constrained non-linear optimization problem. The $\max$ and $\min$ functions in the semantics render the objective function non-differentiable. We modify \eqref{eq:problem1} by replacing the counting robustness $\rho_c$ with its smooth version $\Tilde{\rho_c}$, obtained by replacing the non-differentiable $\min/\max$ terms with the smooth approximations described in Sec.~\ref{sec:prelim-smooth}.
We solve the new problem by employing a two-stage hybrid optimization method that utilizes a combination of heuristic and gradient-based optimization algorithms. In stage I, we explore the search space using a heuristic algorithm to find a good candidate solution. In stage II, we initialize a gradient-based algorithm with the best candidate solution found in stage I. Although many heuristic and gradient-based algorithms can be used for this approach, in this paper we use Particle Swarm Optimization (PSO)~\cite{kennedy1995particle} and Sequential Quadratic Programming (SQP)~\cite{polak2012optimization}, respectively. We name this two-stage optimization approach PSO+SQP. A quantitative comparison of the performance of SQP, PSO and PSO+SQP can be found in Sec.~\ref{sec:CS} (Tab.~\ref{tb:opt_CS}).
\section{Learning RNN-Based Controllers}
\label{sec:learning_control}
As already mentioned, solving the control synthesis problem by optimization can be expensive and not feasible for real-time implementation. To address this, we propose to train an RNN using data obtained from off-line optimization, and then use it to generate the control at a given state.
\textbf{Dataset generation.} Given a multi-agent team as described in Sec.~\ref{sec:prelim_Dyn}, a STREL formula $\varphi$, a planning horizon $H \geq hrz(\varphi)$, a set of $M$ initial team states $\{ x[0]^{[1]}, \ldots, x[0]^{[M]} \}$ and their corresponding initial communication graphs $\{ \lambda[0]^{[1]}, \ldots, \lambda[0]^{[M]} \}$, we generate a dataset $D$ by solving the control synthesis problem described in Sec.~\ref{sec:solOPT} and choosing the $m \leq M$ state-control trajectories with robustness above a threshold $\epsilon_{min}$, i.e., $D =\{(\mathbf{x}^{H,[j]},\mathbf{u}^{H-1,[j]})\,|\,\Tilde{\rho_c}^{[j]}\geq \epsilon_{min} \}$, where $\epsilon_{min} \geq \epsilon_{\beta}$ is a robustness margin used to account for the approximation error $\epsilon_{\beta}$.
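A sketch of this generation loop is given below; \texttt{solve\_pso\_sqp} and \texttt{sample\_x0} are hypothetical stand-ins for the PSO+SQP solver of Sec.~\ref{sec:solOPT} and an initial-state sampler.
\begin{verbatim}
def generate_dataset(sample_x0, solve_pso_sqp, M, eps_min):
    # Keep only runs whose smooth robustness clears the margin eps_min.
    D = []
    for _ in range(M):
        x0 = sample_x0()                         # random initial team state
        x_traj, u_traj, rho = solve_pso_sqp(x0)  # states, controls, robustness
        if rho >= eps_min:
            D.append((x_traj, u_traj))
    return D
\end{verbatim}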
\textbf{RNN implementation.} Due to the temporal operators, the satisfaction of STREL formulas is history-dependent. In other words, the control at each time step is, in general, dependent on the current state and past states, $u[k]=g(x[0],\ldots,x[k])$. For this reason, we choose Recurrent Neural Networks (RNN), which are neural networks with memory. To implement the RNN, we use a Long Short-Term Memory (LSTM) network~\cite{liu2020recurrent,Georgios}. An LSTM has feedback channels and memory cells that can manage long-term dependencies by passing the history through hidden states. The function $g$ can be approximated as follows:
\begin{equation}\label{eq:RNN}
\begin{aligned}
h{[k]} &= \mathcal{R}(x[k],h[k-1],W_1)\\
\hat{u}[k] &= \mathcal{N}(h[k], W_2)
\end{aligned}
\end{equation}
where $W_1,W_2$ are the weight matrices of the RNN, $h[k]$ is the hidden state at time step $k$ and $\hat{u}[k]$ is the predicted control at $k$. The network is trained to minimize the error between the predicted control and the optimized control given in the dataset:
\begin{equation}
\min_{W_1,W_2} \sum_{D} \sum_{k=0}^{H-1}\norm{ u[k]-\hat{u}[k]}^2.
\end{equation}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn0.jpg}
\caption{$k=0$}
\label{fig:rnn0}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn1.jpg}
\caption{$k=1$}
\label{fig:rnn3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn2.jpg}
\caption{$k=2$}
\label{fig:rnn5}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn4.jpg}
\caption{$k=4$}
\label{fig:rnn7}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn6.jpg}
\caption{$k=6$}
\label{fig:rnn8}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn8.jpg}
\caption{$k=8$}
\label{fig:rnn10}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn10.jpg}
\caption{$k=10$}
\label{fig:rnn11}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/rnn12.jpg}
\caption{$k=12$}
\label{fig:rnn13}
\end{subfigure}
\caption{Snapshots of the multi-agent team at different times. The agents are represented by labeled colored disks where labels are taken from the set $S=\{1,\ldots,7\}$ and colors correspond to types of agents: {\color{red} $endDevice$ (red)}{\color{black}, $router$ (black)}, and {\color{blue} $coordinator$ (blue)}.
The red solid lines represent the trajectories generated by agents $1$,$2$ using control inputs from the RNN-based controller. The solid and dashed blue lines represent the connection and Voronoi graphs of the team, respectively.}
\label{fig:rnns}
\end{figure*}
\section{Case Study: Networked Robotic Agents}
\label{sec:CS}
In this section, we demonstrate the efficacy of our proposed framework with a case study. First, we solve the control synthesis problem for a multi-agent team and a given STREL formula off-line to generate state-control trajectories using the optimization approach described in Sec.~\ref{sec:solOPT}. Then, we use the state-control trajectories that satisfy the STREL formula to train an RNN to predict control inputs for the team at each time step (Sec.~\ref{sec:learning_control}). We compare the trajectories obtained using the RNN with those obtained by optimization. We also compare the results of optimization using different solvers, such as PSO, SQP, and PSO+SQP (Sec.~\ref{sec:solOPT}); and compare the performance of PSO under the original semantics \eqref{eq:strel-org}, \eqref{eq:rho_all_naive} and our proposed semantics \eqref{eq: strel-ro}, \eqref{eq: strel-ro-all}.
All the computations described in this section were performed on a PC with a Core i7 CPU @3.50GHz and 80 GB RAM. For optimization, we used a customized Particle Swarm Optimization (PSO) with $64$ particles and a MATLAB built-in implementation of SQP. To implement the RNN, we used the \emph{Python} package \emph{PyTorch}.
\textbf{System description} Consider a team of $N=7$ robotic agents (Fig.~\ref{fig:rnns}) labeled from the set $S=\{1,2,\ldots,7\}$ in a 2D Euclidean space. The state of agent $l$ at time step $k$ is defined by $x_l[k]= (q_l[k],a_l)$ where $q_l[k] \in \mathbb{R}^2$ is the position of agent $l$ at time $k$ and $a_l \in \mathcal{A}= \{{endDevice}, {coordinator}, {router}\}$ is the type of the agent with $a_1=a_2=endDevice$, $a_4=a_5=a_7= router$ and $a_3=a_6=coordinator$. Agents $1$ and $2$ are controllable with dynamics given by
\begin{align} \label{eq:dyn_CS}
q_l{[k+1]} &=q_l{[k]}+u_l{[k]},
\end{align}
where $l\in \{1,2\}$ and $u_l[k] \in \mathcal{U} = [-0.2,0.2]^2$.
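For concreteness, the rollout of these dynamics under a saturated control is one line of NumPy; clipping each control component to $[-0.2,0.2]$ is our reading of the admissible set $\mathcal{U}$.
\begin{verbatim}
import numpy as np

def step(q, u, u_max=0.2):
    """q[k+1] = q[k] + u[k], with each control component
    saturated to the (assumed) admissible set U = [-0.2, 0.2]^2."""
    return q + np.clip(u, -u_max, u_max)
\end{verbatim}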
\textbf{Connectivity conditions} Two agents $l$ and $l'$ are connected at time $k$ if both of the following conditions hold (a sketch of this test follows the list):
\begin{itemize}
\item The Euclidean distance between agents $l$ and $l'$ is within a fixed communication range: $dist(l,l') \leq 2$.
\item In the corresponding Voronoi diagram, the cells corresponding to agents $l$ and $l'$ are adjacent at time $k$.
\end{itemize}
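A hedged sketch of this two-part test is given below; it uses the standard fact that two Voronoi cells are adjacent exactly when their generating sites share a Voronoi ridge, which SciPy exposes as \texttt{ridge\_points}. The function and variable names are our own.
\begin{verbatim}
import numpy as np
from scipy.spatial import Voronoi

def connection_graph(q, comm_range=2.0):
    """Unordered pairs of connected agents; q is an (N, 2)
    array of agent positions at one time step."""
    vor = Voronoi(q)
    edges = set()
    for l, lp in vor.ridge_points:   # cells of l and lp are adjacent
        if np.linalg.norm(q[l] - q[lp]) <= comm_range:
            edges.add((min(l, lp), max(l, lp)))
    return edges
\end{verbatim}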
Snapshots of the team and the corresponding Voronoi and connection graphs at different times can be seen in Fig.~\ref{fig:rnns}.
\textbf{Spatio-temporal specifications} We aim to steer agents $1$ and $2$ of type $endDevice$ from the initial positions to the area in the center (circle with radius $= 0.4$) in the time interval $[12,13]$, while staying connected to at least one agent of type $router$ at all times in $[3,13]$. We require all agents to avoid collision by keeping a safe distance of at least $0.15$ from each other at all times in $[0,13]$. The requirements described above can be specified by the following STREL formula, with $hrz(\varphi) =13$:
\begin{multline}\label{eq:phi_cs}
\varphi = G_{[0,13]}(dist_{i\neq j}(q_i{[k]},q_j{[k]})>0.15) \\
\bigwedge G_{[3,13]} endDevice \mathcal{R}^{dist}_{\leq 2} router \\
\bigwedge F_{[12,13]}(dist(q_{i,endDevice}{[k]}, origin)\leq 0.4).
\end{multline}
\textbf{Control problem}
\emph{Given} the team of $N=7$ agents with dynamics \eqref{eq:dyn_CS}, STREL formula $\varphi$ (\eqref{eq:phi_cs}), initial connection graph $\lambda [0]$, initial state of the system $x[0]$, planning horizon $H = hrz(\varphi)=13$,
\emph{find} an optimal control sequence $\mathbf{u}^{*}$ for the controllable agents $1$ and $2$ that solves Problem \ref{problem:main}, where in \eqref{eq:problem1}, the cost function is $J\left(u[k],x[k+1]\right)= \sum_{i=1}^{N} \sum_{k = 0}^{H-1} \norm{u_i[k]}^2$, $\gamma = 0.01$, and $\rho_c$ is replaced by its smooth version $\Tilde\rho_c$ (see Sec. \ref{sec:solOPT}).
\textbf{Comparing original and proposed semantics} We used PSO to solve the control synthesis problems with (i) the na\"{i}ve team robustness \eqref{eq:rho_all_naive} based on the original agent robustness \eqref{eq:strel-org} and (ii) the proposed robustness \eqref{eq: strel-ro},\eqref{eq: strel-ro-all} in the objective function. We solved each one with $100$ initializations and obtained no satisfying state-control trajectories for the original robustness and 69 satisfying state-control trajectories for the proposed robustness. We explain the results by noting that the proposed robustness has a varying search space (as opposed to the original robustness) due to the introduced functions $\sigma_{dist},\sigma_{routes},\sigma_{ag}$, which help the PSO algorithm explore the search space and make it less prone to premature convergence.
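The two-stage solver of Sec.~\ref{sec:solOPT} can be sketched as follows. The paper uses a customized PSO and MATLAB's SQP; SciPy's SLSQP below is only a stand-in for the gradient-based stage, the PSO coefficients are generic textbook values, and the objective is assumed to be cast as a minimization.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def pso_then_sqp(objective, dim, n_particles=64, iters=200, u_max=0.2):
    # Stage 1: PSO exploration of the box [-u_max, u_max]^dim.
    rng = np.random.default_rng(0)
    x = rng.uniform(-u_max, u_max, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])
    gbest = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, -u_max, u_max)
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()]
    # Stage 2: local refinement of the PSO candidate (SQP stand-in).
    res = minimize(objective, gbest, method="SLSQP",
                   bounds=[(-u_max, u_max)] * dim)
    return res.x
\end{verbatim}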
\begin{table}[]
\begin{center}
\begin{tabular}{clcc}
\textbf{Algorithm} & \textbf{Success rate (\%)} & \textbf{\begin{tabular}[c]{@{}c@{}}Average \\ robustness\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Time\\ (seconds/run)\end{tabular}} \\
\textbf{SQP} & 44.7 & $0.0048$ & $23.3$ \\
\textbf{PSO} & 71.7 & $0.0037$ & $30$ \\
\textbf{PSO + SQP} & 93.8 & $0.0050$ & $32.9$
\end{tabular}
\end{center}
\caption{Performance of different optimization methods}\label{tb:opt_CS}
\end{table}
\textbf{Dataset generation} We generated $m$ state-control trajectories with robustness score above a given threshold $\Tilde{\rho_c}^{*}\geq \epsilon_{min}=0.001>\epsilon_{\beta}$ by solving the control synthesis problem off-line. Choosing a good number of training samples $m$ is task-specific and depends on factors such as the complexity of the neural network and the complexity of the approximated function. In our case, we found that the RNN performs best with $m \geq 800$. We solved the control synthesis problem for the team with $1200$ random initializations. It took about \emph{6 hours} to execute the code and generate all the trajectories. We defined the \emph{success rate} as the percentage of state-control trajectories with robustness $\Tilde{\rho_c}^{*}\geq 0.001$. The success rate, average normalized robustness and computation times for solving the control synthesis problem with the proposed robustness using SQP, PSO and PSO+SQP are presented in Tab.~\ref{tb:opt_CS}.
\textbf{Training the RNN}
We used an LSTM network with four hidden layers to learn the controllers. Each hidden layer has $64$ nodes. We used $850$ trajectories for training and $275$ trajectories for testing. We trained the network for $700$ epochs. The training process took about six minutes.
\begin{figure}[]
\begin{center}
\includegraphics[width=0.30\textwidth]{figures/rnns.jpg}
\caption{Trajectories generated from two different initializations (red crosses) of agents $1$ and $2$ (labels not shown for readability) using the RNN-based controller (solid red) and optimization PSO+SQP (dashed red)}
\label{fig:CS_optVSrnn}
\end{center}
\end{figure}
\textbf{Results} The average robustness for the trajectories generated using the RNN-based controller from $275$ new initializations is $0.0037$ compared to $0.0050$ for trajectories generated using the proposed optimization method PSO+SQP. The average execution times for generating one trajectory using the RNN-based controller and PSO+SQP are $0.002$ seconds and $32.9$ seconds, respectively. The success rate for trajectories generated from new initializations using the RNN-based controller is $93\%$. Fig.~\ref{fig:CS_optVSrnn} shows sample trajectories generated from two different initializations using the RNN-based controller and using PSO+SQP. The results presented above demonstrate that the learned RNN-based controllers achieve a high success rate and have significantly lower computation time compared to the PSO+SQP optimization.
\section{Conclusions and Future Research}
\label{sec:conclusion}
We proposed a framework for solving control synthesis problems for multi-agent networked systems from spatio-temporal specifications. We introduced new counting quantitative semantics for the Spatio-Temporal Reach Escape Logic (STREL), and used it to map the control problems to optimization problems. The proposed semantics are sound, smooth, and allow for spatial counting and optimizing the spatial configuration of the team for agent connectivity. We solve the optimization problem using a combination of heuristic and gradient-based algorithms in two stages. In the first stage, we utilize Particle Swarm Optimization (PSO) for search-space exploration to find the best candidate solution. In the second stage, Sequential Quadratic Programming (SQP) is initialized by the candidate solution obtained from PSO and employed until a pre-defined stopping criterion is met. To meet real-time control requirements, we learn Recurrent Neural Network (RNN)-based controllers from state-control trajectories generated by solving the optimization problem with different initializations. Directions of future research include using reinforcement learning for online control and extending the proposed framework to complex and large teams of agents.
\bibliographystyle{IEEEconf}
\section{Introduction}
Let $n$ be a positive integer and let $K$ be a field.
A linear algebraic group is a subgroup of $GL(n, K)$ defined by polynomial
equations. More precisely, for a linear algebraic group $G$ there exist
polynomials $f_1, \ldots, f_r \in K[x_1, \ldots, x_{n^2}]$ such that
$G = \{ g \in GL(n, K) \mid f_i(g) = 0 \mbox{ for } 1 \leq i \leq r \}$,
where $f_i(g)$ is $f_i$ evaluated at the $n^2$ entries of the matrix $g$.
Some well-known examples of algebraic groups are the general linear, the
special linear, the orthogonal and the symplectic groups.
Linear algebraic groups are studied extensively in the literature. But there
are only a few algorithms known to compute with such groups. Membership
testing is easy by definition, but other algorithmic problems are often
rather difficult to decide.
The aim of this note is to describe an algorithm to decide if a set of
polynomials $f_1, \ldots, f_r$ in $n^2$ indeterminates over an algebraically
closed field $K$ defines an algebraic group. More precisely, let
$V(f_1, \ldots, f_r) = \{ v \in K^{n \times n} \mid f_1(v) = \ldots =
f_r(v) = 0 \}$ denote the variety defined by the polynomials $f_1, \ldots,
f_r$ and denote $V^*(f_1, \ldots, f_r) = V(f_1, \ldots, f_r) \cap GL(n, K)$.
Our algorithm decides if $V^*(f_1, \ldots, f_r)$ forms a group under matrix
multiplication.
\section{Notation and Preliminaries}
Throughout, let $n$ be a positive integer, and $K$ be a field.
Let $x = (x_1, \ldots, x_{n^2})$ be a list of commuting indeterminates.
For brevity, we shall write $K[x]$ to mean the polynomial ring
$K[x_1, \ldots, x_{n^2}]$. Similarly, if $y = (y_1, \ldots, y_{n^2})$
is another list of commuting indeterminates, we shall write $K[x,y]$ to
mean $K[x_1,\ldots,x_{n^2}, y_1,\ldots,y_{n^2}]$, and so on.
We write $K^{n \times n}$ to denote the algebra of $n \times n$ matrices
over $K$. There is a natural isomorphism of vector spaces
\[ K^{n^2} \rightarrow K^{n \times n}
:\quad (v_1, \ldots, v_{n^2}) \quad\mapsto\quad
\left( \begin{array}{ccc}
v_1 & \ldots & v_n \\
\vdots & & \vdots \\
v_{n^2-n+1} & \ldots & v_{n^2}
\end{array} \right ). \]
We shall frequently identify vectors in $K^{n^2}$ with matrices in
$K^{n \times n}$ via this isomorphism. In particular, this isomorphism
lets us evaluate a polynomial $f \in K[x]$ at a matrix $v \in
K^{n \times n}$. Similarly, we can consider any point in $V(I)$ as a matrix.
\section{Varieties}
\label{extpro}
Let $K$ be a field, let $x = (x_1, \ldots, x_{n^2})$ be a list of commuting
indeterminates over $K$, let $f_1, \ldots, f_r \in K[x]$ and let $I = (f_1,
\ldots, f_r) \unlhd K[x]$. A first obstacle in computations with $V^*(I)$
is that this set is defined as an intersection of a variety and a group and
this makes it difficult to apply methods from algebraic geometry or group
theory directly.
As a first step in this section we show how $V^*(I)$ can be identified with
a variety. For
this purpose we consider the list $x$ as a matrix and thus define $\det(x)$
as a polynomial in $K[x]$. Let $x_0$ be another indeterminate over $K$ and
write $\hat{x} = (x_0, x_1, \ldots, x_{n^2})$. Define $f_0 = x_0 \det(x) - 1
\in K[\hat{x}]$ and
\[ \hat{I} = I + (f_0) \unlhd K[\hat{x}].\]
We write the elements of $V(\hat{I})$ as $(v_0, v)$ with $v_0 \in K$ and
$v \in K^{n \times n}$ and thus we consider $V(\hat{I})$ as subset of
$K \oplus K^{n \times n}$. We define a componentwise multiplication on
$K \oplus K^{n \times n}$ via
\[ (v_0, v) (u_0, u) = (v_0 u_0, vu). \]
\begin{lemma}
\label{identify}
Let $K$ be an arbitrary field and $I \unlhd K[x]$.
\begin{items}
\item[\rm (a)]
The projection $\zeta : K \oplus K^{n \times n} \rightarrow K^{n \times n} :
(v_0, v) \mapsto v$ induces a bijection between $V(\hat{I})$ and $V^*(I)$.
\item[\rm (b)]
$V^*(I)$ is closed under multiplication (inversion) if and only if
$V(\hat{I})$ is closed under multiplication (inversion).
\end{items}
\end{lemma}
\begin{proof}
(a) Let $(v_0, v), (w_0, w) \in V(\hat{I})$ with $\zeta((v_0,v)) =
\zeta((w_0,w))$. Then $v=w$ and thus $v_0 = \det(v)^{-1} = \det(w)^{-1}
= w_0$. Hence $(v_0, v) = (w_0, w)$ and $\zeta$ is injective on
$V(\hat{I})$. By the construction of $\hat{I}$, the image of $\zeta$
coincides with the invertible elements in $V(I)$ and thus with
$V^*(I)$. \\
(b) The map $\zeta$ is compatible with multiplication and thus
(b) follows directly from (a).
\end{proof}
As a second step in this section we exhibit how one can readily decide
if $V(I)$ and $V^*(I)$ are equal. The following lemma is elementary and
can be proved readily using Hilbert's Nullstellensatz.
\begin{lemma}
\label{identical}
Let $K$ be an algebraically closed field, let $I \unlhd K[x]$ and
$J = (I, \det(x)) \unlhd K[x]$.
\begin{items}
\item[\rm (a)]
Then $V(J) \subseteq V(I)$ and $V^*(I)$ is the set-complement of
$V(J)$ in $V(I)$.
\item[\rm (b)]
$V(I) = V^*(I)$ if and only if $1 \in J$.
\end{items}
\end{lemma}
Note that both cases $V(I) = V^*(I)$ and $V(I) \neq V^*(I)$ can occur.
For example, if $I = \ideal{\det(x)-1}$, then $V(I) = V^*(I) = SL(n,K)$.
In contrast,
if $I = \ideal{0}$, then $V(I) = K^{n \times n}$ while $V^*(I) = GL(n,K)$,
and thus $V(I) \neq V^*(I)$.
\section{Closedness of $V^*(I)$ under inversion}
\label{secinv}
In this section we show how to decide if $V^*(I)$ is closed under
inversion. By definition, every element in $V^*(I)$ is invertible
in $GL(n,K)$. It remains to check if the inverses are contained
in $V^*(I)$.
Consider the list of indeterminates $x$ as a matrix in $K[x]^{n \times n}$.
Then there exists a formal inverse $N(x)$ for the matrix $x$. The matrix
$N(x)$ is an element of $K(x)^{n \times n}$ and it can be determined via
the classical adjoint. Each entry of $N(x)$ has the form $g_{ij}(x)/\det(x)$
for a polynomial $g_{ij}(x) \in K[x]$.
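For illustration, the formal inverse and the polynomials $g_{ij}(x)$ can be produced symbolically; the following SymPy fragment (our own sketch for $n=2$, not part of the CoCoA implementation discussed later) verifies the construction.
\begin{verbatim}
import sympy as sp

n = 2
X = sp.Matrix(n, n, sp.symbols(f"x1:{n*n + 1}"))
adj = X.adjugate()       # classical adjoint: entries are the g_ij(x)
N = adj / X.det()        # formal inverse N(x) in K(x)^{n x n}
assert sp.simplify(X * N) == sp.eye(n)
\end{verbatim}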
\begin{theorem} \label{inv}
Let $K$ be an algebraically closed field and let $I = (f_1, \ldots, f_r)
\unlhd K[x]$. For $1 \leq i \leq r$ there exist $h_i(x) \in K[x]$ and
$l_i \in \mathbb N_0$ with $f_i(N(x)) = h_i(x)/\det(x)^{l_i}$. Write $k_i(x) =
h_i(x) \det(x) \in K[x]$. Then the following are equivalent.
\begin{items}
\item[\rm (A)]
$V^*(I)$ is closed under inversion.
\item[\rm (B)]
$h_i(v) = 0$ for $1 \leq i \leq r$ and for every $v \in V^*(I)$.
\item[\rm (C)]
$k_i(v) = 0$ for $1 \leq i \leq r$ and for every $v \in V(I)$.
\item[\rm (D)]
$k_i \in \sqrt{I}$ for $1 \leq i \leq r$.
\item[\rm (E)]
$h_i \in \sqrt{\hat{I}}$ for $1 \leq i \leq r$.
\end{items}
\end{theorem}
\begin{proof}
Let $J = (k_1, \ldots, k_r) \unlhd K[x]$. Then Condition (C) is equivalent
to $V(I) \subseteq V(J)$ and this, in turn, is equivalent to $J \subseteq
\sqrt{I}$ by Hilbert's Nullstellensatz. Hence (C) and (D) are equivalent.
Similarly, (B) and (E) are equivalent via Lemma \ref{identify}.
\smallskip
\noindent
(A) $\Rightarrow$ (B):
Let $v \in V^*(I)$, so $\det(v) \neq 0$. Put $w = v^{-1}$.
By construction $w = N(v)$, and by (A) we have $w \in V^*(I)$.
Hence for each $i=1,\ldots,r$ we have $0 = f_i(w) = f_i(N(v)) =
h_i(v)/\det(v)^{l_i}$, whence $h_i(v) = 0$.
\smallskip
\noindent
(B) $\Rightarrow$ (C):
Let $i \in \{1, \ldots, r\}$ and $v \in V(I)$. Then either $v \in V^*(I)$
and thus $h_i(v) = 0$ by (B), or $\det(v) = 0$. In either case, it follows
that $k_i(v) = \det(v) \, h_i(v) = 0$ as desired.
\smallskip
\noindent
(C) $\Rightarrow$ (A):
Let $v \in V^*(I)$, so $\det(v) \neq 0$. Now, $w = v^{-1}$ exists as
an element of $GL(n,K)$, and $w = N(v)$. By (C) it follows that $0 =
k_i(v) = \det(v) \, h_i(v) = \det(v) \, f_i(N(v)) = \det(v) \, f_i(w)$
for each $1 \leq i \leq r$. Whence each $f_i(w) = 0$ for $1 \leq i \leq r$.
Thus $w \in V(I)$. As $w$ is invertible, it follows that $w \in V^*(I)$.
\end{proof}
Both Theorem \ref{inv} (D) and (E) translate directly into an algorithm for
checking whether $V^*(I)$ is closed under inversion.
\section{Closedness of $V^*(I)$ under multiplication}
\label{secmul1}
Now we introduce a method to decide if $V^*(I)$ is closed under multiplication.
Recall that $\hat{x} = (x_0, \ldots, x_{n^2})$ and let $\hat{y} = (y_0,
\ldots, y_{n^2})$. Define $\varphi : K[\hat{x}] \rightarrow K[\hat{y}] : x_i \mapsto
y_i$ and
\[ \hat{I}_{xy} = \hat{I} + \varphi(\hat{I}) \unlhd K[\hat{x},\hat{y}].\]
\begin{theorem} \label{mult1}
Let $K$ be an algebraically closed field. Then the following are
equivalent.
\begin{items}
\item[\rm (A)]
$V^*(I)$ is closed under multiplication.
\item[\rm (B)]
$f_i(vw) = 0$ for $1 \leq i \leq r$ and for every $v,w \in V^*(I)$.
\item[\rm (C)]
$f_i(xy) \in \sqrt{\hat{I}_{xy}}$ for $1 \leq i \leq r$.
\end{items}
\end{theorem}
\begin{proof}
Note that (C) is equivalent to $V(\hat{I}_{xy}) \subseteq V(J)$ with
$J = (f_1(xy), \ldots, f_r(xy)) \unlhd K[\hat{x}, \hat{y}]$ by Hilbert's
Nullstellensatz. Using this equivalence it is easy to show that the
three statements are equivalent.
\end{proof}
\section{Deciding if $V^*(I)$ is a group}
Let $1_n$ denote the $n \times n$ identity matrix. The set $V^*(I)$ is a
group if $1_n \in V^*(I)$, $V^*(I)$ is closed under inversion and $V^*(I)$
is closed under multiplication. This can now be checked in the following steps.
Let $K$ be an algebraically closed field and recall that $I = (f_1, \ldots,
f_r)$.
\bigskip
{\bf Algorithm 'IsGroup'} \\
For $1 \leq i \leq r$ do
\begin{items}
\item[(1)]
Check that $f_i(1_n) = 0$; if not, then return false.
\item[(2)]
Check that $f_i(N(x)) \in \sqrt{\hat{I}}$; if not, then return false.
\item[(3)]
Check that $f_i(xy) \in \sqrt{\hat{I}_{xy}}$; if not, then return false.
\end{items}
\bigskip
Note that checking membership in the radical of an ideal can be done with
the trick of Rabinowitsch and does not require computing the radical
explicitly.
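For illustration, criterion (D) of Theorem~\ref{inv} can be checked along the following lines in SymPy, with radical membership decided by the Rabinowitsch construction ($f \in \sqrt{I}$ iff $1 \in (I, 1-tf)$); the example verifies that $SL(2,K)$, defined by $I=(\det(x)-1)$, is closed under inversion. This is our own sketch, not the CoCoA package code mentioned below.
\begin{verbatim}
import sympy as sp

def in_radical(f, ideal_gens, gens):
    # Rabinowitsch: f in sqrt(I)  iff  1 in (I, 1 - t*f),
    # i.e. the reduced Groebner basis is {1}.
    t = sp.Symbol("t")
    G = sp.groebner(list(ideal_gens) + [1 - t * f],
                    *gens, t, order="lex")
    return list(G.exprs) == [1]

x1, x2, x3, x4 = sp.symbols("x1:5")
X = sp.Matrix([[x1, x2], [x3, x4]])
f1 = X.det() - 1                        # I = (f1) defines SL(2, K)
Ninv = X.adjugate() / X.det()           # the formal inverse N(x)
h1, _ = sp.fraction(sp.cancel(Ninv.det() - 1))  # f1(N(x)) = h1/det^l
k1 = sp.expand(h1 * X.det())
print(in_radical(k1, [f1], (x1, x2, x3, x4)))   # True: closed
\end{verbatim}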
We mention a variation of the above algorithm which is based on the
following theorem. Its proof is similar to the proof of Theorem
\ref{mult1} and we omit it here.
\begin{theorem} \label{mult2}
Let $K$ be an algebraically closed field and suppose that the identity
matrix is contained in $V(I)$. Then the following are equivalent.
\begin{items}
\item[\rm (A)]
$V^*(I)$ is a group.
\item[\rm (B)]
$f_i(vw^{-1}) = 0$ for $1 \leq i \leq r$ and for every $v,w \in V^*(I)$.
\item[\rm (C)]
$f_i(x N(y)) \in \sqrt{\hat{I}_{xy}}$ for $1 \leq i \leq r$.
\end{items}
\end{theorem}
Hence the above algorithm has the following variation.
\bigskip
{\bf Algorithm 'IsGroup'} \\
For $1 \leq i \leq r$ do
\begin{items}
\item[(1)]
Check that $f_i(1_n) = 0$; if not, then return false.
\item[(2)]
Check that $f_i(x N(y)) \in \sqrt{\hat{I}_{xy}}$; if not, then return false.
\end{items}
\bigskip
There are further variations of this method possible in special cases.
For example, if $V(I) = V^*(I)$, then it is not necessary to use the
ideal $\hat{I}$, but $I$ can be used directly instead.
\section{Examples}
\label{examples}
If $V(I)$ is closed under multiplication, then $V^*(I)$ is also closed
under multiplication. However, the converse does not necessarily hold,
as the following example shows.
\begin{exam}
Let $n = 2$ and consider $f_1 = x_3$ and $f_2 = x_2 (x_2 x_4 - 1)$
and $f_3 = x_1 x_2$.
Then
\[ V(I) = \left\{
\left( \begin{array}{cc} a & 0 \\ 0 & b \end{array} \right) \mid
a, b \in K \right\}
\quad\bigcup\quad
\left\{
\left( \begin{array}{cc} 0 & c \\ 0 & c^{-1} \end{array} \right) \mid
c \in K \setminus \{0\} \right\} \]
and
\[ V^*(I) = \left\{
\left( \begin{array}{cc} a & 0 \\ 0 & b \end{array} \right) \mid
a, b \in K \setminus \{0\} \right\}. \]
Thus $V^*(I)$ is closed under multiplication, while $V(I)$ is not.
\end{exam}
Our approach via ideals and varieties requires to work over an algebraically
closed field $K$. In various applications it would be of interest to consider
$V(I) \cap GL(n, k)$ where $k$ is not necessarily algebraically closed.
Clearly, if $V(I)$ is a group, then $V(I) \cap GL(n, k)$ is also a group.
The converse is not true, as the following example shows.
\begin{exam}
With $n = 1$, consider $f_1 = (x_1 -1)(x_1^2-2)$ and let $I =
\ideal{f_1} \,\unlhd\, \mathbb C[x_1]$. Then
\begin{items}
\item[$\bullet$]
$V(I) = \{ (1), (\sqrt{2}), (-\sqrt{2})\}$ which is not a group.
\item[$\bullet$]
$V(I) \cap GL(n, \mathbb Q) = \{ (1) \}$ which is a group.
\end{items}
\end{exam}
If $k$ is a finite field (of cardinality $q$) then we can restrict to
$V(I) \cap GL(n, k)$ by adding the polynomial conditions $x_i^q-x_i=0$
for each $i=1,\ldots,n^2$.
\begin{exam}
Let $\mathbb F_q$ denote the field with $q$ elements and let $f_1 = (x_1 -1)(x_1^2-2)
\in \mathbb F_5[x_1]$. For $t \in \mathbb N$ consider
\[ I_t = (f_1, x_1^{5^t}-x_1) \unlhd \mathbb F_5[x_1].\]
Then $V(I_t) = \{ (1) \} \subset \mathbb F_{5^t}$ if $t$ is odd, and $V(I_t) =
\{ (1), (\sqrt{2}), (-\sqrt{2}) \} \subset \mathbb F_{5^t}$ if $t$ is even.
Thus $f_1$ defines a group iff we restrict to a field of cardinality
$5^t$ with $t$ odd.
\end{exam}
Further we exhibit some sample applications of our methods. We used the
CoCoA implementation of our methods to check if $V^*(I)$ is a group in
these examples.
\begin{exam}
Choose $n = 3$.
\begin{items}
\item[\rm (1)]
Consider
\begin{eqnarray*}
f_1 &=&
850x_1-475x_2-50x_3+1496x_4-836x_5-88x_6+238x_7-133x_8-14x_9, \\
f_2 &=&
125x_1-75x_2+25x_3+220x_4-132x_5+44x_6+35x_7-21x_8+7x_9
\end{eqnarray*}
and let $I = (f_1, f_2)$. Then $V^*(I)$ is a group.
\item[\rm (2)]
Consider
\begin{eqnarray*}
f_1 &=& -3x_1+x_3-9x_7+3x_9, \\
f_2 &=& 52x_1-16x_3+169x_7-52x_9, \\
f_3 &=& 3x_4-x_6,
\end{eqnarray*}
and let $I = (f_1, f_2, f_3)$. Then $1_3 \in V(I)$ holds and
$V(I) \neq V^*(I)$ can be readily observed. Further, $V^*(I)$ is not
a group, as it is not closed under inverses.
\item[\rm (3)]
Consider
\begin{eqnarray*}
f_1 &=& 22x_1+77x_2-6x_4-21x_5+48x_7+168x_8, \\
f_2 &=& 2x_7+7x_8, \\
f_3 &=& -14x_1-49x_2+4x_4+14x_5-28x_7-98x_8
\end{eqnarray*}
and let $I = (f_1, f_2, f_3)$. Then $1_3 \not \in V(I)$
and hence $V^*(I)$ is not a group.
\end{items}
\end{exam}
\begin{exam}
Choose $n = 2$ and consider
\begin{eqnarray*}
f_1 &=& (x_1-1)(x_1^2 + 1),\\
f_2 &=& x_2, \\
f_3 &=& x_3, \\
f_4 &=& x_4 - 1
\end{eqnarray*}
and let $I = (f_1, f_2, f_3, f_4)$. Then $V(I)$ is finite
and can be listed explicitly as
\[ V(I) = \left\{
\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right),
\left( \begin{array}{cc} i & 0 \\ 0 & 1 \end{array} \right),
\left( \begin{array}{cc} -i & 0 \\ 0 & 1 \end{array} \right) \right\}. \]
Hence $1_2 \in V(I)$ and each element of $V(I)$ is invertible in $V(I)$
so that $V(I) = V^*(I)$ holds. But $V(I)$ is not a group, as it is not
closed under multiplication.
\end{exam}
\section{Comments}
We have implemented the algorithms described in the Section above in
CoCoA-5~\cite{CoCoA}; the implementations are publicly available in
the CoCoA-5 package \texttt{ArithGroup.cpkg5} (distributed starting
from CoCoA version 5.1.2).
The implementations are largely straightforward. We observe that the
programs do not need to compute the radical of an ideal (a costly
operation). Our methods entail checking whether the homomorphic images of
certain polynomials lie in the radical of the ideal $\hat{I}$.
Observe that it suffices to compute a polynomial equivalent (modulo
$\hat{I}$) to the actual homomorphic image; in other words we may employ
``normal form'' reductions modulo $\hat{I}$ when convenient to lower
the overall cost of evaluating $f_i(h)$ modulo $\hat{I}$.
\section{Introduction} \label{sec:intro}
Planetary nebulae (PNe), the gas and dust remnants of low- and intermediate-mass stars (LIMS, $\sim1<{\rm M/M_{\odot}}<$8), are key probes of the chemical evolution in galaxies. Understanding how LIMS evolve is extremely important in astrophysics since the LIMS population represents most of the stellar mass in galaxies. Most LIMS are believed to go through the asymptotic giant branch (AGB) phase, which prominently contributes to the integrated luminosity of a galaxy \citep[e.g.,][]{2011ASPC..445..391M}. Furthermore, AGB stars and the subsequent PNe are major dust producers. It is thus important to have
the best observational data sets to constrain the nucleosynthesis models at various metallicities.
At the end of their lives, LIMS become major producers of C and N. Nucleosynthesis of these elements occurs in the stellar core. Mass-loss brings these elements to the ISM after they are dredged up to the stellar surface. Planetary nebula abundances of the major elements are typically straightforward to measure
by analyzing emission lines. While ground-based telescopes allow the direct observation of most key elements in Local Group PNe, carbon remains elusive since its major collisionally excited and bright emission lines,
\ion{C}{2}] $\lambda$2325-29, \ion{C}{3}] $\lambda$1909, and \ion{C}{4} $\lambda$1550 \AA, are emitted in the satellite ultraviolet. Carbon recombination lines are emitted in the optical spectrum, and they are much fainter than the collisionally excited lines in the UV; combined with nearby oxygen emission features, they are useful to constrain the C/O ratio \citep{GR18}.
Yet carbon is an essential element: carbon and its compounds relate to the origin of life in the Universe, which makes it fundamental to understand where it forms and how
its abundance grows over time. Furthermore, stellar evolution theory
predicts that the processes defining the final yields of nitrogen and carbon, such as the third dredge-up (TDU) and hot bottom burning (HBB), strongly depend on the progenitor mass; the carbon concentration in PNe (especially relative to nitrogen) is therefore a signature of the mass range of the progenitor LIMS and, ultimately, of their age.
To understand dust formation and evolution in the context of stellar and galactic evolution, \citet{SGG07} have observed dust features in Magellanic Cloud PNe from the {\it Spitzer Space Telescope Infrared Spectrograph} (Spitzer/IRS) spectra, and gas-phase PN properties from {\it Hubble Space Telescope (HST)} imaging and UV spectroscopy, and found that nebular gas chemistry, dust composition, and PN morphology are correlated. The IRS spectra carry a wealth of information on the dust continuum and solid-state dust features. Intermediate-mass LIMS in the Magellanic Clouds produce symmetric (i.e., non-bipolar), carbon-rich PNe with carbon-rich dust features (such as polycyclic aromatic hydrocarbons (PAHs), hydrogenated amorphous carbon grains (HACs), etc.), while high-mass progenitors produce generally bipolar, nitrogen-rich, PNe with oxygen-rich dust (e.g., amorphous and crystalline silicates).
We have learned from the Magellanic Cloud PN project that the simultaneous availability of IRS spectra, narrow and broad-band {\it HST} images, optical, and UV spectra, yields detailed insight into the post-AGB and PN evolution. We need to build a similar data set for compact Galactic PNe -- PNe whose maximum angular radii are smaller than $\sim$4$-$5 \arcsec-- to extend the analysis across metallicities.
Compact Galactic PNe, defined and analyzed by \citet{SSV16}, have many advantages with respect to extended PNe when used as evolutionary probes. The relevant property that makes compact PNe compelling in this study is that their spectra, from UV to optical to IR, can be acquired with just one pointing to include the whole nebula, which in turn provides plasma diagnostics and chemical analysis of the PN as a whole.
Their compact shapes allowed the study of dust content with Spitzer/IRS spectroscopy and with other spectroscopy without the problem of aperture correction. Furthermore, compact Galactic PN spectra, no matter whether UV, optical, or IR, are analyzed identically in the Galactic samples and in the Magellanic Clouds, thus the comparative analysis of the various samples is direct and unambiguous.
A detailed search through the literature disclosed that only a few Galactic PNe with Spitzer/IRS spectroscopy have reliable gas-phase carbon abundances \citep{VSD17}. We thus embarked on this spectroscopic study of carbon in compact Galactic PNe. Our observing goals were (i) to acquire a sizable set of UV spectra and detect the strong UV carbon transitions of PNe whose Spitzer/IRS spectra were also available, and whose nitrogen, oxygen, and other elemental abundances were available in the literature; (ii) to measure their gas-phase carbon content and to study its correlation with the dust-phase carbon; and (iii), in the context of PN progenitors and their evolution, to compare their surface chemistry with the final surface chemical abundances of AGB stars, with the final goal of constraining the mass and metallicity (and age) of the progenitors. The {\it HST} is the only telescope, and the {\it Space Telescope Imaging Spectrograph (STIS)} \citep[STIS:][]{STIS, STIS_OOP}
the best instrument, that can be used to measure carbon abundances in compact Galactic PNe.
This paper represents the first systematic study of carbon abundances -- from direct observations of UV lines -- of compact Galactic PNe with known dust-phase chemistry. With this study, we considerably augmented the observational data with which to constrain AGB evolution in the Galaxy.
Prior to this study, there were only 7 Galactic PNe, compact or otherwise, whose {\it HST} UV spectra could be employed to determine the gas-phase abundance of carbon \citep{Henry2015, 2015ApJ...803...23D}, in addition to the \citet{Henry2008} observations of the halo PN DdDm 1 (PN~G061.9+41.3). Another $\sim$30 Galactic PNe had been previously observed with the {\it International Ultraviolet Explorer} (IUE), providing reliable carbon abundances mostly for nearby, extended PNe \citep[and references therein]{VSD17}. Dust and gas-phase carbon properties have been studied by \citet{DIR14}, based on a sample of mostly extended Galactic PNe, and with aperture corrections applied to several targets.
Finally, as a comparison, carbon abundances are available for 11 Small Magellanic Cloud (SMC) and 24 Large Magellanic Cloud (LMC) PNe, all from UV emission lines and observed with the {\it HST} \citep{Stanghellini05, Stanghellini09}.
\section{Observing Program} \label{sec:Program}
\subsection{Observations} \label{sec:Observations}
Our observations were obtained in {\it HST} program GO--15211, which was extended by the mission in program GO--16013, with observations taking place between 2018 Jan 05 and 2020 Sept 20.
We selected our targets to be spatially compact Galactic PNe (with apparent radii $\theta\leq$5\arcsec), preferentially already observed in the optical wavelengths with {\it HST} (e.g., program GO-11657), which, in addition to providing their size and morphology, greatly simplifies target acquisition.
We observed each target with FUV/G140L and NUV/G230L spectroscopic configurations with STIS.
The aperture was placed on the center of each nebula to detect the central stars (CSs) as well.
The program is not dissimilar from the UV {\it HST} program targeting LMC PNe (GO-9120), since PNe in the LMC have similar maximum extensions to our compact Galactic PNe.
Our allocation of 75 targets in Cycles 25 and 26 was only partially fulfilled, as expected in ``snapshot'' mode.
Because we were primarily interested in obtaining total fluxes in critical lines of C, in order to facilitate a direct comparison with ground-based optical observations, each target was observed with the $6\arcsec\times6\arcsec$ aperture.
As all or most of the flux from these targets is emitted within about 5\arcsec\ \citep{SSV16}, this aperture is nearly equivalent to slitless spectroscopy and carries the advantage of excluding bright UV sources in the field which could pose a risk to the MAMA detectors.
This choice comes at the cost of greatly diminished spectral resolution, which is set primarily by the angular extent of each target.
Our observing plan specified exposures with both the FUV-MAMA detector with the low-resolution grating G140L and the NUV-MAMA with G230L.
The observing log is presented in Table~\ref{tab:ObsLog}.
Observations for three targets failed due to instrument or telescope problems,
but they are included in Table~\ref{tab:ObsLog} (with zero observing time) for completeness. Also notable, PN~G286.0--06.5 was observed in both programs.
We planned exposure times to achieve a signal-to-noise (S/N) ratio $>10$ in the brightest emission lines of carbon over the extent of the target. This is sufficient to obtain good C elemental abundances since one or more of the observable ions C$^+$, C$^{+2}$, and C$^{+3}$ will dominate the emission.
The exposure durations were however limited to a maximum of 1200~s to ensure that both FUV and NUV spectra could be obtained for each target within a single orbit; single-orbit visits are required for ``snapshot'' programs.
All data analyzed for this program are available in the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations we analyzed can be accessed via \dataset[https://doi.org/10.17909/t9-n5ef-9894]{https://doi.org/10.17909/t9-n5ef-9894}.
\subsection{Data Reduction and Spectrum Extraction} \label{subsec:reduction}
We reduced the data through flat-field correction with the distributed python package \textbf{stistools}, which embodies the STScI CALSTIS calibration pipeline as a library.
Our observations used an available-but-unsupported observing mode, which obligated us to rectify the flattened images and extract the one-dimensional spectra with custom software.
The raw data were first processed with the CALSTIS task \textit{basic2d} to perform 2d image reduction.
To initialize the data quality array, \textit{basic2d} uses a bad pixel reference table and performs a bitwise OR with the initial data quality file. This routine appropriately combines data quality information from neighboring pixels before performing the OR operation in order to take Doppler smearing and binning into account.
The primary cause for dark current in MAMA detectors is believed to be a phosphorescent glow from impurities in the detector window. On short time scales, the glow varies exponentially with temperature; on long time scales, the behavior becomes more complex. With \textit{basic2d} we subtracted the dark signal using relevant reference files and updated the science data quality files for bad pixels in the dark reference file.
Lastly, we used \textit{basic2d} to correct for pixel-to-pixel and large-scale sensitivity gradients using a p-flat and l-flat reference file respectively. P-flats are configuration-(grating, central wavelength, detector, etc.) dependent flat field images, with no large-scale sensitivity variation. L-flats are sub-sampled flat field images that contain large-scale sensitivity variation across the detector. \textit{basic2d} combines these two types of flat field and then corrects the science image by dividing it with the combined flat field image. The data quality and error arrays are updated again to account for flat-fielding.
To perform spectral extraction, we need to consider that
the spectral trace orientation on STIS detectors slowly changes over time and is also subject to an offset, unique to each observation, caused by the MSM (mode selection mechanism) positioning. To rectify this we use the IRAF task \textit{mktrace}, which corrects the orientation of the spectral trace, re-centers it on a new row, and provides the new center as an output. However, it takes an approximate center as an input.
To determine our best approximate center, we fit a 1d Gaussian model to a column with a strong signal from the trace. We also use this Gaussian model to determine approximate box parameters, used in our last extraction step. Although the Gaussian fitting does provide approximate box parameters, we did adjust these manually on the actual image.
The \textit{x2d} task then rectifies the image using bi-linear point interpolation. For each point in the rectified output image, there is a corresponding point in the distorted input image and the four nearest pixels are bi-linearly interpolated to determine the value to be assigned to the point in the output image.
This mapping from output pixels to the input image is done by using the dispersion relation
and the spectral trace table generated by \textit{mktrace}.
Pixel number as a function of wavelength is given by the dispersion relation and the displacement in cross-dispersion direction at each pixel along the dispersion direction is given by the trace table. Appropriate corrections are applied to take binning into account.
Lastly, the \textit{x2d} task converts the counts to surface brightness in [ergs cm$^{-2}$ sec$^{-1}$ \AA$^{-1}$ arcsec$^{-2}]$.
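For reference, the calibration chain just described reduces to three calls of the \textbf{stistools} package; the file names below are placeholders, and the keyword arguments are indicative and should be checked against the current stistools documentation.
\begin{verbatim}
from stistools import basic2d, mktrace, x2d

raw = "target_raw.fits"                               # placeholder name
basic2d.basic2d(raw, output="target_flt.fits")        # dark, flats, DQ
mktrace.mktrace("target_flt.fits", tracecen=512.0)    # refit/recenter trace
x2d.x2d("target_flt.fits", output="target_x2d.fits")  # rectify, calibrate
\end{verbatim}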
Examples of the 2d spectra are shown in the upper and middle panels of Figs.~1 through~4. The 2d spectra clearly show the footprint of the nebular shape as observed at the positions of the major nebular emission lines. For example, the elliptical (Figs. 1, 2, and 3) vs. bipolar (Fig. 4) shapes are clearly identified for most individual spectral images. The 2d spectra also show the presence of stellar continua.
After masking the bad pixel in the 2d image, we extract the 1d spectra using our own python routine. We chose a spectral extraction box as large as the largest feature in the 2d spectral image, and we subtract the average background from two regions on either side of the spectral trace. If the PN had been previously observed with the {\it HST} cameras -- i.e., it has been spatially resolved -- we use the measured photometric diameter as a guide to the initial guess for the extraction box size.
The final spectra have been calibrated through the default wavelength calibration since we did not observe simultaneous comparison arcs. This is not an issue for UV spectra of Galactic PNe, where the major emission lines are easily recognizable, as we can see in the 2d spectral images. We thus adjusted the zero-point of the wavelength solution of each extracted 1d spectrum based upon the brightest known nebular emission features.
Figs. 1 through 4 (lower panels) also show the extracted 1d spectra of the PNe.
\section{Analysis} \label{sec:analysis}
\subsection{Emission Line Measurements} \label{subsec:fluxes}
The rectified 2d spectrograms and the extracted 1d spectra of the observed PNe have been inspected for features. For the following, we discuss only targets that show at least one nebular emission line, with sufficient S/N for the subsequent analysis. We list these PNe in Table~\ref{tab:Parameters}. Table 2 also gives the ancillary parameters that are relevant for our analysis and include the distance from the Galactic plane, the He, N, and O atomic abundances, the nebular morphology, and the dust type derived from IRS spectral analysis. All parameters are referenced in the Table.
We measured the emission-line fluxes with the IRAF task {\it splot}. The emission line intensities have been measured via Gaussian fit, which is a good model for all the measured lines. The flux uncertainties given for the emission lines are the random error estimates assuming the Gaussian shape. We checked that this approximation worked for all of the lines used for abundance measurements. Naturally, this procedure assumes, as is often the case in spectroscopic analysis, that the continuum level is well identified. If the continuum were mismatched, there would be an additional uncertainty, up to about 10$\%$, in the line fluxes.
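The {\it splot} measurement amounts to fitting a Gaussian on a constant continuum; an equivalent Python helper (our own sketch, not part of the reduction pipeline) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(lam, amp, mu, sigma, cont):
    return cont + amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def line_flux(lam, flux, p0):
    """Integrated flux and random error of one emission line,
    neglecting the amplitude-width covariance."""
    popt, pcov = curve_fit(gaussian_line, lam, flux, p0=p0)
    amp, mu, sigma, cont = popt
    F = amp * abs(sigma) * np.sqrt(2.0 * np.pi)
    dF = F * np.sqrt(pcov[0, 0] / amp**2 + pcov[2, 2] / sigma**2)
    return F, dF
\end{verbatim}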
All fluxes and their uncertainties, derived from the above measurement procedure, were scaled to $I_{\rm H\beta}=100$.
The observed intensities and extinction-corrected fluxes are related by:
\begin{equation}
\frac{I_{\lambda}}{I_{\rm H\beta}} = \frac{F_{\lambda}}{F_{\rm H\beta}}\, 10^{c f_{\lambda}} \times 100
\end{equation}
Here, $c$ is the logarithmic extinction at H$\beta$ for each target, and $f_\lambda$ is the wavelength-dependent reddening function from \citet{Cardelli1989}.
We obtain the final line intensities by using the H$\beta$ fluxes and extinction constants from \citet{SSV16}, except for PN~G001.6+08.3 and PN~G281.0--05.6, whose parameters are from \citet{CKS}.
The measured line fluxes and intensities, and the H$\beta$ fluxes and extinction constants, are given in Tables~\ref{tab:Flux1}--\ref{tab:Flux4}.
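The relation above can be evaluated directly with the reddening-correction class of \textit{pyneb}, using the \citet{Cardelli1989} law; the $c$ and flux values below are placeholders, not measurements.
\begin{verbatim}
import pyneb as pn

rc = pn.RedCorr(law="CCM89", cHbeta=0.30)   # placeholder c(Hbeta)
F_ratio = 25.0                              # F(lambda)/F(Hbeta) x 100
I_ratio = F_ratio * rc.getCorrHb(1909.0)    # dereddened C III] 1909
\end{verbatim}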
More details on individual nebular spectra are given in Section \ref{individual}.
\subsection{Plasma Diagnostics} \label{subsec:diag}
The electron densities ($N_{\rm e}$) and temperatures ($T_{\rm e}$) adopted for the abundance calculation are given in Table~\ref{tab:Diag}. Most of the diagnostics have been taken from the literature (cited within the Table), since the UV ranges observed in this work do not include any diagnostic lines.
When both T$_{\rm e}$[\ion{N}{2}] and T$_{\rm e}$[\ion{O}{3}] were available for the same PN, we used T$_{\rm e}$[\ion{N}{2}] for the C$^{+}$ abundances, and T$_{\rm e}$[\ion{O}{3}] for the C$^{2+}$ and C$^{3+}$ abundance calculations. We always used the [\ion{S}{2}] densities preferentially, if available for a given PN. When the electron density or temperature was not available, we calculated them from published diagnostic line intensities \citep{AMO92} using the \textit{pyneb} package in python \citep{LMS15}.
In one case we could find neither plasma diagnostics nor diagnostic flux ratios in the literature, thus we adopted a typical value of $N_{\rm e}$ to calculate abundances, as noted in Table~\ref{tab:Diag}.
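When only diagnostic line ratios were published, the \textit{pyneb} calculation sketched below recovers $T_{\rm e}$ and $N_{\rm e}$; the ratio values shown are placeholders, not measurements.
\begin{verbatim}
import pyneb as pn

O3 = pn.Atom("O", 3)
S2 = pn.Atom("S", 2)
# T_e from [O III] 4363/(5007+4959), at an assumed density:
Te = O3.getTemDen(0.010, den=1.0e3,
                  to_eval="L(4363) / (L(5007) + L(4959))")
# N_e from [S II] 6716/6731 at that temperature:
Ne = S2.getTemDen(1.20, tem=Te, wave1=6716, wave2=6731)
\end{verbatim}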
\subsection{Abundance Analysis}
\label{subsec:abundanalysis}
We measured ionic abundances from the line fluxes and ancillary diagnostics with the \textit{pyneb} package, and the atomic data set therein, as also given in Table 8. All ionic abundances are given relative to H$^{+}$; hereafter, we use the term ``ionic abundance'' to mean ``ionic abundance ratio to H$^{+}$''.
The derived abundances of the carbon ions are presented in Table 9; ionic abundances of other elements derived from the same UV spectra are in Table 10.
Uncertainties in the line intensity and plasma diagnostics both contribute to the final abundance errors.
We measured the uncertainties in the ionic abundances due to line uncertainties (including reddening correction) by Monte-Carlo simulation assuming Gaussian distribution centered on the measured intensities. The resulting uncertainties are in the 0.001--0.04 dex range for the ionic abundances.
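Schematically, the Monte-Carlo propagation for one ion reads as follows; the line intensity, its error, and the adopted diagnostics are placeholders.
\begin{verbatim}
import numpy as np
import pyneb as pn

rng = np.random.default_rng(1)
C3 = pn.Atom("C", 3)
I0, dI = 120.0, 8.0     # placeholder C III] 1909 intensity, 1-sigma
ab = np.array([C3.getIonAbundance(int_ratio=v, tem=1.0e4, den=3.0e3,
                                  wave=1909, Hbeta=100.0)
               for v in rng.normal(I0, dI, 2000)])
sigma_dex = np.std(np.log10(ab))   # ionic-abundance uncertainty in dex
\end{verbatim}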
The contributions to the abundance uncertainties from the line ratios are the only ones that we can actually calculate with our data set. On the other hand, by far the dominant source of uncertainty in PN abundances stems from plasma diagnostics. We adopted electron density and temperature values from the literature, and none of the original references in Table~\ref{tab:Diag} give uncertainties for these parameters.
We thus estimate the magnitude of the final abundance uncertainty as it stems from assumed 5$\%$ and 10$\%$ uncertainties in both the electron density and temperature. Temperature shifts due to inhomogeneity in the atomic data sets, e.g. \citet{JDD}, are folded into these assumed values.
We found that the electron density uncertainty has no effect on the final ionic abundances. In fact, for all ions and all PNe, a 10$\%$ uncertainty in the density produces ionic abundance uncertainties in the 3.5$\times$10$^{-5}$--1$\times$10$^{-3}$ dex range. On the other hand, a 10$\%$ uncertainty in T$_{\rm e}$ propagates into a mean uncertainty of 0.39$\pm$0.03 dex in the ionic abundances. The assumption of $\Delta T_{\rm e}\sim 10\%$ is conservative, while a $\pm$5$\%$ uncertainty is more realistic, and translates into a 0.17$\pm$0.01 dex uncertainty in the final abundances.
The final uncertainties are thus dictated by our guesses of the electron temperature uncertainties. Since these initial guesses translate into abundance uncertainties in such narrow ranges, we will use their averages as final uncertainties.
It is worth noting that a formal analysis would yield a final C/H abundance uncertainty which is $\sqrt{N}$ times the adopted ionic uncertainty, where $N$ is the number of available ions.
We calculated atomic carbon abundances from the ionic values, using the scheme by \citet{KB94} to correct for unobserved ionization stages. In our spectra we expect to see the transitions relative to \ion{C}{2}], \ion{C}{3}], and \ion{C}{4}.
In the section below, where applicable, we describe how the atomic carbon abundance has been derived, and which literature data we used to derive the Ionization Correction Factor for carbon (ICF(C)). Both ionic and atomic carbon abundances, and the ICF's, are listed in Table~\ref{tab:Carbon}.
\section{Individual PNe}\label{individual}
\subsection{PN~G003.9--14.9}
The 2d STIS UV spectrograms spatially resolve this PN for the first time (this PN was previously unobserved with {\it HST}). We found that an extraction box of 2\arcsec\ encompasses most of the spectra in both gratings. The nebula has an elliptical shape, although the morphology determination is uncertain from the UV spectra. The G140L image shows a very good spectral trace with high S/N; it shows the \ion{C}{4} emission feature with a P-Cygni profile, whose lack of spatial extension indicates its stellar origin. We could not detect \ion{He}{2} $\lambda$1640 \AA~ in the G140L spectrum, even though a faint emission feature corresponding to \ion{He}{2} $\lambda$4686 has been detected in the optical spectrum \citep{1994A&AS..106..559T}. The P-Cygni feature with emission centered around $\lambda$1232 \AA~ could be \ion{N}{5} $\lambda$1239--43 \AA, similar to what was observed in the LMC PNe SMP~18 and SMP~25 (Stanghellini et al. 2005). The G230L spectrum does not show emission lines.
\subsection{PN~G006.1+08.3}
A box of 2\arcsec\ encompasses the nebular UV spectrum of this roughly elliptical Galactic Bulge PN, previously unobserved with {\it HST}. This PN had not been previously spatially resolved from the ground \citep{2003A&A...405..627T}. Our STIS program acquired only the G230L spectrum, showing a (likely) \ion{C}{3}] emission line at $\lambda$1890~\AA~($\Delta\lambda\sim-17$). Another notable feature at $\lambda\sim$2810~\AA~ is unidentified. We note a strong nebular continuum for $\lambda>2300$ \AA. The stellar spectrum is prominent, although an estimate of the CS temperature is problematic, given the lack of the G140L spectrum. We could not find the optical line strengths to estimate the ICF to correct for the unobserved \ion{C}{2}] line, thus the atomic carbon abundance in Table~\ref{tab:Carbon} is a lower limit thereof.
\subsection{PN~G025.3--04.6}
The STIS G230L spectrum shows nebular continuum emission and a likely emission feature which could be identified as \ion{C}{3}] at $\lambda$1907--09 \AA. The G140L spectrum does not show obvious emission lines. The atomic carbon abundance in Table~\ref{tab:Carbon} has been derived from
C=ICF(C)$\times$C$^{2+}$. We used the fluxes by \citet{GHG14} to derive the oxygen abundances needed to estimate the ICF.
\subsection{PN~G038.4--03.3}
Both STIS spectra of this PN are noisy. We were unable to make unambiguous line identifications.
\subsection{PN~G042.9--06.9}
The only emission feature observed for this PN is in the G230L spectrum, which we interpret to be \ion{C}{3}] at $\lambda$1907-09~\AA. We used the optical emission lines from \citet{GHG14} to correct for the unseen emission lines via ICF analysis, as in PN~G025.3--04.6.
\subsection{PN~G053.3+24.0}
The \ion{He}{2} feature at $\lambda$1640 \AA~in the G140L spectrum is very strong, which is indicative of a medium to high excitation PN. The only other feature in the G140L spectrum is a very faint, noisy emission at $\lambda\sim$1243~\AA~that could be \ion{N}{5}. The flux of the emission line identified as \ion{C}{3}] at $\lambda$1907-09 \AA~in the G230L spectrum corresponds to a very broad feature, which can be used as an upper limit to the emission line flux.
We also list in Table 4 a couple of possible \ion{O}{3} features, although the wavelengths of the identified lines are not perfect matches to those observed. This is not a low-excitation PN, thus we did not apply the ICF corrections to measure the atomic carbon abundance, following \citet{KB94}'s prescription.
\subsection{PN~G068.7+14.8}
The \ion{C}{4} emission has a P-Cygni profile and is not spatially extended, while the \ion{C}{2}] and \ion{C}{3}] carbon lines show spatial extension. We infer that the former is of stellar origin. Since the optical $\lambda$4686\AA~ emission line has been detected \citep{1994A&AS..106..559T}, this could be a medium to high excitation PN. We thus derive the abundance without ICF, as in PN~G053.3+24.0. The G140L spectrum shows a faint line emission at $\lambda\sim$1242~\AA, which could be \ion{N}{5}, possibly of stellar origin as well.
\subsection{PN~G097.6--02.4}
Both UV STIS spectra are very noisy, and we were unable to make unambiguous line identifications.
\subsection{PN~G107.4--02.6}
The STIS G140L spectrum is too noisy for line detection, and the \ion{C}{3}] emission line is the only line detected in the G230L spectrum. There is no information in the literature about the low-excitation \ion{O}{2} lines, thus we cannot calculate the ICF to get the total carbon abundance. As a result, the carbon abundance in Table~\ref{tab:Carbon} is a lower limit thereof. This is a similar case to PN~G006.1+08.3.
\subsection{PN~G232.8--04.7}
Both UV STIS spectra are too noisy for line identification.
\subsection{PN~G264.4--12.7}
Both the \ion{C}{4} and the \ion{N}{5} features in the G140L spectrum have P-Cygni profiles, and they both look stellar in origin. The \ion{C}{2}] and \ion{C}{3}] lines are very faint and extended in the G230L spectrum, and their abundances could not be measured.
\subsection{PN~G275.3--04.7}
The presence of the \ion{He}{2} in the G140L spectrum indicates a medium to high excitation PN. In order to correct for the missing \ion{C}{2}] line we use the optical oxygen lines from the literature \citep{2002ApJS..138..285M}, from which we estimate ICF(C)=1.037.
\subsection{PN~G278.6--06.7}
The G140L spectrum is characterized by strong \ion{He}{2} and \ion{C}{4} emission lines from the whole volume of the PN, thus indicating a medium to high excitation PN, which agrees with the presence of the optical emission line at $\lambda$4686~\AA, corresponding to \ion{He}{2} emission \citep{1994A&AS..106..559T}. There is a noisy emission line corresponding to \ion{N}{5} in the G140L spectrum as well.
As clearly seen in Figure~3, the spatial distribution of the \ion{C}{3} and the \ion{C}{2} emission shows ionization stratification.
The atomic carbon abundance is the sum of the measured ionic abundances.
\subsection{PN~G281.0--05.6}
The \ion{C}{3}] line is the only emission line detected in the G230L spectrum and presents itself as a very broad emission. We correct for the undetected \ion{C}{2}] emission from the optical oxygen lines in the literature \citep{2002ApJS..138..285M}, finding ICF(C)=1.084, which has been used to calculate the total atomic abundance in Table~\ref{tab:Carbon}. There are P-Cygni lines corresponding to \ion{N}{5} and \ion{C}{4} in the STIS G140L spectrum. Their extensions indicate that they are probably of stellar origin.
\subsection{PN~G286.0--06.5}
There is a very noisy emission, not measured, that could be \ion{He}{2} at $\lambda$1640~\AA. There are \ion{N}{5} and \ion{C}{4} emission lines in the G140L spectrum with P-Cygni profiles; their extension indicates that they are probably of stellar origin. The atomic carbon abundance has been calculated by the sum of the \ion{C}{2} and \ion{C}{3} abundances.
\subsection{PN~G295.3--09.3}
In the G140L spectrum, the emission line corresponding to \ion{N}{5} has P-Cygni profile, and the emission flux given has high uncertainty given the shape of the underlying continuum. The identification of the [\ion{O}{3}] line is uncertain. The only nebular carbon line detected here is \ion{C}{3}]. There are no lower excitation transition intensities available in the literature for this PN to correct for \ion{C}{2}], thus the atomic abundance in the Table is a lower limit thereof.
\subsection{PN~G351.3+07.6}
Both \ion{N}{5} and \ion{C}{4} emissions in the G140L spectrum have P-Cygni profiles. There are no emission features detected in the G230L spectrum. This is a similar case to PN~G003.9--14.9 and PN~G264.4--12.7.
\section{Comparison between PN abundances and stellar evolutionary models}
In this section, we characterize the observed PN sample in the framework of the stellar evolution of LIMS. The chemical abundances of PNe reveal the nucleosynthesis and mixing processes experienced by the star during the previous evolutionary phases.
Before entering the PN stage, stars of masses 0.8$\leq$M/M$_{\odot}\leq$8 experience H- and He-shell burning phases while climbing the AGB \citep{sh65,Iben1975,Iben1976}.
The low-mass threshold ($0.8~M_\odot$) is partly dependent
on the description of mass-loss adopted, as a more efficient mass-loss during the red giant branch (RGB) and the phases following the
core helium burning favors a rapid loss of the external mantle, which might prevent the star from experiencing the thermal pulses. The high-mass threshold ($8~M_\odot$) is sensitive to the assumption of core overshoot during the main sequence since the core mass at the beginning of the AGB is correlated with the amount of extra-mixing assumed during the core H-burning phase.
The high-mass threshold is valid in this context for the solar and slightly sub-solar metallicity, and it was determined on the basis of the stellar models used in the present investigation, that adopt a moderate overshoot from the external border of the convective core during the main sequence. If we do not take into account extra-mixing, this high-mass threshold would shift to $\sim 10~M_{\odot}$. The high-mass threshold of AGB evolution also depends on metallicity. Since a star develops a more massive core during the main sequence the lower the metallicity,
the minimum mass for carbon ignition is $8~M_{\odot}$ at solar metallicity, but would be $7.5~M_{\odot}$ at $Z=3\times 10^{-4}$, and even lower at lower metallicities (see e.g. Dell'Agli et al. 2019).
The AGB evolution is characterized by the gradual expansion and cooling of the external regions of the star, which favor the loss of the entire envelope with high rates of mass-loss, and the injection in the interstellar medium of gas reprocessed by internal nucleosynthesis \citep[see][for an exhaustive review]{Herwig2005}.
The surface chemistry during the AGB phase varies due to two main physical processes, whose relative importance depends on the mass of the progenitor.
Stars with M$<$4M$_{\odot}$ experience repeated episodes of TDU. These are deep inward penetrations of the surface convection during which the innermost layers of the stellar envelope reach triple-$\alpha$ nucleosynthesis sites; such sites are greatly enriched in $^{12}$C, which is then rapidly transported to the stellar surface, owing to the high efficiency of the convective currents \citep[e.g.][]{iben83,busso99}.
Repeated TDU events can lead the carbon-to-oxygen number ratio to exceed unity (C/O$>$1), and the AGB star becomes a carbon star. Stars with M$\geq$4M$_{\odot}$ experience HBB, whose ignition occurs when the temperature of the convective envelope reaches values higher than 30--40 MK, which allows for efficient H-burning via proton capture nucleosynthesis in the most internal regions of the envelope \citep{renzini81,blocker91,sackmann91}.
This process has the main result of converting carbon into nitrogen in the surface regions, thus preventing the formation of carbon stars and enhancing nitrogen abundances.
The latest generation of AGB models \citep[see][for a summary review]{karakas14} are the best to describe the detailed evolution of the chemical variation at the stellar surface. By comparing the chemical pattern measured in the PNe with the chemical abundances predicted for the final stage of the AGB evolution for stars with different masses and metallicities, it is possible to characterize the individual sources in terms of their epoch of formation and initial chemistry \citep{VSD15,VSD16,VSD17}.
For model comparison we use the ATON models \citep{ventura98} at solar \citep[Z=0.014;][]{ventura18} and sub-solar \citep[Z=0.008, 0.004;][]{ventura13} metallicities.
These models are presently the only ones where the full integration of the equations of stellar structure and the AGB evolution of the star are self-consistently coupled with the dust formation process in the wind. The models used here have been
extensively compared with those from other research groups, particularly at solar \citep{ventura18}
and sub-solar \citep{ventura15b, ventura16b} metallicities.
These comparisons outlined significant dissimilarities in the $M>3~M_{\odot}$ domain regarding the evolution of the main physical parameters and the modification of the surface chemistry of stars experiencing HBB, which are related to differences in the description of turbulent convection, particularly in the inner regions of the convective envelope. On the other hand, consistency was found in the low-mass domain, where stars do not experience HBB. All sources analyzed in this paper descend from $M\leq 2.5~M_{\odot}$ progenitors; therefore the conclusions
drawn in the present context are substantially independent of the stellar models used.
In Figure~\ref{fig:CNCO} we examine the PN carbon abundances in the context of stellar and nebular evolution. In the left panel, we compare data and models in the (C/H) -- (N/H) plane, and in the right panel in the (C/H) -- (O/H) plane. The plotted data, in black symbols, refer to the
ionic abundances derived in this paper. The errorbars shown in the figure give the typical carbon abundance uncertainty, in dex, if we assume an uncertainty of 5 or 10$\%$ in the electron temperatures. Note that uncertainties in electron densities, line fluxes, and reddening are too small to make a difference in the plotted bar, as described in the Analysis section. Also note that these uncertainties stem from an initial guess and are narrowly distributed, and can thus be used for all the plotted points. The model surface chemistries for different initial masses and metallicities are indicated with colored symbols. Models with the same initial metallicity are connected with lines, and the initial stellar masses are also indicated in the figure.
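For reproducibility, the dominant sensitivity of the ionic carbon abundances to the adopted electron temperature can be sketched with the {\sc pyneb} package used in this work. The snippet below is only illustrative: the line ratio and diagnostics are placeholders, not values taken from our tables, and the line-selection syntax may need adapting to the installed atomic data.
\begin{verbatim}
# Minimal sketch (illustrative inputs): propagate a 5%/10% uncertainty
# in T_e into the C^2+/H^+ ionic abundance with pyneb.
import numpy as np
import pyneb as pn

c3 = pn.Atom('C', 3)            # C III] 1907+09 emitter (i.e. C^2+)
tem, den = 1.0e4, 1.0e4         # placeholder T_e [K] and N_e [cm^-3]
ratio = 100.0                   # placeholder I(C III])/I(Hbeta) x 100

for f in (1.0, 1.05, 1.10):     # nominal, +5%, +10% in T_e
    ab = c3.getIonAbundance(int_ratio=ratio, tem=f * tem, den=den,
                            to_eval='L(1907) + L(1909)', Hbeta=100.0)
    print('T_e = %7.0f K -> log(C2+/H+) + 12 = %5.2f'
          % (f * tem, 12 + np.log10(ab)))
\end{verbatim}
At $T_{\rm e}\sim10^4$~K a 10$\%$ temperature increase lowers the derived abundance by a few tenths of a dex, consistent with the size of the errorbars plotted in the figure.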
The ATON models and the chemical loci in Fig.~\ref{fig:CNCO} do not include the effects of deep mixing during the RGB evolution, which is effective for M$<2$M$_{\odot}$. For these models, the N/H in Fig.~\ref{fig:CNCO} is the lower limit of the theoretical expectations for stars exposed to deeper mixing while ascending the RGB. The extra-mixing affects carbon abundances as well in stars with M$\sim 1$M$_{\odot}$. In these stars -- experiencing only a few, if any, TDU events -- the extra-mixing has the effect of lowering C/H, thus the model carbon abundances should be interpreted as upper limits.
In the following discussion, we match the data points and models assuming that binary interaction with stellar companions does not affect the evolution and nucleosynthesis of the primary AGB star. This means we assume that the progenitor stars do not evolve through the common-envelope (CE) stage, i.e., they are either single stars or members of wide binary systems. We found that the PNe with carbon abundances can be sorted into two major groups.
\subsection{PNe with low carbon abundances}
Planetary nebulae PN~G025.3-04.6 and PN~G042.9-06.9 (open squares in Fig.~\ref{fig:CNCO}), PN~G053.3+24.0 (crossed square), and PN~G295.3-09.3 (filled square) are characterized by similar, low carbon abundances (log(C/H)+12$\sim$7.9) and similar C/O ratios below unity. The carbon abundance of PN~G053.3+24.0 is uncertain. Given their carbon abundances, it is unlikely that any of these PNe had progenitors with mass in the 1.5\,M$_{\odot}\leq$\,M\,$\leq3$\,M$_{\odot}$ range, since, if that were the case, they would exhibit significantly higher carbon abundances (log(C/H)+12$>$8.5). Furthermore, their nitrogen abundances (log(N/H)+12$<$8) seem to indicate that their progenitors did not go through the HBB process, seemingly excluding high-mass ($>$3\,M$_{\odot}$) progenitors. This scenario is reinforced by their low He abundances (log(He/H)+12$<$11.2, see Table \ref{tab:Parameters}), which seem to indicate that their progenitors did not experience a second dredge-up.
From the comparison with models (Fig.~\ref{fig:CNCO}, left panel) the carbon abundances of these PNe would be compatible with those expected in the external layers of $\sim$1\,M$_{\odot}$ AGB stars with initial half-solar metallicity (red triangles), but the nitrogen abundances measured for these PNe are higher than the final surface abundances of AGB stars with such metallicity and mass.
The effect of extra-mixing on the RGB is included in the left plot of Figure~\ref{fig:CNCO}. To this end, we estimate the differences in the final yields when extra-mixing is included, for initial masses of 1 and 1.5\,M$_{\odot}$, following the prescriptions of \citet{Lagarde19}, and references therein. The yellow area of the figure indicates the final surface chemical abundances for progenitors in the 1-1.5 M$_{\odot}$ mass range and half-solar metallicity that experienced extra-mixing on the RGB. If we assume that the progenitor mass of the observed low-carbon PNe is in the $\sim1.1-1.2$\,M$_{\odot}$ range, the extra-mixing would make both the carbon and nitrogen abundances compatible with the observations.
Most of the PNe in this group have oxygen abundances compatible with roughly half-solar (Z=0.008; open squares in Fig.~\ref{fig:CNCO}) metallicity models; the only exception is PN~G295.3-09.3 (filled square), which has a lower O abundance. This PN could still derive from an evolutionary path similar to that of the other PNe in this group, but with a lower metallicity progenitor. The right panel of Fig.~\ref{fig:CNCO} clearly shows the effect of the initial metallicity on the O/H abundances and is used to resolve the degeneracy between initial composition and CNO evolutionary effects.
Three of the low-carbon PNe (i.e., all except PN~G053.3$+$24.0) have been observed with Spitzer/IRS, thus their dust type is known. All three are ORD (oxygen-rich dust) PNe with amorphous dust -- PN~G042.9$-$06.9 displays additional weak crystalline silicate features -- in agreement with the gas-phase carbon abundances and the observed nebular C/O ratios below unity, thus reinforcing the connection between the gas-phase and dust-phase chemistry and supporting our initial-mass and metallicity interpretation.
PNe in the low-carbon group are characterized by morphologies that depart from symmetry, such as bipolar and point-symmetric. Observational analysis of large PN samples associates asymmetric morphology with high nitrogen abundances and low Galactic latitude, both hinting at younger, more massive progenitors \citep[e.g.,][]{MVS00}. Interestingly, the PNe with low carbon abundance seem to be located away from the Galactic plane (see Table \ref{tab:Parameters}), based on their distances and uncertainties calibrated with Gaia parallaxes \citep{SBL20}, which is incompatible with high-mass progenitors.
From the viewpoint of modeling, bipolar PN morphology has been linked to the presence of binary (sub)stellar companions \citep[e.g.,][]{JO17,DE20}, or to magnetic fields \citep[e.g.,][]{GS97},
although it has been shown that strong deviations from spherical symmetry, via the action of magnetic fields, generally require a binary companion \citep[e.g.,][]{NO07,GS14}. It appears that the observations for the low-carbon PNe agree with a low-mass progenitor, possibly with a sub-stellar companion.
\subsection{PNe with enhanced carbon abundances}
Planetary nebulae PN~G006.1+08.3, PN~G278.6-06.7, PN~G281.0-05.6 (open circles in Fig.~\ref{fig:CNCO}), PN~G107.4-02.6, PN~G275.3-04.7, and PN~G286.0-06.5 (filled circles), and
PN~G068.7+14.8 (not in the Figure for lack of ancillary abundances)
have enhanced C abundances, or lower limits thereof (log(C/H)+12$\geq$8.5), which are compatible with several TDU episodes in the progenitor star. Following this interpretation, these PNe should have progenitors with masses in the 1.5$-$3.0\,M$_{\odot}$ range, which were formed around 0.25-1.5\,Gyr ago.
To verify this interpretation we also consider the N and O abundances available in the literature (see Table \ref{tab:Parameters}). Two nebulae, PN~G006.1+08.3 and PN~G281.0-05.6, have nitrogen and oxygen abundances compatible with masses in the 1.5$-$3.0\,M$_{\odot}$ range, formed with a half-solar metallicity -- or slightly higher. A progenitor of 1.5$-$3.0\,M$_{\odot}$ and metallicity between half-solar and solar is also compatible with PN~G278.6-06.7 (note that above 1.5 M$_{\odot}$ the effects of extra-mixing are marginal, Lagarde et al. 2019). For this PN, a progenitor of higher mass ($\sim$3.5M$_{\odot}$) and lower metallicity would also comply with the observed C, N, and O abundances, although its argon abundance rules it out: the expected argon abundance at Z=0.004 is 12+log(Ar/H)=5.65 while the measured abundance is definitively higher \citep[measured log(Ar/H)+12$\sim$6;][]{GHG14}. The N and O abundances of PN~G286.0-06.5 and PN~G275.3-04.7 are compatible with Z=0.004 models of masses in the range 1.5$-$3.0\,M$_{\odot}$. The same type of progenitor is plausible for PN~G107.4-02.6, even if in this case the N abundance is not available from the literature.
Three of these PNe have round or elliptical morphology, two of them have uncertain morphology, based only on the 2-D UV spectrograms (no resolved optical imaging available), and one (PN~G286.0-06.5) is an elongated bipolar.
All PNe in the high-carbon group have CRD, either aromatic or aliphatic (two objects display both dust types), consistent with the gas-phase abundances and showing complete agreement between the dust-phase and gas-phase carbon.
\section{Discussion}
We determined that several PNe in our sample have C/O$<$1. We can infer that their progenitors did not go through the carbon star phase by comparing their chemistry to the stellar AGB models. PN~G025.3--04.6, PN~G042.9--06.9, and PN~G295.3--09.3
seem to have evolved from $\sim 1.1-1.2$M$_{\odot}$ progenitor stars, whose surface C and N abundances are the results of a few TDU events and of deep mixing during the RGB, respectively (yellow area in the left panel of Fig.~\ref{fig:CNCO}). All low-carbon PNe have faint spectra, and they are far from the Galactic plane. The latter observable is consistent with the scenario that the progenitors of these low-carbon PNe are low-mass AGB stars in binary systems, with a sub-stellar body as a companion. \citet{DE20} showed that all fourteen C/O$<$1 AGB stars observed with ALMA in their ATOMIUM program are aspherical, suggesting that binary interaction may dominate the evolution of low-mass AGB stars with low C/O.
It is worth noting that the carbon abundance of PN~G053.3+24.0 is uncertain, thus its classification within this evolutionary group is also uncertain. Note also that its optical morphology \citep{SSV16} is rather different from that of the other PNe in this group.
The group of PNe that have enhanced carbon abundances could have progenitors with masses in the $\sim 1.5-2.5$ M$_{\odot}$ range, which were formed around 0.25-1.5\,Gyr ago. Unfortunately, all carbon abundances of this group of PNe are either lower limits or are uncertain. Nonetheless, their status of carbon-rich PNe is supported by their Spitzer/IRS CRD dust types.
In Figure 6 we plot log(C/O) vs. log(O/H)+12 for the compact Galactic PNe studied here and elsewhere, together with the samples of Magellanic Cloud PNe. In this plot we included all compact PNe with UV-based carbon abundances published in the literature or studied in this paper. We indicate the PN population by the symbol shape: triangles for the SMC, squares for the LMC, circles for compact Galactic PNe (this study; O, N, and C abundances from Tables 2 and 9); crosses, plus signs, and asterisks for compact Galactic PNe from \citet{2015ApJ...803...23D}, \citet{2000ApJ...531..928H}, and \citet{KB94}, respectively. We use the symbol color to indicate the dust status of the PNe from Spitzer/IRS spectroscopy: cyan symbols: featureless dust spectra; red symbols: carbon-rich dust (CRD) PNe; blue symbols: oxygen-rich dust (ORD) PNe; black symbols: no IRS dust information. Since we selected the Galactic PNe, both from this study and from the literature, based on their apparent sizes ($\theta<$5\arcsec), their diameters are smaller than the Spitzer/IRS aperture, thus the comparison with the Magellanic Cloud and compact Galactic PNe of the other samples is meaningful, as all spectra include the flux from the whole nebular surface.
We found complete segregation of CRD PNe in the C/O$>$1 quadrant, and of ORD PNe in the C/O$<$1 quadrant. This occurs independently of stellar or galactic metallicity. Our carbon analysis indicates that the sample studied here has predominantly super-solar carbon abundance, with median carbon $\langle{\rm C/H}\rangle_{\rm med}\pm\sigma=(6.31\pm3.52)\times10^{-4}$.
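The quoted median can be reproduced directly from the abundances in Table 9; a minimal sketch (the dispersion depends on how the lower limits are treated, so only the median is checked here):
\begin{verbatim}
# Minimal sketch: median gas-phase C/H of our sample from the
# log(C/H)+12 values of Table 9 (lower limits included as-is).
import numpy as np

log_CH = np.array([8.58, 7.88, 7.99, 7.84, 8.88, 8.90,
                   8.85, 8.80, 8.92, 8.90, 7.95])
CH = 10.0**(log_CH - 12.0)
print('median C/H = %.2e' % np.median(CH))   # ~6.31e-4
# super-solar: solar C/H ~ 2.7e-4 (Asplund et al.)
\end{verbatim}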
We also plot in the figure the fit by \citet{Nicholls} derived by interpolating stellar abundances (see references cited therein). The nebular enrichment of carbon is clearly seen for CRD PNe, independent of the studied population, a confirmation of the role of PNe in carbon enrichment \citep[e.g.][]{2018MNRAS.473..241H}, with the added value of the correlation with dust composition.
It is worth noting that the correspondence between dust and gas abundances -- i.e., all CRD PNe have C/O$>$1, and all ORD PNe have C/O$<$1 -- is stronger in our study than in the work by \citet{DIR14}, who found a few exceptions to this correspondence, likely due to the mismatch between the {\it Spitzer} and other spectral apertures, and the inclusion of extended Galactic PNe in their sample.
\section{Summary}
We selected 75 compact, or moderately extended, Galactic PNe to be observed with {\it HST}/STIS through the G230L and G140L gratings to detect their UV emission lines for carbon abundance measurements. Only 30 of the targets have been observed in two ``snapshot'' programs, and we measured carbon abundances for 11 targets. With the support of ancillary data sets we found a striking correlation between gas-phase (this and other studies of UV-based carbon abundances in compact Galactic PNe) and dust-phase (Spitzer/IRS) carbon abundances, i.e., {\it all} carbon-rich dust (CRD) PNe studied here have C/O$>$1, and all ORD PNe have C/O$<$1. By studying these correlations together with those found in Magellanic Cloud PNe we found that this one-to-one correlation is independent of the initial progenitor's metallicity.
We compared the loci of the C, N, O abundance patterns on different diagnostic planes for our PN sample with the footprints of the final yields from stellar evolution models. We found that the progenitors of most carbon-poor PNe are likely in the M/M$_{\odot}\le 1.2$ range, with slightly sub-solar metallicity. Identifying such old progenitors is
useful to calibrate a radial metallicity gradient for old Galactic probes \citep{SH18}. It is worth noting that, while Gaia distances from parallaxes are not available for all the CSs of the compact Galactic PNe studied here, statistical distances based on Gaia DR2 parallaxes indicate that PNe in this group are generally far from the Galactic plane, an additional indication of a very old Galactic population.
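The heights above the Galactic plane in Table \ref{tab:Parameters} follow from the statistical distances and the Galactic latitudes encoded in the PN~G designations; a minimal sketch, where the distance value is illustrative and the Sun's own small offset from the plane is neglected:
\begin{verbatim}
# Minimal sketch (assumed distance): |z| = d * sin|b| for a PN at
# Galactic latitude b and heliocentric distance d.
import numpy as np

def z_height(d_kpc, b_deg):
    """Height above the Galactic plane in kpc."""
    return d_kpc * np.sin(np.radians(abs(b_deg)))

# e.g. PN G295.3-09.3 (b = -9.3 deg); the 10 kpc distance is a placeholder
print('|z| = %.2f kpc' % z_height(10.0, -9.3))   # ~1.6 kpc, cf. Table 2
\end{verbatim}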
We also found that the carbon-enhanced PNe in our sample are the likely progeny of carbon stars in the $1.5\le$ M/M$_{\odot}\le 3$ range.
This work presents a limited but important sample of carbon abundances from UV lines in compact Galactic PNe, it augments considerably the number of Galactic PNe whose carbon abundances have been measured based on {\it HST} spectra, and it greatly expands the sample for which gas-phase and dust-phase carbon can be simultaneously available in compact PNe.
\acknowledgments
This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5--26555. These observations are associated with GO programs 15211 and 16013. We thank an anonymous Referee for important comments on this paper. We acknowledge the usage of the pyneb package, and thank Christophe Morisset for his help. DAGH acknowledges support from the ACIISI, Gobierno de Canarias and the European Regional Development Fund (ERDF) under grant with reference PROID2020010051 as well as from the State Research Agency (AEI) of the Spanish Ministry of Science and Innovation (MICINN) under grant PID2020-115758GB-I00.
\vspace{5mm}
\facilities{HST(STIS)}
\startlongtable
\begin{deluxetable*}{lllllrD}
\tablecaption{Observing Log\label{tab:ObsLog}}
\tablewidth{0pt}
\tablehead{
\colhead{PN G} & \colhead{Name} & \colhead{Date} & \colhead{Obs ID\tablenotemark{a}} & \colhead{Grating} & \colhead{Duration\tablenotemark{b}} & \multicolumn2c{Aperture}\\
& & & & & \colhead{(s)} & \multicolumn2c{(arcsec)}
}
\decimals
\startdata
003.9--14.9 & Hb 7 & 2019 May 22 & odk350kqq & G140L & 218 & 2.0 \\
& & & odk350krq & G230L & 185 & {} \\
004.3--02.6 & H 1-53 & 2019 May 23 & odk370tzq & G140L & 218 & 0.25 \\
& & & odk370u0q & G230L & 185 & {} \\
006.1+08.3 & M 1-20 & 2018 Jun 06 & odk362khq & G140L & 0 & . \nodata \\
& & & odk362kiq & G230L & 1205 & 2.0 \\
011.1+07.0 & Sa 2-237 & 2019 Mar 09 & odk354yeq & G140L & 37 & 0.25 \\
& & & odk354yfq & G230L & 30 & {} \\
025.3--04.6 & K 4-8 & 2018 Aug 28 & odk304haq & G140L & 236 & 1.00 \\
& & & odk304hbq & G230L & 208 & {} \\
032.5--03.2 & K 3-20 & 2018 Sep 10 & odk357d9q & G140L & 1200 & 0.25 \\
& & & odk357daq & G230L & 1200 & {} \\
038.4--03.3 & K 3-20 & 2018 Sep 10 & odk358buq & G140L & 1200 & 0.50 \\
& & & odk358bvq & G230L & 1200 & {} \\
038.7--03.3 & M 1-69 & 2018 May 28 & odk371d1q & G140L & 152 & 0.25 \\
& & & odk371d2q & G230L & 155 & {} \\
042.9--06.9 & NGC 6807 & 2018 Nov 05 & odk306gaq & G140L & 20 & 1.00 \\
& & & odk306gbq & G230L & 17 & {} \\
048.5+04.2 & K 4-16 & 2018 Nov 05 & odk308ieq & G140L & 1200 & 0.25 \\
& & & odk308ifq & G230L & 1200 & {} \\
053.3+24.0 & Vy 1-2 & 2019 Mar 30 & odk310xcq & G140L & 206 & 3.00 \\
& & & odk310xdq & G230L & 169 & {} \\
068.7+14.8 & Sp 4-1 & 2018 Aug 06 & odk313a3q & G140L & 155 & 1.25 \\
& & & odk313a4q & G230L & 180 & {} \\
095.2+00.7 & K 3-62 & 2018 Jul 22 & odk364pdq & G140L & 1200 & 0.25 \\
& & & odk364peq & G230L & 1200 & {} \\
097.6--02.4 & M 2-50 & 2019 Jul 11 & odk315o2q & G140L & 1200 & 0.275 \\
& & & odk315o3q & G230L & 1200 & {} \\
107.4--02.6 & K 3-87 & 2018 Mar 06 & odk318n3q & G140L & 1200 & 0.275 \\
& & & odk318n4q & G230L & 1200 & {} \\
232.8--04.7 & M 1-11 & 2018 Jun 02 & odk367bwq & G140L & 1200 & 1.50 \\
& & & odk367bxq & G230L & 1200 & {} \\
264.4--12.7 & He 2-5 & 2018 Aug 20 & odk321egq & G140L & 175 & 2.75 \\
& & & odk321ehq & G230L & 149 & {} \\
275.3--04.7 & He 2-21 & 2020 Jul 28 & oe7322hvq & G140L & 1200 & 2.50 \\
& & & oe7322hwq & G230L & 1200 & {} \\
278.6--06.7& He 2-26 & 2018 Jan 07 & odk323ruq & G140L & 175 & 2.50 \\
& & & odk323rvq & G230L & 154 & {} \\
281.0--05.6 & IC 2501 & 2018 Mar 03 & odk369fcq & G140L & 30 & 5.00 \\
& & & odk369fdq & G230L & 26 & {} \\
285.4+01.5 & Pe 1-1 & 2019 Jun 10 & odk324c6q & G140L & 1200 & 0.25 \\
& & & odk324c7q & G230L & 1200 & {} \\
285.4+02.2 & Pe 2-7 & 2019 Feb 02 & odk325ngq & G140L & 1200 & 0.25 \\
& & & odk325nhq & G230L & 1200 & {} \\
286.0--06.5 & He 2-41 & 2019 Feb 02 & odk326ydq & G140L & 623 & 2.00 \\
& & & odk326yeq & G230L & 567 & {} \\
& & 2019 Oct 31 & oe7326rmq & G140L & 623 & {} \\
& & & oe7326rnq & G230L & 567 & {} \\
295.3--09.3 & He 2-62 & 2018 Jan 05 & odk329d3q & G140L & 87 & 1.00 \\
& & & odk329d4q & G230L & 76 & {} \\
309.0+00.8 & He 2-96 & 2018 Jun 15 & odk331e1q & G140L & 1200 & 0.25 \\
& & & odk331e2q & G230L & 1200 & {} \\
336.9+08.3 & St Wr 4-10& 2019 Feb 27 & odk337pdq & G140L & 0 & . \nodata \\
 & & & odk337peq & G230L & 0 & {} \\
340.9--04.6 & Sa 1-5 & 2020 Aug 27 & oe7338stq & G140L & 1200 & 1.25 \\
 & & & oe7338suq & G230L & 1200 & {} \\
343.4+11.9 & H 1-1& 2020 Sep 20 & oe7340fxq & G140L & 0 & . \nodata \\
 & & & oe7340fyq & G230L & 0 & {} \\
351.3+07.6 & H 1-4 & 2019 May 05 & odk345h5q & G140L & 842 & 1.00 \\
& & & odk345h6q & G230L & 765 & {} \\
355.2--02.5 & H 1-29 & 2019 Jun 17 & odk374feq & G140L & 496 & 1.00 \\
& & & odk374ffq & G230L & 512 & {} \\
\enddata
\tablenotetext{a}{Observation identifiers beginning with ``odk3'' correspond to {\it HST} program GO-15211, and those beginning with ``oe73'' correspond to GO-16013.}
\tablenotetext{b}{A duration of zero indicates an on-board failure of the exposure.}
\end{deluxetable*}
\clearpage
\begin{deluxetable}{ l l r r r l l }
\tablewidth{0pt}
\tabletypesize{\footnotesize{}}
\tablecolumns{5}
\tablewidth{0pt}
\tablecaption{PN parameters\label{tab:Parameters}}
\tablehead {\colhead{PN~G}& \colhead{$|z_{\rm kpc}|$\tablenotemark{a}}&
\colhead{${\rm log}(He/H)+12$\tablenotemark{b}}& \colhead{${\rm log}(N/H)+12$\tablenotemark{b}}& \colhead{${\rm log}(O/H)+12$\tablenotemark{b}}& \colhead{Morph.}& \colhead{Dust Type\tablenotemark{c}} \\}
\startdata
006.1+08.3 & 1.09&
11.02$\pm$0.05& 7.78$\pm$0.08& 8.56$\pm$0.08& E\tablenotemark{d}& CRD; aromatic/aliphatic\\
025.3--04.6 & 0.84&
11.01$\pm$0.03& 7.79$\pm$0.06& 8.59$\pm$0.09& P\tablenotemark{e}& ORD; amorphous\\
042.9--06.9\tablenotemark{f}& 0.84&
10.99$\pm$0.03& 8.00$\pm$0.24& 8.57$\pm$0.08&B\tablenotemark{e}& ORD; crystalline/amorphous\\
053.3+24.0\tablenotemark{f}& 3.16&
11.23$\pm$0.05& 7.90$\pm$0.05& 8.46$\pm$0.05& B\tablenotemark{e}& N/A\\
068.7+14.8& 3.24&
$\dots$& $\dots$& $\dots$& R\tablenotemark{e}& CRD; aromatic\\
107.4--02.6& 0.56&
11.01$\pm$0.05& $\dots$& 8.29$\pm$0.19& E\tablenotemark{e}& CRD; aliphatic\\
275.3--04.7& 0.37&
11.08$\pm$0.04& 7.64$\pm$0.09& 8.44$\pm$0.10& E\tablenotemark{e}& CRD; aliphatic\\
278.6--06.7& 0.45&
11.00$\pm$0.04& 8.00$\pm$0.13& 8.52$\pm$0.11& E\tablenotemark{e}& CRD; aliphatic\\
281.0--05.6& 0.61&
$\dots$& 8.16$\pm$0.00& 8.63$\pm$0.10& E\tablenotemark{e}& CRD; aromatic/aliphatic\\
286.0--06.5& 0.76&
10.99$\pm$0.03& 7.66$\pm$0.08& 8.32$\pm$0.07& B\tablenotemark{e}& CRD; aliphatic\\
295.3--09.3& 1.63&
11.01$\pm$0.03& 7.87$\pm$0.12& 8.15$\pm$0.07& B\tablenotemark{e}& ORD; amorphous\\
351.3+07.6& 2.90&
$\dots$& $\dots$& $\dots$& BC\tablenotemark{e}& ORD; amorphous\\
\enddata
\tablenotetext{a}{Calculated using the Gaia DR2-calibrated distance scale \citep{SBL20}, where d$_{\rm scale}$/d$_{\rm par}$=0.95$\pm$0.25. The distance of PN~G351.3+07.6 was derived directly from the Gaia DR2 parallax.}
\tablenotetext{b}{The He, N, and O abundances used in this study are from \citet{GHG14} except for PN~G042.9--06.9 and PN~G275.3-04.7 \citep{PMS}, and PN~G053.3+24.0 \citep{S06}.}
\tablenotetext{c}{Dust types from \citet{SGG12}, except for PN~G006.1+08.3 \citep{PC09,GH10} and PN~G281.0--05.6 \citep{OT14}.}
\tablenotetext{d}{Uncertain morphology, based only on UV slitless spectroscopy (this study).}
\tablenotetext{e}{Morphology derived from WFC3 imaging through a selection of filters \citep{SSV16}. }
\tablenotetext{f}{May be a halo PN.}
\end{deluxetable}
\pagebreak
\begin{deluxetable*}{lc DD h DD h DD}
\tablecaption{Relative Emission Line Fluxes\label{tab:Flux1}}
\tablewidth{0pt}
\tabletypesize{\footnotesize{}}
\tablehead{
\colhead{Wave} & \colhead{ID} &
\multicolumn4c{006.1+08.3} & \colhead{} &
\multicolumn4c{025.3--04.6} & \colhead{} &
\multicolumn4c{042.9--06.9}\\
\cline{3-6} \cline{8-11} \cline{13-16}
\colhead{(\AA)} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$}
}
\decimals
\startdata
1907+09 & \ion{C}{3}] &
5.08\pm0.46 & 173.92\pm15.75 & &
10.03\pm0.96 & 45.65\pm4.39 & &
6.86\pm0.51 & 23.53\pm1.76 \\
$\sim$2810 & ? &
4.57\pm0.13 & 26.55\pm0.74 & &
. \nodata & . \nodata & &
. \nodata & . \nodata \\
\hline
& log F$_{H\beta}$ &
\multicolumn{4}{c}{$-11.93\pm0.01$} & &
\multicolumn{4}{c}{$-12.44\pm0.10$} & &
\multicolumn{4}{c}{$-11.41\pm0.01$} \\
& $c_{H\beta}$ &
\multicolumn{4}{c}{$1.17\pm0.10$} & &
\multicolumn{4}{c}{$0.50\pm0.10$} & &
\multicolumn{4}{c}{$0.41\pm0.10$} \\
\enddata
\end{deluxetable*}
\begin{deluxetable*}{lc DD h DD h DD}
\tablecaption{Relative Emission Line Fluxes --- Continued\label{tab:Flux2}}
\tablewidth{0pt}
\tabletypesize{\footnotesize{}}
\tablehead{
\colhead{Wave} & \colhead{ID} &
\multicolumn4c{053.3+24.0} & \colhead{} &
\multicolumn4c{068.7+14.8} & \colhead{} &
\multicolumn4c{107.4--02.6} \\
\cline{3-6} \cline{8-11} \cline{13-16}
\colhead{(\AA)} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} \\
}
\decimals
\startdata
1640 & \ion{He}{2} &
88.67\pm0.42 & 101.63\pm0.48 & &
. \nodata & . \nodata & &
. \nodata & . \nodata \\
1907+09 & \ion{C}{3}] &
15.39\pm0.72 & 17.90\pm0.84 & &
197.86\pm0.67 & 615.87\pm2.08 & &
9.10\pm0.34 & 396.6\pm14.86 \\
2325--29 & \ion{C}{2}] &
20.74\pm0.52 & 24.11\pm0.60 & &
38.24\pm0.70 & 118.48\pm2.15 &&
. \nodata & . \nodata \\
2470 & [\ion{O}{2}] &
3.11\pm0.31 & 3.49\pm0.34 & &
. \nodata & . \nodata & &
. \nodata & . \nodata \\[0.1cm]
3023 & \ion{O}{3} &
4.89\pm0.52 & 5.20\pm0.55 & &
. \nodata & . \nodata & &
. \nodata & . \nodata \\
3043+47 & \ion{O}{3} &
2.09\pm0.55 & 2.22\pm0.59 & &
. \nodata & . \nodata & &
. \nodata & . \nodata \\
\hline
& log F$_{H\beta}$ &
\multicolumn{4}{c}{$-11.51\pm0.01$} & &
\multicolumn{4}{c}{$-11.95\pm0.10$} &&
\multicolumn{4}{c}{$-13.21\pm0.20$} \\
& $c_{H\beta}$ &
\multicolumn{4}{c}{$0.05\pm0.10$} & &
\multicolumn{4}{c}{$0.38\pm0.10$} &&
\multicolumn{4}{c}{$1.25\pm0.10$} \\
\enddata
\end{deluxetable*}
\pagebreak
\begin{deluxetable*}{lc DD h DD h DD}
\tablecaption{Relative Emission Line Fluxes\label{tab:Flux3}}
\tablewidth{0pt}
\tabletypesize{\footnotesize{}}
\tablehead{
\colhead{Wave} & \colhead{ID} &
\multicolumn4c{275.3--04.7} & \colhead{} &
\multicolumn4c{278.6--06.7} & \colhead{} &
\multicolumn4c{281.0--05.6} \\
\cline{3-6} \cline{8-11} \cline{13-16}
\colhead{(\AA)} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} \\
}
\decimals
\startdata
1548+50 & \ion{C}{4} &
134.6\pm0.44 & 1206.5\pm3.94 & &
33.89\pm0.28 & 141.51\pm1.19 & &
. \nodata & . \nodata \\
1640 & \ion{He}{2} &
48.97\pm0.42 & 396.5\pm3.40 & &
18.70\pm0.22 & 73.03\pm0.87 & &
. \nodata & . \nodata \\
1907+09 & \ion{C}{3}] &
120.1\pm1.28 & 1139.9\pm12.15 & &
137.67\pm0.32 & 621.27\pm1.46 & &
54.73\pm0.47 & 271.20\pm2.32 \\
2325 & \ion{C}{2}] &
. \nodata & . \nodata & &
16.71\pm3.31 & 74.97\pm1.48 & &
. \nodata & . \nodata \\
2424 & [\ion{Ne}{4}] &
11.81\pm0.97 & 88.09\pm7.23 & &
. \nodata & . \nodata & &
. \nodata & . \nodata \\
2836 & \ion{O}{3} &
. \nodata & . \nodata & &
3.53\pm0.18 & 7.32\pm0.37 & &
. \nodata & . \nodata \\
3133 & \ion{O}{3} &
. \nodata & . \nodata & &
6.39\pm0.53 & 11.31\pm0.94 & &
. \nodata & . \nodata \\
\hline
& log F$_{H\beta}$ &
\multicolumn{4}{c}{$-12.15\pm0.10$} & &
\multicolumn{4}{c}{$-11.55\pm0.01$} & &
\multicolumn{4}{c}{$-10.67\pm0.01$} \\
& $c_{H\beta}$ &
\multicolumn{4}{c}{$0.8034\pm0.10$} & &
\multicolumn{4}{c}{$0.50\pm0.10$} & &
\multicolumn{4}{c}{$0.53\pm0.05$} \\
\enddata
\end{deluxetable*}
\begin{deluxetable*}{lc DD h DD h DD}
\tablecaption{Relative Emission Line Fluxes\label{tab:Flux4}}
\tablewidth{0pt}
\tabletypesize{\footnotesize{}}
\tablehead{
\colhead{Wave} & \colhead{ID} &
\multicolumn4c{286.0--06.5} & \colhead{} &
\multicolumn4c{295.3--09.3} & \colhead{} &
\multicolumn4c{351.3+07.6} \\
\cline{3-6} \cline{8-11} \cline{13-16}
\colhead{(\AA)} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$} & &
\multicolumn2c{F$_{\lambda}$} & \multicolumn2c{I$_{\lambda}$}
}
\decimals
\startdata
1640 & \ion{He}{2} &
. \nodata & . \nodata & &
. \nodata & . \nodata & &
8.60\pm0.63 & 56.10\pm4.09 \\
1658--66 & [\ion{O}{3}] &
2.00\pm0.12 & 13.35\pm0.79 & &
. \nodata & . \nodata & &
. \nodata & . \nodata \\
1907+09 & \ion{C}{3}] &
76.02\pm0.12 & 620.04\pm1.43 & &
33.53\pm0.45 & 127.40\pm11.72 & &
. \nodata & . \nodata \\[0.1cm]
2325--29 & \ion{C}{2}] &
10.49\pm0.18 & 84.81\pm1.48 & &
. \nodata & . \nodata & &
. \nodata & . \nodata \\
2470 & [\ion{O}{2}] &
2.91\pm0.18 & 14.15\pm0.85 & &
7.02\pm0.60 & 19.21\pm1.64 & &
. \nodata & . \nodata \\
2836 & \ion{O}{3} &
. \nodata & . \nodata & &
3.82\pm0.40 & 7.27\pm0.76 & &
. \nodata & . \nodata \\
\hline
& log F$_{H\beta}$ &
\multicolumn{4}{c}{$-11.90\pm0.10$} & &
\multicolumn{4}{c}{$-11.94\pm0.10$} & &
\multicolumn{4}{c}{$-12.35\pm0.10$} \\
& $c_{H\beta}$ &
\multicolumn{4}{c}{$0.70\pm0.10$} & &
\multicolumn{4}{c}{$0.44\pm0.10$} & &
\multicolumn{4}{c}{$0.69\pm0.10$} \\
\enddata
\end{deluxetable*}
\begin{deluxetable*}{lrllrrl}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\label{tab:Diag}
\tablecaption{Plasma diagnostics}
\tablehead {
\colhead{PN G}& \colhead{log(N$_{\rm e})$}& \colhead{Ion}& \colhead{Ref.}&\colhead{T$_{\rm e}$}& \colhead{Ion}& \colhead{Ref.} \\
\colhead{}& \colhead{[cm$^{-3}$]}& \colhead{}& \colhead{} & \colhead{[$10^3$ K]}& \colhead{}& \colhead{}\\
}
\startdata
006.1+08.3 & 4.00 & [\ion{S}{2}] & WL07 &9.86 & [\ion{O}{3}] & WL07 \\
025.3--04.6 & 4.06& [\ion{S}{2}] & GHG14 & 10.64 & [\ion{O}{3}] & GHG14 \\
042.9--06.9 & 5.00 & [\ion{S}{2}] & GHG14 & 10.27 & [\ion{O}{3}] & GHG14 \\
053.3+24.0 & 3.06 & [\ion{S}{2}] & WLB05 & 10.40 & [\ion{O}{3}] & WLB05 \\
068.7+14.8 & 3.27 & [\ion{O}{2}] & WLB05 & 11.24 & [\ion{O}{3}] & WLB05 \\
107.4--02.6 & 3.04 & [\ion{S}{2}] & K96 & 10.30 & [\ion{O}{3}] & K96 \\
275.3-04.7& 3.08& [\ion{O}{2}] & GHG14 & 10.11 & [\ion{N}{2}] & GHG14 \\
& $\dots$ & $\dots$ & $\dots$ & 12.98 & [\ion{O}{3}] & GHG14 \\
278.6--06.7 & 3.63 & [\ion{S}{2}] & SGG12 & 11.63 & [\ion{O}{3}] & \tablenotemark{a}\\
281.0--05.6 & 3.62 & [\ion{S}{2}] & \tablenotemark{a} & 9.87 & [\ion{O}{3}] & K86 \\*
286.0--06.5 & 3.35 & [\ion{S}{2}] & SGG12 & 11.12 & [\ion{O}{3}] & \tablenotemark{a}\\
295.3--09.3 & $>$4.00\tablenotemark{b} & [\ion{S}{2}] & \tablenotemark{a} & 11.87 & [\ion{O}{3}] & \tablenotemark{a} \\
351.3+07.6 & $\dots$ & $\dots$ & $\dots$ & 11.77& [\ion{O}{3}] & \tablenotemark{a} \\
\enddata
\tablerefs{GHG14: \citet{GHG14}; K86: \citet{K86}; K96: \citet{K96}; SK89: \citet{SK89}; SGG12: \citet{SGG12}; WLB05: \citet{WLB05}; WL07: \citet{WL07}}
\tablenotetext{a}{This study; diagnostics flux ratios are from \citet{AMO92}.}
\tablenotetext{b}{For the abundance analysis we assume log~N$_{\rm e} = 4.5$.}
\end{deluxetable*}
\begin{deluxetable}{ l ll}
\tablecolumns{3}
\tablewidth{0pt}
\tablecaption{References, atomic data}
\tablehead {
\colhead{Ion}& \colhead{A-values}& \colhead{Collisional excitation} \\
}
\startdata
C$^{+}$& \citet{Galavis98}& \citet{Blum92}\\
C$^{+2}$& \citet{Wiese96}& \citet{Berrington85}\\
C$^{+3}$& \citet{Wiese96}& \citet{Aggarwal2004}\\
O$^{+}$& \citet{Zeippen1982}& \citet{Kisielius2009}\\
Ne$^{3+}$& \citet{Godefroid1984}& \citet{Giles1981}\\
\enddata
\end{deluxetable}
\begin{deluxetable}{ l r r r r l}
\tabletypesize{\footnotesize}
\tablecolumns{5}
\tablewidth{0pt}
\label{tab:Carbon}
\tablecaption{Carbon abundances}
\tablehead {
\colhead{PN~G}& \colhead{log($C^+/H^+$)}& \colhead{log($C^{2+}/H^+$)} & \colhead{log($C^{3+}/H^+$)}&
\colhead{ICF(C)}& \colhead{log$(C/H)+12$} \\
}
\startdata
006.1+08.3 & $\dots$ & $-3.42$& $\dots$ & $\dots$& 8.58\tablenotemark{a}\\
025.3--04.6 & $\dots$ & $-4.14$ & $\dots$ & 1.04& 7.88\\
042.9--06.9 & $\dots$ & $-4.29$& $\dots$ & 1.90& 7.99\\
053.3+24.0 & $-4.44$ & $-4.48$ & $\dots$& $\dots$& 7.84\tablenotemark{b} \\
068.7+14.8 & $-3.96$ & $-3.19$& $\dots$& $\dots$& 8.88\\
107.4--02.6 & $\dots$ & $-3.10$& $\dots$ & $\dots$ & 8.90\tablenotemark{a}\\
275.3--04.7 & $\dots$& $-3.34$& $-3.67$& 1.04& 8.85\\
278.6--06.7 & $-4.24$& $-3.29$ & $-4.22$& $\dots$& 8.80 \\
281.0--05.6 & $\dots$ & $-3.12$& $\dots$& 1.08& 8.92\\
286.0--06.5 & $-4.07$ & $-3.15$ & $\dots$& $\dots$& 8.90\\
295.3--09.3 & $\dots$ & $-4.05$& $\dots$ & $\dots$& 7.95\tablenotemark{a}\\
\enddata
\tablenotetext{a}{The atomic abundance is a lower limit because we could not correct for unseen emission lines.}
\tablenotetext{b}{The atomic abundance is uncertain (see text).}
\end{deluxetable}
\begin{deluxetable}{ l r r r}
\tabletypesize{\footnotesize}
\tablecolumns{4}
\tablewidth{0pt}
\label{tab:OtherAb}
\tablecaption{Other abundances}
\tablehead {
\colhead{PN~G}& \colhead{log($He^{2+}/H^+$)}& \colhead{log($O^{+}/H^+$)} & \colhead{log($Ne^{3+}/H^+$)} \\
}
\startdata
053.3+24.0 & $-1.89$ & $-4.54$ & $\dots$ \\
275.3--04.7 & $\dots$& $\dots$& $-4.41$\\
278.6--06.7 & $-2.04$ & $\dots$ & $\dots$ \\
286.0--06.5 & $\dots$& $-4.00$ & $\dots$ \\
295.3--09.3 & $\dots$ & $-4.23$& $\dots$ \\
351.3+07.6 & $-2.15$ & $\dots$ & $\dots$ \\
\enddata
\end{deluxetable}
\begin{figure}[ht!]
\plotone{fig1.pdf}
\caption{False-color rendering of the spectrograms for PN~G053.3+24.0 in the Far-UV (upper) and Near-UV (middle), with identifications of the strongest emission lines (blue labels). The vertical extent corresponds to the boundaries of the extraction aperture. Note that significant NUV emission originates from the superposition of many weak, blended nebular emission lines. Also shown (lower) are the 1-D summed spectra in the Far-UV (blue curve) and Near-UV (orange curve).}
\end{figure}
\begin{figure}[ht!]
\plotone{fig2.pdf}
\caption{False-color rendering of the spectrograms, as in Fig.~1 but for PN~G275.3--04.7. The bright region shortward of $\sim1300$~\AA\ is from geocoronal L$\alpha$.}
\end{figure}
\begin{figure}[ht!]
\plotone{fig3.pdf}
\caption{False-color rendering of the spectrograms, as in Fig.~1 but for PN~G278.6--06.7.}
\end{figure}
\begin{figure}[ht!]
\plotone{fig4.pdf}
\caption{False-color rendering of the spectrograms, as in Fig.~1 but for PN~G286.0--06.5 }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.495\columnwidth]{CN_new.pdf}
\includegraphics[width=0.495\columnwidth]{CO_new.pdf}
\vskip-30pt
\caption{Left panel: log(N/H)+12 vs. log(C/H)+12; right panel: log(O/H)+12 vs. log(C/H)+12. In both panels, carbon abundances are from this paper (Table 9), and other abundances are from the literature (Table \ref{tab:Parameters} and references therein).
The different black symbols represent the abundance groups from our carbon analysis and interpretation, as given in the legend: Open squares are low-carbon PNe, which we interpret as descending from half-solar metallicity stars; filled squares are low-carbon PNe descending from low metallicity stars; the crossed square represents PN~G053.3+24.0, whose carbon abundance is uncertain.
The open circles represent enhanced carbon PNe descending from half-solar metallicity stars, whereas filled circles are for enhanced carbon PNe descending from low-metallicity stars (see text).
We also show in the panel the representative carbon abundance errorbars derived from a 5$\%$ and 10$\%$ uncertainty in the electron temperature, for reference.
In both panels, the symbols in color, connected by lines, represent the final abundances of AGB stars with initial metallicities Z=0.014 (blue squares), Z=0.008 (red triangles), and Z=0.004 (green pentagons), calculated from the models. The numbers indicate the initial mass of the model, in $M_{\odot}$.
In the left panel, the dashed red arrows point to the final C and N abundances of the 1\,M$_{\odot}$ (left arrow) and 1.5\,M$_{\odot}$ (right arrow) models, if deep mixing in RGB is taken into account. The yellow area in the left panel highlights the range of final C and N values of the ejecta from stars with mass 1\,M$_{\odot}< $M$ < 1.5\,$M$_{\odot}$, if deep mixing in the RGB is taken into account.}
\label{fig:CNCO}
\end{figure}
\clearpage
\begin{figure}
\includegraphics[width=0.75\columnwidth]{fig6.pdf}
\caption{Compact PNe in 3 galaxies, in the log(C/O) vs. log(O/H)+12 plane. This plot includes all SMC, LMC, and compact Galactic PNe with $\theta<5\arcsec$, whose carbon abundances have been measured from the UV emission lines.
Symbol shapes represent the PN sample: Triangles: SMC PNe, with abundances from \citet{VSD17} and references therein; squares: LMC PNe, with abundances from \citet{VSD16} and references therein; circles: compact Galactic PNe from this work, with carbon abundances from Table 9 and oxygen abundances from Table 2; crosses: compact Galactic PNe, with abundances from \citet{2015ApJ...803...23D} (D15); plus signs: compact Galactic PNe, with abundances from \citet{2000ApJ...531..928H} (H00); and asterisks: compact Galactic PNe, with abundances from \citet{KB94} (KB94).
The symbols are color-coded for their dust type according to the analysis of their Spitzer/IRS spectra \citep{SGG07,SGG12}; cyan: featureless (F) IRS spectra; red: carbon-rich dust (CRD), blue: oxygen-rich dust (ORD), and black: no dust information available through Spitzer/IRS analysis.
The thick solid line represents the fit to the locus of stars \citep{Nicholls}; the light solid line represents C=O; the light dashed lines mark the solar values of C and O \citep{Asplund}.}
\end{figure}
\clearpage
\section{Introduction}
The X-ray emission from most Narrow-Line Seyfert 1 galaxies (NLS1) is
characterised by a steep soft X-ray spectrum and rapid
variability. The most extreme such objects are 1H\,0707-495 and
IRAS13228-3809, which both show a sharp drop above 7~keV in XMM Newton
spectra (Boller et al 2002, 2003). 1H\,0707-495 has been further
studied several times with XMM, including a long 500~ks dataset in
2008 which revealed broad iron K and L lines and a soft lag of about
30~s (Fabian et al 2009; Zoghbi et al 2010, 2011). In contrast,
IRAS13224-3809 only had 64~ks of XMM data (Ponti et al 2010; Gallo et
al 2004), despite showing spectacular variability during ROSAT (Boller
et al 1997) and ASCA (Dewangan et al 2002) observations. New
observations totalling 500~ks have now been made with XMM in 2011 and
reported here.
The unusual spectrum and 7~keV drop of both objects have been
interpreted as due to either intervening absorption or strong
relativistic blurring of a reflection component (Boller et al 2002,
2003; Fabian et al 2004). 1H\,0707-495 dropped into a low state for
about 2 months at the start of 2011 during which an XMM spectrum
showed evidence for even more blurring. The results are consistent
with the power-law component of the X-ray source lying within one
gravitational radius of the central black hole (Fabian et al 2012). In
the normal state, one third of this component extends to $\sim
20r_{\rm g}$.
The combination of the above results with the reverberation lags in
1H\,0707-495 and in over a dozen other sources (Emmanoulopoulos,
McHardy \& Papadakis 2010; Tripathi et al 2011; De Marco et al 2011,
2012; Zoghbi \& Fabian 2011 and Zoghbi et al 2012) provides very strong
support for the reflection model for the X-ray emission of Seyfert
galaxies. In this model the primary power-law component lies above the
inner accretion disc around the black hole and produces the X-ray
reflection component by irradiation of the disc (see e.g. Fabian \& Ross
2010). The soft lags are then the light travel time difference between
the power-law and reflection components as detected by the
observer. The new data presented in this paper are interpreted within
the reflection model.
IRAS13224-3809 is a radio quiet (1.4~GHz flux of 5.4~mJy, Feain et al
2009) NLS1 at redshift $z=0.066$. For a flat $\Lambda$CDM cosmology
with $H_0=71\hbox{$\km\s^{-1}\Mpc^{-1}\,$}$, its luminosity distance is 293~Mpc.
\section{Observations and Data Reduction}
IRAS~13224-3809 was observed for $\sim 500$~ks with the {\em
XMM-Newton} satellite (Jansen et al 2001) from 2011 July 19 to 2011
July 29 (Obs. IDs 0673580101, 0673580201, 0673580301, 0673580401). We
focus on the data from the EPIC-pn camera (Str\"uder et al 2001). The
first observation was taken in full window imaging mode, and the
following three in large window imaging mode. All of the data were
reduced in the same way, using the {\em XMM-Newton} Science Analysis
System (SAS v.11.0.0) and the newest calibration files.
The data were cleaned for high background flares, resulting in a final
total exposure of 300~ks. The data were selected using the condition
{\sc pattern} $\le 4$. Pile-up effects were not significant in any of
the observations.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth,angle=0]{total_0.3-10keV_corrected_lightcurve_200s.eps}
\caption{Full band (0.3--10~keV) XMM lightcurve of
IRAS13224-3809. Bins are 200~s. }
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth,angle=0]{peaks_0.3-10keV_50s.eps}
\caption{Lightcurves of regions of peak count rate per orbit in the
0.3--10~keV band. Bins are 50~s. }
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth,angle=0]{total_HR_0.3-1_1-4keV_lc_1000s.eps}
\caption{Hard (1--4~keV) and soft band (0.3--1~keV) light curves
(top two panels), softness ratio and log (total background
rate) (lower two panels). Bins are 1000~s.}
\end{figure*}
The source spectra were extracted from circular regions of radius 35
arcsec, which were centered on the maximum source emission, and the
background spectra were chosen from a circular region of the same size
and on the same chip. The positions of the background regions were
chosen to avoid the Cu-K emission lines from the electronic circuits
behind the pn CCD that contaminate the background at 8.0 and
8.9~keV. The response matrices were produced using {\sc rmfgen} and
{\sc arfgen} in {\sc SAS}.
The spectra from the four observations were merged before fitting
using {\sc mathpha} in {\sc FTOOLS}, and the resulting combined
spectrum was rebinned to contain a minimum of 20 counts per bin.
Spectral fitting was performed using {\sc xspec} v12.5.0
(Arnaud 1996). Quoted errors correspond to 90\% confidence
level. Energies are given in the rest frame of the source. Quoted
abundances refer to the solar abundances in Anders \& Grevesse (1989).
\section{Lightcurve and Variability}
The lightcurve of the long XMM observation in 2011 is shown in
Fig.~1. Start and stop dates are 2011 July 19 and 29. The source is
clearly highly variable with several pronounced upward spikes of
emission. The observation consists of 4 orbits of XMM, the first two
of which are contiguous. Gaskell (2003) used ASCA data to show that
the X-ray light curves can be lognormal, in the sense that a frequency
histogram of log(count rate) is gaussian. We find that this is only a
fair description of Orbit 1 and a poor description of Orbits 3
and 4.
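A per-orbit check of this statement can be made with a standard normality test applied to the logarithm of the binned count rates; a minimal sketch, with the light-curve file as a placeholder:
\begin{verbatim}
# Minimal sketch (assumed input file): test lognormality of a light
# curve by applying a Shapiro-Wilk test to log10(count rate).
import numpy as np
from scipy import stats

rate = np.loadtxt('orbit1_rate_200s.txt')   # hypothetical: ct/s per bin
logr = np.log10(rate[rate > 0])

W, p = stats.shapiro(logr)
print('Shapiro-Wilk on log(rate): W = %.3f, p = %.3g' % (W, p))
# Large p: log-rates consistent with a Gaussian (lognormal flux
# distribution); small p (as for Orbits 3 and 4) rejects it.
\end{verbatim}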
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth,angle=0]{Soft_vs_HR_1000s.eps}
\caption{Softness ratio plotted versus Soft count rate. }
\end{figure}
The light curve is reminiscent of that seen with the ROSAT HRI, reported
by Boller et al (1997). Observations were made every day for a month
resulting in a total exposure of $\sim110$~ks and mean observation
length $\sim3$~ks. Five flares of emission were found rising above about
150~ct~ks$^{-1}$. Using {\sc webpimms} (Mukai 1993) and a 0.1(0.2)~keV
blackbody model (see next Section for detailed spectrum) we find that
150~ct~ks$^{-1}$ corresponds to about 6(3.4) EPIC~pn~ct~s$^{-1}$. We
see three flares above 6~pn~ct~s$^{-1}$ and $\sim11$ above 3.4~pn~ct~s$^{-1}$ in
500~ks. The numbers depend on the precise spectral shape,
but are similar enough to indicate that the behaviour of the source is
similar to that in 1997.
The bright spikes of emission are shown at higher time resolution in
Fig.~2. The most rapid large rise occurs in Orbit 2, where the count
rate jumps by about 4.5~ct~s$^{-1}$ from $(2.42-2.44)\times
10^5$~s, i.e. 2000~s. Using the spectral model developed in Section 3,
this corresponds to about $7\times 10^{40}{\rm\thinspace erg}$~s$^{-2}$ in the
0.3--2~keV band. An even faster event occurs at the peak of Orbit 4
where the count rate jumps up and down by 3~ct~s$^{-1}$ in just a few
100~s. This means that active regions of the source at that time must
be smaller than a few 100 light seconds in size. The rate of change
of luminosity is thus approaching the value of $10^{42}{\rm\thinspace erg}$~s$^{-2}$,
especially if the energy band is extended down to 0.1~keV, which is
the highest previously recorded, by Brandt et al (1999). That paper
discusses the lower limit on the radiative efficiency required by the
source in converting mass to energy to produce such a luminosity
gradient (Fabian 1979); the efficiency must be high and approaching 50
per cent. This is difficult to envisage taking place without invoking
non-spherical geometry and/or relativistic effects.
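The efficiency bound referred to above can be written as $\eta > 4.8\times10^{-43}\,\Delta L/\Delta t$ (with $\Delta L/\Delta t$ in erg\,s$^{-2}$); we quote this commonly used form of the Fabian (1979) limit, so the numerical constant should be treated as an assumption here. A one-line check:
\begin{verbatim}
# Minimal sketch: lower limit on the radiative efficiency from the
# observed luminosity gradient, eta > 4.8e-43 * (dL/dt) [erg s^-2]
# (constant as commonly quoted for the Fabian 1979 bound; assumption).
dLdt = 1.0e42                       # erg s^-2, approached in Orbit 4
print('eta_min ~ %.2f' % (4.8e-43 * dLdt))   # ~0.5, cf. ~50 per cent
\end{verbatim}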
Light curves in both Soft (0.3--1~keV) and Hard (1--4~keV) bands are
shown in Fig.~3, together with the Soft/Hard ratio and background
count rate (note that the log of the background rate is shown as the
values are mostly small). The pronounced large spikes in count rate
show no strong spectral variation. A trend for the spectrum to be
sometimes softer when the source is faint is apparent (Fig.~4). Orbit 3
shows considerable hardness ratio variations that do not seem to
correlate with the count rate of either the source or background.
\section {Spectral fits}
Following the phenomeonological approach of Fabian et al (2009; see
also Ponti et al 2010), we fit the spectrum over the 0.3--0.4~keV and
1.2--2.2~keV bands with a simple absorbed power-law plus blackbody
model and then show the residuals to that model over the full
0.3--9~keV band (Fig.~5). Clear, broad emission residuals
corresponding to the iron K and L bands are apparent. A good fit can
be made over this range with that simple model plus two
relativistically blurred lines, at 0.92 and 6.7~keV using the {\sc
Laor} model. The blurring parameters are tied between the two lines,
yielding an inclination of $\sim60$~deg and an emissivity index of
6.6. The equivalent widths of the lines are 59.6~eV and
1.74~keV. Untying the parameters leads to a slightly better fit but unphysical
energies (1.02 and 7.06~keV) and a lower inclination ($\sim30$~deg).
In practice we do not expect that the emission peaks are due to single
lines but to line and absorption complexes in the reflection
spectrum. We have therefore fitted the data with a physical model consisting
of blackbody, power-law and two reflection components, one of high and
the other of low ionization, similar to the best fitting model for
1H\,0707-495. The motivation for the two ionization components is to
model better a turbulent accretion disc. The model used is {\tt
phabs*zphabs*(blackbody+
kdblur*(atable\{reflionx.mod\}+atable\{reflionx.mod\}))}, where the
relativistic-blurring convolution model {\sc kdblur} acts on the
ionized reflection model ({\sc reflionx}; Ross \& Fabian 2005). The
results of this fit are shown in Table~1 and Figs.~6 and 7.
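For concreteness, the model can be set up through, e.g., the PyXspec interface; the sketch below mirrors the model expression above, with the file names and starting values as placeholders rather than the actual fit inputs.
\begin{verbatim}
# Minimal sketch (assumed file names / starting values): the
# double-reflector model of this section in PyXspec.
from xspec import AllData, Model, Fit

AllData('combined_pn.pha')            # hypothetical merged pn spectrum
AllData.ignore('**-0.3 9.0-**')       # fit band used in the text

m = Model('phabs*zphabs*(bbody + kdblur*(atable{reflionx.mod} + '
          'atable{reflionx.mod}))')
m.phabs.nH = 0.053                    # Galactic column / 1e22 cm^-2
m.zphabs.Redshift = 0.066
Fit.statMethod = 'chi'
Fit.perform()
\end{verbatim}
Note that convolving a table model with {\tt kdblur} in practice also requires extending the energy grid (the {\tt energies} command in {\sc xspec}).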
\begin{figure}
\centering
\includegraphics[width=0.65\columnwidth,angle=-90]{IRAS_KandL.ps}
\caption{Ratio of observed spectrum to a model spectrum. The model
consists of a power-law, blackbody and two Laor broad lines which
have been fitted to the data. The normalizations of the Laor
lines have been set to zero before displaying. }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.57\columnwidth,angle=-90]{best2extendx.ps}
\caption{Full band pn spectrum fitted with double reflectors and a
blackbody component. }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth,angle=-90]{specplot.ps}
\caption{Components of the best-fitting model shown in Fig.~6. }
\end{figure}
\begin{table}
\caption{Values of model parameters used in the emissivity profile
determination. The absorption component
is fixed at the Galactic value.}
\centering
\begin{tabular}{lll}
\hline
\textbf{Component} & \textbf{Parameter} & \textbf{Value}
\\
\hline
Absorption & Galactic $N_{\rm H}\hbox{$\cm^{-2}\,$}$ & $5.3 \times 10^{20}$ \\
powerlaw & Photon index, $\Gamma$ & $2.70^{+0.007}_{-0.01}$
\\
& Norm & $3.86\times 10^{-4}$ \\
\hline
{\sc relconv} & Inclination, $i$\,deg & $63.8\pm0.4$ \\
& $R_{\rm br}$\,r$_{\rm g}$ & $2.1\pm0.3$ \\
& Inner Index, $q_1$ & $>9$ \\
& Outer Index, $q_2$ & $3.4^{+0.05}_{-0.1}$ \\
& Spin, $a$ & $0.988\pm 0.001$\\
\hline
blackbody & temperature, $kT$\,keV & $0.103\pm0.0007$
\\
& Norm & $3.65\times 10^{-5}$ \\
\hline
{\sc extendx} & Iron abundance / solar & $>16$ \\
& Ionization parameter, $\xi_1$ & $20.7\pm0.4$ \\
& Ionization parameter, $\xi_2$ & $325^{+38}_{-11}$ \\
& Norm$_1$ & $3.0\times 10^{-8}$ \\
& Norm$_2$ & $5.0\times 10^{-6}$ \\
& Redshift, $z$ & $4.06\times 10^{-2}$ \\
\hline
&$\chi^2/{\rm dof}$ & $955/906$ \\
\hline
\end{tabular}
\label{par.tab}
\end{table}
We obtain the best spectral fit if the higher ionization component is
replaced by a Gaussian line at 0.86~keV (which is then
relativistically blurred along with the low-ionization component). The
need for any absorption edge now disappears. This suggests that the
spectral model we are using is incomplete. It is possible that the
blackbody component is part of the disc emission and helps heat the
disc, so increasing the collisional part of the Fe-L emission which,
for a temperature of 0.3~keV (the peak of the blackbody) peaks between
0.8 and 0.9~keV. Generating the appropriate grids of models for this is
beyond the scope of the present work.
\subsection{The spin of the black hole}
The spectral fit shown in Fig.~6 requires a steep emissivity profile
from $\sim 1.35 r_{\rm g}$ which, if identified as the innermost
stable circular orbit (ISCO), means that the black hole is close to
maximal spin. Fitting the spectrum with the blurring kernel {\sc
relconv} (Dauser et al 2010) we find the spin to be $0.988\pm0.001$
(see also Table~1).
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,angle=-90]{inc_a.ps}
\caption{68, 90 and 99 per cent confidence contours for spin and
disc inclination.}
\end{figure}
The value of spin is robust to changes in the model with differences
in $a$ being less than one per cent in all models tried. A small
improvement in $\chi^2$ is obtained by including a cold iron line
(rest energy 6.4~keV); a slightly broad one with $\sigma=0.23{\rm\thinspace keV}$
being preferred. Including a small edge at 1.15~keV produces a more
significant drop in $\chi^2$. This may indicate the presence of a hot
wind, as found for 1H\,0707-495 by Dauser et al (2012), or just be a
deficiency in the model since it occurs where the reflection
components are dropping steeply (particularly see Fig.~5). The
variation of spin with inclination is shown in Fig.~8, where no strong
correlation between these parameters is evident.
Although the statistical uncertainty on the measured spin is well
below one per cent, there are larger systematic uncertainties which
have yet to be determined. The most important is perhaps the implicit
identification of the innermost radius of the reflector with the
ISCO. Computations suggest that the uncertainty here is small and
could be less than $0.5r_{\rm g}$ (Reynolds \& Fabian 2008; Shafee et
al 2008). The work of Schnittman et al (2012) emphasises that
emission from matter on plunge orbits is beamed mostly into the black
hole. We note that the requirement for a low ionization component
emphasises that the disc remains dense and thus thin within the final
gravitational radius.
\subsection{Inferring the position and size of the power-law source}
The break in the emissivity profile at only $\sim 2.1 r_{\rm g}$
indicates that the power-law source is close to the black hole, within
a few gravitational radii, and thus must be small and confined within
that radius (e.g. Wilkins \& Fabian 2011, 2012).
Confirmation that the source is very close to the black hole comes
from the reflection fraction. This is the ratio of the reflection
components to the power-law component, normalized so that unity
corresponds to a reflector subtending $2\pi$~sr. This is not
straightforward to calculate for a high $\Gamma$ source since the {\sc
reflionx} model does not tabulate the total flux, but only that
above 0.1~keV (the flux at lower energies is of course included in the
computations). We assess the reflection fraction by comparing the
ratio of the amplitude of the Compton hump around 30~keV of the low
ionization reflection component to the power law with that predicted
by the {\sc pexrav} model. The result is a reflection fraction of
about 15, which is a strong indication of light bending close to the
black hole (Martocchia \& Matt 1996; Miniutti \& Fabian 2004).
The emissivity profile has been determined in more detail by fitting
the spectrum above 3~keV by the sum of relativistically-blurred
emission profiles from contiguous radii of the disc (see Wilkins \&
Fabian 2010 for more details; the energy range is restricted to the
Fe-K band as the Fe-L band consists of many overlapping emission
lines). The result (Fig.~9, top panel) indicates a triple power-law
emissivity profile, which integrates to show where the observed
photons originate from on the disc (Fig.~9, lower panel). 80 per cent
of the photons are reflected within about $2.5 r_{\rm g}$ with the
remaining 20 per cent mostly coming from within $10-20 r_{\rm g}$.
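A toy version of this integration, for a given broken power-law emissivity, can be written as follows; the indices and break radius are illustrative, and relativistic energy shifts and light bending are omitted, so the numbers will not reproduce Fig.~9 quantitatively.
\begin{verbatim}
# Minimal sketch: cumulative reflected-flux fraction for a broken
# power-law emissivity eps(r) ~ r^-q on a flat disc (no relativity).
import numpy as np

r = np.logspace(np.log10(1.35), 3, 4000)    # r_in to 1000 r_g
q = np.where(r < 2.1, 9.0, 3.4)             # illustrative indices/break
eps = r**(-q)
cum = np.cumsum(eps * 2.0 * np.pi * r * np.gradient(r))  # ~ int eps dA
cum /= cum[-1]

for rr in (2.5, 20.0):
    print('flat-space fraction within %5.1f r_g: %.2f'
          % (rr, np.interp(rr, r, cum)))
\end{verbatim}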
\begin{figure}
\centering
\includegraphics[width=\columnwidth,angle=0]{iras_emis.eps}
\includegraphics[width=\columnwidth,angle=0]{cumulativeflux_iras_both1h_v2.eps}
\caption{Top: Emissivity profile obtained by fitting the data in the
broadened Fe-K band with emission from many annuli. Bottom:
Integrated photon flux as function of radius.}
\end{figure}
Motivated by this, in order to estimate the size and location of the
major primary X-ray source, we generated a grid of emissivity profiles
for a range of cylindrical X-ray source regions of varying radial
extent and at varying heights above the plane of the accretion disc
using the high speed GPU-based general relativistic ray tracing code
of Wilkins \& Fabian 2012. The grid consists of sources extending
between 1 and $50 r_{\rm g}$ radially, the bases of which are between
1.0 and $3.1 r_{\rm g}$ above the plane of the accretion
disc. Initially, the thickness of the source is set to be $0.5 r_{\rm
g}$.
These emissivity profiles were then fitted to the profile of the Fe-K
emission line using a modified version of the {\sc KDBLUR} convolution
model, leading to the constraints shown in Fig. 10. These fits imply
the source is either radially extended to $\sim1 r_{\rm g}$ at a
height of $h\sim 2-25 r_{\rm g}$ or extended out to around $2-3.5
r_{\rm g}$ at a height of $\sim 1.7 r_{\rm g}$.
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth,angle=-90]{source_rh.ps}
\caption{68, 90 and 99 per cent confidence contours for the radius
and height (lower edge) of a slab source of thickness $0.5 {\rm r_g}$. }
\end{figure}
The integrated emissivity profile of IRAS\,13224-3809 is compared with
the normal and low state profiles of 1H\,0707-495 in Fig.~9. It
appears to be sandwiched between the two. This could be either due to
the source always having two primary emission components, one compact
as shown in Fig.~10 the second a more extended component, or to the
source varying in size with time, the larger source likely being
associated with the brighter phases. This will be investigated in
later work.
\section{Rapid variability and the soft lag}
The fractional RMS variability spectrum, computed according to the
prescription of Edelson et al (2002), is shown in Fig.~11. It
resembles that of many other sources in which reflection is present,
resembling a combination of variable power-law and reflection
components. The amplitude of the variability of the power-law
component needs to be greater than that of the reflection in order that
the broad Fe-K line appears inverted in this figure.
Using the light curves of the four orbits, we compute the Fourier
phase lag between the hard and soft energy bands, following the
technique described by Nowak (1999). The background-subtracted
light curve segments range in length from $8.34 \times 10^{4}$~s to
$1.24 \times 10^{5}$~s with 10~s bins. The soft band is defined from
0.3 -- 1 keV, where the soft-excess dominates the spectrum. The hard
band, 1.2 -- 5 keV, is dominated by emission from the power law
continuum. From the Fourier transforms of the hard and soft band
light curves, $\widetilde{S}$ and $\widetilde{H}$ respectively, we
compute their phase difference, $\phi(f) = \mathrm{arg}[\langle
\widetilde{H}^{\ast}\widetilde{S} \rangle]$, where $\ast$ denotes
complex conjugate. We convert this to a frequency-dependent time lag,
$\tau(f) \equiv \phi(f)/2\pi f$. Using this sign convention, a
negative lag means that the soft band light curve lags behind the hard
band.
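A minimal implementation of this estimator, for evenly sampled light curves, is sketched below; the segment averaging and error estimate of the full analysis are omitted.
\begin{verbatim}
# Minimal sketch: frequency-dependent lag via the cross spectrum,
# tau(f) = arg< H* S > / (2 pi f); negative = soft lags hard.
import numpy as np

def lag_spectrum(soft, hard, dt, nbins=12):
    n = len(soft)
    f = np.fft.rfftfreq(n, dt)[1:]
    S = np.fft.rfft(soft - soft.mean())[1:]
    H = np.fft.rfft(hard - hard.mean())[1:]
    cross = np.conj(H) * S
    # average the cross spectrum in log-spaced frequency bins before
    # taking the phase (single-frequency phases are noise dominated)
    edges = np.logspace(np.log10(f[0]), np.log10(f[-1]), nbins + 1)
    fc, tau = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (f >= lo) & (f < hi)
        if sel.sum() < 2:
            continue
        fm = f[sel].mean()
        tau.append(np.angle(cross[sel].mean()) / (2 * np.pi * fm))
        fc.append(fm)
    return np.array(fc), np.array(tau)

# usage: fc, tau = lag_spectrum(soft_rate, hard_rate, dt=10.0)
\end{verbatim}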
The results are shown in the lag-frequency spectrum in
Fig.~\ref{lag_freq}. The hard flux lags behind the soft by hundreds
of seconds at frequencies less than $\sim 2 \times 10^{-4}$~Hz.
At frequencies $\nu \sim (3-5) \times 10^{-4}$~Hz the sign reverses and
the soft band lags behind the hard band by $\sim 100{\rm\thinspace s}$.
The light-crossing time of $2 r_{\rm g}$
for a mass of $5\times 10^6\hbox{$\rm\thinspace M_{\odot}$}$ is $\sim 50{\rm\thinspace s}$, so a total lag of
100~s or so is reasonable.
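The light-crossing estimate is straightforward to verify:
\begin{verbatim}
# Minimal sketch: light-crossing time of 2 r_g for M = 5e6 Msun (cgs).
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
rg = G * 5e6 * Msun / c**2          # ~7.4e11 cm
print('t(2 r_g) = %.0f s' % (2 * rg / c))   # ~49 s
\end{verbatim}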
\begin{figure}
\centering
\includegraphics[width=\columnwidth,angle=0]{fvar.eps}
\caption{Fractional RMS variability spectrum using 500~s bins.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{lag_freq.eps}
\caption{Lag-frequency spectrum for this 500~ks observation. The lag
is calculated between the soft energy band (0.3--1~keV) and the
hard band (1.2--4~keV). We adopt the convention that a negative lag
means that the soft band lags behind the hard band. The most negative lag
(at $3.4 \times 10^{-4}$~Hz) is $-92.1 \pm 30.7$~s.}
\label{lag_freq}
\end{figure}
\section{Discussion}
IRAS\,13224-3809 is remarkably similar in overall X-ray behaviour to
1H\,0707-495. The variability of IRAS\,13224-3809 may be the most
extreme. We shall explore the behaviour of the source as a function of
time and flux in more detail in later work.
The X-ray spectra of both sources require high iron abundance ($A_{\rm
Fe}\sim 10-20$). In recent work, Wang et al. (2012) have presented
a strong correlation between metallicity, as measured by the
Si~\textsc{iv} O~\textsc{iv}~]~/~C~\textsc{iv} ratio, and outflow
strength in quasars, as obtained via the blueshift and asymmetry index
(BAI) of the C~\textsc{iv} emission line. Their results indicate
highly significant super--solar metallicity ($Z/Z_\odot \geq 5$) for
quasars with BAI$\geq 0.7$. This result indicates that metallicity
likely plays an important role in the formation and acceleration of
quasar outflows as expected, for instance, if quasar outflows are
predominantly line--driven.
As mentioned above, both IRAS~13224--3809 and 1H~0707--495 are
characterised by extremely blueshifted C~\textsc{iv} emission lines
with almost no contribution at rest wavelength. Their UV spectra
indicate that BAI$\geq 0.9$ in both sources, as shown in
Fig.~\ref{BAI}. If the metallicity--BAI correlation of Wang et
al. (2012) extends or saturates above their largest observed BAI
($\sim 0.76$), one infers that IRAS~13224--3809 and 1H~0707--495 are
characterised by $Z/Z_\odot \geq 8$. A strong indication for
super--solar metallicity in both sources is consistent with the strong
FeII lines in the optical spectra and was also inferred by Leighly (2004)
via photoionisation modelling of the UV spectra.
\begin{figure}
\begin{center}
\includegraphics[width=0.35\textwidth,height=0.45\textwidth,angle=-90]{CivRatio.ps}
\caption{The C~\textsc{iv} emission line profile from the HST--STIS
observation performed in June 1999 with the G~140L grating is shown
in the observed frame. Data have been slightly rebinned for visual
clarity. The vertical line shows the expected wavelength of the
C~\textsc{iv} emission line for a redshift $z=0.0658$.}
\label{BAI}
\end{center}
\end{figure}
A $\sim 100{\rm\thinspace s}$ soft lag is detected, which is a direct prediction of
the reflection modelling used for the source. With the many other lags
now seen, this justifies the reflection spectrum approach. It is
consistent with the spectral modelling which indicates that the bulk
of the primary continuum emission source is only a few gravitational
radii in size and distance from the black hole. The spin of the black
hole is high and close to maximal. This may be the result of secular
evolution dominating in Narrow-Line Seyfert 1 galaxies, as inferred by
Orban de Xivry et al. (2011).
\section*{Acknowledgements}
ACF thanks
the Royal Society for support. RCR thanks the Michigan Society of
Fellows and NASA for support through the Einstein Fellowship Program,
grant number PF1-120087. EK is supported by the Gates Cambridge Scholarship.
|
2,869,038,155,057 | arxiv | \section{Introduction}\label{intro}
Kinetic models for (continuous) opinion formation have been first introduced and discussed in \cite{Tos06}, starting from the study of a multi-agent system in which agents undergo binary interactions so that the personal opinion could be changed by means of compromise and self-thinking \cite{BKR03,BKVR03,FPTT17}.
In most of the problems related to socio-economic studies of multi-agent systems \cite{NPT,PT13}, the variable is assumed to vary in an unbounded domain (mainly the positive half-line). On the contrary, the opinion variable is assumed to take values in the bounded interval $\mathcal{I}= (-1, 1)$, the values $\pm 1$ denoting the extremal opinions.
Among the various models introduced in \cite{Tos06} (cf. also \cite{Bou,DMPW}), one Fokker--Planck type equation has to be distinguished in view of its equilibrium configurations, which are represented by Beta-type probability densities supported in the interval $(-1, 1)$. This Fokker--Planck equation for the opinion density $v(t,y)$, with $|y| < 1$, is given by
\varpi\label{op-FP}
\frac{\partial v(t,y)}{\partial t} = \frac \lambda 2\frac{\partial^2 }{\partial y^2}\left((1-y^2)
v(t,y)\right) + \frac{\partial }{\partial y}\left((y -m)v(t,y)\right).
\mathbf{e}
In \fer{op-FP}, $\lambda$ and $m$ are given constants, with $\lambda >0$ and $-1 <m<1$.
Suitable boundary conditions at the boundary points $y = \pm 1$ then guarantee conservation of mass and momentum of the solution \cite{FPTT17}.
Equation \fer{op-FP} possesses steady states which solve
\[
\frac
\lambda 2\frac{d}{d y}\left((1-y^2)v(y)\right) + (y -m) v(y)= 0.
\]
In case a mass density equal to unity is chosen, the steady state equals a probability density of Beta type, given by
\varpi\label{beta}
v_{m,\lambda}(y)= C_{m,\lambda} (1-y)^{-1 + \frac{1-m}\lambda} (1+y)^{-1 + \frac{1+m}\lambda}.
\mathbf{e}
In \fer{beta} the constant $C_{m,\lambda}$ is such that the mass of $v_{m,\lambda}$
is equal to one. Since $-1 <m<1$, $v_{m,\lambda}$ is integrable on $\mathcal{I}$.
Note that $v_{m,\lambda}$ is continuous on $\mathcal{I}$ and, as soon as $\lambda > 1+|m|$, it tends to infinity as $y \to \pm 1$.
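For later reference, a short numerical sketch of \fer{beta} follows; the closed form of $C_{m,\lambda}$ is obtained by the elementary substitution $y = 2t-1$, which reduces the normalisation to a standard Beta integral (a computation not spelled out above).
\begin{verbatim}
import numpy as np
from scipy.special import beta as beta_fn

def v_steady(y, m, lam):
    # Beta-type steady state with exponents a = (1-m)/lam,
    # b = (1+m)/lam; the substitution y = 2t - 1 gives the
    # normalisation C = 1 / (2**(a+b-1) * B(a, b)).
    a, b = (1.0 - m) / lam, (1.0 + m) / lam
    C = 1.0 / (2.0 ** (a + b - 1.0) * beta_fn(a, b))
    return C * (1.0 - y) ** (a - 1.0) * (1.0 + y) ** (b - 1.0)
\end{verbatim}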
A better understanding of the social meaning of the parameters $\lambda$ and $m$ appearing in \fer{op-FP} comes from the microscopic description of the opinion change in a multi-agent system through binary interactions among agents, leading to the Boltzmann type kinetic equation considered in \cite{Tos06}.
Given a pair of agents with opinions $x$ and $x_\ast$, it was assumed in \cite{Tos06} that any elementary interaction between them modifies the entering opinions according to
\begin{equation}
\begin{split}
x'&=x+\gamma (x_\ast-x)+D(x)\eta, \\
x_\ast'&=x_\ast+\gamma (x-x_\ast)+D(x_\ast)\eta_\ast.
\label{eq:binary}
\end{split}
\end{equation}
The right-hand side of \fer{eq:binary} describes the modification of the opinion in terms of the quantity $\gamma(x_\ast-x)$ (respectively $\gamma(x-x_\ast)$), that measures the \emph{compromise} between opinions with intensity $\gamma$, $0<\gamma<1$, and a random contribution, given by the random variable $\eta$ (respectively $\eta_\ast$), modelling stochastic fluctuations induced by the \emph{self-thinking} of the agents. $D(\cdot)\geq 0$ is an opinion-dependent diffusion coefficient modulating the amplitude of the stochastic fluctuations, that is the variance of $\eta$ and $\eta_\ast$. In \cite{Tos06} the two random variables were assumed to be independent and identically distributed with zero mean and variance $\sigma^2$. Let us further set
\varpi\label{lam}
\lambda = \frac{\sigma^2}\gamma.
\mathbf{e}
Then, interactions of type \fer{eq:binary} with small values of $\lambda$ characterize compromise dominated societies, while interactions with large values of $\lambda$ characterize self-thinking dominated societies.
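To make the roles of $\gamma$ and $\sigma^2$ concrete, we include a toy Monte Carlo sketch of the rules \fer{eq:binary} with $D(x)=\sqrt{1-x^2}$; the clipping of the post-interaction opinions to $\mathcal{I}$ is our own simplification, standing in for the admissibility restrictions on the noise discussed in \cite{Tos06}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate(N=5000, steps=500_000, gamma=0.02, sigma2=0.004):
    # Random pairs interact via the binary rules; here
    # lam = sigma2/gamma = 0.2, a compromise dominated society.
    x = rng.uniform(-1.0, 1.0, N)
    sd = np.sqrt(sigma2)
    for _ in range(steps):
        i, j = rng.integers(0, N, size=2)
        xi, xj = x[i], x[j]
        ei, ej = rng.normal(0.0, sd, size=2)
        x[i] = np.clip(xi + gamma*(xj - xi) + np.sqrt(1 - xi**2)*ei,
                       -1 + 1e-9, 1 - 1e-9)
        x[j] = np.clip(xj + gamma*(xi - xj) + np.sqrt(1 - xj**2)*ej,
                       -1 + 1e-9, 1 - 1e-9)
    return x   # histogram close to a Beta-type law in the scaled regime
\end{verbatim}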
Introducing the distribution function $f=f(t,\,x):\mathbb R_+\times [-1,\,1]\to\mathbb R_+$, such that $f(t,\,x)dx$ is the fraction of agents with opinion in $[x,\,x+dx]$ at time $t$, the binary rules~\eqref{eq:binary} give rise to a Boltzmann-type kinetic equation, that in weak form reads
\begin{multline}
\frac{d}{dt}\int_{-1}^1\varphi(x)f(t,\,x)\,dx \\
=\frac{1}{2}\int_{-1}^1\int_{-1}^1\ave{\varphi(x')+\varphi(x_\ast^\prime)-\varphi(x)-\varphi(x_\ast)}f(t,\,x)f(t,\,x_\ast)\,dx\,dx_\ast,
\label{eq:boltz}
\end{multline}
where $\varphi:[-1,\,1]\to\mathbb R$ is an arbitrary test function, i.e. any observable quantity depending on the microscopic state of the agents, and where we denote by $\langle \cdot \rangle$ the mathematical expectation. Choosing $\varphi(x)=1$, one shows that the integral of $f$ with respect to $x$ is constant in time, i.e. that the total number of agents is conserved. This also implies that $f$ can be thought of as a probability density for every $t>0$. Choosing instead $\varphi(x)=x$, and considering that \fer{eq:binary} implies
\[
\langle x^\prime+x^\prime_\ast \rangle = x + x_\ast,
\]
one concludes that
\begin{equation}
\frac{d}{dt}\int_{-1}^1 xf(t,\,x)\,dx= 0.
\label{eq:mean}
\end{equation}
Therefore the mean opinion $m:=\int_{-1}^1 xf(t,\,x)\,dx$ is conserved in time.
As shown in \cite{Tos06}, one can recover an explicit expression of the asymptotic distribution function at least in the so-called \textit{quasi-invariant regime}, i.e. the one in which the variation of the opinion in each binary interaction is small. To describe such a regime, one scales the parameters $\gamma$, $\sigma^2$ in~\eqref{eq:binary} as
\begin{equation}
\gamma\to\epsilon\gamma, \qquad \sigma^2\to\epsilon\sigma^2,
\label{eq:scaling}
\end{equation}
where $\epsilon>0$ is an arbitrarily small scaling coefficient. Moreover, to study the large time behavior of the system, one introduces the new time scale $t \to\epsilon t$ and scales the distribution function as $v(t,\,x):=f(\frac{t}{\epsilon},\,x)$. In this way, at every fixed $t>0$ and in the limit $\epsilon\to 0^+$, $v$ describes the large time trend of $f$. Moreover, as shown in \cite{Tos06}, if $D(x) = \sqrt{1-x^2}$, $v(t,x)$ satisfies the Fokker--Planck equation \fer{op-FP}.
Since the value of $\lambda$ is left unchanged by the scaling \fer{eq:scaling} leading from the Boltzmann-type equation \fer{eq:boltz} to the Fokker--Planck type equation \fer{op-FP}, the parameter $\lambda$ maintains its meaning also in the target equation. The roles of the constants $\lambda$ and $m$ are evident also by looking at the shape of the steady Beta distribution \fer{beta}. We can observe that, by fixing for example $m>0$, increasing the values of $\lambda$, and consequently moving from a compromise dominated to a self-thinking dominated society, such a distribution may depict a transition from a strong consensus around the mean to a milder consensus, and further to a radicalisation in the extreme opinion $x=1$ up to the appearance of a double radicalisation in the two opposite extreme opinions $x=\pm 1$.
In view of the described social meaning, a relevant problem related to the solution of the Fokker--Planck equation \fer{op-FP} is to understand at which speed the solution $v(t)$ converges to its equilibrium configuration, and to quantify how this rate depends on the parameters $\lambda$ and $m$. Indeed, as outlined before, it is easily recognized that different values of these parameters give rise to situations in which the extremal opinions are not attracting (this happens for $\lambda < 1-|m|$), or situations in which opinions are polarized around the extreme ones ($\lambda > 1+|m|$). Also, it is not clear whether these (different) steady states are reached quickly in time, independently of the values of the parameters.
As discussed in \cite{FPTT17}, in analogy with the methods developed for the classical Fokker--Planck equation \cite{T99}, the large-time behavior of the solution to \fer{op-FP} can be fruitfully studied by resorting to entropy methods \cite{MTV}.
This corresponds to the study of the evolution in time of various Lyapunov functionals, the most known being the Shannon entropy of the solution relative to the steady state. We recall here that the relative Shannon entropy of two probability densities $f$ and $g$ supported on the bounded interval $\mathcal{I}$ is {defined} by the formula
\varpi\label{relH}
H(f,g)= \int_{\mathcal{I}} f(x) \log \frac {f(x) }{g(x) }\, dx.
\mathbf{e}
Note that $H(f,g)$ can be alternatively written as
\[
\int_\mathcal{I} \left( \frac{f(x)}{g(x)} \log \frac{f(x)}{g(x)} - \frac{f(x)}{g(x)} +1\right ) g(x)dx,
\]
which is the integral of a nonnegative function.
As shown in \cite{FPTT17}, the relative entropy $H(v(t), v_{m,\lambda})$ decreases in time, and its time variation can be expressed by the \emph{entropy production} term
\varpi\label{ep}
\tilde I(v(t), v_{m,\lambda}) =
\int_ {\mathcal{I}}\frac \lambda 2 (1-y^2)
\left(\partial_y \log \frac {v(t,y)}{ v_{m,\lambda}(y)}\right )^2 v(t,y) dy.
\mathbf{e}
While for the classical Fokker--Planck equation \cite{T99}, exponential in time convergence at explicit rate follows in consequence of the logarithmic Sobolev inequality, the results in presence of the weight in \fer{ep} are less satisfactory. Various convergence results have been obtained in \cite{FPTT17} by resorting to a generalization of the so-called Chernoff inequality with weight, first proven by Klaassen \cite{Kla}. The main consequence of this inequality \cite{FPTT17}, was to show that exponential convergence to equilibrium with an explicit rate holds at least for initial values $v_0$ for \fer{op-FP} close to the steady state \fer{beta} in the weighted $L^2$-norm
\varpi\label{L2}
\|v_0-v_{m,\lambda}\|_*^2 := \int_\mathcal{I} |v_0(y)-v_{m,\lambda}(y)|^2 v_{m,\lambda}(y)^{-1}\, dy.
\mathbf{e}
Also, a weaker convergence result was proven for general initial data, by showing that the standard $L^1$-distance decays to zero at a polynomial rate (without any explicit rate of convergence).
Related results have been obtained by Epstein and Mazzeo in \cite{EM10} for the adjoint equation
\varpi\label{ad-FP}
\frac{\partial u(t,x)}{\partial t} = \frac \lambda 2 (1-x^2) \frac{\partial^2 u(t,x) }{\partial x^2}- (x-m)\frac{\partial u(t,x) }{\partial x} , \quad t>0,\quad x \in \mathcal{I}.
\mathbf{e}
Indeed, the Fokker--Planck equation \fer{op-FP} is naturally coupled to \fer{ad-FP} since, at least formally, if $v$ is a solution of \fer{op-FP}, then
\varpi\label{rel1}
u(t,x) = \frac{v(t,x)}{v_{m,\lambda}(x)}
\mathbf{e}
is a solution of \fer{ad-FP} (remark that the notation we have chosen for the solutions $v(t,y)$ of \fer{op-FP} and $u(t,x)$ of \eqref{ad-FP} is the same as in the paper \cite{EM10} to which we will often refer in the sequel of the paper).
Among other results, in \cite{EM10} exponential convergence in $L^1(\mathcal{I})$ of $v(t)$ towards $v_{m,\lambda}$ has been proven (without rate) by resorting to classical analysis of semigroups.
In this paper we aim at proving that entropy methods can also produce exponential convergence in $L^1(\mathcal{I})$ towards equilibrium with an explicit rate, at least in some range of the parameters $\lambda$ and $m$.
The result follows from a new weighted logarithmic-Sobolev inequality satisfied by the Beta functions \fer{beta} when they belong to $L^2(\mathcal{I})$. In this case, we will prove that there exists an explicitly computable constant $K_{m,\lambda} >0$ such that, for any
probability density $\varphi \in L^1(\mathcal{I})$ absolutely continuous with respect to $v_{m,\lambda}$
\varpi\label{ok}
H(\varphi, v_{m,\lambda}) \leq K_{m,\lambda} \tilde I(\varphi, v_{m,\lambda}).
\mathbf{e}
Inequality \fer{ok} requires that $\lambda >0$, $m\in \mathcal{I}$ be such that
\[
1-\frac \lambda 2 >0, \quad {\rm if}\,\, m=0, \quad
1-\frac \lambda 2 \geq |m|, \quad {\rm if}\,\, m\neq 0.
\]
and allows us to obtain exponential convergence in relative entropy with an explicitly computable rate.
In more detail, this is the plan of the paper: we will start by recalling in Section \ref{sol} an existence result for the initial-boundary value problem for the Fokker--Planck equation \fer{op-FP}, as follows from the analysis of Wright--Fisher type equations presented in \cite{EM10} for the adjoint equation \fer{ad-FP}. Then, the proof of the new logarithmic-Sobolev inequality for Beta functions and its consequences on the large-time behavior of the solution to equation \fer{op-FP} will be studied in Section \ref{LS}.
Last, in Sections \ref{dist} and \ref{concl} we will discuss the case $m=0$, $\lambda =1$ which leads to a uniform density at equilibrium, and we will address some concluding remarks.
\section{Existence and properties of solutions}\label{sol}
For given constants $\lambda >0$ and $m\in \mathcal{I}$, let us consider the initial-boundary value problem
\varpi\label{main}
\left\{
\begin{aligned}
&\partial_t v(t,y)= \frac \lambda 2\, \partial_y^2 \left((1-y^2) v(t,y)\right ) +\partial_y \left((y-m)v(t,y)\right ),\quad t>0,\quad y \in \mathcal{I}\\
&v(0,y)=v_0(y) \ge 0 \in L^1(\mathcal{I}),
\end{aligned}
\right .
\mathbf{e}
with boundary conditions
\varpi\label{bc-mom}
\lim_{y\to -1^+} (1-y^2)v(t,y)= \lim_{y\to 1^-} (1-y^2)v(t,y)= 0, \quad t>0
\mathbf{e}
and
\varpi\label{bc-mass}
\left\{
\begin{aligned}
&\lim_{y\to -1^+} (y-m)v(t,y) + \frac\lambda{2} \frac{\partial}{\partial y}\left((1-y^2) v(t,y)\right) = 0, \quad t>0\\
&\lim_{y\to 1^-} (y-m)v(t,y) + \frac\lambda{2} \frac{\partial}{\partial y}\left((1-y^2) v(t,y)\right) = 0,\quad t>0 .
\end{aligned}
\right .
\mathbf{e}
Conditions \eqref{bc-mom} and \eqref{bc-mass} are suggested by the nature of the problem, since they imply momentum and mass conservation of the (possible) solution to the Fokker--Planck equation.
While condition \fer{bc-mom} is automatically satisfied for a sufficiently
regular density $v$, condition \fer{bc-mass} requires an exact balance
between the so-called advective and diffusive fluxes on the boundaries
$y= \pm 1$. This condition is usually referred to as the \emph{no-flux} boundary
condition \cite{FPTT17}.
The linear Fokker--Planck equation in \fer{main} has a variable diffusion coefficient and the variable $y$ belongs to the bounded interval $\mathcal{I}$, and this requires boundary conditions to be imposed. An alternative formulation would be to consider the pure initial value problem on the whole real line, by introducing the diffusion coefficient $(1-y^2) \chi(\mathcal{I})$, where $\chi(X)$ denotes the characteristic function of the set $X\subseteq \mathbb R$. The initial value problem for Fokker--Planck equations with general non-smooth coefficients has been recently considered by Le Bris and Lions \cite{LL08}. However, diffusion coefficients such as $(1-y^2) \chi(\mathcal{I})$ are not included in their analysis, and the results in \cite{LL08} do not apply. For such a problem a general theory of existence, uniqueness and continuous dependence on the initial data is still lacking.
On the other hand, a quite general theory has been recently developed by Epstein and Mazzeo in \cite{EM10} for the equation \fer{ad-FP}. Their results give some insight also on our Fokker--Planck equation \fer{op-FP}, subject to no-flux boundary conditions as given in \fer{bc-mass}.
Equation \fer{ad-FP} is a Wright--Fisher type equation, of the form
\[
\partial_t u(t,x)= a(x) \partial_x^2 u(t,x) +b(x) \partial_x u(t,x),\quad t>0,\quad x \in (A,B)
\]
where $A$, $B \in \mathbb R$, $a\in C^\infty([A,B])$, $b \in C^\infty([A,B])$ with
\[
a(x)=(x-A)(B-x)\tilde a(x),\quad \tilde a \in C^\infty([A,B]), \quad \tilde a(x) >0 \text{\ for all \ } x\in [A,B],
\]
and
\[
b(A)\geq 0,\quad b(B)\leq 0.
\]
Since our results heavily depend on the precise analysis by Epstein and Mazzeo on the solutions of the Wright--Fisher--type equations, we collect in the next Theorem the results we need about these solutions. All the details can be extracted from \cite{EM10}. In the rest, we will use as usual the notation $\bar\mathcal{I} = [-1,1]$.
\begin{thm}[Epstein--Mazzeo \cite{EM10}] \label{EM}
For all constants $\lambda >0$ and $m\in \mathcal{I}$ let us consider the initial-boundary value problem \fer{main}
with no-flux boundary conditions, as given by \fer{bc-mass}.
Then, there exists a kernel $q_t(x,y):\{t>0, x\in \bar\mathcal{I}, y\in \mathcal{I}\} \rightarrow \mathbb R$ such that
\varpi\label{Qt}
Q_tv_0(y):= \int_{-1}^1 q_t(x,y) v_0(x) dx
\mathbf{e}
is a classical solution of the Cauchy problem.
The kernel $q_t(x,y)$ satisfies the properties
\begin{enumerate}[1)]
\item $q_t(x,y) \in C^\infty\left((0,\infty)\times \bar\mathcal{I}\times \mathcal{I}\right)$;
\item $q_t(x,y) >0$ on $(0,\infty)\times \bar\mathcal{I}\times \mathcal{I}$;
\item for $y\to -1^+$ we have $q_t (x,y) \sim (1+y)^{-1+\frac {1+m}\lambda } \varphi(t,x)$ for all $t>0$, $x\in \bar\mathcal{I}$ with $\varphi \in C^\infty$;
\item for $y\to 1^-$ we have $q_t (x,y) \sim (1-y)^{-1+\frac {1-m}\lambda } \tilde \varphi(t,x)$ for all $t>0$, $x\in \bar\mathcal{I}$ with $\tilde \varphi \in C^\infty$;
\item for all $t>0$ and all $x\in \bar\mathcal{I}$ we have
\[
\begin{aligned}
&\lim_{y\to -1^+} \left(\frac \lambda 2 \partial_y \left((1-y^2) q_t(x,y)\right ) +(y-m)q_t(x,y)\right )=0\\
&\lim_{y\to 1^-} \left(\frac \lambda 2 \partial_y \left((1-y^2) q_t(x,y)\right ) +(y-m)q_t(x,y)\right )=0.
\end{aligned}
\]
\end{enumerate}
As a consequence, the solution $v(t,y)= Q_tv_0(y)$ satisfies
\begin{enumerate}[1')]
\item $v(t,y)\in C^\infty\left((0,\infty)\times \mathcal{I}\right)$;
\item $v(t,y) >0$ on $(0,\infty)\times \mathcal{I}$;
\item for $y\to -1^+$ we have $v(t,y) \sim (1+y)^{-1+\frac {1+m}\lambda } \psi(t)$ for all $t>0$ with $\psi\in C^\infty$;
\item for $y\to 1^-$ we have $v(t,y) \sim (1-y)^{-1+\frac {1-m}\lambda } \tilde \psi(t)$ for all $t>0$ with $\tilde \psi \in C^\infty$;
\item for all $t>0$ we have (no flux boundary conditions)
\[
\begin{aligned}
&\lim_{y\to -1^+} \left(\frac \lambda 2 \partial_y \left((1-y^2) v(t,y)\right ) +(y-m)v(t,y)\right )=0\\
&\lim_{y\to 1^-} \left(\frac \lambda 2 \partial_y \left((1-y^2) v(t,y)\right ) +(y-m)v(t,y)\right )=0.
\end{aligned}
\]
\end{enumerate}
Moreover, $v\in C((0,\infty), L^1(\mathcal{I}))$ and
\[
\lim_{t\to 0^+} \|v(t) -v_0\|_{L^1} =0.
\]
\end{thm}
As a consequence of the no-flux boundary conditions (property \emph{5')}), conservation of mass follows. Hence, since $v_0$ is a probability density, the solution $v(t)= Q_tv_0$ remains a probability density for all $t>0$. Indeed
\[
\begin{aligned}
\frac d{dt} \int_{-1}^1 v(t,y) dy & = \int_{-1}^1 \partial_t v(t,y) dy = \int_{-1}^1 \partial_y \left(\frac \lambda 2 \partial_y \left((1-y^2) v(t,y)\right ) +(y-m)v(t,y)\right )dy\\
& = \left[\frac \lambda 2 \partial_y \left((1-y^2) v(t,y)\right ) +(y-m)v(t,y)\right ]_{-1}^1 =0.
\end{aligned}
\]
The steady states for equation \eqref{main} are given by the Beta densities \fer{beta}.
Some remarks are in order. First of all, by means of \emph{3')} and \emph{4')} of Theorem \ref{EM} we conclude that, for any given initial datum $v_0$ that is a probability density, the solution $v(t)=Q_tv_0$ has the same behavior at the boundary of $\mathcal{I}$ as the corresponding steady state $v_{m,\lambda}$.
Consequently, owing to the regularity of both functions, the probability density $v(t)$, solution of the initial value problem, is absolutely continuous with respect to the steady state $v_{m,\lambda}$ for all times $t >0$,
\varpi\label{ac}
\frac {v(t)}{ v_{m,\lambda}} \in C_b^\infty(\mathcal{I})
\mathbf{e}
and it can be continuously extended to $\bar\mathcal{I}$.
In addition, if the condition
\varpi\label{cond1}
1-\lambda >|m|
\mathbf{e}
is satisfied, both the steady state and the solution $v(t)$ vanish on the boundary of the domain.
\section{Weighted logarithmic-Sobolev inequalities and large time behavior.}\label{LS}
As briefly discussed in the Introduction, our main goal is concerned with the study of the large-time behavior of the solution to the Fokker--Planck equation \fer{op-FP}. This problem has been considered by Epstein and Mazzeo \cite{EM10}, who studied the large-time behavior of equation \fer{ad-FP}, and used this to prove exponential convergence in $L^1$ for large times of the solution $v(t)=Q_t v_0$ of the Cauchy problem \eqref{main} to the corresponding steady state $v_{m, \lambda}$ for the whole range of the allowed parameters $m\in \mathcal{I}$ and $\lambda >0$.
While their result, obtained by classical semigroup arguments, is very general, the rate of the exponential convergence was not explicitly computed. A stronger result was recently obtained in \cite{FPTT17}. This result has been shown to hold for a large class of Fokker--Planck equations with non-constant diffusion coefficients on bounded domains, by resorting to classical entropy-type inequalities.
Different Lyapunov functionals can be actually evaluated along the solution of the Fokker--Planck equation \fer{op-FP} and, in presence of some regularity of the solution itself, can be proven to be monotone decreasing in time.
Among them, the relative Shannon entropy defined in \fer{relH}, the Hellinger distance, the reverse relative Shannon entropy, and the weighted $L^2$-distance.
Thanks to Theorem \ref{EM}, we know that the solution of the o\-pi\-nion for\-mation equation \eqref{main} fulfills the conditions which allow the application of the formal results contained in \cite{FPTT17}.
In particular, the following result about exponential convergence to equilibrium follows.
\begin{thm}[\cite{FPTT17}]\label{l2}
Let $\lambda >0$ and $m\in \mathcal{I}$. Let $v_0$ be a probability density satisfying
\varpi\label{vic}
\|v_0-v_{m,\lambda} \|_*^2 = \int_\mathcal{I}\frac {\left(v_0(y)-v_{m,\lambda}(y)\right )^2}{v_{m,\lambda}(y)} dy <\infty
\mathbf{e}
where $v_{m,\lambda}$ is the stationary solution \eqref{beta} of the Fokker--Planck equation \eqref{main}. Then, the solution $v(t,y)= Q_tv_0(y)$ of \eqref{main} defined in \eqref{Qt} converges exponentially in time towards the steady state, and the following holds true
\varpi\label{L22}
\|v(t)-v_{m,\lambda} \|_*^2 \leq e^{-2t} \|v_0-v_{m,\lambda} \|_*^2 , \quad t >0.
\mathbf{e}
\end{thm}
Inequality \fer{L22} implies exponential convergence in $L^1$. Indeed, by the Cauchy--Schwarz inequality, for any pair $f$, $g$ of probability densities on $\mathcal{I}$ it holds
\[
\begin{aligned}
\int_\mathcal{I} |f(y)-g(y)|\, dy &= \int_\mathcal{I} \frac{|f(y)-g(y)| }{\sqrt{v_{m,\lambda}(y)} }\sqrt{v_{m,\lambda}(y)}\, dy\\
&\leq
\left( \int_\mathcal{I} \frac {\left(f(y)-g(y)\right )^2 }{v_{m,\lambda}(y)} \, dy \right )^{\frac 12}\left( \int_\mathcal{I} {v_{m,\lambda}}(y) \, dy\right )^{\frac 12}\\
&\leq
\left( \int_\mathcal{I} \frac {\left(f(y)-g(y)\right )^2 }{v_{m,\lambda}(y)} \, dy \right )^{\frac 12}.
\end{aligned}
\]
Hence, \eqref{L22} implies
\varpi\label{conv-expL1-L2}
\left \| v(t)-v_{m,\lambda}\right \|_{L^1} \leq e^{-t} \left( \int_\mathcal{I} \frac {(v_0(y)-v_{m,\lambda}(y))^2}{v_{m,\lambda}(y)} dy\right )^{\frac 12}
\mathbf{e}
for the whole set of allowed parameters $m\in \mathcal{I}$ and $\lambda >0$.
It is important to point out that condition \fer{vic}, at least when $v_{m,\lambda}$ is equal to zero at the boundaries, is quite restrictive, and requires the initial data $v_0$ to be very close to the steady state. On the contrary, if $\left(v_{m,\lambda}\right )^{-1}$ is bounded (and this happens when
$\lim_{y \to -1^+} v_{m,\lambda}(y)= \lim_{y\to 1^-}v_{m,\lambda}(y)=+\infty$), condition \fer{vic} is satisfied any time $v_0$ is close to $v_{m,\lambda}$ in the $L^2$ distance.
In what follows, we will prove that exponential convergence in $L^1$ can be obtained also for initial values more general than the ones satisfying Theorem \ref{l2}. To this end, we will show that the Beta functions \fer{beta}, in a certain well-defined range of the parameters $\lambda$ and $m$, satisfy a weighted logarithmic-Sobolev inequality. The result allows us to apply to our Fokker--Planck equation for opinion formation the same strategy one can apply to the classical Fokker--Planck equation \cite{AMTU}.
Let us briefly recall the main steps of the (entropy) method for the classical one-dimensional Fokker--Planck equation. Given the initial value problem
\varpi\label{FP}
\left\{
\begin{aligned}
&\partial_t f(t,x)= \partial_x^2 f(t,x) + \partial_x(xf(t,x)),\quad x\in \mathbb R, t>0\\
&f(0,x)=f_0(x) \ge 0 \in L^1(\mathbb R)
\end{aligned}
\right .
\mathbf{e}
where the initial value is a probability density function, one studies the evolution of the relative entropy functional $H(f(t), M)$, given by
\varpi\label{entr-class}
H(f(t), M) = \int_\mathbb R f(t,x) \log \frac{f(t,x)}{M(x)} dx
\mathbf{e}
where $M$ is the Maxwellian (Gaussian)
\varpi\label{Maxw}
M(x)= \frac 1{\sqrt{2\pi}} e^{-\frac {|x|^2}{2}},
\mathbf{e}
which can be easily recognized as the unique steady state of equation \fer{FP}. It is well known (cf. for example \cite{McK66}) that, if $f(t)$ is a solution of the Cauchy problem \eqref{FP}, the relative entropy is monotone nonincreasing, and its time derivative is given by
\varpi\label{derivata}
\frac d{dt} H(f(t), M) = - I(f(t), M), \quad t>0
\mathbf{e}
where $I(f(t), M)$ is the relative Fisher information (the entropy production) defined as
\varpi\label{fisher-class}
I(f(t), M) = \int_{\mathbb R} \left(
\partial_x \log \frac {f(t,x)}{M(x)}\right )^2 f(t,x) dx.
\mathbf{e}
Relation \eqref{derivata} coupled with the logarithmic-Sobolev inequality (cf. for example \cite{T99})
\[
H(f(t), M) \leq \frac 12 I (f(t),M),\quad t>0
\]
leads to the exponential decay to zero of the relative entropy \cite{T99,T13} with explicit rate.
Last, resorting to the well-known Csisz\'ar--Kullback--Pinsker inequality \cite{C}
\varpi\label{CK}
\|f-g\|_{L^1}^2 \leq 2 H(f,g), \quad f,g\in L^1
\mathbf{e}
one obtains exponential convergence in $L^1$ to the Maxwellian density (always with sub-optimal explicit rate).
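As an aside, inequality \fer{CK} is easy to test numerically; a throwaway quadrature check with two ad-hoc densities on a bounded interval (our own choices) reads:
\begin{verbatim}
import numpy as np

y = np.linspace(-1.0, 1.0, 20001)
h = y[1] - y[0]
f = np.exp(-4.0 * (y - 0.3) ** 2); f /= f.sum() * h
g = np.exp(-2.0 * (y + 0.1) ** 2); g /= g.sum() * h
l1 = np.abs(f - g).sum() * h              # ||f - g||_1
H = (f * np.log(f / g)).sum() * h         # H(f, g)
print(l1 ** 2 <= 2.0 * H)                 # True
\end{verbatim}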
Going back to our problem, let us assume that the entropy of the initial value relative to the Beta steady state is bounded
\varpi\label{Hfinita}
H(v_0, v_{m,\lambda}) <\infty.
\mathbf{e}
Evaluating the time derivative of the relative entropy (cf. the computations in \cite{FPTT17}), one obtains for the solution to the Fokker--Planck equation \eqref{main} a relation analogous to \eqref{derivata}, which now reads
\varpi\label{derivata-peso}
\frac d{dt} H(v(t), v_{m,\lambda}) = - \tilde I(v(t), v_{m,\lambda}), \quad t>0.
\mathbf{e}
In \fer{derivata-peso} $\tilde I$ defines the weighted Fisher information
\varpi\label{w-fisher}
\tilde I(v(t), v_{m,\lambda}) =
\int_\mathcal{I} \frac \lambda 2 (1-y^2)
\left(\partial_y \log \frac {v(t,y)}{ v_{m,\lambda}(y)}\right )^2 v(t,y) dy.
\mathbf{e}
As one can easily verify, the weight $\frac \lambda 2 (1-y^2)$ is due to the variable diffusion coefficient in equation \eqref{main}.
It is clear that, if one can prove that, for some universal constant $C >0$ the relative entropy is bounded by
\[
H(v, v_{m,\lambda}) \leq C \tilde I(v, v_{m,\lambda}),
\]
one obtains, as in the classical case, exponential convergence to equilibrium of the relative entropy of the solution at the explicit rate $1/C$.
We prove indeed that the following holds.
\begin{thm}\label{LS-theo}
Let $\lambda >0$, $m\in \mathcal{I}$ be such that
\varpi\label{cond}
\begin{aligned}
& 1-\frac \lambda 2 >0, \quad m=0\\
& 1-\frac \lambda 2 \geq |m|, \quad m\neq 0.
\end{aligned}
\mathbf{e}
and let $v_{m,\lambda}$ be the Beta function on $\mathcal{I}$ defined by \fer{beta}. Then, there exists an explicit constant $K_{m,\lambda} >0$ such that, for any
probability density $\varphi \in L^1(\mathcal{I})$ absolutely continuous with respect to $v_{m,\lambda}$ it holds
\varpi\label{LS-peso}
H(\varphi, v_{m,\lambda}) \leq K_{m,\lambda} \tilde I(\varphi, v_{m,\lambda}).
\mathbf{e}
The constant $K_{m,\lambda} >0$ is explicitly computable and equals
\varpi\label{costante}
K_{m,\lambda} = \left(1-\frac \lambda 2 + \sqrt{\left(1-\frac \lambda 2\right )^2-m^2}\right)^{-1}.
\mathbf{e}
\end{thm}
\begin{rem}\label{remark}
It is worth underlining that conditions \eqref{cond} essentially amount to requiring that the corresponding Beta-type function $v_{m,\lambda}$ belongs to $L^2(\mathcal{I})$.
\end{rem}
A direct consequence of Theorem \ref{LS-theo} is the following
\begin{thm}
Let the parameters $\lambda >0$, $m\in \mathcal{I}$ satisfy the conditions \fer{cond} of Theorem \ref{LS-theo}, and let $v(t)=Q_tv_0$ be the solution to the initial-boundary value problem \fer{main} with no-flux boundary conditions, and initial data
$v_0 \in L^1(\mathcal{I})$ a probability density such that the relative entropy $H(v_0, v_{m,\lambda})$ is finite. Then, the relative entropy decays exponentially to zero at an explicit rate, and
\varpi\label{conv}
\|v(t) -v_{m,\lambda}\|_{L^1} \leq\sqrt 2 e^{- \frac 1 {2K_{m,\lambda}} t} \sqrt {H(v_0, v_{m,\lambda})}.
\mathbf{e}
In \fer{conv} $K_{m,\lambda}>0$ is given by \fer{costante}.
\end{thm}
\begin{proof}
We already stressed in \eqref{ac} that, starting from the initial condition $v_0$, the result by Epstein and Mazzeo implies that the solution $v(t)=Q_t v_0$ defined in \eqref{Qt} is absolutely continuous with respect to $v_{m,\lambda}$ for all $t>0$. Therefore, we can apply \eqref{derivata-peso} and then the weighted logarithmic-Sobolev inequality \eqref{LS-peso} with $\varphi(y)= v(t,y)$ for all $t>0$ to get
\[
\frac d{dt} H(v(t), v_{m,\lambda}) \leq - \frac 1{K_{m,\lambda} }H(v(t), v_{m,\lambda}), \quad t>0
\]
and this gives
\[
H(v(t), v_{m,\lambda}) \leq e^{-\frac 1{K_{m,\lambda} } t} H(v_0, v_{m,\lambda}).
\]
Then by the Csisz\'ar--Kullback--Pinsker inequality \eqref{CK} we obtain
\varpi\label{conv-expL1-logsob}
\|v(t) -v_{m,\lambda}\|_{L^1} \leq \sqrt 2 e^{- \frac 1{2 K_{m,\lambda} } t} \sqrt {H(v_0, v_{m,\lambda})}, \quad t>0.
\mathbf{e}
\end{proof}
Let us come back to the proof of Theorem \ref{LS-theo}.
The starting point is the well-known Bakry--Emery result on logarithmic-Sobolev inequalities.
\begin{thm}[Bakry--Emery \cite{BE}]
Let $M$ be a smooth, complete manifold and let $d\nu =e^{-\Psi}dx$ be a probability measure on $M$, such that $\Psi \in C^2(M)$ and $D^2\Psi + Ric \geq \rho I_n$, $\rho>0$.
Then, for every probability measure $\mu$ absolutely continuous with respect to $\nu$, we have
\varpi\label{BE}
H(\mu,\nu) \leq \frac 1{2\rho} I(\mu,\nu)
\mathbf{e}
where
\[
H(\mu,\nu)=\int_M \log \frac {d \mu}{d \nu} d\mu
\]
and
\[
I(\mu, \nu) = \int_M \left|\nabla \log \frac{d \mu}{d \nu} \right |^2 d \mu .
\]
\end{thm}
If $M=[a,b]$ is an interval of the real line, $d \nu = g dx$ and $ d\mu= f dx$, with $f$ and $g$ probability densities, the assumptions in Bakry--Emery criterion read as follows
\varpi
\begin{aligned}
&g(x)=e^{-\Psi(x)},\\
&\Psi \in C^2([a,b])\\
&\min_{[a,b]}\Psi''(x)\geq \rho >0.
\end{aligned}
\mathbf{e}
Then, for any $f$ probability density on $[a,b]$ absolutely continuous with respect to $g$, inequality \eqref{BE} becomes
\varpi\label{BE-int}
\int_a^b f(x) \log \frac {f(x)}{g(x)} dx \leq \frac 1{2\rho} \int_a^b \left(\frac d{dx} \log \frac{f(x)}{g(x)} \right )^2f(x) dx.
\mathbf{e}
Of course this is a non-weighted logarithmic-Sobolev result. We are now going to identify the measures that will play the roles of $\mu$ and $\nu$. If we take $M=\mathcal{I}$ and $\nu= v_{m,\lambda}\, dx$ then two problems appear.
The first one is that the open interval $\mathcal{I}$ is not a complete manifold and the other one is that even if we prove that $v_{m,\lambda}(y) = e^{- \Psi (y)}$ with $\Psi$ satisfying Bakry--Emery Theorem, then for any $\varphi$ probability density absolutely continuous with respect to
$v_{m,\lambda}$ we would get the logarithmic-Sobolev inequality
\[
\int_{-1}^1 \varphi(x) \log \frac {\varphi(x)}{v_{m,\lambda}(x)} dx \leq \frac 1{2\rho} \int_{-1}^1 \left( \frac d{dx} \log \frac {\varphi(x)}{v_{m,\lambda}(x)}\right )^2 \varphi(x) dx.
\]
This is not enough to obtain \eqref{LS-peso} since $\frac \lambda 2 (1-y^2) \leq 1$ for $\lambda <2$ (which is implied by conditions \eqref{cond}).
It turns out that actually $v_{m,\lambda}$ satisfies $v_{m,\lambda}(y) = e^{- \Psi (y)}$ with $\Psi$ fulfilling Bakry--Emery conditions. Since we are going to prove a stronger inequality in a different way, we leave the details to the interested reader.
\medskip
\noindent{\bf Proof of Theorem \ref{LS-theo}.}
The main idea is to resort to a change of variable which transforms the weighted logarithmic-Sobolev inequality \eqref{LS-peso} we are looking for into a usual logarithmic-Sobolev inequality for a different probability density which satisfies the assumptions of the Bakry--Emery criterion.
Given the partial differential equation
\[
\partial_t v(t,y)= \frac \lambda 2 \partial_y^2 \left((1-y^2) v(t,y)\right ) +\partial_y \left((y-m)v(t,y)\right ),\quad t>0,\quad y \in \mathcal{I}
\]
with steady state $v_{m,\lambda}$, its adjoint equation reads
\varpi\label{main-adj}
\partial_t u(t,x)= \frac \lambda 2 (1-x^2) \partial_x^2 u(t,x) - (x-m)\partial_x u(t,x), \quad t>0,\quad x \in \mathcal{I}.
\mathbf{e}
If we now set in \fer{main-adj}
\[
f(t,s)=u(t,x)
\]
where
\[
\frac {ds}{dx} = \frac 1{\sqrt {1-x^2}}, \quad x\in \mathcal{I},
\]
equation \eqref{main-adj} is transformed into a Fokker--Planck equation with constant diffusion, given by
\varpi\label{FP-adj-free}
\partial_t f(t,s)= \frac \lambda 2 \partial_s^2 f(t,s) - \frac {\left(1-\frac \lambda 2\right ) \sin s -m}{\cos s}\, \partial_s f(t,s), \quad t>0, s\in \left(-\frac \pi 2, \frac \pi 2\right ).
\mathbf{e}
The adjoint equation of \eqref{FP-adj-free} is in turn
\varpi\label{FP-free}
\partial_t g(t,z)= \frac \lambda 2 \partial_z^2 g(t,z) + \partial_z \left( \frac {\left(1-\frac \lambda 2\right ) \sin z -m}{\cos z}\, g(t,z)\right ), \quad t>0, z\in \left(-\frac \pi 2, \frac \pi 2\right ).
\mathbf{e}
We denote
\varpi\label{w'}
W_{m,\lambda}'(z):= \frac {\left(1-\frac \lambda 2\right ) \sin z -m}{\cos z}
\mathbf{e}
and
\[
W_{m,\lambda}(z)= \int _{0}^z W_{m,\lambda}'(\sigma) d \sigma.
\]
The steady states of Equation \eqref{FP-free} are
\varpi\label{exp}
g_{m,\lambda}(z)= C_{m,\lambda} e^{-\frac 2 \lambda W_{m,\lambda}(z)} = e^{-\left(\frac 2 \lambda W_{m,\lambda}(z) -\log C_{m,\lambda} \right )}
\mathbf{e}
for $C_{m,\lambda} >0$ as in \eqref{beta}
and explicitly
\varpi\label{beta-2}
g_{m,\lambda}(z)= C_{m,\lambda} \frac 1{(\cos z)^{ 1-\frac 2\lambda}} \frac {\left(1+\tan \frac z2\right )^{\frac {2m}{\lambda}}}{\left(1-\tan \frac z2\right )^{\frac {2m}{\lambda}}}.
\mathbf{e}
One can check that
\[
\begin{aligned}
& g_{m,\lambda}(z) \sim R_{m,\lambda}\left( \frac \pi 2-z\right )^{\frac 2\lambda -1-\frac {2m}\lambda},\quad z\to \frac \pi 2^-\\
& g_{m,\lambda}(z) \sim \tilde R_{m,\lambda}\left( \frac \pi 2+z\right )^{\frac 2\lambda -1+\frac {2m}\lambda},\quad z\to -\frac \pi 2^+
\end{aligned}
\]
with $R_{m,\lambda}$, $\tilde R_{m,\lambda}$ positive constants.
Moreover, we have
\varpi\label{stati-staz}
\frac {g_{m,\lambda}(\arcsin y)}{\sqrt{1-y^2}} = v_{m,\lambda}(y),\quad y\in \mathcal{I}
\mathbf{e}
with $v_{m,\lambda}$ as in \eqref{beta} or, equivalently,
\[
g_{m,\lambda} (z)= v_{m,\lambda}(\sin z) \cos z, \quad z\in \left( -\frac \pi 2, \frac \pi 2\right ).
\]
It is immediate to show that $g_{m,\lambda}$ satisfies the assumptions of Bakry--Emery criterion on $ \left(-\frac \pi 2, \frac \pi 2\right )$. Since the latter is an open interval (and so it is not a complete manifold), we will overcome this difficulty by a suitable approximation argument.
Resorting to \eqref{exp}, we need to evaluate $\frac 2\lambda W_{m,\lambda}''(z)$. We obtain
\[
\frac 2\lambda W_{m,\lambda}''(z)= \frac 2\lambda \frac d{dz} W_{m,\lambda}'(z) =\frac 2\lambda \frac{\left(1-\frac \lambda 2\right )- m\sin z}{\cos^2 z}, \quad z\in \left( -\frac \pi 2, \frac \pi 2\right ).
\]
Therefore, provided $1-\frac \lambda 2 \geq |m|$,
\[
\inf_{ \left( -\frac \pi 2, \frac \pi 2\right )} W_{m,\lambda}''(z) \ge 0
\]
and the function $W_{m,\lambda}(z)$ is convex on $ \left( -\frac \pi 2, \frac \pi 2\right )$.
If $m=0$,
\varpi\label{rho_0}
\min_{\left( -\frac \pi 2, \frac \pi 2\right )} W_{0,\lambda}''(z)= W_{0,\lambda}''(0)= 1-\frac\lambda 2:= \rho_{0,\lambda}.
\mathbf{e}
Consequently, in order to apply Bakry--Emery criterion, we have to assume $1-\frac \lambda 2 >0$.
Let us now set $m \not=0$.
Since
\[
\frac d {dz} W_{m,\lambda}''(z)= \left(-\frac 1{\cos^3 z}\right )\left(m\sin^2 z +(\lambda-2)\sin z+m\right ),
\]
for any given $m\in \mathcal{I}$, $m\neq 0$ and $\lambda$ such that $1-\frac \lambda 2 \geq |m|$, there exists $\bar z_{m,\lambda} \in \left(-\frac \pi 2,\frac \pi 2\right )$ such that
\varpi\label{rho_mlambda}
\min_{\left( -\frac \pi 2, \frac \pi 2\right )} W_{m,\lambda}''(z)= W_{m,\lambda}''(\bar z_{m,\lambda})=
\frac 12 \left(1-\frac \lambda 2 + \sqrt{\left(1-\frac \lambda 2\right )^2-m^2}\right) := \rho_{m,\lambda}>0.
\mathbf{e}
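Before proceeding, we note that \eqref{rho_mlambda} admits a quick numerical sanity check (the test values of $m$ and $\lambda$ below are arbitrary admissible choices):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def check_rho(m, lam):
    # Minimise W''(z) = ((1 - lam/2) - m sin z)/cos^2 z numerically
    # and compare with (1/2)(1 - lam/2 + sqrt((1 - lam/2)^2 - m^2)).
    a = 1.0 - lam / 2.0
    W2 = lambda z: (a - m * np.sin(z)) / np.cos(z) ** 2
    res = minimize_scalar(W2, method='bounded',
                          bounds=(-np.pi/2 + 1e-6, np.pi/2 - 1e-6))
    return res.fun, 0.5 * (a + np.sqrt(a ** 2 - m ** 2))

print(check_rho(0.3, 0.8))   # the two values coincide
\end{verbatim}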
If we could apply Bakry--Emery criterion directly on $ \left(-\frac \pi 2,\frac \pi 2\right )$ we would obtain, for all $f$ probability densities on $ \left(-\frac \pi 2,\frac \pi 2\right )$ absolutely continuous with respect to $g_{m,\lambda}$, the logarithmic Sobolev inequality
\[
\int_{-\frac \pi 2}^{\frac \pi 2} f(z) \log \frac {f(z)}{g_{m,\lambda}(z)} dz \leq \frac \lambda{4 \rho_{m,\lambda}} \int_{-\frac \pi2}^{\frac \pi 2} \left(\frac d{dz} \log \frac{f(z)}{g_{m,\lambda}(z)} \right )^2f(z) dz,
\]
where the explicit constants $\rho_{m,\lambda}$ are defined in \eqref{rho_0} and \eqref{rho_mlambda}.
Since $\left(-\frac \pi 2,\frac \pi 2\right )$ is not a complete manifold we perform an approximation argument.
Let us fix $m\in \mathcal{I}$ and $\lambda >0$ satisfying \eqref{cond} and let $f$ be a probability density on $\left(-\frac \pi 2,\frac \pi 2\right )$ absolutely continuous with respect to $g_{m,\lambda}$.
For $\epsilon >0$ let us define
\begin{align*}
&
f_\epsilon=\frac 1{A_\epsilon} f\chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}, \text{ with } A_\epsilon = \int_{-\frac \pi 2 +\epsilon} ^{\frac \pi 2 -\epsilon} f(z) dz\\
&
g_\epsilon = \frac 1{B_\epsilon}g_{m,\lambda}\chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}, \text{ with } B_\epsilon = \int_{-\frac \pi 2 +\epsilon} ^{\frac \pi 2 -\epsilon} g_{m,\lambda}(z) dz.
\end{align*}
Of course $f_\epsilon$ and $g_\epsilon$ are probability densities and $A_\epsilon \to 1$, $B_\epsilon \to 1$ for $\epsilon \to 0$. Moreover by \eqref{exp}
\[
g_\epsilon (z)= e^{-\left(\frac 2 \lambda W_{m,\lambda}(z) -\log C_{m,\lambda} + \log B_\epsilon \right )} \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z)
\]
and $f_\epsilon$ is absolutely continuous with respect to $g_\epsilon$ on $\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]$.
For all $\epsilon >0$ we have
\[
\frac {d^2}{dz^2} \left(\frac 2 \lambda W_{m,\lambda}(z) -\log C_{m,\lambda} + \log B_\epsilon\right ) = \frac 2 \lambda W''_{m,\lambda}(z) \geq \frac 2\lambda \rho_{m,\lambda}.
\]
Since $g_\epsilon$ satisfies the assumptions of Bakry--Emery criterion on $\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]$, we get for all $\epsilon >0$
\varpi\label{BE-approx}
\int_{-\frac \pi 2+\epsilon}^{\frac \pi 2-\epsilon} f_\epsilon(z) \log \frac {f_\epsilon(z)}{g_\epsilon(z)} dz \leq \frac \lambda{4 \rho_{m,\lambda}} \int_{-\frac \pi2+\epsilon}^{\frac \pi 2-\epsilon} \left(\frac d{dz} \log \frac{f_\epsilon(z)}{g_\epsilon(z)} \right )^2f_\epsilon(z) dz.
\mathbf{e}
Now assume that
\varpi\label{Fisher-bounded}
\int_{-\frac \pi2}^{\frac \pi 2} \left(\frac d{dz} \log \frac{f(z)}{g_{m,\lambda}(z)} \right )^2f(z) dz <\infty.
\mathbf{e}
As far as the right hand side of \eqref{BE-approx} is concerned, by Lebesgue's dominated convergence theorem we get for $\epsilon \to 0$
\[
\begin{aligned}
&\int_{-\frac \pi2+\epsilon}^{\frac \pi 2-\epsilon} \left(\frac d{dz} \log \frac{f_\epsilon(z)}{g_\epsilon(z)} \right )^2f_\epsilon(z) dz\\
& = \frac 1 {A_\epsilon} \int_{-\frac \pi 2}^{\frac \pi 2} \left(\frac d{dz} \log \left(\frac {f(z)}{A_\epsilon} \frac {B_\epsilon}{g_{m,\lambda}(z)} \right ) \right )^2 f(z) \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}dz\\
& = \frac 1 {A_\epsilon} \int_{-\frac \pi 2}^{\frac \pi 2} \left(\frac d{dz} \left(\log \frac{f(z)}{g_{m,\lambda}(z)} + \log \frac {B_\epsilon}{A_\epsilon}\right ) \right )^2 f(z) \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z) dz\\
& = \frac 1 {A_\epsilon} \int_{-\frac \pi 2}^{\frac \pi 2} \left(\frac d{dz} \log \frac{f(z)}{g_{m,\lambda}(z)} \right )^2 f(z) \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z) dz \to
\int_{-\frac \pi2}^{\frac \pi 2} \left(\frac d{dz} \log \frac{f(z)}{g_{m,\lambda}(z)}\right )^2f(z) dz.
\end{aligned}
\]
Letting $\epsilon\to 0$, for the left hand side we obtain
\[
\begin{aligned}
&\int_{-\frac \pi 2+\epsilon}^{\frac \pi 2-\epsilon} f_\epsilon(z) \log \frac {f_\epsilon(z)}{g_\epsilon(z)} dz = \int_{-\frac \pi 2}^{\frac \pi 2} \frac {f(z)}{A_\epsilon} \log \left(\frac {f(z)}{A_\epsilon} \frac {B_\epsilon}{g_{m,\lambda}(z)} \right ) \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z)dz\\
& = \frac 1{A_\epsilon} \int_{-\frac \pi 2}^{\frac \pi 2}f(z) \log \frac {f(z)}{g_{m,\lambda}(z)} \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z) dz +
\frac 1{A_\epsilon} \log \frac {B_\epsilon} {A_\epsilon} \int_{-\frac \pi 2}^{\frac \pi 2} f(z) \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z) dz \\
&\to \int_{-\frac \pi 2}^{\frac \pi 2} f(z) \log \frac {f(z)}{g_{m,\lambda}(z)} dz.
\end{aligned}
\]
Indeed, by Lebesgue's dominated convergence theorem
\[
\frac 1{A_\epsilon} \log \frac {B_\epsilon} {A_\epsilon} \int_{-\frac \pi 2}^{\frac \pi 2} f(z) \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z) dz \to 0, \quad \epsilon \to 0,
\]
and thanks to the identity
\[
\begin{aligned}
& \int_{-\frac \pi 2}^{\frac \pi 2}f(z) \log \frac {f(z)}{g_{m,\lambda}(z)} \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z) dz\\
&= \int_{-\frac \pi 2}^{\frac \pi 2}\left( \frac{f(z)}{ g_{m,\lambda}(z)} \log \frac {f(z)}{g_{m,\lambda}(z)} -\frac{f(z)}{ g_{m,\lambda}(z)} +1\right ) g_{m,\lambda}(z) \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z) dz\\
&\quad\quad + \int _{-\frac \pi 2}^{\frac \pi 2}\left(f(z)- g_{m,\lambda}(z)\right ) \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z) dz,
\end{aligned}
\]
by Lebesgue's dominated and monotone convergence theorems we conclude
\[
\frac 1{A_\epsilon} \int_{-\frac \pi 2}^{\frac \pi 2}f(z) \log \frac {f(z)}{g_{m,\lambda}(z)} \chi_{\left[-\frac \pi 2+\epsilon,\frac \pi 2-\epsilon \right]}(z) dz \to \int_{-\frac \pi 2}^{\frac \pi 2}f(z) \log \frac {f(z)}{g_{m,\lambda}(z)} dz, \quad \epsilon \to 0.
\]
Finally, for all $f$ probability densities on $ \left(-\frac \pi 2,\frac \pi 2\right )$ absolutely continuous with respect to $g_{m,\lambda}$ it holds
\varpi\label{logg}
\int_{-\frac \pi 2}^{\frac \pi 2} f(z) \log \frac {f(z)}{g_{m,\lambda}(z)} dz \leq \frac \lambda {4 \rho_{m,\lambda}} \int_{-\frac \pi2}^{\frac \pi 2} \left(\frac d{dz} \log \frac{f(z)}{g_{m,\lambda}(z)} \right )^2f(z) dz,
\mathbf{e}
where $\rho_{m,\lambda}$ are defined as in \eqref{rho_0} and \eqref{rho_mlambda}.
Going back to the original functions, by means of the change of variables
\[
z=\arcsin y
\]
the logarithmic Sobolev inequality \fer{logg} transforms into a weighted logarithmic-Sobolev inequality. In fact, for any $f$ probability density on $\left(-\frac \pi 2,\frac \pi 2\right )$ absolutely continuous with respect to $g_{m,\lambda}$
\begin{multline}
\int_{-1}^{1} f(\arcsin y) \log \frac {f(\arcsin y)}{g_{m,\lambda}(\arcsin y)} \frac 1{\sqrt{1-y^2}} dy \leq \\
\frac 1{2\rho_{m,\lambda}} \int_{-1}^{1} \frac \lambda 2 \left(\frac d{dy} \log \left(\frac{f(\arcsin y)}{g_{m,\lambda}(\arcsin y)}\right ) \sqrt{1-y^2} \right )^2f(\arcsin y) \frac 1{\sqrt{1-y^2}} dy.
\end{multline}
Now, by \eqref{stati-staz} we get
\begin{multline}
\int_{-1}^{1} \frac {f(\arcsin y) } {\sqrt{1-y^2}} \log \frac { \frac {f(\arcsin y) } {\sqrt{1-y^2}} }{v_{m,\lambda}(y)} dy \leq \\
\frac 1{2\rho_{m,\lambda}} \int_{-1}^{1} \frac \lambda 2(1-y^2) \left(\frac d{dy} \log \left(\frac{ \frac {f(\arcsin y) } {\sqrt{1-y^2}} }{v_{m,\lambda}(y)}\right ) \right )^2 \frac {f(\arcsin y) }{\sqrt{1-y^2}} dy.
\end{multline}
In order to complete the proof of inequality \eqref{LS-peso} it is enough to observe that $\varphi \in L^1(\mathcal{I})$ is a probability density absolutely continuous with respect to $v_{m,\lambda}$ if and only if $\varphi(y)=\frac {f(\arcsin y) } {\sqrt{1-y^2}}$
with $f \in L^1 \left(\left(-\frac \pi 2,\frac \pi 2\right )\right )$ a probability density absolutely continuous with respect to $g_{m,\lambda}$.
Inequality \eqref{LS-peso} is then proven with
\varpi\label{Kmlambda}
K_{m,\lambda} = \frac 1{2\rho_{m,\lambda}}.
\mathbf{e}
\medskip
\hfill$\square$
\begin{rem} It is worth comparing the results of exponential convergence in $L^1$ contained in \eqref{conv-expL1-L2} and \eqref{conv-expL1-logsob}.
By \eqref{Kmlambda}, for $m$ and $\lambda$ satisfying conditions \eqref{cond} we get \eqref{conv-expL1-logsob}:
\[
\|v(t) -v_{m,\lambda}\|_{L^1} \leq \sqrt 2 e^{- \rho_{m,\lambda} t} \sqrt {H(v_0, v_{m,\lambda})}, \quad t>0,
\]
with $\rho_{m,\lambda}$ as in \eqref{rho_mlambda}.
On the other hand, for all $m\in \mathcal{I}$ and $\lambda >0$ we get \eqref{conv-expL1-L2}:
\[
\left \| v(t)-v_{m,\lambda}\right \|_{L^1} \leq e^{-t} \left( \int_{-1}^1 \frac {(v_0(y)-v_{m,\lambda}(y))^2}{v_{m,\lambda}(y)} \, dy\right )^{\frac 12},\quad t>0.
\]
Since $\rho_{m,\lambda}\leq 1$
for all $m$, $\lambda$ satisfying conditions \eqref{cond}, the rate of exponential convergence in the second estimate is sharper than the first one.
Let us compare now the assumptions
\[
H(v_0, v_{m,\lambda}) = \int_{-1}^1 v_0(y)\log \frac {v_0(y)}{v_{m,\lambda}(y)}\, d y< \infty
\]
and
\[
\int_{-1}^1 \frac {(v_0(y)-v_{m,\lambda}(y))^2}{v_{m,\lambda}(y)} dy <\infty
\]
for the values of the parameters which fulfill conditions \eqref{cond}.
Since
\[
x\log x \geq x-1 + \frac 12 (x-1)^2\chi_{\left\{x\leq 1\right\} }(x), \quad x>0
\]
we get
\[
\begin{aligned}
&\int_{-1}^1 v_{m,\lambda}(y) \frac {v_0(y)}{v_{m,\lambda}(y)}\log \frac {v_0(y)}{v_{m,\lambda}(y)}\, d y\\
& \geq \int_{-1}^1 v_{m,\lambda}(y)\left( \frac {v_0(y)}{v_{m,\lambda}(y)} -1\right )\, d y + \frac 12 \int_{-1}^1 v_{m,\lambda}(y)\left( \frac {v_0(y)}{v_{m,\lambda}(y)} -1\right )^2 \chi_{\left\{v_0(y) \leq v_{m,\lambda}(y) \right \}}(y) \, d y\\
&= \frac 12 \int_{-1}^1 \frac {(v_0(y)-v_{m,\lambda}(y))^2}{v_{m,\lambda}(y)} \chi_{\left\{v_0(y) \leq v_{m,\lambda}(y) \right \}}(y) \, d y.
\end{aligned}
\]
Here the first integral in the second line vanishes because both $v_0$ and $v_{m,\lambda}$ are probability densities. So for $v_0 \leq v_{m,\lambda}$ the convergence result contained in \eqref{conv-expL1-L2} is stronger than that in
\eqref{conv-expL1-logsob}.
Moreover,
\[
x\log x \leq x-1 + \frac 12 (x-1)^2, \quad x\geq 1
\]
and so for $v_0 \geq v_{m,\lambda}$ we get
\[
\frac 12 \int_{-1}^1 \frac {(v_0(y)-v_{m,\lambda}(y))^2}{v_{m,\lambda}(y)} \, d y \geq \int_{-1}^1 v_{m,\lambda}(y) \frac {v_0(y)}{v_{m,\lambda}(y)}\log \frac {v_0(y)}{v_{m,\lambda}(y)}\, d y.
\]
In this case, the convergence obtained by the new weighted logarithmic-Sobolev inequality could be the only one available.
Of course, in all the other cases the two conditions seem not to be comparable.
\end{rem}
\section{A distinguished case}\label{dist}
From Theorem \ref{LS-theo} one can extract some interesting consequences. The case $m =0$, $\lambda = 1$ corresponds to the uniform density
\[
v_{0,1}(x) = \frac 12, \quad x \in \mathcal{I}.
\]
Hence, considering that $K_{0,1} = 1$, for a given probability density $h$ on $\mathcal{I}$, inequality \fer{LS-peso} takes the form
\varpi\label{poi}
\int_\mathcal{I} h(x) \log h(x) \, dx + \log 2 \le \frac 12 \int_\mathcal{I} (1-x^2) \frac{(h'(x))^2}{h(x)}\, dx.
\mathbf{e}
A more suitable form is obtained by setting $h(x) = f^2(x)$ into \fer{poi}. One obtains the inequality
\varpi\label{LS-d}
\int_\mathcal{I} f^2(x) \log f^2(x) \, dx + \log 2 \le 2 \int_\mathcal{I} (1-x^2) (f'(x))^2\, dx,
\mathbf{e}
satisfied by all functions $f$ in $L^2(\mathcal{I})$ of $L^2$-norm equal to one. Inequality \fer{LS-d} is the analogue of the standard \emph{Euclidean logarithmic-Sobolev inequality} established in Gross \cite{Gro}, which in one dimension reads
\varpi\label{LSI}
\int_\mathbb R f^2(x) \log f^2(x) \, dx + \frac 12 \log(2\pi e^2) \le 2 \int_\mathbb R (f'(x))^2\, dx,
\mathbf{e}
and it is valid for all functions $f$ such that
\[
\int_\mathbb R f(x)^2\, dx = \int_\mathbb R x^2f^2(x) \, dx = 1.
\]
Note that the main difference between the logarithmic-Sobolev inequality \fer{LSI} and the new inequality \fer{LS-d}, apart from the different interval of integration, is the presence of the weight on the right-hand side.
Clearly, the constraint $\|f\|_2 = 1$ can be easily removed to give the (general) inequality
\varpi\label{poi1}
\int_\mathcal{I} w^2(x) \log w^2(x) \, dx - \|w\|_2^2 \log \frac{\|w\|_2^2}2 \le 2 \int_\mathcal{I} (1-x^2) (w'(x))^2\, dx,
\mathbf{e}
which is valid for any function $w \in L^2(\mathcal{I})$.
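A direct quadrature check of \fer{poi1} for a sample $w$ (the test function below is an arbitrary choice of ours) can be done in a few lines:
\begin{verbatim}
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)[1:-1]
h = x[1] - x[0]
w = 1.0 + 0.5 * np.sin(3.0 * x)          # any smooth test function
wp = np.gradient(w, h)
n2 = (w ** 2).sum() * h                   # ||w||_2^2
lhs = (w**2 * np.log(w**2)).sum() * h - n2 * np.log(n2 / 2.0)
rhs = 2.0 * ((1.0 - x**2) * wp**2).sum() * h
print(lhs <= rhs)                         # True
\end{verbatim}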
\section{Numerical experiments}\label{nume}
In this short Section, we focus on some numerical experiments that illustrate the time evolution of the weighted logarithmic Sobolev inequality \fer{LS-peso} for various values of the parameter $\lambda$, with $m =0$. To this end, we make use of numerical schemes for the Fokker--Planck equation \fer{op-FP}, recently considered in \cite{PZ}, that preserve structural properties such as nonnegativity of the solution, entropy dissipation and the large-time behavior. These properties are essential for a correct description of the underlying physical problem.
The experiments have been done by choosing as initial density a bimodal normal distribution centered in $\pm 1/2$, normalized in the interval $(-1,1)$.
It is clearly shown in Figure \fer{fig:test1} that the bound provided by inequality \fer{LS-peso} gives a better approximation to the entropy decay towards equilibrium for small values of the parameter. In all cases, however, exponential decay in time follows.
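For readers wishing to reproduce the experiments qualitatively, we add a minimal finite-volume sketch; unlike the entropic scheme of \cite{PZ} it is a plain explicit discretisation, conserving mass by construction but with no guarantee of positivity or of exact steady-state preservation, so it is meant only as an illustration. Monitoring $H(v(t),v_{m,\lambda})$ along its output gives decay curves of the type shown in Figure \fer{fig:test1}.
\begin{verbatim}
import numpy as np

def fp_solve(v0, m=0.0, lam=0.5, T=5.0, n=400):
    # Flux form dv/dt = d/dy J with
    #   J = (lam/2)(1 - y^2) dv/dy + ((1 - lam) y - m) v,
    # and J = 0 at y = -1, 1 (no-flux), so mass is conserved.
    # v0 is a callable giving the (unnormalised) initial profile.
    h = 2.0 / n
    yc = -1.0 + h * (np.arange(n) + 0.5)    # cell centres
    yf = -1.0 + h * np.arange(n + 1)        # cell faces
    D = 0.5 * lam * (1.0 - yf ** 2)         # diffusion at faces
    B = (1.0 - lam) * yf - m                # drift at faces
    v = v0(yc)
    v /= v.sum() * h
    dt = 0.4 * h ** 2 / D.max()             # explicit stability bound
    t = 0.0
    while t < T:
        J = np.zeros(n + 1)
        J[1:-1] = (D[1:-1] * (v[1:] - v[:-1]) / h
                   + B[1:-1] * 0.5 * (v[1:] + v[:-1]))
        v = v + dt * (J[1:] - J[:-1]) / h
        t += dt
    return yc, v
\end{verbatim}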
\begin{figure}\centering
{\includegraphics[width=7cm]{lambda02.eps}}
\hspace{+0.35cm}
{\includegraphics[width=7cm]{lambda04.eps}}\\
\vspace{+0.45cm}
{\includegraphics[width=7cm]{lambda06.eps}}
\vspace{+0.35cm}
{\includegraphics[width=7cm]{lambda08.eps}}
\caption{Time evolution of the weighted logarithmic Sobolev inequality \fer{LS-peso} for the Fokker--Planck model as a function of the parameter $\lambda$. }\label{fig:test1}
\end{figure}
In Figure \fer{fig:test2} it is shown that the numerical method correctly reproduces the equilibrium Beta density \fer{beta} of the Fokker--Planck equation \fer{op-FP} for any value of the parameter $\lambda$.
\begin{figure}\centering
{\includegraphics[width=7cm]{dist_lambda02.eps}}
\hspace{+0.35cm}
{\includegraphics[width=7cm]{dist_lambda04.eps}}\\
\vspace{+0.45cm}
{\includegraphics[width=7cm]{dist_lambda06.eps}}
\vspace{+0.35cm}
{\includegraphics[width=7cm]{dist_lambda08.eps}}
\caption{Comparison between the analytic and numerical steady state solutions of the Fokker--Planck model for different values of the parameter $\lambda$. Top left $\lambda=0.2$, top right $\lambda=0.4$, bottom left $\lambda=0.6$, bottom right $\lambda=0.8$. }\label{fig:test2}
\end{figure}
\section{Conclusions}\label{concl}
In this paper, we investigated the large-time behavior of the solution of a Fokker--Planck type equation arising in the study of opinion formation. The same equation, in adjoint form, is well-known under the name of Wright--Fisher equation, and has been exhaustively studied, among others, in a recent paper by Epstein and Mazzeo \cite{EM10} from the point of view of semigroup theory. Our approach to the analysis of the large-time behavior of the solution is different, and relies on the classical study of the evolution of the relative Shannon entropy, which is of common use in the field of kinetic theory. The study of lower bounds for the relative entropy production leads to a new type of logarithmic-Sobolev inequality with weight, satisfied by the Beta-type densities, which allows us in various cases to conclude with exponential convergence to equilibrium at an explicit rate.
The case in which the Beta-type density reduces to a uniform density separates in a natural way from the others, and gives rise to the analogue of the Euclidean logarithmic-Sobolev inequality.
\vskip 2cm
\section*{Acknowledgement} This work has been written within the
activities of GNFM and GNAMPA groups of INdAM (National Institute of
High Mathematics).
The support of the Italian Ministry of Education, University and Research (MIUR) through the
``Dipartimenti di Eccellenza Program (2018--2022)'' - Dept. of Mathematics ``F. Casorati'', University of Pavia, is kindly acknowledged.
The authors also kindly acknowledge R. Mazzeo for fruitful explanations on the paper \cite{EM10}, and M. Zanella, who performed the numerical experiments of Section \ref{nume} by means of the entropic numerical scheme introduced in \cite{PZ}.
|
2,869,038,155,058 | arxiv | \section{Introduction}
Strange quark stars are a hypothetical type of exotic compact star, first proposed in Refs.~\cite{alcock,haensel1986,farhi}.
The structure of a star, including its mass $(M)$ and radius $(R)$,
is obtained by solving the hydrostatic equilibrium equations. In the
case of compact stars, due to high
density, General Relativity (GR) dominates and Newtonian
hydrostatic equilibrium should be replaced by its GR counterpart.
Assuming a static and spherically symmetric geometry and isotropic
matter, the Einstein field equations lead to the
Tolman--Oppenheimer--Volkoff (TOV) equation \cite{weinberg}.
The structure of compact stars has been investigated in
Refs.~\cite{Haensel,harko,Singh,maharaj} and
many others by solving TOV equation using a suitable equation of
state. Now in this paper, we deal with the strange quark matter
(SQM) which its EOS is well described by MIT Bag Model \cite{Haensel}.\\
Nuclear matter at very high densities can be anisotropic, which is a strong
motivation to study the effect of anisotropy on the structure of
relativistic stars \cite{harko,Ruderman}. Additionally, the presence of a solid core and a
strong magnetic field in neutron stars can be related to
anisotropy in the stellar matter \cite{harko,Bailin}.
Solving the anisotropic TOV equation requires a physically reasonable
assumption. For instance, in Ref.~\cite{harko} a specific density
profile $\rho(r)$ was chosen, and in Ref.~\cite{maharaj} a special
metric function $\Lambda(r)$ was utilized to solve it
analytically. In this paper, instead, we apply a numerical
method to solve the modified TOV equation by introducing a
perturbation term. Initially, we consider isotropic, charge-free
matter and obtain a solution for the radial pressure $p_r(r)$. Then, in the anisotropic
case, we use this $p_r(r)$ as the unperturbed solution in order to study $R$ and $M$ for this kind of star.
It should be noted that in this work we deal with non-rotating strange quark stars, although rotation is expected to allow a larger maximum mass (by about $40\%$) according to Ref.~\cite{Haensel}.\\
This paper is organized as follows: In \S2 we first solve the TOV
equation for an isotropic matter and then compare it with a
charged and anisotropic one. Discussion and concluding remarks are
presented in \S3.
\section{TOV Equation}
\label{modified} To describe the mass and radius of a self-gravitating
configuration in relativistic terms, we must consider
a suitable metric. Assuming a static and spherically symmetric
geometry, the line element in the interior of the star can be written as:
\begin{equation}
ds^{2}=-c^{2}e^{2\phi}dt^{2}+e^{2\Lambda}dr^{2}+r^{2}d\Omega^{2},
\label{metric}
\end{equation}
where $c$ is the speed of light, while $\phi$ and $\Lambda$ are spherically symmetric metric functions.\\
In order to obtain a General Relativistic hydrostatic equilibrium,
we have to solve the Einstein equations:
\begin{equation}
\label{ein}
G_{\mu\nu}=\frac{8\pi G}{c^{4}}T_{\mu\nu},
\end{equation}
for the metric given in Eq. (\ref{metric}). Also we must choose a
reasonable energy-momentum tensor which satisfies the following
conservation law:
\begin{equation}
\nabla_{\nu}T^{\mu\nu}=0.
\label{cons}
\end{equation}
In the next subsections we obtain the TOV equation in
two different cases.
\subsection[]{Uncharged Isotropic Matter}
First of all we consider an uncharged perfect fluid. The
energy-momentum tensor of a perfect fluid is given by \cite{camenzind}:
\begin{equation}
T^{\mu}_{\nu} =
\begin{pmatrix}
-\rho c^{2} & 0 & 0 & 0 \\
0 & p & 0 & 0 \\
0 & 0 & p & 0 \\
0 & 0 & 0 & p
\end{pmatrix},
\label{T1}
\end{equation}
where $p$ and $\rho$ are the pressure and mass density,
respectively, both spherically symmetric \cite{harko}.
We also define the gravitational mass as \cite{glender}:
\begin{equation}
m(r)=\int_{0}^{r} 4\pi x^{2}\rho(x) dx.
\label{mm}
\end{equation}
By using the above equation, the Einstein equations (\ref{ein})
and the conservation law (\ref{cons}) for the metric (\ref{metric})
yield the hydrostatic equilibrium equation:
\begin{equation}
\frac{dp}{dr}=-(\rho c^{2}+p)\frac{4\pi Gp
r^{3}+mGc^{2}}{rc^{2}(rc^{2}-2mG)}.
\label{TOV}
\end{equation}
Note that $m$, $p$, and $\rho$ are functions of the radial coordinate $r$. Equation (\ref{mm}) together with Eq. (\ref{TOV}) is called the TOV
equation, which must be supplemented by an
EOS in order to be solvable \cite{Haensel-book}. In this work we adopt the MIT Bag Model for the
EOS of SQM:
\begin{equation}
\frac{p}{c^{2}}=\alpha(\rho-\rho_{s}).
\label{EOS}
\end{equation}
According to Ref.~\cite{negi}, for the SQM2 model
$\rho_{s}= 3.056\times10^{17}kg$ $m^{-3}$ and $\alpha=0.324$.
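As a quick numerical illustration (our own evaluation of the formula above), at the central density $\rho_{c}=1\times10^{18}kg$ $m^{-3}$ adopted later in the figures, Eq. (\ref{EOS}) yields a central pressure
\begin{equation*}
p_{c}=\alpha(\rho_{c}-\rho_{s})c^{2}\simeq 2.0\times10^{34}N \; m^{-2}.
\end{equation*}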
Now we can solve the TOV equation numerically, using the physical boundary conditions \cite{camenzind}:
\begin{subequations}
\begin{center}
\begin{align}
p(0) &= p_{c}\label{bond1}\\
m(0) &= 0\label{bond2}\\
p(R) &= 0\label{bond3}\\
m(R) &= M\label{bond4}
\end{align}
\end{center}
\end{subequations}
In addition, in the stellar interior, the following conditions should be satisfied:
\begin{subequations}
\begin{center}
\begin{align}
p > 0 \label{cond1}\\
\frac{dp}{dr} < 0 \label{cond2}\\
\sqrt{\frac{dp}{d\rho}} \leq c\label{cond3}\\
\frac{dp}{d\rho}\geq 0\label{cond4}
\end{align}
\end{center}
\end{subequations}
Conditions (\ref{cond1}) and (\ref{cond2}) are trivially required to
preserve hydrostatic equilibrium, and (\ref{cond3}) expresses
causality inside the star, which does not permit
$\alpha > 1$ in Eq. (\ref{EOS}) \cite{Haensel}; indeed, Eq. (\ref{EOS}) gives a constant sound speed $\sqrt{dp/d\rho}=\sqrt{\alpha}\,c$. The EOS should also satisfy the microscopic stability
condition (\ref{cond4}), otherwise the star collapses spontaneously \cite{shapiro}. In the next subsection we will use the obtained pressure $p(r)$
as the unperturbed radial pressure for the anisotropic case.
\subsection[]{Charged Anisotropic Matter}
In this subsection, instead of a perfect fluid, we deal with anisotropic matter, meaning that the pressures in the radial and tangential directions are not necessarily the same. With this assumption, which is predicted at very high densities \cite{harko}, the energy-momentum tensor in the CGS unit system reads \cite{maharaj,newmalheiro}:
\begin{equation}
T^{\mu}_{\nu} =
\begin{pmatrix}
-(\rho c^{2}+\frac{E^{2}}{8\pi }) & 0 & 0 & 0 \\
0 & p_r-\frac{E^{2}}{8\pi } & 0 & 0 \\
0 & 0 & p_t+\frac{E^{2}}{8\pi } & 0 \\
0 & 0 & 0 & p_t+\frac{E^{2}}{8\pi }
\end{pmatrix},
\label{TA}
\end{equation}
where $p_{t}$ and $p_{r}$ are the tangential and radial pressures,
respectively. The gravitational mass, which accounts for the total
contribution of the energy density, takes the new form
\cite{malheiro}:
\begin{equation}
m(r)=\int_{0}^{r} 4\pi x^{2} \left( \rho(x)+\frac{E(x)^{2}}{8\pi
c^{2}} \right) dx,
\label{mmnew}
\end{equation}
where $E$ is the radial electric field, defined as:
\begin{equation}
E(r)=\frac{1}{r^{2}}\int_{0}^{r}4\pi x^{2} \rho_{ch}e^{\Lambda}
dx
\label{Electric}
\end{equation}
and $\rho_{ch}$ is the charge density. Defining the anisotropy factor $\Delta= p_{t}-p_{r}$
\cite{harko}, the Einstein equations (\ref{ein}) and the conservation
equation (\ref{cons}) lead to:
\begin{equation}
e^{-2\Lambda}=1-\frac{2mG}{rc^{2}},
\label{lambda}
\end{equation}
\begin{equation}
\frac{dp_{r}}{dr}= \left(\frac{2}{r}\Delta -\rho
c^{2}-p_{r}\right) \frac{4\pi
Gr^{3}(p_{r}-\frac{E^{2}}{8\pi})+mGc^{2}}{rc^{2}(rc^{2}-2mG)}+E\rho_{ch}e^{\Lambda}.
\label{TOVA}
\end{equation}
It should be noted that if $\Delta>0$, the magnitude of the radial pressure derivative in Eq. (\ref{TOVA}) decreases. This causes a gentler decline of the radial pressure, which results in higher maximum masses and correspondingly larger radii. Equations (\ref{mmnew}) and (\ref{TOVA}) represent the modified
form of the TOV equation. It is easy to see that setting $E=0$ and
$\Delta=0$ in Eq. (\ref{TOVA}) simply recovers the uncharged and isotropic cases. For anisotropic
matter, the MIT Bag Model reads \cite{harko}:
\begin{equation}
\frac{p_{r}}{c^{2}}=\alpha(\rho-\rho_{s}).
\label{EOS1}
\end{equation}
We should notice that, in contrast to $p_r$, the tangential pressure $p_{t}$ does not necessarily vanish on
the surface of the star \cite{harko,maharaj}. Solving the modified TOV equation requires some extra assumptions.
Assuming that anisotropy has only a slight effect on $p_r$, we add a
perturbation to the isotropic pressure profile and write $p_r$ as:
\begin{equation}
p_{r}(r)_{Anisotropic}=p(r)_{Isotropic}+\delta p(r).
\label{assume}
\end{equation}
The perturbed radial pressure has to satisfy the following boundary
conditions:
\begin{subequations}
\begin{align}
p_{r}(0)=p_{c},\label{s1}\\
p_{r}(R)=0,\label{s2}\\
\frac{dp_{r}}{dr}|_{r=0}=0,\label{s3}\\
\frac{dp_{r}}{dr}|_{r=R}=0.\label{s4}
\end{align}
\end{subequations}
According to Eq. (\ref{assume}), both $p(r)_{Isotropic}$ and $p_{r}(r)_{Anisotropic}$ have to satisfy the boundary conditions above, so $\delta p(r)$ must meet those conditions too. A Gaussian function is the simplest choice (but not the only one\footnote{For example, a Lorentzian
function could be another choice for $\delta p(r)$, yielding no
significant difference.}) fulfilling conditions (\ref{s1}) through (\ref{s4}) once the position of the Gaussian pressure profile is set at $\mu=\frac{2R}{3}$ in the following equation:
\begin{equation}
\delta p(r)=A \exp \left( \frac{-(r-\mu)^{2}}{2\sigma^{2}}
\right).
\label{gaussian}
\end{equation}
One can see that this Gaussian perturbation satisfies the above boundary conditions to good approximation: we need $\delta p$ and its derivative to be nearly zero at the origin and on the surface, and we can tune $A$ and $\sigma$ so that the perturbation is sufficiently small at those points. According to Ref.~\cite{harko}, anisotropy can cause an increase in the radial pressure. Hence we have
chosen for the parameter $A$ a positive value that is small compared to $p_r$.
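As a minimal numerical check (this sketch is ours, not the code used for the figures; the stellar radius $R\simeq 10\,km$ is an assumed representative value, and the central pressure is estimated from Eq. (\ref{EOS1}) at the central density of the figures), one can verify that the quoted $A$ and $\sigma$ keep $\delta p$ at the level of a few percent of $p_c$ at the origin and on the surface:
\begin{verbatim}
import numpy as np

def delta_p(r, A, mu, sigma):
    # Gaussian pressure perturbation, Eq. (gaussian)
    return A * np.exp(-(r - mu) ** 2 / (2.0 * sigma ** 2))

A, sigma, R = 3.0e33, 3.0e3, 1.0e4   # R = 10 km is an assumed radius
mu = 2.0 * R / 3.0

# Central pressure from the MIT bag EOS at rho_c = 1e18 kg m^-3
alpha, rho_s, c = 0.324, 3.056e17, 2.998e8
p_c = alpha * (1.0e18 - rho_s) * c ** 2

for r in (0.0, R):
    print(r, delta_p(r, A, mu, sigma) / p_c)
# the ratios come out ~1e-2 at r = 0 and ~8e-2 at r = R
\end{verbatim}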
Finally, studying the charged case requires one more physical assumption. According to Ref.~\cite{malheiro}, the electrical charge density can be related to the
density of matter as follows:
\begin{equation}
\rho_{ch}=f\times\rho,
\label{ff}
\end{equation}
which is a physically reasonable assumption \cite{malheiro}. In this equation $f$ is the charge
fraction, which can be taken as $f\leq10^{-5} esu$ $g^{-1}$ in order to satisfy the
causality, stability, and electrical neutrality conditions of stars \cite{glender}. Therefore, in the most general case (charged anisotropic matter), there are 7 variables, namely $m, p_{r}, p_{t}, \rho, \rho_{ch}, E, \Lambda$, and 7 equations, (\ref{mmnew})--(\ref{assume}) and (\ref{ff}). We have used the Runge--Kutta--Fehlberg fourth-fifth order method (RKF45) to solve the corresponding coupled ODE system in each of the cases below (a minimal integration sketch for case 1 is given after the list):
\begin{enumerate}
\item Isotropic uncharged ($\Delta=0, E=0$ ),
\item Isotropic charged ($\Delta=0, E\neq0$),
\item Anisotropic uncharged ($\Delta\neq0, E=0$),
\item Anisotropic charged ($\Delta\neq0, E\neq0$).
\end{enumerate}
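As an illustration of case 1 (a minimal sketch of our own, not the production code; SciPy's adaptive \texttt{RK45} integrator is used here as a stand-in for the RKF45 scheme, and the outer integration radius is an assumed upper bound), the isotropic uncharged system (\ref{mm}) and (\ref{TOV}) with EOS (\ref{EOS}) can be integrated outward from the centre until the surface condition (\ref{bond3}) is met:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-11, 2.998e8            # SI units
alpha, rho_s = 0.324, 3.056e17       # SQM2 parameters quoted above

def rho_of_p(p):
    # invert the MIT bag EOS p = alpha (rho - rho_s) c^2
    return p / (alpha * c ** 2) + rho_s

def tov_rhs(r, y):
    # right-hand side of the isotropic system for y = (m, p)
    m, p = y
    rho = rho_of_p(p)
    dm = 4.0 * np.pi * r ** 2 * rho
    dp = -(rho * c ** 2 + p) * (4.0 * np.pi * G * p * r ** 3 + m * G * c ** 2) \
         / (r * c ** 2 * (r * c ** 2 - 2.0 * m * G))
    return [dm, dp]

def surface(r, y):                   # p(R) = 0 marks the stellar surface
    return y[1]
surface.terminal = True

rho_c = 1.0e18                       # central density used in the figures
p_c = alpha * (rho_c - rho_s) * c ** 2
# start slightly off-centre to avoid the r = 0 singularity; m ~ 0 there
sol = solve_ivp(tov_rhs, [1.0, 5.0e4], [0.0, p_c],
                events=surface, rtol=1e-8, atol=1e-8)
R_star, M_star = sol.t[-1], sol.y[0, -1]
print(R_star / 1.0e3, "km", M_star / 1.989e30, "M_sun")
\end{verbatim}
Scanning over central densities $\rho_c$ produces mass--density and mass--radius curves analogous to those shown in the figures of this section.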
\section{Discussion}
In this paper, unlike the analytical solutions for anisotropic matter such as those of Refs.~\cite{harko}
and \cite{maharaj}, we have used the numerical result of the isotropic solution together with
a Gaussian perturbation, rather than a fully assumed function.
In the following subsections we discuss the results of adding anisotropy and electric charge to the TOV equations.
Our solutions indicate that adding a positive Gaussian anisotropy and electric charge
to the TOV equation increases the maximum mass of the star and can accommodate more massive neutron stars, in agreement with Ref.~\cite{Demorest}.
\subsection{Charge Effect}
Solving the modified TOV equation reveals that adding electric charge increases the radial pressure $p_r$, as can be seen in Fig. \ref{fig:p_rE}. However, the sensitivity of the tangential pressure $p_t$ to the electric charge is much greater, so $\Delta$ rises in the charged case, as is apparent from Fig. \ref{fig:deltaE}. This behavior is not only
a direct result of our assumption about $p_r$, but also stems
from the nature of the TOV equation.\\
Fig. \ref{fig:MmaxE} and Fig. \ref{fig:pichE} refer to the uncharged and charged isotropic
cases. One may expect that adding electric charge to the TOV equation increases
the maximum mass $M_{max}$ of the strange quark star and
also the corresponding radius $R$. Since
in GR the gravitational mass of the star and the total energy are
proportional, adding the charge density $\rho_{ch}$ increases the
total energy, and therefore the above result makes sense
\cite{glender}. However, the electrical neutrality condition of stars does not allow a significant growth in the maximum masses and their corresponding radii.
\subsection{Anisotropy Effect}
Figs. \ref{fig:Mmax} and \ref{fig:pich} show that
the maximum mass of the strange quark star is raised by adding
the Gaussian perturbation to obtain $p_r$. Furthermore, according to Fig. \ref{fig:deltaE}, the tangential pressure
$p_{t}$ and the anisotropy $\Delta$ have a maximum and do not
vanish on the surface. In fact, our solutions indicate that if there is anisotropy in the star,
it should be maximal on the surface. \\
Recent measurements indicate that there exists a pulsar of mass 1.97 $\pm$ 0.04 $M_{\odot}$, and this mass rules out nearly all currently proposed equations of state \cite{Demorest}. We have used a trial-and-error method to reproduce this mass. Our calculations indicate that an anisotropy amplitude of $A=3\times10^{33}Nm^{-2}$ with a standard deviation of $\sigma=3\times10^{3}m$ and $\mu = \frac{2R}{3}$ in Eq. (\ref{gaussian}) can preserve the SQM equation of state, satisfying the boundary and hydrostatic equilibrium conditions to within $1\%$ and $10\%$ uncertainties at the origin and on the surface, respectively, as depicted in Fig. \ref{fig:p_rE}.
\begin{figure}
\centering
\includegraphics[scale=0.4]{p_rE.eps} \\
\caption {Radial pressure $p_{r}$ for various cases: isotropic uncharged, isotropic charged, and anisotropic uncharged
matter with $f=5\times10^{-5}esu$ $g^{-1}$, $A=3\times10^{33}Nm^{-2}$, $\sigma=3\times10^{3}m$ and $\mu = \frac{2R}{3}$ versus radial coordinate
$r$, all having the same central density of $\rho_{c}=1\times10^{18} kg$ $m^{-3}$.}
\label{fig:p_rE}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{DeltaEA.eps} \\
\caption {Anisotropy factor $\Delta$ for anisotropic charged
and uncharged matter with $f=5\times10^{-5}esu$ $g^{-1}$ versus radial coordinate
$r$ with a same central density $\rho_{c}=1\times10^{18} kg$ $m^{-3}$.}
\label{fig:deltaE}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{M-rho_E.eps} \\
\caption {Gravitational mass M versus central density $\rho_c$ in two
cases: Uncharged (solid curve) and charged isotropic (dashed
curve) matter. }
\label{fig:MmaxE}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{M-R_E.eps} \\
\caption {Gravitational mass M versus radius of star R in two
cases: Uncharged isotropic (solid curve) and charged isotropic
(dashed curve) matter. }
\label{fig:pichE}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{Mrho_AN.eps} \\
\caption {Gravitational mass M vs central density $\rho_{c}$ for
uncharged isotropic (solid curve) and uncharged anisotropic (dashed curve) matter.}
\label{fig:Mmax}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{M-R_AN.eps} \\
\caption {Gravitational mass M vs radius of star R in two cases:
Uncharged isotropic (solid curve) and uncharged anisotropic (dashed curve) matter.}
\label{fig:pich}
\end{figure}
\section{Summary}
We have studied anisotropic charged strange quark stars with a Gaussian perturbation. Our calculations have shown that the electric charge is much less effective in increasing the maximum mass of the star than the anisotropy. Our work also supports the idea that anisotropy is one of the candidates for describing massive neutron stars with current equations of state.
\section{Introduction}
\subsection{Motivation}
Understanding the behaviour of physical systems undergoing a continuous phase transition at their critical point is one of the major challenges of modern statistical physics, on both the physics and the mathematics sides. In this paper, we focus on percolation systems, which provide models of random subgraphs of a given lattice. Bernoulli percolation is maybe the most studied such model, and breakthroughs in the understanding of its phase transition have often served as milestones in the history of statistical physics. The random-cluster model (also called Fortuin-Kasteleyn percolation), another example of a percolation model, was introduced by Fortuin and Kasteleyn around 1970 \cite{For70,ForKas72} as a generalisation of Bernoulli percolation.
It was found to be related to many other models of statistical mechanics, including the Ising and Potts models, and to exhibit a very rich critical behaviour.
The arrival of the renormalization group (RG) formalism (see \cite{Fis98} for a historical
exposition) led to a (non-rigorous) deep physical and geometrical understanding of continuous phase transitions. The RG formalism suggests that ``coarse-graining'' renormalization transformations correspond to appropriately changing the scale and the parameters of the model under study. The large scale limit of the critical regime then arises as the fixed
point of the renormalization transformations.
A striking consequence of the RG formalism is that, the critical fixed point being usually unique, the scaling limit at the critical point must satisfy translation, rotation
and scale invariance. In seminal papers \cite{BPZ84b,BPZ84a}, Belavin, Polyakov and
Zamolodchikov went even further by suggesting a much stronger invariance of statistical physics models at criticality: since the scaling limit
quantum field theory is a local field, it should be invariant by any map which is
{\em locally} a composition of translation, rotation and homothety, which led them to postulate full conformal invariance. These papers gave birth to Conformal Field Theory, one of the most studied domains of modern physics. Of particular importance from the point of view of physics and for the relevance of our paper is the fact that the scaling limits of different random-cluster models at criticality are expected to be related to a range of 2D conformal field theories. Existence of a conformally invariant scaling limit was rigorously proved for the random-cluster model in only two special cases, corresponding to cluster-weight $q=2$ \cite{Smi10,CheDumHon12a,KemSmi16} and $q=1$ (in the latter case the proof applies only to a related model called site percolation on the triangular lattice \cite{Smi01}).
For percolation models in two dimensions, conformal invariance translates into predictions for so-called crossing probabilities of topological rectangles (also called quads): as the scale of the quad increases to infinity, the crossing probability should converge to a quantity that depends only on the extremal distance of the quad.
In this paper, we show that crossing probabilities of quads are bounded in terms of the extremal distance of the quad only, thus hinting at their conformal invariance. In addition, we prove that pivotal points are abundant, a fact which is very useful in the study of the geometry of large clusters. While we are currently unable to show existence and conformal invariance of the scaling limit for general cluster weight $q \in [1,4]$, the properties derived in this paper should serve as stepping stones towards a better understanding of the critical phase, as was the case for $q = 1$ and $q = 2$.
\subsection{Definition of the random-cluster model}\label{sec:1.2}
As mentioned in the previous section, the model of interest in this paper is the random-cluster model, which we now define.
For background, we direct the reader to the monograph \cite{Gri06} and the lecture notes \cite{Dum17a}.
Consider the square lattice $(\mathbb Z^2,\mathbb E)$, that is the graph with vertex-set $\mathbb Z^2=\{(n,m):n,m\in\mathbb Z\}$ and edges between nearest neighbours. In a slight abuse of notation, we will write $\mathbb{Z}^2$ for the graph itself. In this paper we will mainly work with the random-cluster model on discrete domains that are specific subgraphs of $\mathbb Z^2$ (together with a boundary), defined as follows.
Let $\gamma=(x_0,\ldots,x_{\ell-1})$ be a simple loop on $\mathbb Z^2$, \emph{i.e.}\@ $x_0,\ldots,x_{\ell-1}$ are distinct vertices, and $x_i$ is a neighbour of $x_{i+1}$ for $0\le i<\ell$ (where $x_\ell:=x_0$). Consider the (finite) set $E$ of edges enclosed by the loop (including the edges $\{x_i,x_{i+1}\}$ of $\gamma$) and define the graph $\mathcal D=(V,E)$ induced by $E$. Any graph obtained in this way is called a \emph{(discrete) domain}. Notice that one can always reconstruct the loop $\gamma$ (up to re-ordering of the vertices) from the data of the domain $\mathcal D$, and we define the boundary $\partial \mathcal D$ to be the set of vertices on the loop $\gamma$. We point out that the boundary of a discrete domain differs from the more standard graph-theoretical notion of boundary: in particular, a vertex of $\partial \mathcal D$ may have all its neighbours in the domain (see Fig.~\ref{fig:domain0}).
\begin{figure}[htbp]
\centering
\includegraphics[width=4cm]{domain0.pdf}
\caption{An example of a discrete domain $\mathcal D=(V,E)$. The loop $\gamma$ is represented by the bold line surrounding $\cal D$. The dots represent the elements of the vertex set $V$.}
\label{fig:domain0}
\end{figure}
A percolation configuration $\omega=(\omega_e:e\in E)$ on a domain $\mathcal{D}=(V,E)$ is an element of~$\{0,1\}^{E}$. An edge $e$ is said to be {\em open} (in $\omega$) if $\omega_e=1$, otherwise it is {\em closed}. A configuration $\omega$ is identified to the subgraph of $\mathcal{D}$ with vertex-set $V$ and edge-set $\{e\in E:\omega_e=1\}$. When speaking of connections in $\omega$, we view $\omega$ as a graph. For $A,B, S \subset V$, we say that $A$ is {\em connected to $B$ in $S$} if there exists a path in $\omega$ going from $A$ to $B$ and using vertices in $S$ only. We denote this event by $A\xleftrightarrow{S} B$ (we omit $S$ when it is the full domain).
A {\em cluster} is a connected component of $\omega$.
{\em Boundary conditions} on $\mathcal{D}$ are given by a partition $\xi$ of $\partial \mathcal{D}$. We say that two vertices of $\partial \mathcal{D}$ are {\em wired together} if they belong to the same element of the partition $\xi$.
\begin{definition} Let $\mathcal D=(V,E)$ be a discrete domain. The random-cluster measure on $\mathcal{D}$ with edge-weight $p \in (0,1)$, cluster-weight $q>0$ and boundary conditions $\xi$ is given by
\begin{equation}\label{eq:RCM_def1}
\phi_{\mathcal{D},p,q}^\xi[\omega]=\frac{1}{Z^\xi(\mathcal{D},p,q)} (\tfrac{p}{1-p})^{|\omega|}q^{k(\omega^\xi)} \qquad \omega\in \{0,1\}^{E},
\end{equation}
where $|\omega|=\sum_{e\in E}\omega_e$,
$\omega^\xi$ is the graph obtained from $\omega$ by identifying wired vertices of $\partial \mathcal{D}$ together,
$k(\omega^\xi)$ is the number of connected components of $\omega^\xi$,
and $Z^\xi(\mathcal{D},p,q)$ is a normalising constant called the {\em partition function} which is chosen in such a way that $\phi_{\mathcal{D},p,q}^\xi$ is a probability measure.
\end{definition}
When $q = 1$, $\phi_{\mathcal{D},p,q}^\xi$ is a product measure, which is to say that the states of different edges are independent; this model is also called Bernoulli percolation with parameter $p$.
Two specific families of boundary conditions will be of special interest to us. On the one hand, the {\em free} boundary conditions, denoted 0, correspond to no wirings between boundary vertices. On the other hand, the {\em wired} boundary conditions, denoted 1, correspond to all boundary vertices being wired together.
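To make these definitions concrete, the following short Python sketch (a toy illustration of ours, not part of the paper's arguments) computes $\phi_{\mathcal{D},p,q}^\xi$ exactly by enumerating all configurations on the smallest discrete domain, for the free and wired boundary conditions:
\begin{verbatim}
from itertools import product

# Toy domain: the unit square; all four vertices lie on the boundary.
V = [(0, 0), (1, 0), (1, 1), (0, 1)]
E = [((0, 0), (1, 0)), ((1, 0), (1, 1)),
     ((1, 1), (0, 1)), ((0, 1), (0, 0))]

def n_clusters(omega, wired):
    # number of clusters of omega^xi, via a small union-find
    parent = {v: v for v in V}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for (u, v), state in zip(E, omega):
        if state:
            parent[find(u)] = find(v)
    if wired:                        # wired: identify all boundary vertices
        for v in V[1:]:
            parent[find(V[0])] = find(v)
    return len({find(v) for v in V})

def rc_probability(event, p, q, wired=False):
    # exact phi^xi_{D,p,q}[event], summing the weights of the definition
    w = {om: (p / (1 - p)) ** sum(om) * q ** n_clusters(om, wired)
         for om in product((0, 1), repeat=len(E))}
    Z = sum(w.values())
    return sum(w[om] for om in w if event(om)) / Z

q = 2.0
p = q ** 0.5 / (1 + q ** 0.5)        # the self-dual point sqrt(q)/(1+sqrt(q))
print(rc_probability(lambda om: om[0] == 1, p, q, wired=False))
print(rc_probability(lambda om: om[0] == 1, p, q, wired=True))
\end{verbatim}
Comparing the two printed values gives a hands-on instance of the comparison between boundary conditions recalled in Section~\ref{sec:background}.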
For $p\in (0,1)$, $q\ge1$ and $i=0,1$, the family of measures $\phi_{\mathcal{D},p,q}^i$ converges weakly as $\mathcal{D}$ tends to $\mathbb Z^2$. The limiting measure on $\{0,1\}^{\mathbb{E}}$ is denoted by $\phi_{\mathbb{Z}^2,p,q}^i$ and is called the {\em infinite-volume} random-cluster measure with free or wired boundary conditions, when $i = 0$ or $i = 1$, respectively.
The random-cluster model undergoes a phase transition at a critical parameter $p_c=p_c(q)$ in the following sense: if $p>p_c(q)$, the $\phi_{\mathbb{Z}^2,p,q}^1$-probability $\theta(p,q)$
that 0 is connected to infinity
is strictly positive, while for $p<p_c(q)$, it is equal to 0.
In the past ten years, considerable progress has been made in the understanding of this phase transition: the critical point was proved in \cite{BefDum12} (see also \cite{DumRaoTas17,DumMan14,DCRT16}) to be equal to
$$p_c(q)=\frac{\sqrt q}{1+\sqrt q}.$$
It was also proved in \cite{DumSidTas16,BetheAnsatz2,RaySpi19} that the phase transition is continuous (\emph{i.e.}~that $\theta(p_c,q)=0$) if $q\in[1,4]$, and discontinuous (\emph{i.e.}~that $\theta(p_c,q)>0$) for $q>4$.
\bigbreak
{\em As we are interested in continuous phase transitions only, in the whole paper we will fix $q\in[1,4]$ and $p=p_c(q)$, and drop them from the notation. For this range of parameters, there is a unique infinite-volume random-cluster measure, so we omit the superscript corresponding to the boundary conditions and denote it simply by $\phi_{\mathbb{Z}^2}$. }
\subsection{Crossing probabilities in quads}\label{sec:crossing_applications}
A (discrete) {\em quad} $(\mathcal{D}; a,b,c,d)$ is a discrete domain $\mathcal{D}$ along with four vertices $a,b,c,d\in\partial \mathcal{D}$ found on $\partial \mathcal{D}$ in counterclockwise order. These vertices define four closed arcs $(ab)$, $(bc)$, $(cd)$, and $(da)$ corresponding to the parts of the boundary between them (here by \emph{closed} arc, we mean that the extremities $x$ and $y$ belong to the arc $(xy)$).
In order to define extremal distances associated to discrete quads, let us explain how the discrete domain $\mathcal{D}$ can be seen as a continuous domain of the plane. First, consider the counter-clockwise loop $\gamma$ around $\mathcal D$ (up to cyclic permutation this loop is unique), and identify it to a continuous piecewise linear curve in $\mathbb R^2$ by seeing all its edges as segments of length 1. Then, the continuous domain associated to $\mathcal D$ is obtained by taking the bounded connected component of $\mathbb R^2\setminus \gamma$.
The \emph{extremal distance} $\ell_{\mathcal{D}}\left[\left(ab\right),\left(cd\right)\right]$ between $\left(ab\right)$ and $\left(cd\right)$ inside $\mathcal{D}$ is defined as the unique $\ell > 0$ such that there exists a conformal map from the continuous domain associated to $\mathcal D$ to the rectangle $(0,1) \times (0,\ell)$, with $a,b,c,d$ being mapped (by the continuous extension of the conformal map) to the corners of $[0,1] \times [0,\ell]$, in counterclockwise order, starting with the lower-left corner.
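As a basic example (standard, and recorded here only for orientation), let $\mathcal D$ be the discrete rectangle with corners $a=(0,0)$, $b=(m,0)$, $c=(m,n)$ and $d=(0,n)$. The map $z\mapsto z/m$ sends the associated continuous domain conformally onto $(0,1)\times(0,n/m)$, with $a,b,c,d$ mapped to the corners in the prescribed order, so that
\begin{equation*}
\ell_{\mathcal{D}}\left[\left(ab\right),\left(cd\right)\right]=\frac{n}{m}.
\end{equation*}
In particular, crossing a wide, short rectangle from its bottom side $(ab)$ to its top side $(cd)$ corresponds to a small extremal distance, consistently with the first item of Theorem~\ref{thm:sRSW} below.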
As mentioned in the previous section, conformal invariance of critical models exhibiting a continuous phase transition may be formulated using crossing probabilities of large quads. More precisely, $(\mathcal{D}; a,b,c,d)$ is said to be {\em crossed} (from $(ab)$ to $(cd)$) in a configuration $\omega$, if it contains a path of open edges linking $(ab)$ to $(cd)$. It is expected that the probability that the blow up by $n$ of a given quad $(\mathcal{D},a,b,c,d)$ is crossed
converges as $n$ tends to infinity to a non-degenerate limit that depends only on $\ell_{\mathcal{D}}\left[\left(ab\right),\left(cd\right)\right]$.
While we are currently unable to prove this result, we show that crossing probabilities remain bounded away from 0 and 1 uniformly in the extremal distance.
\begin{theorem}[Crossing estimates in general quads]\label{thm:sRSW}\label{thm:RSWquads}
Fix $1\le q< 4$ and $p=p_c(q)$. For every $M>0$, there exists
$\eta=\eta(M)\in (0,1)$ such that for any discrete quad $\left(\mathcal{D},a,b,c,d\right)$ and any boundary conditions $\xi$,
\smallbreak
\begin{itemize}
\item if $\ell_{\mathcal{D}}\left[\left(ab\right),\left(cd\right)\right] \leq M$\,, then $\mathbb{\phi}_{\mathcal{D}}^{\xi}[\left(ab\right)\xleftrightarrow{\mathcal{D}}\left(cd\right)] \geq \eta$;
\item if $\ell_{\mathcal{D}}\left[\left(ab\right),\left(cd\right)\right] \geq M^{-1}$, then $\mathbb{\phi}_{\mathcal{D}}^{\xi}[\left(ab\right)\xleftrightarrow{\mathcal{D}}\left(cd\right)] \leq 1-\eta$.
\end{itemize}
\end{theorem}
Such crossing estimates are very useful for the study of the critical model. They initially emerged in the study of Bernoulli percolation in the late seventies under the coined name of Russo-Seymour-Welsh (RSW) theory \cite{Rus78,SeyWel78}. This theory has been instrumental in basically every result on critical Bernoulli percolation on the square lattice since then. Progress in the theory has been made in the past few years during which the RSW theorem was generalised to the random-cluster model first in \cite{BefDum12} for $q\ge1$ and specific boundary conditions, then in \cite{DumHonNol11} for $q=2$ and general boundary conditions, and finally in \cite{DumSidTas16} for $1\le q\le 4$ and arbitrary boundary conditions.
One of the drawbacks of previous results is that the estimates were {\em restricted to rectangles} (the estimates are not expressed in terms of a conformally invariant measurement of size). This restriction is substantial in terms of applications due to the fact that boundary conditions do influence the configuration heavily in $\mathcal{D}$, and that the roughness of the boundary could dictate the strength of this influence. For instance, it could a priori prevent the existence of open paths reaching the boundary of a domain, especially if the boundary is fractal (which will be the case if it is the boundary of another cluster).
In Theorem~\ref{thm:sRSW}, the crossing probability bounds hold in \emph{arbitrary discrete quads with arbitrary boundary conditions}. In particular, they are independent
of the local geometry of the boundary. The only other instance of such general estimates is the paper \cite{CheDumHon13} treating the specific case of $q=2$ in which much more is known thanks to discrete holomorphic observables.
\subsection{Applications}
Theorem~\ref{thm:sRSW} has many implications for the study of the critical regime. We simply mention them briefly below, and refer to the corresponding sections for further details.
\begin{description}
\item[Tightness of interfaces:] It was recognised by Aizenman and Burchard \cite{AizBur99} that crossing estimates imply tightness when considering the scaling limit of interfaces (see Theorem~\ref{thm:tightness}).
While tightness for random-cluster interfaces was already proved \cite{KemSmi16,CheDumHon12a} using previously known crossing estimates, we would like to mention that the implication is quite straightforward when using Theorem~\ref{thm:sRSW}.
\item[Non-simple curves in the scaling limit:] Theorem~\ref{thm:sRSW} implies that at large scales, macroscopic clusters typically touch each other and that their boundaries are non-simple (see Theorem~\ref{thm:self-touching}).
Let us mention that the family of interfaces describing boundaries of large clusters in the critical random-cluster model with cluster-weight $q\in(0,4]$ is conjectured \cite{RohSch05} to converge to the Conformal Loop Ensemble (CLE) \cite{SW} with parameter
$$\kappa=\kappa(q):=4\pi/\arccos(-\sqrt q/2).$$
Thus, our result rigorously excludes the possibility that the scaling limit of random-cluster models with $q \in [1,4)$ is described by a CLE with parameter $\kappa\le 4$ (as these are made of simple loops not touching each other).
\item[Quasi-multiplicativity, localization, well-separation for arm events:] While these properties were already obtained in specific cases ($q=1,2$, or general $q \in [1,4]$ but only for alternating arm-events with an even number of arms), we prove this statement for the first time in complete generality (see Propositions~\ref{prop:separation},~\ref{prop:quasimultiplicativity}, and~\ref{prop:localization}).
\item[Universal arm exponents:] We obtain up-to-constant estimates for the probability of five alternating arms in the full plane, and two and three alternating arms in a half-plane (see Proposition~\ref{prop:universal}).
It is noteworthy that these critical exponents do not vary for different random-cluster models despite the fact that these models belong to different universality classes.
\item[The four arm exponent is strictly smaller than 2:]
We obtain a lower bound on the probability of four arms between scales (see Proposition~\ref{prop:four arm} for a precise statement).
This is a consequence of the value of the five arm exponent discussed above, and the strict monotonicity of arm exponents, which in turn follows from Theorem~\ref{thm:sRSW}.
Bounds on the four arm exponent have important consequences for the geometry of interfaces.
In particular, they may be used to prove the existence of polynomially many pivotals.
When $1\le q\le 3$, we even prove a quantitative lower bound on the four arm exponent (Proposition~\ref{prop:beta}) which is of interest when trying to prove the existence of exceptional times for the Glauber dynamics.
\item[New bounds on the one-arm half-plane exponent:] We prove new bounds on the half plane one arm exponent when boundary conditions are free. More precisely, we show that when $q<2$ (resp.~$2<q\le 4$), this exponent is strictly smaller (resp.~larger) than $1/2$. This will be used in subsequent papers to study the effect of a defect line in the random-cluster model, and the order of the phase transition.
\item[The six-arm exponent is strictly larger than 2:]
Another consequence of the universal value of the five arm exponent and of the strict monotonicity of arm exponents is
an upper bound on the probability of having six alternating arms (Corollary~\ref{cor:six arm}). We mention it since it is very useful when studying percolation models at criticality, in particular when studying the Schramm-Smirnov topology \cite{SchSmi11} (see the detailed discussion in Section~\ref{sec:6}).
\item[New bounds for the one, two and four-arm exponents:]
A byproduct of our proof is the following family of surprising bounds.
For $1\le q\le2$, the one-arm, two-arm and four-arm exponents can be rigorously bounded from above by $1/4$, $1/2$ and $3/2$, respectively, thus improving on the existing bounds, even in the case of Bernoulli percolation. For Bernoulli percolation, these bounds can be further improved to $1/6$, $1/3$, and $4/3$ (to be compared with the conjectured values $5/48$, $1/4$, and $5/4$). We refer to Section~\ref{sec:perco} for details.
\item[Scaling relations:]
The existence of pivotals mentioned above is an important ingredient of the proof~\cite{DumMan20} of scaling relations connecting the different critical exponents of the random-cluster model.
\end{description}
\subsection{Idea of the proof of the main theorem}\label{sec:idea}
The starting point is the crossing estimates obtained for every $1\le q\le 4$ in \cite{DumSidTas16}. These estimates can be written under different forms. Here, we choose the following one. Write $\Lambda_n$ for the domain spanned by the vertex-set $\{-n,\dots, n\}^2$, and $\Lambda_n(x)$ for its translate by $x \in \mathbb{Z}^2$.
For a box $B:=\Lambda_r(x)$, let $\overline B:=\Lambda_{2r}(x)$ be the box of twice the size and $\mathrm{Circ}_B$ be the event that there exists a circuit in $\omega$ surrounding $B$ and contained in $\overline B$. The main theorem of \cite{DumSidTas16} (together with Proposition~5 in the same paper) implies the existence, for every $1\le q\le 4$, of $c_{\rm cir}>0$ such that for every domain $\mathcal{D}$ and every $B$ with $\overline B\subset \mathcal{D}$,
\begin{equation}\label{eq:RSW}
\phi_\mathcal{D}^0[\mathrm{Circ}_B]\ge c_{\rm cir}.
\end{equation}
Note that the previous estimate is valid also for $q=4$ (unlike our main result),
and that it does not require the existence of a macroscopic cluster touching the boundary.
In fact, the main difficulty of our result consists in proving the existence of large clusters touching possibly fractal boundaries with free boundary conditions. Indeed, a statement similar to that of Theorem~\ref{thm:RSWquads} may be deduced directly from \cite{DumSidTas16} if the measure $\phi_\mathcal{D}^\xi$ is replaced by the measure in a domain which is macroscopically larger than $\mathcal{D}$ (see Section~\ref{sec:3.3}).
General considerations on the extremal distance together with~\eqref{eq:RSW} reduce Theorem~\ref{thm:sRSW} to the following proposition, which will therefore be the focus of our attention. Call a domain $\mathcal{D}$ $R$-{\em centred} if $\mathcal{D}$ contains $\Lambda_{2R}$ but not $\Lambda_{3R}$.
\begin{proposition}\label{prop:crucial_exist}
For $1\le q<4$, there exists $c_0>0$ such that for every $R\geq 1$ and any $R$-centred domain $\mathcal{D}$,
\begin{align}\label{eq:crucial_exist}
\phi_\mathcal{D}^0 [\Lambda_R\xleftrightarrow{\Lambda_{9R}} \partial\mathcal{D}]\ge c_0.
\end{align}
\end{proposition}
The proof of Proposition~\ref{prop:crucial_exist} is the core of the argument. Historically, results on crossing estimates are based on three different techniques.
First, for $q=1$ or for general $q$ but specific boundary conditions, one may prove that crossing probabilities in squares are bounded away from $0$ using self-duality. Then, probabilistic arguments involving the FKG inequality enable one to extend these estimates to rectangles, see e.g.~\cite{Rus78,SeyWel78,BefDum12}. The use of self-duality relies on symmetries of the domain and of the boundary conditions, and is therefore inefficient for general quads or boundary conditions.
A second technique based on renormalization arguments was implemented in \cite{DumSidTas16,DumTas18} for arbitrary boundary conditions but only for rectangles, and was used to prove~\eqref{eq:RSW}. While this technique treats arbitrary boundary conditions, it only applies when the boundary is at a macroscopic distance from the domain to be crossed.
A third technique, which allows one to prove that crossings touch the boundary, relies on the second moment method for the number of boundary vertices connected to a given set. This strategy works well when $\mathcal{D}$ has a flat boundary, and was indeed used in \cite{DumSidTas16} to show crossing estimates for rectangles with free boundary conditions (see~\eqref{eq:wRSW} below), but does not extend to general quads.
Indeed, except in the special case of the random-cluster model with $q=2$ \cite{DumHonNol11,CheDumHon13}, up-to-constant estimates on connection probabilities for vertices on the boundary are not available for general boundaries. Thus, the second moment method, as described above, becomes essentially impossible to implement.
The strategy used here to prove Proposition~\ref{prop:crucial_exist} will be different from all of the above.
It contains two different parts, and may be viewed as a combination of a first moment estimate and renormalization methods. Indeed, we start with a (sub-optimal) polynomial first moment estimate, then use a renormalization procedure to replace the second moment estimate and prove the existence of points on $\partial \mathcal{D}$ connected to $\Lambda_R$ with positive probability.
For $r \geq 0$, call {\em $r$-box} any translate of $\Lambda_r$ by a vertex $x$ in $(1\vee r)\mathbb Z^2$. Notice that a $0$-box is the same as a vertex of $\mathbb Z^2$. Consider $R\geq 1$ and a $R$-centred domain $\mathcal{D}$, and let $\mathbf M_r(\mathcal{D},R)$ be the number of $r$-boxes intersecting $\partial \mathcal D$
that are connected to $\Lambda_R$ in $\mathcal{D} \cap \Lambda_{7R}$ (the difference between the factors $7R$ here and $9R$ in Proposition~\ref{prop:crucial_exist} appears for technical reasons).
In particular, $\mathbf M_0(\mathcal D,R)$ counts the number of vertices on $\partial \mathcal D$ that are connected to $\Lambda_R$ in $\mathcal D\cap \Lambda_{7R}$.
Hence, our goal is akin to showing that there exists a uniform constant $c>0$ such that
\begin{equation}\label{eq:3}
\phi_{\mathcal D}^0[\mathbf M_0(\mathcal D,R)\ge 1]\ge c
\end{equation}
for every $R \geq 1$ and every $R$-centred domain $\mathcal D$.
As already mentioned, the first step towards Proposition~\ref{prop:crucial_exist}
is to lower-bound the first moment of $\mathbf M_r(\mathcal{D},R)$, which is the object of the following proposition. Introduce
\begin{equation}
M(r,R):=\inf\{\phi_{\mathcal{D}}^0[\mathbf M_{r}(\mathcal{D},R)]:\mathcal{D}\text{ $R$-centred}\}.
\end{equation}
\begin{proposition}[non-sharp scale-to-scale lower bound on first moment]\label{prop:fund}
For $1\le q< 4$, there exists $c_1>0$ such that for every $R\ge r\ge1$,
\begin{align}\label{eq:fund1}
M(r,R)\ge c_1(R/r)^{c_1}.
\end{align}
\end{proposition}
Let us make some remarks about this result. First, we would like to emphasise that it is non-trivial, in the sense that it does not follow directly from the RSW estimates. In order to put the proposition above into perspective, let us give the estimate that we obtain if one uses only the RSW result~\eqref{eq:RSW} to estimate $\mathbf M_r(\mathcal D,R)$. By a standard scale-to-scale gluing procedure, the RSW result~\eqref{eq:RSW} gives a lower bound of $(r/R)^C$ on the probability that a $r$-box intersecting the boundary of $\mathcal D$ is connected to $\Lambda_R$, where $C>0$ is a positive constant on which we have almost no control (it is a priori very large). Since the total number of $r$-boxes intersecting the boundary of $\mathcal D$ is of order $(R/r)^d$ for some $d\in [1,2]$, we would obtain an estimate of the form
\begin{equation}
\label{eq:4}
\phi_{\mathcal{D}}^0[\mathbf M_{r}(\mathcal{D},R)]\gtrsim \left(\frac R r\right)^{d-C}.
\end{equation}
This lower bound does not establish the proposition above, due to the lack of control on~$C$. Another way to see that there is something subtle in the proposition above is explained in the next section: we expect that the estimate~\eqref{eq:fund1} does not hold for $q=4$ (even if the RSW-result ~\eqref{eq:RSW} does).
A second remark is that this estimate is non-sharp, in the following sense.
For a fixed fractal domain $\mathcal D$, the expectation $\phi_{\mathcal{D}}^0[\mathbf M_{r}(\mathcal{D},R)]$ is thought to behave like $(R/r)^{\eta+o(1)}$ for some $\eta>0$, but a priori the constant $c_1$ in \eqref{eq:fund1} is smaller than $\eta$. In particular, the second moment method cannot be used to deduce~\eqref{eq:3} from the first moment estimate. Instead, we use a new renormalization technique, inspired from the theory of branching processes, and involving the following quantity:
\begin{equation}\label{eq:p(R)}
p(R):=\inf\{\phi_\mathcal{D}^0 [\Lambda_R\xleftrightarrow{\Lambda_{9R}} \partial\mathcal{D}]: \mathcal{D}\text{ $R$-centred}\}.
\end{equation}
\begin{proposition}[renormalization]\label{prop:renormalization}
For $1\le q\le 4$, there exists $c_2>0$ such that for every $R/20\ge r\ge 1$,
\begin{equation}\label{eq:recursive}
p(R)\ge c_2M(r,R)\min\{p(r),(\tfrac rR)^2\}.
\end{equation}
\end{proposition}
Once Propositions~\ref{prop:fund} and~\ref{prop:renormalization} are established, one can easily conclude the proof of Proposition~\ref{prop:crucial_exist} as follows:
\begin{proof}[Proposition~\ref{prop:crucial_exist}]
Choose a constant $\lambda \geq 20$ large enough that $c_1c_2\lambda^{c_1}\geq 1$.
Then~\eqref{eq:fund1} and~\eqref{eq:recursive} applied with $\lambda r\le R \le \lambda^2 r$ imply
$$p(R) \geq \min\{\lambda^{-4}, p(r)\}.$$
Consider the sequence $u_n:=\min\{p(R):\lambda^n\le R< \lambda^{n+1}\}$.
By applying the equation above to $r=\lambda^n$, we have
$$u_{n+1} \geq \min\{\lambda^{-4}, p(\lambda^n)\}\ge \min\{\lambda^{-4}, u_n\},$$
which implies that $u_n\geq \min(\lambda^{-4},u_0)$ for every $n\ge 0$ by induction.
Since $u_0>0$ (by the finite energy property), we conclude that $\inf \{p(R): R\geq 1\} > 0$, which corresponds to the statement of the proposition.
\end{proof}
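The bootstrap mechanism above can also be illustrated numerically (a purely illustrative Python sketch of ours, with made-up values for the constants $c_1,c_2$, which are not known explicitly):
\begin{verbatim}
c1, c2 = 0.1, 0.01                 # illustrative values only
lam = 20.0
while c1 * c2 * lam ** c1 < 1.0:   # enlarge lambda as in the proof
    lam *= 2.0

u = 1e-6                           # any positive lower bound on p(R), R small
for n in range(100):
    # p(R) >= c2 M(r,R) min{p(r), (r/R)^2}, with lam r <= R <= lam^2 r
    u = c1 * c2 * lam ** c1 * min(u, lam ** -4)
    assert u >= min(lam ** -4, 1e-6)
print("lower bound never degenerates")
\end{verbatim}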
\begin{remark}
We will see in Proposition~\ref{prop:polynomially many} that we can prove an even stronger result, namely that with probability bounded from below by a universal strictly positive constant, in every $R$-centred domain $\mathcal{D}$, $\Lambda_R$ is connected to polynomially many points of $\partial\mathcal{D}$.
\end{remark}
\subsection{Origin of first moment bound and why is $q=4$ excluded}\label{sec:q=4}
The careful reader will have noticed that~\eqref{eq:RSW} and Proposition~\ref{prop:renormalization} are valid for every $1\le q\le 4$, while Proposition~\ref{prop:fund} and Theorem~\ref{thm:sRSW} require $q<4$ additionally.
At this stage, it is useful to explain why $q=4$ is excluded from the latter two statements.
Write $\phi_{\mathbb H}^0$ for the infinite-volume measure\footnote{The measure is obtained as the weak limit (as $R\rightarrow\infty$) of the measures $\phi_{[-R,R] \times [0,R]}^0$.} in the half-plane $\mathbb H:=\mathbb Z\times\mathbb Z_+$
and set
\begin{equation}
\pi_1^+(R):=\phi_{\mathbb{H}}^0[0\longleftrightarrow\partial\Lambda_R].
\end{equation}
The proof of Proposition~\ref{prop:fund} will crucially rely on the following lemma.
\begin{lemma}\label{lem:mn}
For $1\le q<4$, there exists $c_3>0$ such that for every $R\ge r\ge1$,
\begin{align*}
\pi_1^+(R)\ge c_3(r/R)^{1-c_3}\pi_1^+(r).
\end{align*}
\end{lemma}
This lemma is a simple application (see Section~\ref{sec:5.1a}) of the following result from \cite{DumSidTas16}:
For every $1\le q<4$ and $\rho>0$, there exists $c_{\rm bound}=c_{\rm bound}(\rho)>0$ such that
\begin{equation}\label{eq:wRSW}
\phi_{\mathcal{R}}^0[(ab)\longleftrightarrow(cd)]>c_{\rm bound}
\end{equation}
for every rectangle $\mathcal{R}:=[0,\rho R]\times[0,R]$ of aspect ratio $\rho$, where $a,b,c,d$ are the four corners of the rectangle, indexed in counter-clockwise order, starting from the bottom-right corner.
Since Lemma~\ref{lem:mn} is our starting point, we can summarise the innovation in this paper as follows:
we start from crossing estimates for domains with flat boundaries and extend these estimates to fractal domains.
When $q=4$, we expect the scaling limit of the critical random-cluster model to be described by CLE(4).
The probability that a macroscopic loop of CLE(4) comes within distance $\varepsilon$ of a point on a flat boundary is of order $\varepsilon$.
We therefore expect that $\pi_1^+(R)$ decays like $1/R$ when $q=4$, which contradicts the conclusion of Lemma~\ref{lem:mn}.
As a consequence, the probabilities of crossing rectangles in~\eqref{eq:wRSW} are expected to tend to $0$ as $R$ increases.
Thus, Theorem~\ref{thm:sRSW} should be {\em wrong} for $q=4$, even in the special case of flat boundaries.
We take this opportunity to state the following question.
\begin{question}
Show that for $q=4$, $\pi_1^+(R)$ decays (up to multiplicative constants) like $1/R$, and
probabilities in~\eqref{eq:wRSW} tend to 0 as $R$ tends to infinity.
\end{question}
To conclude, let us mention that Proposition~\ref{prop:fund} will be a direct consequence of the previous lemma together with the following result.
\begin{proposition}\label{prop:10}
For $1\le q< 4$, there exists $c_4>0$ such that for every $R\ge r\ge0$,
\begin{align}\label{eq:fund}
M(r,R) \ge c_4\frac{R\pi_1^+(R)}{1\vee r\pi_1^+(r)}.
\end{align}
\end{proposition}
The special form of the denominator is meant to accommodate the case $r = 0$.
This case, albeit not important for the application of the proposition (indeed Proposition~\ref{prop:fund} only uses $r \geq 1$),
will serve as a stepping stone in the proof of~\eqref{eq:fund}.
The proof of this proposition uses parafermionic observables, as did that of \eqref{eq:wRSW} in \cite{DumSidTas16}. While these observables were previously used to study the critical phase of several 2D models \cite{Smi10,DumSidTas16,BefDumSmi15,DumSmi12,DumGla18}, the present use is new, and we believe that the amount of information extracted from these observables is superior to previous results dealing with general values of $q$; of course when $q=2$ much more is known due to further properties of parafermionic observables that are specific to this cluster-weight.
To conclude this section, let us show how to deduce Proposition~\ref{prop:fund} from Lemma~\ref{lem:mn} and Proposition~\ref{prop:10}.
\begin{proof}[Proposition~\ref{prop:fund}]
Insert the bound of Lemma~\ref{lem:mn} into~\eqref{eq:fund} to obtain the desired result.
\end{proof}
\subsection{Organisation of the paper}
Section~\ref{sec:background} recalls some basics of the random-cluster model.
There are three steps in the proof of Theorem~\ref{thm:RSWquads}:
\begin{itemize}
\item Proving the statements related to the non-sharp first moment estimate, namely Lemma~\ref{lem:mn} and Proposition~\ref{prop:10};
they are postponed to Sections~\ref{sec:3} and ~\ref{sec:4.1}.
Proposition~\ref{prop:fund} was already shown to follow from these two results.
\item Proving the renormalization procedure of Proposition~\ref{prop:renormalization}; this is done in Section~\ref{sec:2}.
\item Showing how Proposition~\ref{prop:crucial_exist} implies Theorem~\ref{thm:RSWquads}; this is done in Section~\ref{sec:3.3}.
Indeed, Proposition~\ref{prop:crucial_exist} was already shown to follow from Proposition~\ref{prop:fund} and Proposition~\ref{prop:renormalization} in Section~\ref{sec:idea}.
\end{itemize}
Consequences of Theorem~\ref{thm:sRSW} for probabilities of arm events and properties of scaling limits are given in Sections~\ref{sec:4} (except Section~\ref{sec:4.1}) and~\ref{sec:6}, respectively. These are not necessary for the proof of Theorem~\ref{thm:sRSW}.
\paragraph{Convention regarding constants}
In this paper, $(c_i)_{i\geq0}$ denote constants specific to the statements in which they appear, and are fixed throughout the paper.
The constants $c,c',c''$ and $C,C',C''$ denote small and large quantities, respectively, whose enumeration is restarted in each proof.
\subsection{Acknowledgments}
The first author is supported by the ERC CriBLaM, the NCCR SwissMAP, the Swiss NSF and an IDEX Chair from Paris-Saclay. The second author is supported by the NCCR SwissMAP and the Swiss NSF. The third author is supported by the ERC grant CRISP and NCCR SwissMAP. We thank Alex Karrila for pointing out Fact~\ref{fact} to us and helping with Section~\ref{sec:3.3}.
\section{Background}\label{sec:background}
We will use standard properties of the random-cluster model. They can be found in \cite{Gri06}, and we only recall them briefly below.
\medbreak\noindent 1. {\em FKG inequality}: Fix $q\ge 1$ and a domain $\mathcal D=(V,E)$ of $\mathbb{Z}^2$.
An event $A$ is called {\em increasing} if for any $\omega\le\omega'$ (for the partial order on $\{0,1\}^E$), $\omega\in A$ implies that $\omega'\in A$.
For every increasing events $A$ and $B$,
\begin{align}\label{eq:FKG}
\phi_{\mathcal{D}}^\xi[A\cap B]&\ge \phi_{\mathcal{D}}^\xi[A]\phi_{\mathcal{D}}^{\xi}[B].\end{align}
\medbreak\noindent 2. {\em Comparison between boundary conditions}: For every increasing event $A$ and every $\xi'\ge\xi$, where $\xi'\ge\xi$ means that the wired vertices in $\xi$ are also wired in $\xi'$,
\begin{align} \label{eq:CBC}
\phi_{\mathcal{D}}^{\xi'}[A]&\ge \phi_{\mathcal{D}}^\xi[A].
\end{align}
\medbreak\noindent 3. {\em Spatial Markov property}: for any configuration $\omega'\in\{0,1\}^E$ and any subdomain $\mathcal F=(W,F)$ with $F\subset E$,
\begin{equation}\label{eq:SMP} \phi_{\mathcal{D}}^\xi[\cdot_{|F}\,|\,\omega_e=\omega'_e,\forall e\notin F]= \phi_{\mathcal{F}}^{\xi'}[\cdot],
\end{equation} where the boundary conditions $\xi'$ on $\mathcal{F}$ are defined as follows:
$x$ and $y$ on $\partial \mathcal{F}$ are wired if they are connected in $\omega_{|E\setminus F}^\xi$.
\medbreak\noindent 4. {\em Mixing property}:
There exists $c_{\rm mix}>0$ such that for every $R\ge1$, every $\mathcal{D}\supset\Lambda_{R}$, every boundary condition $\xi$ on $\mathcal{D}$
and every event $A$ depending on edges in $\Lambda_{R/2}$, we have that
\begin{align}\label{eq:mix2}
c_{\rm mix}\,\phi_{\mathcal{D}}^0[A]\le \phi_{\mathcal{D}}^\xi [A]
\le c_{\rm mix}^{-1}\,\phi_{\mathcal{D}}^0 [A].
\end{align}
This property is not trivial and can be obtained using~\eqref{eq:RSW}, see e.g.~\cite{Dum13}.
\section{The renormalization step: proof of Proposition~\ref{prop:renormalization}}\label{sec:2}
In this section, we fix $R/20 \ge r\ge1$. Recall that {\em $r$-boxes} are translates of $\Lambda_r$ by vertices $x\in r\mathbb Z^2$. It is worth keeping in mind that $r$-boxes, having side length $2r$, overlap.
For a $R$-centred domain $\mathcal{D}$, introduce the subdomain $\mathcal{D}_r\subset \mathcal{D}$ obtained as the connected component of the origin in the union of the $r$-boxes included in $\mathcal{D}$ and at $L^\infty$-distance at least $10r$ from $\partial\mathcal{D}$. See Fig.~\ref{fig:2} for an illustration. Notice that the condition $R \geq 20r$ ensures that $\Lambda_R$ is contained in $\mathcal{D}_r$.
\begin{figure}[htbp]
\centering
\includegraphics[width=.5\textwidth]{domainDr.pdf}
\caption{An illustration of the domain $\mathcal D_r$, a seed $S$, and its associated domain $\mathcal D_S$.}
\label{fig:2}
\end{figure}
A {\em $r$-seed} of $\mathcal{D}$ is a $r$-box $S = \Lambda_r(x)$ such that $\Lambda_{2r}(x) \subset \mathcal{D}$ but $\Lambda_{3r}(x) \not\subset \mathcal{D}$;
in other words, such that the translate of $\mathcal{D}$ by $-x$ is $r$-centred (see Fig.~\ref{fig:2} for an example).
Let
\begin{align*}
\mathcal{D}_S&:=\Lambda_{20r}(x)\cap \mathcal{D}
\end{align*}
and say that $S$ is {\em $c_\square$-activated for} a configuration $\xi$ in $\mathcal{D}_r$
if
\begin{equation}\label{eq:def square}
\phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[S\xleftrightarrow{\Lambda_{7R}\cup \mathcal D_S} \Lambda_R|\,\omega_{|\mathcal{D}_r}=\xi]\ge c_{\square},
\end{equation}
where $c_\square>0$ is a constant that will be selected properly in the next lemma.
One may observe that the small domain $\mathcal{D}_S$ around the seed does not necessarily intersect the domain $\mathcal{D}_r$ or the box $\Lambda_{7R}$; in such cases the left-hand side of the equation above is always equal to 0, and the seed $S$ is never $c_\square$-activated.
Let $\mathbf N_r(\mathcal{D},R,c_\square)$ be the number of $r$-seeds of $\mathcal{D}$ that are $c_\square$-activated for the configuration in~$\mathcal{D}_r$. We emphasise that $\mathbf N_r(\mathcal{D},R,c_\square)$ is measurable with respect to the configuration restricted to $\mathcal D_r$.
Even though $\mathbf M_r(\mathcal{D},R)$ is defined in terms of boundary $r$-boxes connected to $\Lambda_R$, while $\mathbf N_r(\mathcal{D},R,c_\square)$ is defined in terms of $r$-seeds that are $c_\square$-activated (and therefore not really connected to $\Lambda_R$),
one should consider these two quantities comparable. The following lemma provides a bound between the expectation of $\mathbf N_r(\mathcal{D},R,c_\square)$ and that of $\mathbf M_r(\mathcal{D},R)$.
\begin{lemma}\label{lem:10}
There exist $c_5,c_\square>0$ such that for every $1\le r\le R/20$ and every $R$-centred domain $\mathcal{D}$,
\begin{align}\label{eq:crucial_exp}
\phi_{\mathcal{D}}^0[\mathbf N_{r}(\mathcal{D},R,c_\square)]
\ge c_5\phi_{\mathcal{D}_r}^0[\mathbf M_r(\mathcal{D}_r,R)].
\end{align}
\end{lemma}
\begin{proof}
By definition,~\eqref{eq:crucial_exp} can be rewritten as
\begin{equation}
\label{eq:1}
\sum_{S\text{ $r$-seed}}\phi_{\mathcal{D}}^0[S\text{ is $c_\square$-activated}]\ge c_5 \sum_{\substack{B\text{ $r$-box}:\\B\cap \partial \mathcal D_r\neq\emptyset}}\phi_{\mathcal{D}_r}^0[B\xleftrightarrow{\Lambda_{7R}} \Lambda_R].
\end{equation}
In order to prove this equation, we fix a $r$-box $B=\Lambda_r(x)$ intersecting $\partial \mathcal{D}_r$ and consider a $r$-seed $S=S(B)\subset \Lambda_{15r}(x)$; such a seed exists since $B$ is at distance between $8r$ and $12r$ from $\partial\mathcal{D}$.
For each such pair $(B,S)$ we will prove that the inequality
\begin{equation}
\label{eq:5}
\phi_{\mathcal{D}}^0[S\text{ is $c_\square$-activated}]\ge c_\square\phi_{\mathcal{D}_r}^0[B\xleftrightarrow{\Lambda_{7R}} \Lambda_R]
\end{equation}
holds for a suitable choice of the constant $c_\square$.
By summing this equation over all $r$-boxes intersecting $\partial \mathcal{D}_r$,
and using that the number of boxes $B$ corresponding to any given seed $S$ is bounded by a constant $C$,
this concludes the proof with $c_5=c_\square/C$.
We now prove~\eqref{eq:5}. First, by comparison between boundary conditions~\eqref{eq:CBC} together with the fact that being $c_\square$-activated is an increasing event, we have
\begin{align}\label{eq:6}
\phi_{\mathcal{D}}^0[S\text{ $c_\square$-activated}]&\ge \phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[S\text{ $c_\square$-activated}]\\
&\ge\phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[S\text{ $c_\square$-activated} \:|\: B\xleftrightarrow{\mathcal{D}_r\cap\Lambda_{7R}} \Lambda_R] \phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[B\xleftrightarrow{\mathcal{D}_r\cap\Lambda_{7R}} \Lambda_R].\notag
\end{align}
Define the random variable
$X(\omega)=\phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[S\xleftrightarrow{\Lambda_{7R}\cup \mathcal D_S} \Lambda_R|\,\omega_{|\mathcal{D}_r}]$,
and observe that $S$ is $c_\square$-activated if and only if $X(\omega) \geq c_\square$.
Apply the inequality $\mathbf1_{X\ge c_\square}\ge X-c_\square$ to deduce that
\begin{align}\label{eq:9}
\phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[S\text{ $c_\square$-activated}\: | \: B\xleftrightarrow{\mathcal{D}_r\cap\Lambda_{7R}} \Lambda_R]
&\ge\phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[S \xleftrightarrow{\Lambda_{7R}\cup \mathcal{D}_S}\Lambda_R\: | \: B\xleftrightarrow{\mathcal{D}_r\cap\Lambda_{7R}} \Lambda_R]-c_\square.
\end{align}
For the above, it is essential that $X$ is measurable in terms of the configuration in $\mathcal{D}_r$.
Finally, we can use the RSW-estimate~\eqref{eq:RSW} to bound the first term in the lower bound above as follows. Write $H$ for the event that all $r$-boxes $B_0$ with $\overline B_0\subset\mathcal{D}_S$ satisfy $\mathrm{Circ}_{B_0}$.
Observe that, if $B$ is connected to $\Lambda_R$ inside $\mathcal{D}_r$ and if $H$ occurs, then $S$ is connected to $\Lambda_R$ inside $(\mathcal{D}_r \cap\Lambda_{7R})\cup \mathcal{D}_S$ (see Fig.~\ref{fig:domains}).
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{approximatedDomains3.pdf}
\caption{The domain $\mathcal{D}_r$ associated to $\mathcal{D}$ is grey; several seeds are depicted. Notice that some seeds cannot be activated (for instance the lowest one). The box $B$ and seed $S$ are marked in red, the domain $\mathcal{D}_S$ in yellow. When $H$ occurs and $B$ is connected to $\Lambda_R$ inside $\mathcal{D}_r$, then $S$ is connected to $\Lambda_R$ inside $\mathcal{D}_r\cup \mathcal{D}_S$.}
\label{fig:domains}
\end{center}
\end{figure}
We deduce from~\eqref{eq:RSW}, the FKG inequality~\eqref{eq:FKG} and the comparison between boundary conditions~\eqref{eq:CBC} that
$H$ occurs with probability bounded below by some constant $c > 0$. Thus
\begin{align}\label{eq:8}
&\phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[S \xleftrightarrow{\Lambda_{7R}\cup\mathcal D_S}\Lambda_R\: | \: B\xleftrightarrow{\mathcal{D}_r\cap\Lambda_{7R}} \Lambda_R] \ge\phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[H\: | \: B\xleftrightarrow{\mathcal{D}_r\cap\Lambda_{7R}} \Lambda_R]\ge \phi_{\mathcal{D}_r\cup\mathcal{D}_{S}}^0[H]
\ge c.
\end{align}
Equations~\eqref{eq:6},~\eqref{eq:9},~\eqref{eq:8} and the comparison between boundary conditions imply
\begin{align*}
\phi_{\mathcal{D}}^0[S\text{ $c_\square$-activated}]
\ge(c-c_\square)\phi_{\mathcal{D}_r\cup\mathcal{D}_{S}}^0[B\xleftrightarrow{\mathcal{D}_r} \Lambda_R]
\ge(c-c_\square)\phi_{\mathcal{D}_r}^0[B\xleftrightarrow{\Lambda_{7R}} \Lambda_R],
\end{align*}
which concludes the proof if we choose $c_\square=c/2$.
\end{proof}
We are ready to prove Proposition~\ref{prop:renormalization}.
\begin{proof}[Proof of Proposition~\ref{prop:renormalization}]
By Lemma~\ref{lem:10}, it suffices to prove the existence of $c > 0$
such that for every $1\le r\le R/20$ and every $R$-centred domain $\mathcal{D}$,
\begin{equation}\label{eq:ai1}
\phi_\mathcal{D}^0 [\Lambda_R\xleftrightarrow{\Lambda_{9R}} \partial\mathcal{D}]
\ge c \phi_\mathcal{D}^0[\mathbf N_r(\mathcal{D},R,c_\square)]\min\{p(r),(\tfrac rR)^2\}.
\end{equation}
Fix $r$, $R$ and $\mathcal{D}$ as above. For each seed $S=\Lambda_r(x)$, introduce the events
\begin{align*}
E_S&:=\{S\xleftrightarrow{(\mathcal{D}_r\cap\Lambda_{7R})\cup\mathcal{D}_S}\Lambda_R\},\\
F_S&:=\{S\xleftrightarrow{\Lambda_{9r}(x)}\partial\mathcal{D}\}. \end{align*}
Fix a configuration $\xi$ in $\mathcal{D}_r$ and write $\mathbf N_r(\xi)$ for its number of $c_\square$-activated seeds. Among them, we can select a subset $\mathbf A(\xi)$ of at least $\frac 1 C \mathbf N_r(\xi)$ $c_\square$-activated seeds with disjoint corresponding domains $\mathcal D_S$, where $C$ is an absolute constant bounding from above the number of domains $\mathcal D_{S'}$ intersecting a fixed domain $\mathcal D_S$ (say $C=100^2$).
The FKG inequality~\eqref{eq:FKG} and the comparison between boundary conditions~\eqref{eq:CBC} give that for every $S\in \mathbf A(\xi)$,
\begin{align}
\phi_{\mathcal{D}_r \cup \mathcal{D}_S}^0[\mathrm{Circ}_S\cap E_S\cap F_S|\, \omega_{|\mathcal{D}_r}=\xi]
& \ge \phi_{\mathcal{D}_S\setminus \mathcal{D}_r}^0[\mathrm{Circ}_S]\, \phi_{\mathcal{D}_r\cup\mathcal{D}_S}^0[E_S|\,\omega_{|\mathcal{D}_r}=\xi]\, \phi_{\mathcal{D}_S\setminus \mathcal{D}_r}^0[F_S] \nonumber\\
& \ge c_{\rm cir}\,c_\square\, p(r),
\label{eq:renorm11}
\end{align}
where in the last inequality we used~\eqref{eq:RSW}, the definition of $c_\square$-activation, and the definition of $p(r)$.
If $\mathrm{Circ}_S\cap E_S\cap F_S$ occurs for some $S\in\mathbf A(\xi)$, then $\Lambda_R$ is connected in $\Lambda_{9R}$ to $\partial\mathcal{D}$ (we use that the respective supports $\Lambda_{2r}(x)$, $(\mathcal{D}_r\cap\Lambda_{7R})\cup\mathcal{D}_S$, and $\Lambda_{9r}(x)$ of the three events are all subsets of $\Lambda_{9R}$, thanks to the condition $R\ge 20r$).
Since the seeds in $\mathbf A(\xi)$ have disjoint domains $\mathcal{D}_S$, and using the comparison between boundary conditions,
\eqref{eq:renorm11} implies that, under $\phi_{\mathcal{D}}^0[.|\, \omega_{|\mathcal{D}_r}=\xi]$,
the probability that $\Lambda_{R}$ is connected in $\Lambda_{9R}$ to $\partial\mathcal{D}$ is at least
the probability that a binomial random variable with parameters $\frac1C\mathbf N_r(\xi)$ and $c_{\rm cir}c_\square p(r)$ is strictly positive.
Averaging on $\xi$ gives
\begin{align*}
\phi_\mathcal{D}^0 [\Lambda_R\xleftrightarrow{\Lambda_{9R}} \partial\mathcal{D}]&\ge \phi_\mathcal{D}^0[1-(1-c_{\rm cir}c_\square p(r))^{\mathbf N_r(\xi)/C}]\\
&\ge \phi_\mathcal{D}^0[1-(1-c\min\{p(r),(\tfrac rR)^2\})^{\mathbf N_r(\xi)/C}]\\
&\ge \tfrac{c}{e}\phi_\mathcal{D}^0[\mathbf N_r(\mathcal{D},R,c_\square)]\min\{p(r),(\tfrac rR)^2\} ,
\end{align*}
where in the second inequality we used that $x\mapsto 1-(1-x)^n$ is increasing in $x$, and in the third that $c\in(0,c_{\rm cir}c_\square)$ is chosen small enough that $c(r/R)^2\mathbf N_r(\xi)/C \leq 1$ for {\em every} realization of $\xi$, so that we can use that $1-(1-x)^n\ge nx/e$.
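For the reader's convenience, here is a short proof of this elementary bound: for $x\in[0,1]$ and $n\ge 0$ with $nx\le 1$,
\begin{equation*}
1-(1-x)^n\ \ge\ 1-e^{-nx}\ \ge\ \tfrac{nx}{e},
\end{equation*}
where the first inequality uses $1-x\le e^{-x}$, and the second holds since the function $t\mapsto 1-e^{-t}-t/e$ vanishes at $t=0$ and has derivative $e^{-t}-1/e\ge0$ on $[0,1]$.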
In conclusion,~\eqref{eq:ai1} is proved.
\end{proof}
\section{Crossings in general quads: from Proposition~\ref{prop:crucial_exist} to Theorem~\ref{thm:sRSW}}\label{sec:3.3}
In order to prove Theorem~\ref{thm:sRSW} it suffices to show the lower bound (\emph{i.e.}~the first item) for free boundary conditions. Indeed, the lower bound for arbitrary boundary conditions then follows from the comparison between boundary conditions~\eqref{eq:CBC}. The upper bound (\emph{i.e.}~the second item) may be deduced from the lower bound by duality (see \cite{Gri06} for background on duality for the random cluster model) and the fact that $\ell_\mathcal{D}[(ab),(cd)]=1/\ell_\mathcal{D}[(bc),(da)]$. The rest of the section is therefore dedicated to showing the lower bound for free boundary conditions.
The challenge here is to translate the estimates of Proposition~\ref{prop:crucial_exist} to treat crossing probabilities in general quads.
We divide the proof into two parts: we first treat quads with small extremal distance, and then we generalise to quads with arbitrary extremal distance.
\paragraph{Quads with small extremal distance}
Let us show that there exist constants $\eta,m >0$ such that for any discrete quad $(\mathcal{D},a,b,c,d)$
with $\ell_{\mathcal{D}}[(ab),(cd)] \leq m$,
\begin{align*}
\phi_{\mathcal{D}}^0[(ab)\longleftrightarrow(cd)] \geq \eta.
\end{align*}
The proof will be based on the following fact shown in \cite{KemSmi12} within the proof of the implication ${\bf G2}\Rightarrow{\bf C2}$ of Proposition~2.6.
\begin{fact}\label{fact}
There exists a constant $m>0$ such that for every quad $(\mathcal{D},a,b,c,d)$
with $\ell_{\mathcal{D}}[(ab),(cd)] \leq m $,
there exist $x\in\mathbb{R}^2$ and $R >0$ such that any crossing $\gamma$ from $(bc)$ to $(da)$ in $\mathcal{D}$ contains a sub-path that
connects $\Lambda_R(x)$ to $\partial\Lambda_{2R}(x)$.
\end{fact}
From now on, fix a quad $(\mathcal{D},a,b,c,d)$ with $\ell_{\mathcal{D}}[(ab),(cd)] \leq m$ and let $x$ and $R$ be given by the previous fact, with $R$ minimal for this property. To simplify notation, let us translate $\mathcal{D}$ so that $x = 0$. Observe that the minimal graph distance between $(ab)$ and $(cd)$ in $\mathcal{D}\cap(\Lambda_{5R/3}\setminus\Lambda_{4R/3})$ is larger than $R/3$, since otherwise one may find $x'$ and $R'<R$ satisfying the assumptions of Fact~\ref{fact}, which would contradict the minimality of $R$.
Below we use the notation $\mathrm{Circ}_B$ and $F_S$ of Section~\ref{sec:2}. Set $r:=\lfloor R/60\rfloor$ and
consider the event $H$ that
\begin{itemize}[nolistsep]
\item $\mathrm{Circ}_B$ occurs for every $r$-box $B\subset\Lambda_{2R}$ with $\overline B\subset\mathcal{D}$,
\item $F_S$ occurs for every $r$-seed $S\subset\Lambda_{2R}$ of $\mathcal{D}$.
\end{itemize}
Then, if $H$ occurs, we claim that $(ab)$ is connected to $(cd)$ inside $\mathcal{D}$.
Indeed, by the choices of $R$ and $r$, there exists a seed $S = \Lambda_r(x)$
with $\Lambda_{9r}(x)\subset \Lambda_{5R/3}\setminus\Lambda_{4R/3}$
and such that $\Lambda_{9r}(x)$ intersects the arc $(ab)$, but not any other part of $\partial \mathcal{D}$.
The same holds for a seed $S'$, with the arc $(ab)$ replaced by $(cd)$.
When $H$ occurs, there exist open circuits contained in $\mathcal{D}$ surrounding each of these two seeds, and connected to each other inside $\mathcal{D}$.
Moreover, since $F_{S}$ and $F_{S'}$ occur, the circuits above are connected to $(ab)$ and $(cd)$, respectively.
See Fig.~\ref{fig:quad_annulus} for an illustration.
The FKG inequality~\eqref{eq:FKG} together with~\eqref{eq:RSW} and Proposition~\ref{prop:crucial_exist} imply that
\begin{align*}
\phi_{\mathcal{D}}^0[(ab)\longleftrightarrow(cd)]\ge\phi_{\mathcal{D}}^0[H]\ge (c_{\rm cir}c_0)^{C}=:\eta>0,
\end{align*}
where $C$ is a deterministic bound on the number of $r$-boxes in $\Lambda_{2R}$; since $r=\lfloor R/60\rfloor$, this number is of order $(R/r)^2$ and $C$ may be chosen universal.
\begin{figure}
\begin{center}
\includegraphics[width = 0.45\textwidth]{quad_annulus3.pdf}\qquad
\includegraphics[width = 0.45\textwidth]{quad_annulus2.pdf}
\caption{Quads $(\mathcal{D},a,b,c,d)$ with small extremal distance between $(ab)$ and $(cd)$.
{\em Left:}~The minimality of $R$ ensures that the arcs $(ab)$ and $(cd)$ are at a distance at least $R/3$ from each other in the middle annulus $\Lambda_{5R/3}\setminus\Lambda_{4R/3}$. Indeed, otherwise a smaller annulus (in grey) satisfying Fact~\ref{fact} may be found.
{\em Right:} When $H$ occurs, all seeds in $\Lambda_{2R}$ are connected to each other; if in addition $F_S$ and $F_{S'}$ occur for particular seeds $S$ and $S'$, then $\mathcal{D}$ contains a crossing from $(ab)$ to $(cd)$.}
\label{fig:quad_annulus}
\end{center}
\end{figure}
\paragraph{Quads with arbitrary extremal distance}
Fix $M >2$ and some quad $(\mathcal{D},a,b,c,d)$ with $\ell:= \ell_\mathcal{D}[(ab),(cd)] \leq M$.
By potentially restricting the crossing to a smaller quad, we may assume $\ell > 2$, which we do for simplicity.
Let $\Psi$ be the conformal map that maps $\mathcal{D}$ to the rectangle $[-1,1] \times [0, 2\ell]$,
with $a,b,c,d$ being mapped to the corners $(-1,0),(1,0),(1,2\ell)$ and $(-1,2\ell)$, respectively.
For $\delta \in (0,1)$, define the simply connected domain (see Fig.~\ref{fig:short_to_long}),
$$\mathcal{Q}= \mathcal{Q}(\delta):= [-1,1]^2 \setminus ([-\delta,\delta]^2 \cup \{0\}\times [-1,-\delta]).$$
Consider four points (prime ends to be precise) $u,v,w$ and $t$ on $\partial \mathcal{Q}$ that split its boundary into four arcs:
\begin{itemize}[noitemsep]
\item $(uv)$ is the right side of the vertical segment $\{0\} \times[-1,-\delta]$;
\item $(vw)$ coincides with the boundary of $[-1,1]^2$;
\item $(wt)$ is the left side of $\{0\} \times [-1,-\delta]$;
\item $(tu)$ coincides with the boundary of $[-\delta,\delta]^2$.
\end{itemize}
The quantity $\ell_\mathcal{Q}[(uv),(wt)]$ can be made smaller than the constant $m$ given by the first part of this section, provided $\delta$ is chosen sufficiently small.
For $h=\delta, 2\delta,\dots, \ell$ (we assume that $\ell/\delta$ is an integer), write $\mathcal{Q}_h$ for the intersection of $\mathcal{Q} + (0,h)$ with $[-1,1] \times [0, 2\ell]$.
Any such translate is a simply connected domain.
We will consider it with four marked prime ends $u_h,v_h,w_h,t_h$ given by
\begin{itemize}[noitemsep]
\item if $h \geq 1$, $(\mathcal{Q}_h,u_h,v_h,w_h,t_h)$ is simply a translate of $(\mathcal{Q},u,v,w,t)$;
\item if $\delta < h < 1$, $v_h = (1,0)$, $w_h = (-1,0)$, with $u_h$ and $t_h$ the translates of $u$ and $t$ by $(0,h)$;
\item if $h= \delta $, let $v_h = (1,0)$, $w_h = (-1,0)$, $u_h = (\delta,0)$ and $t_h = (-\delta, 0)$.
\end{itemize}
Notice that in all these cases
\begin{align}\label{eq:sfA}
\ell_{\mathcal{Q}_h}[(u_hv_h),(w_ht_h)] \leq \ell_{\mathcal{Q}}[(uv),(wt)] \leq m.
\end{align}
Consider now the pre-images $\Psi^{-1}(\mathcal{Q}_h; u_h,v_h,w_h,t_h)$ of the domains $(\mathcal{Q}_h; u_h,v_h,w_h,t_h)$.
To avoid overburdening the notation, we will treat them as discrete quads;
this is not generally true, and $\Psi^{-1}(\mathcal{Q}_h; u_h,v_h,w_h,t_h)$ should be replaced below by a discretisation of itself.
This may be done with only a limited influence on the constant $\eta(M)$ that is obtained at the end of the proof.
Since extremal length is preserved by conformal maps,
the extremal distance in $\Psi^{-1}(\mathcal{Q}_h)$ between $\Psi^{-1}(u_h v_h)$ and $\Psi^{-1}(w_h t_h)$ is smaller than $m$ for any $h=\delta, 2\delta,\dots, \ell$.
Write $A_h$ for the event that $\Psi^{-1}(\mathcal{Q}_h)$
contains an open path from $\Psi^{-1}(u_h v_h)$ to $\Psi^{-1}(w_h t_h)$.
The first part of this section implies that
\begin{align*}
\phi_\mathcal{D}^0 [A_h] \geq \phi_{\Psi^{-1}(\mathcal{Q}_h)}^0 [A_h] \geq \eta.
\end{align*}
\begin{figure}
\begin{center}
\includegraphics[width = 0.9\textwidth]{short_to_long.pdf}
\caption{A domain $(\mathcal{D};a,b,c,d)$ is transformed by the conformal map $\Psi$ into the rectangle $[-1,1]\times [0,2\ell]$.
The domains $\mathcal{Q}_{k\delta}$ (on the right) are used to pave the lower part of the rectangle;
if they all contain crossings, then the vertical line $\{0\}\times [0,\ell]$ is surrounded by an arc.
The same is true before the application of $\Psi$, that is in $\mathcal{D}$ (left image). Finally, if $B$ and $T$ both occur, then $(ab)$ is connected to $(cd)$.}
\label{fig:short_to_long}
\end{center}
\end{figure}
We recommend to look at Fig.~\ref{fig:short_to_long} for the definitions coming next. Let $B$ be the event that there exists an open path in $\mathcal{D}$
with endpoints on $\Psi^{-1}([-1,0) \times \{0\})$ and $\Psi^{-1}((0,1] \times \{0\})$, respectively,
and which does not cross $\Psi^{-1}(\{0\}\times [0,\ell])$.
If the events $A_h$ with $h = \delta, 2\delta,\dots, \ell$ occur simultaneously, then so does $B$ (this is easier to see after transformation by $\Psi$).
By the FKG inequality~\eqref{eq:FKG} and the previous display,
\begin{align*}
\phi_\mathcal{D}^0 [B] \geq \prod_{k=1}^{\ell /\delta} \phi_\mathcal{D}^0 [A_{k\delta}] \geq \eta^{\ell/\delta}.
\end{align*}
Symmetrically, if $T$ is the event that $\mathcal{D}$ contains a path connecting
$\Psi^{-1}([-1,0) \times \{2\ell\})$ and $\Psi^{-1}((0,1] \times \{2\ell\})$ and which avoids $\Psi^{-1}(\{0\}\times [\ell, 2\ell])$,
then we also have $\phi_\mathcal{D}^0 [T]\geq \eta^{\ell/\delta}$.
Finally, if $T$ and $B$ both occur, then $\mathcal{D}$ contains a crossing from $(ab)$ to $(cd)$.
The FKG inequality~\eqref{eq:FKG} gives that
$$ \phi_{\mathcal{D}}^0[(ab)\longleftrightarrow(cd)] \geq \phi_\mathcal{D}^0 [B\cap T]\ge \eta^{2\ell/\delta},$$
which provides the desired conclusion with $\eta(M):= \eta^{2M/\delta}$, since $\ell\le M$ and $\eta\le1$.\hfill $\square$
\begin{remark}
Note that the argument of this section also implies the following result, and here $q=4$ is not excluded.
For all $L > 0$, there exists $\eta(L) >0$ such that, for all~$1\le q\le 4$
and~$(\mathcal{D}, a,b,c,d)$ a discrete quad,
if~$\ell_\mathcal{D}[(ab),(cd)] \le L$ then~
\[\phi_{\mathcal{D}}^{0/1}[\mathcal{C}(\mathcal{D})] \geq\eta(L),\]
where~$0/1$ denotes the boundary condition on~$\mathcal{D}$ where the arcs~$(ab)$ and~$(cd)$ are wired and the rest of the boundary is free.
Indeed, a careful inspection of the proofs in this section shows the existence of
a family of annuli~${\rm Ann}(x_i; r_i,2r_i) := \Lambda_{2r_i}(x_i) \setminus \Lambda_{r_i}(x_i)$ with~$i = 1,\dots, k$ such that,
if each one contains an open path separating~$\Lambda_{r_i}(x_i)$ from~$\Lambda_{2r_i}(x_i)$,
then~$\mathcal{D}$ is crossed from~$(ab)$ to~$(cd)$ by an open path
(for annuli intersecting~$\mathcal{D}^c$, we ask for the existence of an open path in~$\mathcal{D} \cap {\rm Ann}(x_i; r_i,2r_i)$
that separates~$\Lambda_{r_i}(x_i)$ from~$\Lambda_{2r_i}(x_i)$ \emph{inside}~$\mathcal{D}$).
Moreover,~$k$ is bounded in terms of~$\ell_\mathcal{D}[(ab),(cd)]$ only, and each
${\rm Ann}(x_i; r_i,2r_i) \cap \mathcal{D}$ intersects the boundary of~$\mathcal{D}$ only along the wired arcs.
By crossing estimates from \cite{DumSidTas16} and \eqref{eq:CBC}, each annulus contains an open path separating the inside from the outside with uniformly positive probability. Finally, by \eqref{eq:FKG} and the bound on~$k$, we conclude that the probability of~$\mathcal{C}(\mathcal{D})$ may be bounded in terms of~$\ell_\mathcal{D}[(ab),(cd)]$ only.
\end{remark}
\section{First moment estimate: proof of Proposition~\ref{prop:10}}\label{sec:3}
The section is divided into three parts.
We first show how the case for general $r$ follows from that with $r=0$.
Then, we introduce the necessary background on parafermionic observables to prove the $r=0$ case.
Finally, the last part is devoted to the proof of the $r=0$ case.
\subsection{Reduction to the case of $r=0$}
\begin{figure}
\begin{center}
\includegraphics[width =0.4\textwidth]{renormalization_r0.pdf}
\caption{When $x$ is connected to $\Lambda_R$ inside $\mathcal{D}_r'$,
then $B$ is connected to $\Lambda_R$ and $x$ is connected to $\partial B$ inside $\mathcal{D}_r'$. }
\label{fig:Bx}
\end{center}
\end{figure}
Fix $R \geq r\ge1$ and let $\mathcal{D}$ be an $R$-centred domain.
By adapting the constant $c_4$ in~\eqref{eq:fund}, we may restrict our study to the case where $R/r$ is large enough; we make this assumption below.
Let $\mathcal{D}'_r$ be the connected component of the origin in the union of $r$-boxes included in $\mathcal{D}$.
Consider $x\in \partial\mathcal{D}'_r$ and let $B = \Lambda_r(y)$ be the $r$-box with center $y \in \partial\mathcal{D}'_r$ closest to~$x$.
When $R/r$ is large enough, $\Lambda_R$ and $B$ do not intersect.
Then, the comparison between boundary conditions~\eqref{eq:CBC} and the mixing property~\eqref{eq:mix2} give
\begin{align*}
\phi_{\mathcal{D}'_r}^0[x\xleftrightarrow{\Lambda_{7R}} \Lambda_R]
&\le \phi_{B\cap\mathcal{D}'_r}^{0/1}[x\longleftrightarrow \partial B\setminus \partial \mathcal{D}'_r]\,\phi_{\mathcal{D}'_r}^0[B\xleftrightarrow{\Lambda_{7R}} \Lambda_R]\le C\pi_1^+(\|x - y\|)\phi_{\mathcal{D}}^0[B\xleftrightarrow{\Lambda_{7R}} \Lambda_R],
\end{align*}
where $\phi_{B\cap\mathcal{D}'_r}^{0/1}$ denotes the measure on $B\cap\mathcal{D}'_r$ with free boundary conditions on $\partial \mathcal{D}'_r$ and wired on the rest of the boundary and $\|\cdot\|$ stands for the $L^\infty$-distance; see also Fig.~\ref{fig:Bx}.
Notice that any $B$ as above must intersect $\mathcal{D}^c$ (otherwise its center would not lie on $\partial\mathcal{D}'_r$).
By summing over all $x\in \partial\mathcal{D}'_r$ we find
\begin{equation}\label{eq:hh4}
\phi_{\mathcal{D}_r'}^0[\mathbf M_0(\mathcal{D}_r',R)]
= \sum_{x\in \partial\mathcal{D}'_r}\phi_{\mathcal{D}'_r}^0[x\xleftrightarrow{\Lambda_{7R}} \Lambda_R]
\leq C\sum_{k\le r/2}\pi_1^+(k) \sum_{B\cap \mathcal{D}^c\ne\emptyset}\phi_{\mathcal{D}}^0[B\xleftrightarrow{\Lambda_{7R}} \Lambda_R].
\end{equation}
Apply now the case $r=0$ of Proposition~\ref{prop:10} to the $R$-centred domain\footnote{Formally, $\mathcal{D}'_r$ is not always $R$-centred, but is $R'$-centred for some $R'$ between $R/2$ and $R$; this suffices to apply Proposition~\ref{prop:10}.} $\mathcal{D}'_r$ to bound the left-hand side from below by $c_4R\pi_1^+(R)$.
Moreover, Lemma~\ref{lem:mn} bounds from above the first sum on the right-hand side by $r\pi_1^+(r)$.
Dividing by the latter, we obtain~\eqref{eq:fund} for~$r \geq 1$.
\subsection{Background on parafermionic observables}\label{sec:3.1}
The proof of Proposition~\ref{prop:10} relies heavily on parafermionic observables, which we define below.
These definitions are now classical and we refer to \cite{Dum17a} for details. We also recommend that the reader looks at Fig.~\ref{fig:ab}.
Let $\Omega=(V,E)$ be a discrete domain, let $a$ and $b$ be two vertices on $\partial \Omega$. The triplet $(\Omega,a,b)$ is called a {\em Dobrushin domain}. Orient $\partial \Omega$ in counterclockwise order. It is divided into two boundary arcs denoted by $(ab)$ and $(ba)$: the first one from $a$ to $b$ (excluding $a$ and $b$) and the second one from $b$ to $a$ (including the endpoints).
The {\em Dobrushin boundary conditions} are defined to be free on $(ab)$ and wired on $(ba)$.
Below, the measure on $(\Omega,a,b)$ with Dobrushin boundary conditions is denoted by $\phi^{0/1}_{\Omega}$.
Let $(\mathbb Z^2)^\star$ be the dual of $\mathbb Z^2$ (defined as the translate of $\mathbb Z^2$ by $(1/2,1/2)$). This way, each edge $e$ of $\mathbb Z^2$ is associated to a unique edge $e^\star$ (the one that crosses $e$) of $(\mathbb Z^2)^\star$. The dual $\Omega^\star=(V^\star,E^{\star})$ of the domain $\Omega=(V,E)$ is the subgraph of $(\mathbb Z^2)^\star$ spanned by $E^\star$, where $E^\star$ is the set of dual edges associated to $E'=E\setminus\{\text{edges of $(ba)$}\}$.
Any configuration $\omega$ on $\Omega$ will be completed by open edges on $(ba)$ and closed edges on $\Omega^c$, which leads us to an identification between configurations $\omega$ on $\Omega'=(V,E')$ and dual configurations $\omega^\star$ on $\Omega^\star=(V^\star,E^\star)$. Then, the dual configuration $\omega^*$ has the property that dual edges between vertices of $\partial\Omega^*$ that are bordering $(ab)$ are open (we call the set of such edges $(ab)^*$). See Fig.~\ref{fig:ab} for an illustration.
The loop representation of a configuration on $\Omega$ is supported on the {\em medial graph} of $\Omega$ defined as follows.
Let $(\mathbb{Z}^2)^\diamond$ be the {\em medial} lattice, with vertex-set given by the midpoints of edges of $\mathbb{Z}^2$ and edges between pairs of nearest vertices (\emph{i.e.}~vertices at a distance $\sqrt 2/2$ of each other). It is a rotated and rescaled version of $\mathbb{Z}^2$.
Let $\Omega^\diamond$ be the subgraph of $(\mathbb{Z}^2)^\diamond$ spanned by the edges of $(\mathbb{Z}^2)^\diamond$ adjacent to a face corresponding to a vertex of $\Omega\setminus (ba)$ or $\Omega^* \setminus (ab)^*$.
Let $e_a$ and $e_b$ be the two medial edges entering and exiting $\Omega^\diamond$ between the arcs $(ba)$ and $(ab)^*$.
Let $\omega$ be a configuration on $\Omega$; recall its dual configuration $\omega^*$.
Draw self-avoiding paths on $\Omega^\diamond$ as follows: a path arriving at a vertex of the medial lattice
always takes a $\pm \pi/2$ turn at vertices so as not to cross the edges of $\omega$ or $\omega^*$.
The loop configuration thus defined is formed of a path between $e_a$ and $e_b$ and disjoint loops;
together these form a partition of the edges of $\Omega^\diamond$.
We will not detail further the definition of the loop representation, but rather direct the reader to Fig.~\ref{fig:ab} and \cite{Dum17a}; let us simply point out that
\begin{itemize}
\item any vertex of $\Omega^\diamond$ (with the exception of the endpoints of $e_a$ and $e_b$) is contained either in an edge of $\omega$ or an edge of $\omega^*$.
Therefore there is exactly one coherent way for the loop to turn at any vertex of $\Omega^\diamond$;
\item the edges of $\omega$ in $(ba)$ and the edges of $\omega^*$ in $(ab)^*$ are such that the loops, when reaching boundary vertices, turn so as to remain in $\Omega^\diamond$.
\end{itemize}
In the loop configuration, the self-avoiding curve with endpoints $e_a$ and $e_b$ is called the \emph{exploration path};
it is denoted by $\gamma=\gamma(\omega)$ and is oriented from $e_a$ to $e_b$.
For an edge $e \in \gamma$, let $\text{W}_{\gamma}(e,e_b)$ be the winding of $\gamma$ between $e$ and $e_b$,
that is $\pi/2$ times the number of left turns minus the number of right turns taken by $\gamma$ when going from $e$ to $e_b$.
\begin{definition}\label{def:parafermionic_observable}
Consider a Dobrushin domain $(\Omega,a,b)$.
The {\em parafermionic observable} $F=F_{\Omega,a,b}$ is defined for any (medial) edge $e$ of $\Omega^\diamond$ by
\begin{equation*}
F(e) ~:=~\phi^{0/1}_{\Omega}[{\rm e}^{{\rm i}\sigma
\text{W}_{\gamma}(e,e_b)} \mathbf1_{e\in \gamma}],
\end{equation*}
where $\sigma\in[0,1]$ is the solution of the equation
\begin{equation}\label{eq:hahaha}
\displaystyle \sin (\sigma \tfrac\pi2) = \sqrt{q}/2.
\end{equation}
\end{definition}
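For concreteness, let us record the solutions of~\eqref{eq:hahaha} at a few standard values of $q$ (a direct computation):
\begin{equation*}
\sigma=\tfrac13\text{ for }q=1,\qquad \sigma=\tfrac12\text{ for }q=2,\qquad \sigma=\tfrac23\text{ for }q=3,\qquad \sigma=1\text{ for }q=4.
\end{equation*}
In particular, $\sigma<1$, and hence $\cos[(1+\sigma)\tfrac\pi4]>0$, exactly when $q<4$; this positivity is where the restriction $q<4$ enters the estimates below.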
The parafermionic observable satisfies a very special property first observed in \cite{Smi10} (see also \cite[Thm.~5.16]{Dum17a});
it applies for any $q>0$ when $p=\sqrt q /(1 + \sqrt q)$.
For any Dobrushin domain $(\Omega,a,b)$ and any vertex $v$ of $\Omega^\diamond$ corresponding to an edge of $\Omega \setminus (ba)$,
\begin{equation}\label{rel_vertex}
\sum_{i=1}^4 \eta(e_i) F(e_i) = F(e_1) - {\rm i}F(e_2) - F(e_3) + {\rm i}F(e_4) = 0,
\end{equation}
where $e_1$, $e_2$, $e_3$ and $e_4$ are the four edges incident to $v$, indexed in clockwise order,
and $\eta(e_i)$ is the complex number of norm one with same direction as $e_i$ and orientation from $v$ towards the other endpoint of $e_i$.
Write ${\rm Int}(\mathcal{C})$ for the set of vertices $v$ of the medial lattice which correspond to a primal edge of $\Omega \setminus (ba)$,
and $\mathcal{C}$ for the set of medial edges of $\Omega^\diamond$ with exactly one endpoint in ${\rm Int}(\mathcal{C})$.
Then, summing the relation above over all vertices $v \in {\rm Int}(\mathcal{C})$, we find
\begin{equation}\label{eq:rel_vertex}
\sum_{e\in \mathcal{C}}\eta(e)F(e)=0,
\end{equation}
where $\eta(e)$ is the complex number of norm one with direction given by $e$ and orientation from the endpoint of $e$ in ${\rm Int}(\mathcal{C})$ towards the outside.
\begin{remark}
This relation should be understood as ``the contour integral of the parafermionic observable along the boundary of $\Omega^\diamond$ is 0''.
The careful reader may, however, notice that $\mathcal{C}$ does not always form a closed curve (see Fig.~\ref{fig:ab}).
\end{remark}
\subsection{Proof of Proposition~\ref{prop:10} when $r=0$}\label{sec:r=0}
For technical reasons related to the parafermionic observable, we first provide an estimate in a geometry given by special Dobrushin domains. For $\ell,m\ge0$, call $(\Omega,a,b)$ an {\em $(m,\ell)$-corner} Dobrushin domain (see Fig.~\ref{fig:ab} for an illustration) if its boundary is made of
\begin{itemize}[noitemsep]
\item the vertical segment between $(0,0)$ and $b:=(0,m)$,
\item the horizontal segment between $(0,0)$ and $a:=(\ell,0)$,
\item a self-avoiding curve $\gamma$ between $a$ and $b$ avoiding the previous two segments and going clockwise around $0$.
\end{itemize}
\begin{figure}
\begin{center}
\includegraphics[width = 0.47\textwidth, page = 1]{ab2.pdf}\quad
\includegraphics[width = 0.47\textwidth, page = 3]{ab2.pdf}
\caption{{\em Left:} An $(m,\ell)$-corner domain $\Omega$. The edges of $\Omega^\diamond$ are grey; those of $\alpha$ are bold.
{\em Right:} The loop representation of a configuration. Loops are drawn on the medial lattice $\Omega^\diamond$ so as not to intersect any open or dual-open edges. For a point $x \in (ab)$, the interface passes between this point and the free arc $(ab)^*$ if and only if $x\leftrightarrow (ba)$. The winding of a curve going from a medial edge
adjacent to the primal arc $(ba)$ to $e_b$ is equal to $0$, $\pi/2$ or $\pi$.}
\label{fig:ab}
\end{center}
\end{figure}
\begin{lemma}\label{lem:op}
There exists $c_5>0$ such that for every $(m,\ell)$-corner Dobrushin domain $\Omega$ with $m\ge\ell$,
$$\sum_{x\in (ab)} \phi^{0/1}_\Omega[x\longleftrightarrow (ba)]\ge c_5\,m\pi_1^+(m).$$
\end{lemma}
\begin{proof}
Consider the parafermionic observable on $(\Omega,a,b)$. Note that the edges of $\mathcal{C}$ are of three kinds:
\begin{itemize}[noitemsep]
\item the medial edges $e_a$ and $e_b$;
\item the medial edges of $\mathcal{C}$ incident to $(ab)^*$; we call the set of such edges $\alpha$;
\item the medial edges of $\mathcal{C}$ incident to $(ba)$; we call the set of such edges $\beta$.
\end{itemize}
Equation~\eqref{eq:rel_vertex} applied to $\Omega$ implies that
\begin{align*} \big|\sum_{e\in \alpha}\eta(e)F(e)\big|
= \big|\eta(e_a)F(e_a)+\eta(e_b)F(e_b)+\sum_{e\in \beta}\eta(e)F(e)\big|.
\end{align*}
Notice that, for any medial edge $e \in \alpha$, if the interface passes through $e$, then the vertex $x \in (ab)$ which is adjacent to $e$
is connected inside $\Omega$ to the arc $(ba)$ (see also Fig.~\ref{fig:ab}).
Since there are at most two edges of $\alpha$ adjacent to any one vertex of $(ab)$,
we deduce that
\begin{align}\label{eq:h11}
2 \sum_{x\in(ab)}\phi_\Omega^{0/1}[x\longleftrightarrow (ba)]\ge \sum_{e\in \alpha}|F(e)|\ge \big|\sum_{e\in\beta}\eta(e)F(e)\big|-2,
\end{align}
where the second inequality uses the triangle inequality and the fact that $|F(e_a)|=|F(e_b)|=1$.
Similarly, for any edge $e \in \beta$,
$\gamma$ passes through $e$ if and only if
the unique vertex $y \in \Omega^*$ adjacent to $e$ is connected by a dual-open path to $(ab)^*$ in $\Omega^*$;
write $y\xleftrightarrow{*} (ab)^*$ for the latter event.
When $\gamma$ contributes to $\eta(e) F(e)$, the argument of its contribution is determined by the orientation of $e$.
There are four possible arguments: $e^{{\rm i}\sigma \pi/2}$ and $e^{-{\rm i} \pi/2}$ for edges on the vertical section of $(ba)$
and $e^{-{\rm i}\sigma \pi/2}$ and $e^{{\rm i} \pi/2}$ for edges on the horizontal section of $(ba)$
(up to a fixed phase that depends on the geometry of $\Omega$ around $b$).
Additionally, observe that any edge $e$ whose contribution has argument $e^{\pm{\rm i} \pi/2}$
may be paired with the edge $f \in \beta$ to the right or above it,
which has contribution of same absolute value as that of $e$, and argument $e^{\mp{\rm i}\sigma \pi/2}$.
Indeed, $\gamma$ passes through $e$ if and only if it also passes through $f$.
Thus,
\begin{equation}\label{eq:h12}
\big|\sum_{e\in\beta}\eta(e)F(e)\big| \geq \cos \big[(1+\sigma)\tfrac{\pi}{4}\big]\sum_{y\in(ba)^*}\phi_\Omega^{0/1}[y\xleftrightarrow{*}(ab)^*].
\end{equation}
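Let us record the elementary phase computation behind this prefactor: the two paired contributions, with arguments $e^{{\rm i}\pi/2}$ and $e^{-{\rm i}\sigma\pi/2}$, sum to a complex number of modulus
\begin{equation*}
\big|e^{{\rm i}\pi/2}+e^{-{\rm i}\sigma\pi/2}\big| = 2\cos\big[(1+\sigma)\tfrac{\pi}{4}\big],
\end{equation*}
the cosine being non-negative since $(1+\sigma)\tfrac{\pi}{4}\le\tfrac{\pi}{2}$ for $\sigma\in[0,1]$.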
By self-duality, the crossing estimates~\eqref{eq:wRSW}, and the mixing property~\eqref{eq:mix2}, we may deduce that for every $y\in(ba)^*$ at distance between $m/3$ and $2m/3$ from $b$,
\begin{equation}\label{eq:h13}
\phi_\Omega^{0/1}[y\xleftrightarrow{*} (ab)^*]\ge c\pi_1^+(m)
\end{equation}
for some constant $c>0$ independent of $m$.
Putting~\eqref{eq:h11}--\eqref{eq:h13} together, we find that
\begin{align*}
\sum_{x\in(ab)}\phi_\Omega^{0/1}[x\longleftrightarrow (ba)]
\geq \tfrac12 \cos \big[(1+\sigma)\tfrac{\pi}{4}\big] \sum_{y\in(ba)^*}\phi_\Omega^{0/1}[y\xleftrightarrow{*} (ab)^*] - 1
\geq c' m\pi_1^+(m),
\end{align*}
for some $c' > 0$. In the last inequality, we used that $\cos \big[(1+\sigma)\tfrac{\pi}{4}\big]>0$ when $q < 4$ (see~\eqref{eq:hahaha}) and that
$m\pi_1^+(m)$ tends to $\infty$ (see Lemma~\ref{lem:mn}).
\end{proof}
We now proceed with the proof of Proposition~\ref{prop:10} in the case $r = 0$, which is slightly technical. Throughout this proof, we will assume $R$ to be large; small values of $R$ may be incorporated by adjusting the constant in~\eqref{eq:fund}. Fix a small quantity $\delta > 0$ (we will see below how small $\delta$ needs to be, and that it does not depend on $R$) and assume for simplicity that $\delta R$ and $\delta^2 R$ are integers (we may do this as we may take $R$ large and adjust $\delta$). In the proof below, it will be important to keep track of the dependencies on $\delta$: the constants $c,c',C,C'$, etc.\ are independent of $\delta$, while $c(\delta), c'(\delta)$, etc.\ do depend on $\delta$.
The idea is to construct an $(m,\ell)$-corner Dobrushin domain $(\Omega,a,b)$ associated with each $R$-centred domain $\mathcal{D}$ in order to apply the previous lemma. We refer to Fig.~\ref{fig:domain} for an illustration of the construction.
\begin{proof}[Proof of Proposition~\ref{prop:10} for $r=0$]
Fix an $R$-centred domain $\mathcal{D}$ and consider the largest Euclidean open ball $B$ centred at the origin and which is included in $\mathcal{D}$. Let $x=(x_1,x_2)$ be a vertex of $\partial\mathcal{D}\cap\partial B$ and assume with no loss of generality that $x$ is in the wedge $\{(u,v):u\ge v\ge 0\}$. Let $y = (y_1,y_2) $ be the rightmost vertex of the half-line $\mathbb{Z}_+ \times \{x_2 + \delta R\}$ that is contained in $B$. Finally, let $\tau$ be the translation mapping $x$ to $y$.
\begin{definition}[corner domain associated with $\mathcal{D}$]
Let $a:=x-(8\delta^2 R,0)$, $b:=\tau(a)$ and
$\mathrm{Rect}$ be the rectangle with edges parallel to the axes, top-left corner $b$ and bottom-right corner $x$.
Finally, let $\Omega$ be the connected component of $a$ in the subgraph $(\mathcal{D}\cap \Lambda_{7R})\cap\tau(\mathcal{D}\cap \Lambda_{7R})\cap (\mathrm{Rect} \cup B^c)$; see the shaded region in Fig.~\ref{fig:domain}.
\end{definition}
\begin{figure}
\begin{center}
\includegraphics[width = 0.51\textwidth]{domain3.pdf}\,\,
\includegraphics[width = 0.47\textwidth]{domain2.pdf}
\caption{{\em Left:} The domain $\Omega$; part of its boundary is contained in $\partial \mathcal{D}$ (black) or in $\partial\tau(\mathcal{D})$ (blue). The arc $(ab)$ is divided into five sets $S_1,\dots, S_5$ as described above.
{\em Right:} When the event $E$ occurs, the measure induced inside $\Omega$ dominates that with Dobrushin boundary conditions. Then,~\eqref{eq:k5} provides a bound on the number of points connected to $\Lambda_{R}\cap\tau(\Lambda_{R})$.}
\label{fig:domain}
\end{center}
\end{figure}
Assuming $\delta > 0$ is small enough, a simple trigonometric computation shows that the distance between $x$ and $y$ is at most $2\delta R$.
This further shows that the distance along the horizontal line passing through $x$ between $x$
and the translated ball $\tau(B)$ is smaller than $4\delta^2 R$.
In particular, $a$ is then contained in $B \cap \tau(B)$, and so is $b$.
Thus, if we assume $\delta$ to be small and $R$ large enough,
the whole arc $(ba)$ of the boundary of $\mathrm{Rect}$ is also contained in $B \cap \tau(B)$, hence it is part of~$\partial \Omega$.
As such, $(\Omega, a,b)$ is the translate of an $(m,\ell)$-corner Dobrushin domain with $m = \delta R$ and $\ell:= x_1 - y_1$.
Lemma~\ref{lem:op} applied to $(\Omega, a,b)$ gives
\begin{equation}\label{eq:kkkk}
\sum_{z \in (ab)} \phi^{0/1}_\Omega[z\longleftrightarrow (ba)]\ge c_5\delta R\,\pi_1^+(\delta R).
\end{equation}
We are now going to harvest this inequality by splitting the boundary arc $(ab)$ into five types of vertices and estimating the contribution of each of them in order to get a more useful inequality, namely~\eqref{eq:k5} below. Divide $(ab)$ into five sets:
\begin{itemize}[noitemsep]
\item[$S_1=$] vertices of $\partial \mathcal{D}$;
\item[$S_2=$] vertices of $\tau(\partial \mathcal{D})$;
\item[$S_3=$] vertices of $\partial (\Lambda_{7R} \cap \tau(\Lambda_{7R}))$;
\item[$S_4=$] vertices of the horizontal segment between $a$ and $x$ -- call this set $[ax]$;
\item[$S_5=$] vertices of the horizontal segment between $y$ and $b$ -- call this set $[yb]$.
\end{itemize}
Next, we analyse the contribution of each of the sets $(S_i)_{i=1\dots5}$ to the right-hand side of~\eqref{eq:kkkk}.
Our goal is to show that $S_3$, $S_4$ and $S_5$ contribute at most a fraction of the right-hand side of~\eqref{eq:kkkk},
and thus that $S_1$ and $S_2$ contribute significantly. This will be valid for $\delta> 0$ small enough, but independent of $R$.
\smallskip
\noindent
{\em Contribution of $S_4\cup S_5$}.
There are at most $16 \delta^2 R$ vertices in $S_4\cup S_5$ and
the crossing estimates~\eqref{eq:wRSW}, the mixing property~\eqref{eq:mix2} and Lemma~\ref{lem:mn} give that
\begin{equation}\label{eq:kkk}
\sum_{z\in S_4 \cup S_5}\phi^{0/1}_\Omega[z\longleftrightarrow (ba)]
\le C\,\delta^2 R\,\pi_1^+(\delta^2 R).
\end{equation}
Choosing $\delta > 0$ small enough, we may suppose that the contribution of these vertices is smaller than a quarter of the right-hand side of~\eqref{eq:kkkk}.
\bigbreak
\noindent {\em Contribution of $S_3$}.
Any open path linking a vertex of $S_3$ to $(ba)$ needs to traverse a long thin corridor with free boundary conditions on its sides, something which occurs with very small probability. Formalising this is technical, but not surprising.
For any vertex $z \in S_3$, the mixing property~\eqref{eq:mix2} gives
\begin{align*}
\phi^{0/1}_\Omega[z\longleftrightarrow (ba)]
\le C' \pi_1^+(\delta R) \,\phi^{0/1}_\Omega[(ba)\longleftrightarrow\partial\Lambda_{6R}].
\end{align*}
Let us bound from above the last term on the right-hand side.
For this term to be positive, $\Omega$ needs to contain vertices of $\partial\Lambda_{6R}$, hence we may restrict ourselves to this case.
Recall that $x$ is the closest point of $\partial \mathcal{D}$ to $0$ in Euclidean distance, and that we assumed that $\Lambda_{3R} \not\subset\mathcal{D}$.
Thus $x$ is contained in $\Lambda_{3\sqrt2 R} \subset \Lambda_{5R}$.
Moreover $\Omega \subset \mathcal{D}$, and therefore $\partial\mathcal{D}$ does intersect $\partial \Lambda_{6R}$.
Let $\tilde x$ be the first vertex of $\partial \Lambda_{6R}$ when going around $\partial \mathcal{D}$ in counter-clockwise order starting from $x$;
let $\gamma$ be the arc of $\partial \mathcal{D}$ between $x$ and $\tilde x$ (see Fig.~\ref{fig:domain}).
Then $\gamma$ has length at least $R$.
Choose a family of points $x_1,\dots,x_s$ on $\gamma$, at a distance at least $10\delta R$ from each other, from $\partial \Lambda_{6R}$ and from ${\rm Rect}$. Due to the length of $\gamma$, one may choose $s \geq c / \delta$ for some small constant $c > 0$.
Notice that, for any $1 \leq j \leq s$, any circuit of dual edges contained in $\Lambda_{5\delta R }(x_j)$ and surrounding both $x_j$ and $\tau(x_j)$ separates $(ba)$ from $\partial \Lambda_{6R}$ inside $\Omega$.
The crossing estimates~\eqref{eq:wRSW}, the fact that $\tau$ is a translation by at most $2\delta R$ and the comparison of boundary conditions~\eqref{eq:CBC} imply the existence of a universal positive constant
that bounds from below the $\phi^{0/1}_\Omega$-probability of existence of a dual-open path contained in $\Lambda_{5\delta R }(x_j) \cap \Omega$ that
separates $(ba)$ from $\partial \Lambda_{6R}$ inside $\Omega$ for each $j = 1,\dots, s$.
The mixing property~\eqref{eq:mix2} allows us to treat these $s$ blocking events as roughly independent; together with the lower bound on $s$, this yields
\begin{align*}
\phi^{0/1}_\Omega[(ba)\longleftrightarrow\partial\Lambda_{6R}]
\leq e^{-c'/\delta}
\end{align*}
for some $c' > 0$.
In conclusion
\begin{align*}
\sum_{z \in S_3} \phi^{0/1}_\Omega[z\longleftrightarrow (ba)]
\le C'' e^{-c'/\delta} R \,\pi_1^+(\delta R),
\end{align*}
where the factor $R$ is an upper bound on the number of terms in the sum (recall that $S_3 \subset \partial \Lambda_{7R} \cup \partial \tau(\Lambda_{7R})$).
By choosing $\delta$ smaller than some universal constant, the above may be rendered smaller than a quarter of the right-hand side of~\eqref{eq:kkkk}.
\bigbreak\noindent
{\em Contribution of $S_1$ and $S_2$}.
Overall, considering the bounds on the contributions of vertices in $S_3\cup S_4\cup S_5$, we find that~\eqref{eq:kkkk} implies
\begin{equation}\label{eq:k5}
\sum_{z\in S_1 \cup S_2}\phi^{0/1}_\Omega[z\longleftrightarrow (ba)]
\geq \tfrac12 c_5\, \delta R \, \pi_1^+(\delta R)\ge c(\delta) R\pi_1^+(R).
\end{equation}
\bigbreak
We are now in a position to conclude the proof. We have a lower bound on the $\phi^{0/1}_\Omega$-expectation of the number of boundary vertices connected to $(ba)$, and we now need to convert it into an estimate on the $\phi^0_\mathcal{D}$-expectation of the number of boundary vertices connected to $\Lambda_R$.
Consider the event $E$ that there exists an open circuit surrounding $(ba)$ in $B\cap\tau (B)$,
and an open path from $\Lambda_{R}\cap \tau(\Lambda_{R})$ to $(ba)$ in $B\cap\tau (B)\setminus \mathrm{Rect}$.
Recall that, by construction, the arc $(ba)$ is at distance at least $\delta^2 R$ from $[B\cap\tau (B)]^c$, and therefore from $[\mathcal{D}\cap\tau(\mathcal{D})]^c$. Set $\mathcal{D}':=(\mathcal{D}\cap \Lambda_{7R})\cap\tau(\mathcal{D}\cap \Lambda_{7R})$.
Using the FKG inequality~\eqref{eq:FKG} and the crossing estimates ~\eqref{eq:wRSW}, we find that
\begin{equation}\label{eq:k6}
\phi_{\mathcal{D}'}^0[E]\ge c'(\delta)>0.
\end{equation}
If $E$ occurs, let $\Gamma$ be the inner-most open circuit surrounding $(ba)$ in $B\cap\tau (B)$,
and let~$\Omega'$ be the set of edges of $\Omega$ that lie outside $\Gamma$; notice that $\Gamma$ may be explored from inside and that, by the definition of $E$, $\Gamma$ is connected to $\Lambda_{R}\cap \tau(\Lambda_{R})$.
By~\eqref{eq:CBC} and~\eqref{eq:SMP}, conditioning $\phi_{\mathcal{D}'}^0$ on $E$ and on the realisation of $\Gamma$ induces
a measure on $\Omega'$ that dominates $\phi^{0/1}_\Omega$. Thus,~\eqref{eq:k5} and~\eqref{eq:k6} together give
\begin{equation*}
\sum_{z\in S_1 \cup S_2} \phi^{0}_{\mathcal{D}'}[z \xleftrightarrow{\mathcal{D}'} \Lambda_R\cap \tau(\Lambda_R)]
\ge c''(\delta) R\pi_1^+(R).
\end{equation*}
Observe that $\phi^{0}_{\mathcal{D}'}$ is dominated by both $\phi^{0}_{\mathcal{D}}$ and $\phi^{0}_{\tau(\mathcal{D})}$
and that $S_1 \subset \partial \mathcal{D} \cap \Lambda_{7R}$ and $S_2 \subset \tau(\partial \mathcal{D} \cap \Lambda_{7R})$.
Thus, the above implies
\begin{equation*}
\sum_{z\in \partial \mathcal{D}} \phi^{0}_{\mathcal{D}}[z \xleftrightarrow{\Lambda_{7R}} \Lambda_R\cap\tau(\Lambda_R)]
+ \sum_{z\in \tau(\partial \mathcal{D})} \phi^{0}_{\tau(\mathcal{D})}[z \xleftrightarrow{\tau(\Lambda_{7R})} \Lambda_R\cap \tau(\Lambda_R)]
\ge c''(\delta)R\pi_1^+(R).
\end{equation*}
We conclude by observing that both terms are smaller than $\sum_{z\in \partial \mathcal{D} } \phi^{0}_{\mathcal{D}}[z \xleftrightarrow{\Lambda_{7R}}\Lambda_R]$.
\end{proof}
\section{Applications to arm events}\label{sec:4}
In this section, we gather results concerning arm-events. Section~\ref{sec:5.1a} proves a lower bound for the probability of the one-arm event in the half-plane (Lemma~\ref{lem:mn}), which is necessary for the proof of Theorem~\ref{thm:RSWquads}. The next sections contain applications of Theorem~\ref{thm:RSWquads} to other arm events.
For $r \leq R$ consider the annulus $\Lambda_R\setminus \Lambda_r$ with inner boundary $\partial \Lambda_r$ and outer boundary $\partial \Lambda_R$.
A self-avoiding path of $\mathbb{Z}^2$ or $(\mathbb{Z}^2)^*$ connecting the inner to the outer boundaries of the annulus is called an {\em arm}.
We say that an arm is {\em of type $1$} if it is composed of primal edges that are all open, and {\em of type $0$} if it is composed of dual edges that are all dual-open.
For $k\ge1$ and $\sigma\in\{0,1\}^k$\,, define $A_{\sigma}(r,R)$ to be the event that there exist $k$ \emph{disjoint} arms from $\partial\Lambda_r$ to $\partial\Lambda_R$ which are of type $\sigma_1,\dots, \sigma_k$, when indexed in counterclockwise order.
To avoid annuli with inner radii too small for arm events to occur, define $r_\sigma$ to be the smallest $r$ such that $A_{\sigma}(r,R)$ is non-empty for every $R\ge r$.
We also introduce $A_{\sigma}^+(r,R)$ to be the same event as $A_{\sigma}(r,R)$, except that the paths must lie in the upper half-plane $\mathbb H$ and are indexed starting from the right-most.
\subsection{Proof of Lemma~\ref{lem:mn}}\label{sec:4.1}\label{sec:5.1a}
We insist on the fact that this part only relies on the crossing estimates~\eqref{eq:wRSW}, not on Theorem~\ref{thm:sRSW}.
With the notation of this section, we have $\pi_1^+(R)=\phi_{\mathbb{H}}^0[A^+_1(0,R)]$, and we will use the latter notation in this part.
Let $E_r$ be the event that $\Lambda_{2r}\setminus\Lambda_{r}$ contains an open path from $\partial\mathbb{H}$ to itself disconnecting 0 from infinity in $\mathbb{H}$.
By combining crossings in three rectangles, the FKG inequality~\eqref{eq:FKG} together with the crossing estimates~\eqref{eq:wRSW} gives that $\phi_{\mathbb{H}}^0[E_r]\ge c$ for every $r\ge1$. As a consequence,~\eqref{eq:FKG} implies that
\begin{equation*}
\frac{\phi_{\mathbb{H}}^0[A^+_1(0,R)]}{\phi_{\mathbb{H}}^0[A^+_1(0,r)]}
\ge \phi_\mathbb{H}^0[E_{r/2}]\phi_\mathbb{H}^0[E_{r}]\phi_\mathbb{H}^0[\Lambda_{r/2} \longleftrightarrow\partial\Lambda_{2r}]\phi_{\mathbb{H}}^0[A^+_1(r,R)]
\ge c'\phi_{\mathbb{H}}^0[A^+_1(r,R)],
\end{equation*}
so that we may focus on bounding the right-hand side from below.
For $|s|\le R/(2r)$, let $F_s$ be the event that there exist a path in $\omega$ and a path in $\omega^*$ going from the translate of $\Lambda_r$ by $(sr,0)$ to $\partial\Lambda_R$. Then, if $[0,R/2]\times[0,R]$ is crossed vertically by a path in $\omega$,
and $[-R/2,0]\times[0,R]$ is crossed vertically by a path in $\omega^*$, at least one event $F_s$ occurs.
Moreover, due to~\eqref{eq:wRSW}, the two crossings mentioned above occur simultaneously with probability at least $c''>0$.
The FKG inequality~\eqref{eq:FKG} and a union bound over the at most $R/r+1$ values of $s$ give
\begin{equation}\label{eq:hhg}
\phi_{\mathbb{H}}^0[A^+_1(r,R)]\phi_{\mathbb{H}}^0[A^+_0(r,R)]\ge \phi_{\mathbb{H}}^0[A^+_{10}(r,R)]\ge\max_s\phi_{\mathbb{H}}^0[F_s]\ge c'''\tfrac rR.
\end{equation}
Successive applications of the bound on the probability of $E_{2^k}$ with $r\le 2^k< R/2$ give
\begin{equation}\label{eq:hhgg}
\phi_{\mathbb{H}}^0[A^+_0(r,R)]\le (1-c)^{\lfloor \log_2[R/(2r)]\rfloor}.
\end{equation}
Dividing~\eqref{eq:hhg} by~\eqref{eq:hhgg} concludes the proof.
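Explicitly, the division gives the polynomial lower bound
\begin{equation*}
\phi_{\mathbb{H}}^0[A^+_1(r,R)]\ \ge\ c'''\,\tfrac rR\,(1-c)^{-\lfloor \log_2[R/(2r)]\rfloor}\ \ge\ c''''\big(\tfrac rR\big)^{1-a}\qquad\text{with } a:=\log_2\tfrac1{1-c}>0,
\end{equation*}
where we used that $(1-c)^{-\log_2 t}=t^{a}$ for $t\ge1$, the effect of the integer part being absorbed in the constant $c''''$.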
\subsection{Quasi-multiplicativity, localization and well-separation}\label{sec:4.2}
Let us start with the classical notion of {\em well-separated arms}. In what follows, let $x_i$ and $x'_i$ be the endpoints of the arm $\gamma_i$ on the inner and outer boundaries of $\Lambda_R\setminus\Lambda_r$, respectively.
\begin{definition} Fix $\delta>0$. The arms $\gamma_1,\dots,\gamma_k$ are said to be $\delta$-\emph{well-separated} if
\begin{itemize}[noitemsep]
\item $x_1,\dots,x_k$ are at a distance larger than $2\delta r$ from each other.
\item $x_1',\dots,x_k'$ are at a distance larger than $2\delta R$ from each other.
\item For every $1\le i\le k$, $x_i$ is $\sigma_i$-connected to distance $\delta r$ of $\partial\Lambda_r$ in $\Lambda_{\delta r}(x_i)$.
\item For every $1\le i\le k$, $x_i'$ is $\sigma_i$-connected to distance $\delta R$ of $\partial\Lambda_R$ in $\Lambda_{\delta R}(x_i')$.
\end{itemize}
\end{definition}
Let $A_{\sigma}^{\rm sep}(r,R)$ be the event
that $A_{\sigma}(r,R)$ occurs and there exist arms realizing
$A_{\sigma}(r,R)$ which are $\delta$-well-separated. While it is not explicit in the notation, $A_{\sigma}^{\rm sep}(r,R)$ depends on~$\delta$.
\begin{proposition}[Well-separation]\label{prop:separation}
Fix $1\le q<4$ and $\sigma \in \{0,1\}^k$ for some $k$. Then for all $\delta >0$ small enough, there exists $c_6=c_6(\sigma,\delta,q)>0$ such that for every $R\ge r\ge r_\sigma$,
\begin{equation}
c_6 \phi_{\mathbb{Z}^2}[A_{\sigma}(r,R)]\le \phi_{\mathbb{Z}^2}[A_{\sigma}^{\rm sep}(r,R)]\le \phi_{\mathbb{Z}^2}[A_{\sigma}(r,R)].
\end{equation}
\end{proposition}
\begin{proof}
With the help of Theorem~\ref{thm:sRSW}, the proof follows the same lines as for the random-cluster model with $q=2$. We refer to \cite{CheDumHon13} for details.
\end{proof}
As a first application of this result, we obtain the following.
\begin{proposition}[Quasimultiplicativity]\label{prop:quasimultiplicativity}
Fix $1\le q< 4$ and $\sigma$. There exist $c_7=c_7(\sigma,q)>0$ and $C_7=C_7(\sigma,q)>0$ such that for every $R\ge r\ge r_\sigma$,
\begin{equation}\label{eq:MULTIPLICATIVITY}
c_7\,\phi_{\mathbb{Z}^2}[A_{\sigma}(r,R)] \le \phi_{\mathbb{Z}^2}[A_{\sigma}(r,\rho)]\phi_{\mathbb{Z}^2}[A_{\sigma}(\rho,R)] \le C_7\,\phi_{\mathbb{Z}^2}[A_{\sigma}(r,R)].
\end{equation}
\end{proposition}
\begin{proof}
With the help of Theorem~\ref{thm:sRSW}, the proof follows the same lines as for the random-cluster model with $q=2$. We refer to \cite{CheDumHon13} for details.
\end{proof}
For sequences $\sigma$ in which no two consecutive entries are equal when going cyclically, the previously available crossing probability estimates~\eqref{eq:RSW} are sufficient to derive the quasi-multiplicativity for $A_{\sigma}$, see \cite{Wu}. Nevertheless, obtaining the same result for other sequences relies crucially on
Theorem~\ref{thm:sRSW}. This is particularly important for the arm sequence $10101$ that is used repeatedly later on, see for instance the discussions of~\eqref{eq:UNIVERSAL5},~\eqref{eq:SIX-ARM}, and~\eqref{eq:FOUR-ARM}.
Another classical consequence of well-separation is the possibility of {\em localizing} the endpoints of arms.
\begin{definition}
Let $I=(I_i)_{1\leq i\leq k}$ and $J=(J_i)_{1\le i\le k}$ be two collections of disjoint intervals on the boundary of
the square $[-1,1]^2$, distributed in counterclockwise order. For a sequence $\sigma$ of length $k$, let $A^{I,J}_{\sigma}(r,R)$ be the event that $A_{\sigma}(r,R)$ occurs and the arms $\gamma_i$ can be chosen in such a way that $\gamma_i$ starts on $rI_i$ and ends on $RJ_i$ for every $1\leq i\leq k$.
\end{definition}
Since Theorem~\ref{thm:sRSW} generalises \cite{CheDumHon13} to every $1\le q<4$, we refer to the corresponding paper for the proof of the following result.
\begin{proposition}[Localization]\label{prop:localization}
Fix $1\le q<4$. For every $k\ge1$, every $I$ and $J$ as above, and every $\sigma$ of length $k$, there exists $c_8=c_8(\sigma,I,J,q)>0$ such that for every $R\ge r\ge r_\sigma$,
\begin{equation}
c_8\phi_{\mathbb{Z}^2}[A_{\sigma}(r,R)]\le \phi_{\mathbb{Z}^2}[A_{\sigma}^{I,J}(r,R)]\le\phi_{\mathbb{Z}^2}[A_{\sigma}(r,R)].
\end{equation}
\end{proposition}
\subsection{Bounds on the probability of arm events}\label{sec:proof_arm_events}
We begin by deriving up-to-constant estimates on three specific arm-events whose probabilities do not vary much when changing $q$.
\begin{proposition}[Universal arm-exponents]\label{prop:universal}
Let $1\le q<4$. There exist $c_9,C_9>0$ such that for every $R\ge r\ge 1$,
\begin{align}
c_9\left(r/R\right)^{2}\le\,&\phi_{\mathbb{Z}^2}[A_{10101}(r,R)]\le C_9\left(r/R\right)^{2},\label{eq:UNIVERSAL5}\\
c_9\left(r/R\right)^{2}\le\,&\phi_{\mathbb{Z}^2}[A_{101}^+(r,R)]\le C_9\left(r/R\right)^{2},\label{eq:UNIVERSAL3}\\
c_9\,r/R\,\le\,&\phi_{\mathbb{Z}^2}[A_{10}^+(r,R)]\le C_9\,r/R.\label{eq:UNIVERSAL2}
\end{align}
\end{proposition}
\begin{proof}The proof of \cite{CheDumHon13} extends trivially to our setting using~\eqref{eq:MULTIPLICATIVITY} and Proposition~\ref{prop:localization}.
\end{proof}
Next we study two consequences of~\eqref{eq:UNIVERSAL5}. The first concerns the six-arm event.
\begin{corollary}\label{cor:six arm}
Fix $1\le q< 4$. There exist $c_{10},C_{10}>0$ such that for every $R\ge r\ge 1$,
\begin{align} &\phi_{\mathbb{Z}^2}[A_{101010}(r,R)] \le C_{10}\left(r/R\right)^{2+c_{10}}.\label{eq:SIX-ARM}
\end{align}
\end{corollary}
\begin{proof}
This is a standard argument that we only sketch.
Conditionally on the first five arms, the probability that an additional dual arm exists decays at least as fast as $(r/R)^c$ due to Theorem~\ref{thm:sRSW}.
Since the occurrence of the first five arms has a probability of order $(r/R)^2$ as stated by~\eqref{eq:UNIVERSAL5},~\eqref{eq:SIX-ARM} follows.
\end{proof}
We now turn to an estimate on the probability of the four-arm event.
\begin{proposition}\label{prop:four arm}
Fix $1\le q< 4$. There exist $c_{11},c_{12}>0$ such that for every $R\ge r\ge 1$,
\begin{align}
&\phi_{\mathbb{Z}^2}[A_{1010}(r,R)]\ge c_{11}\tfrac{r\, \pi_1^+(R)}{R\, \pi_1^+(r)}\ge c_{12}(r/R)^{2-c_{12}} \label{eq:FOUR-ARM}.
\end{align}
\end{proposition}
That the probability of the four-arm event is polynomially larger than that of the five-arm event, that is than $(r/R)^2$, is a standard consequence of Theorem~\ref{thm:sRSW}.
It is noteworthy that~\eqref{eq:FOUR-ARM} additionally provides an explicit bound for the probability of the four-arm event
in terms of the probability of the half-plane one-arm event.
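Let us make the second inequality of~\eqref{eq:FOUR-ARM} explicit: combining the first display of Section~\ref{sec:5.1a} with the polynomial bound derived there gives $\pi_1^+(R)/\pi_1^+(r)\ge c'\,\phi_{\mathbb{H}}^0[A^+_1(r,R)]\ge c\,(r/R)^{1-a}$ for some $a>0$, whence
\begin{equation*}
\tfrac{r\, \pi_1^+(R)}{R\, \pi_1^+(r)}\ \ge\ c\,\big(\tfrac rR\big)^{2-a}.
\end{equation*}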
\begin{proof}
Fix $1\le r\le R$.
Let $E$ be the event that $\Lambda_{3R}$ contains both an open circuit and a dual open circuit surrounding $\Lambda_{2R}$, with the open one being connected to $\partial \Lambda_{4R}$.
By the crossing estimates~\eqref{eq:wRSW}, $\phi_{\mathbb{Z}^2}[E]\ge c>0$.
Let $\mathcal{D}$ be the connected component of $0$ in the set of vertices not connected to $\partial\Lambda_{3R}$ (to be more precise the largest subdomain containing 0). Observe that when $E$ occurs, $\mathcal{D}$ is $R$-centred.
Moreover, conditionally on $\mathcal{D}$ and on the configuration outside of it, the measure inside $\mathcal{D}$ is $\phi_{\mathcal{D}}^0$.
Let $A_{1010}(x,r,R)$ be the translate by the vector $x \in \mathbb{Z}^2$ of the event $A_{1010}(r,R)$.
Then, for any $r$-box $\Lambda_r(x)$ that intersects $\mathcal{D}^c$ and is connected to $\Lambda_R$ in $\mathcal{D}$, $A_{1010}(x,r,R)$ occurs.
Indeed, the two arms of type 0 are given by $\partial \mathcal{D}$,
one arm of type 1 is given by the fact that any vertex of $\mathcal{D}^c$ neighbouring a vertex of $\mathcal{D}$ is connected to $\partial \Lambda_{4R}$ outside of $\mathcal{D}$,
and the second arm of type 1 is given by the connection between $\Lambda_r(x)$ and $\Lambda_R$.
Thus
\begin{equation}\label{eq:pro}
\sum_{x\in r\mathbb Z^2 \cap \Lambda_{3R}}\phi_{\mathbb{Z}^2}[A_{1010}(x,r,R)]
\ge \phi_{\mathbb{Z}^2}\big[\phi_{\mathcal{D}}^0[\mathbf M_r(\mathcal{D},R)]\,\boldsymbol 1_E\big]
\ge c\,M(r,R).
\end{equation}
Proposition~\ref{prop:10} and Lemma~\ref{lem:mn} conclude the proof, since the sum above contains at most $C(R/r)^2$ terms, all equal to $\phi_{\mathbb{Z}^2}[A_{1010}(r,R)]$ by translation invariance.
\end{proof}
Using the parafermionic observable, when $q\in [1,3]$,
the previous lower bound on the probability of the four-arm event may be transformed as follows.
\begin{proposition}\label{prop:beta}
For every $1\le q\le 3$, there exists $c_{13}>0$ such that for every $R\ge1$,
\begin{align}\label{eq:bbb}
\phi_{\mathbb{Z}^2}[A_{1010}(1,R)]&\ge c_{13}R^{-2 + c_{13}}\tfrac1{\phi_{\mathbb{Z}^2}[A_1(0,R)]}.
\end{align}
\end{proposition}
The above inequality is interesting from two points of view.
First, it can be used to prove that the density of the infinite cluster $\theta(p)$ is not Lipschitz near $p_c$ (see \cite{DumMan20}).
Second, it is a necessary condition for the Glauber dynamics to have exceptional times (we refer to \cite{GPS} and references therein for details).
Let us mention that~\eqref{eq:bbb} is expected to fail for $q$ close to~$4$.
\begin{proof}
By Proposition~\ref{prop:four arm}, it suffices to prove the existence of $c>0$ such that for every $R\ge1$,
\begin{equation}\label{eq:h21}
R\, \phi_{\mathbb{H}}^0[A^+_1(0,R)]\, \phi_{\mathbb{Z}^2}[A_1(0,R)]\ge c\,R^c.
\end{equation}
In order to do so, we use the parafermionic observable.
Consider the Dobrushin domain $\Omega_R$ obtained from $\Lambda_{3R}$ by removing the vertices $(x,0)$ with $x\ge1$ (call this the {\em slit}),
with $a=b=0$ (in this case $e_a$ and $e_b$ are the medial edges to the right of the origin,
and the exploration path is simply the loop passing through $e_a$ and $e_b$); see Fig.~\ref{fig:Omega_R}.
We now apply~\eqref{eq:rel_vertex} to $\Omega_R$.
The set $\mathcal{C}$ of boundary medial edges is split into three parts:
the set $\{e_a,e_b\}$, the set $\alpha$ of edges that are on the boundary of $\Lambda_{3R}^\diamond$, and the set $\beta$ of remaining edges, which are above and below the slit. Proceeding as in Lemma~\ref{lem:op}, we have
\begin{equation}\label{eq:5.15}
2\sum_{x\in \partial\Lambda_{3R}}\phi_{\Omega_R}^0[0\longleftrightarrow x]
\ge\sum_{e\in\alpha}|F(e)|\ge \big|\sum_{e\in\beta}\eta(e)F(e)+\eta(e_a)F(e_a)+\eta(e_b)F(e_b)\big|.
\end{equation}
A careful computation (along the same lines as that leading to~\eqref{eq:h12}, and using the vertical symmetry of $\Omega_R$)
shows that the two complex numbers $\sum_{e\in\beta}\eta(e)F(e)$ and $\eta(e_a)F(e_a)+\eta(e_b)F(e_b)$ are collinear; see also the explanation of Fig.~\ref{fig:Omega_R}.
Moreover, when $1 < q\leq 3$ (which is to say $\sigma \in (1/3,2/3]$, so that $\tfrac\pi4(3\sigma-1)\in(0,\pi/4]$), they also have the same direction,
which implies
\begin{equation*}
\big|\sum_{e\in\beta}\eta(e)F(e)+\eta(e_a)F(e_a)+\eta(e_b)F(e_b)\big|
\ge |\eta(e_a)F(e_a)+\eta(e_b)F(e_b)|= 2\cos \tfrac{\pi}4(3\sigma-1) >0.
\end{equation*}
The two last displayed inequalities imply
\begin{equation}\label{eq:5.17}
\sum_{x\in\partial\Lambda_{3R}}\phi_{\Omega_R}^0[0\longleftrightarrow x]\ge \cos \tfrac{\pi}4(3\sigma-1).
\end{equation}
Now, for 0 to be connected to $x \in \partial \Lambda_{3R}$, 0 must be connected to $\partial\Lambda_R$ and $x$ to $\partial\Lambda_R(x)$.
Thus, using the mixing property~\eqref{eq:mix2},
\begin{equation*}
\phi_{\Omega_R}^0[0\longleftrightarrow x]
\le C\,\phi_{\Omega_R}^0[0\longleftrightarrow\partial\Lambda_R]\,\phi_{\Omega_R}^0[x\longleftrightarrow\partial\Lambda_R(x)]
\le C'\,\phi_{\mathbb{Z}^2}[0\xleftrightarrow{\Omega_R}\partial\Lambda_R] \, \phi_{\mathbb{H}}^0[A^+_1(0,R)].
\end{equation*}
The second inequality holds since $x$ is on $\partial\Lambda_{3R}$, and therefore the boundary conditions induced by $\Omega_R$ are dominated by the free boundary conditions on a half-plane with $x$ on its boundary.
Plugging the above into~\eqref{eq:5.17}, and using that $\partial\Lambda_{3R}$ contains of order $R$ vertices, yields
\begin{equation*}
R \, \phi_{\mathbb{Z}^2}[0\xleftrightarrow{\Omega_R}\partial\Lambda_R] \, \phi_{\mathbb{H}}^0[A^+_1(0,R)] \geq c'.
\end{equation*}
Thus, in order to prove~\eqref{eq:h21}, it suffices to show that
\begin{equation}\label{eq:yt}
\phi_{\mathbb{Z}^2}[0\xleftrightarrow{\Omega_R} \partial\Lambda_R]\le R^{-c''}\phi_{\mathbb{Z}^2}[A_1(0,R)],
\end{equation}
or in words, having one arm in a slit box is polynomially harder than having one arm in a box.
This is an easy consequence of the crossing estimates~\eqref{eq:wRSW}.
Indeed, let $A_{k}$ be the event that for every even integer $\ell\le k$,
there exists no dual path in $\Lambda_{2^{\ell+1}}\setminus \Lambda_{2^\ell}$ that disconnects $0$ from $\partial\Lambda_R$ inside $\Omega_R$.
If $0$ is connected to $\partial \Lambda_R$ in $\Omega_R$, then $A_{\lfloor \log_2R\rfloor}$ necessarily occurs. Thus,
\begin{equation}\label{eq:yyt}
\phi_{\mathbb{Z}^2}[0\xleftrightarrow{\Omega_R} \partial\Lambda_R]\le \phi_{\mathbb{Z}^2}[0\longleftrightarrow\partial\Lambda_R,A_{\lfloor \log_2R\rfloor}].
\end{equation}
Now, the crossing estimates~\eqref{eq:wRSW} imply that
$$\phi_{\mathbb{Z}^2}[0\longleftrightarrow\partial\Lambda_R,A_{k+1}]\le (1-c''')\phi_{\mathbb{Z}^2}[0\longleftrightarrow\partial\Lambda_R,A_{k}].$$
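Spelling out the iteration (a sketch; the constant $c''$ below is one admissible choice):
$$\phi_{\mathbb{Z}^2}[0\longleftrightarrow\partial\Lambda_R,A_{\lfloor \log_2R\rfloor}]\le (1-c''')^{\lfloor \log_2R\rfloor}\,\phi_{\mathbb{Z}^2}[0\longleftrightarrow\partial\Lambda_R]\le R^{-c''}\,\phi_{\mathbb{Z}^2}[A_1(0,R)],$$
with $c'':=-\tfrac12\log_2(1-c''')>0$, where the last inequality uses that $\{0\longleftrightarrow\partial\Lambda_R\}=A_1(0,R)$ and decreases $c''$ if necessary to absorb small values of $R$.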
Combined with~\eqref{eq:yyt}, this gives~\eqref{eq:yt} and concludes the proof.
\end{proof}
\subsection{New bounds on the one-, two- and four-arm exponents}\label{sec:perco}
\begin{proposition}\label{lem:perco}
Fix $1\le q<4$. There exists $c_{14}>0$ such that
\begin{align}
\phi_{\mathbb{Z}^2}[A_1(0,R)] &\geq c_{14}\,\pi_1^+(R)^{1/2},\label{eq:Bernoulli_exponents1}\\
\phi_{\mathbb{Z}^2}[A_{10}(0,R)] &\geq c_{14}\,\pi_1^+(R),\label{eq:Bernoulli_exponents2} \\
\phi_{\mathbb{Z}^2}[A_{1010}(1,R)] &\geq c_{14}\,\pi_1^+(R)/R.\label{eq:Bernoulli_exponents4}
\end{align}
\end{proposition}
\begin{proof}
The first inequality follows from the second one using the FKG inequality~\eqref{eq:FKG}.
The third is the conclusion of Proposition~\ref{prop:four arm} with $r = 1$.
Therefore, it only remains to prove~\eqref{eq:Bernoulli_exponents2}.
Consider the Dobrushin domain $\Lambda_R$, with $a$ and $b$ being the bottom right corner and top left corner of $\Lambda_R$, respectively; see Fig.~\ref{fig:Omega_R}.
We will proceed as in the proof of Lemma~\ref{lem:op} (and therefore only sketch the proof).
Instead of working with the contour $\mathcal{C}$ which runs along the boundary of $\Lambda_R$,
we will work with the contour $\mathcal{C}'$ that surrounds the vertices of the medial lattice which lie below the diagonal $x = -y$.
The medial edges of $\mathcal{C}'$ may be split into those adjacent to the diagonal (call this set $\alpha$) and those adjacent to $\partial \Lambda_R$ (call this set $\beta$).
By summing~\eqref{eq:rel_vertex} over every vertex of the medial lattice which lies strictly inside $\mathcal{C}'$, we find
$$\sum_{e\in \mathcal{C}'}\eta(e)F(e)=0.$$
Using the triangle inequality, we obtain that
$$\sum_{e\in \alpha}|F(e)|\ge |\sum_{e\in \beta}\eta(e)F(e)| \geq c\,R\,\pi_1^+(R),$$
where the second inequality was already proved in Lemma~\ref{lem:op}.
Notice now that, for any $e \in \alpha$, a configuration contributes to $F(e)$ only when the primal and dual vertices separated by $e$ are connected inside $\Lambda_{R}$
to $(ba)$ and $(ab)^*$ by primal and dual paths, respectively.
Thus, due to the mixing property~\eqref{eq:mix2}, if $r$ denotes the distance from $e$ to $\partial \Lambda_R$, then $|F(e)|\leq C \phi_{\mathbb{Z}^2}[A_{10}(0,r)]$.
In conclusion,
\begin{align*}
\sum_{r = 1}^R \phi_{\mathbb{Z}^2}[A_{10}(0,r)]\geq c' \sum_{e\in \alpha}|F(e)|\ge c\, c'\, R\,\pi_1^+(R).
\end{align*}
Finally, it is a classic consequence of the quasi-multiplicativity~\eqref{eq:MULTIPLICATIVITY} and the bound $\phi_{\mathbb{Z}^2}[A_{10}(r,R)]\ge (r/R)^{1-c''}$
(which follows from~\eqref{eq:hhg} by standard arguments)
that the left-hand side of the above is bounded from above by $c''' R\,\phi_{\mathbb{Z}^2}[A_{10}(0,R)]$.
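In more detail (a sketch of this standard step): quasi-multiplicativity gives $\phi_{\mathbb{Z}^2}[A_{10}(0,r)]\le C\,\phi_{\mathbb{Z}^2}[A_{10}(0,R)]/\phi_{\mathbb{Z}^2}[A_{10}(r,R)]$, so that
\begin{align*}
\sum_{r = 1}^R \phi_{\mathbb{Z}^2}[A_{10}(0,r)]\le C\,\phi_{\mathbb{Z}^2}[A_{10}(0,R)]\sum_{r = 1}^R \big(\tfrac Rr\big)^{1-c''}\le c'''\, R\,\phi_{\mathbb{Z}^2}[A_{10}(0,R)],
\end{align*}
since $\sum_{r = 1}^R r^{c''-1}\le C R^{c''}$.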
Combining this with the inequality $\sum_{r = 1}^R \phi_{\mathbb{Z}^2}[A_{10}(0,r)]\ge c\, c'\, R\,\pi_1^+(R)$ above gives~\eqref{eq:Bernoulli_exponents2}.\end{proof}
This proposition shows that bounds on $\pi_1^+(R)$ directly yield bounds on the left-hand sides of the three inequalities. For $q=1$, \cite{PonIkh} showed that $\pi_1^+(R)R^{1/3}$ is bounded away from 0 and $\infty$ uniformly in $R$, so that
\begin{align*}
\phi_{\mathbb{Z}^2}[A_1(0,R)] &\geq c/R^{1/6},\\
\phi_{\mathbb{Z}^2}[A_{10}(0,R)] &\geq c/R^{1/3},\\
\phi_{\mathbb{Z}^2}[A_{1010}(1,R)] &\geq c/R^{4/3}.
\end{align*}
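For instance, the third bound is the substitution of $\pi_1^+(R)\ge cR^{-1/3}$ into~\eqref{eq:Bernoulli_exponents4}, giving $\phi_{\mathbb{Z}^2}[A_{1010}(1,R)]\ge c_{14}\,c\,R^{-1/3}/R\ge c\,R^{-4/3}$; similarly, the exponent $1/6$ in the first bound arises as half of $1/3$ via the square root in~\eqref{eq:Bernoulli_exponents1}.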
While the result of \cite{PonIkh} is sharp, the bounds we obtain are not (see \cite{BefDum13} for references on site percolation on the triangular lattice, to which the bounds above may be compared).
Note also that it is elementary to show from~\eqref{eq:hhg} and~\eqref{eq:UNIVERSAL2} that for $q=1$, $\pi_1^+(R)\ge c/R^{1/2}$,
so that
\begin{align*}
\phi_{\mathbb{Z}^2}[A_1(0,R)] &\geq c/R^{1/4},\\
\phi_{\mathbb{Z}^2}[A_{10}(0,R)] &\geq c/R^{1/2},\\
\phi_{\mathbb{Z}^2}[A_{1010}(1,R)] &\geq c/R^{3/2}.
\end{align*}
We conclude this section by proving that the inequality $\pi_1^+(R)\ge c/R^{1/2}$ is in fact valid for every $q\in[1,2]$, thus extending the previous bounds to this context.
\begin{proposition}\label{prop:one_arm1/2}
For $q\in[1,2]$, there exists $c_{15}=c_{15}(q)>0$ such that for every $R\ge1$,
\begin{equation*}
\pi_1^+(R)\ge c_{15}R^{-1/2}.
\end{equation*}
\end{proposition}
\begin{proof}
We apply the parafermionic observable to the graph $\Omega_R:=\mathbb{Z}\times[0,2R]$ with $a=b=0$.
Using that the contour integral on the boundary vanishes,
and following the same lines as when going from~\eqref{eq:5.15} to~\eqref{eq:5.17}, we find that
\begin{equation}\label{eq:5.30}
\sum_{x\in\mathbb{Z}\times\{2R\}}\phi_{\Omega_R}^0[0\longleftrightarrow x]
\ge \tfrac12|\eta(e_a)F(e_a)+\eta(e_b)F(e_b)|= \cos \tfrac{\pi}4(3\sigma - 1) >0.
\end{equation}
At this stage we used that $1 \leq q \leq 2$ and the horizontal symmetry of the strip to show that the contribution to the contour integral of medial edges on the bottom of $\Omega_R$ is positively proportional to that of $e_a$ and $e_b$.
Now, the mixing property~\eqref{eq:mix2} and crossing estimates~\eqref{eq:wRSW} easily lead to the existence of $c,C\in (0,1)$ such that
\begin{equation*}
\phi_{\Omega_R}^0[0\longleftrightarrow x]\le C \pi_1^+(R)^2 c^{|x|/R},
\end{equation*}
where the second term accounts for vertices $x\in\mathbb{Z}\times\{2R\}$ that are far on the left or right. Plugging this estimate in~\eqref{eq:5.30} gives
\begin{equation*}
C'R\pi_1^+(R)^2\ge c',
\end{equation*}
that is, $\pi_1^+(R)\ge \sqrt{c'/C'}\,R^{-1/2}$, which is the claim.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[width = 0.45\textwidth]{slit2.pdf}\hspace{0.05\textwidth}
\includegraphics[width = 0.45\textwidth]{observable_square.pdf}
\caption{{\em Left:} The domain $\Omega_R$ used in the proof of Proposition~\ref{prop:beta} is obtained from $\Lambda_{3R}$ by removing the slit right of $0$.
The medial edges $e_a$ and $e_b$ are right of $0$; all other edges of $\mathcal{C}$ are bold.
The red arrows indicate the orientation of $\eta(e)$ for four edges of $\beta$.
Due to the symmetry of $\Omega_R$, the absolute values of $F(e)$ for these four edges are equal;
their complex arguments are
${ \frac{7\pi}4(\sigma+1)}$, ${- \frac{7\pi}4(\sigma + 1)}$,
${ \frac{5\pi}4(\sigma+1)}$ and ${- \frac{5\pi}4(\sigma+1)}$, respectively (up to an additive constant).
{\em Right:} The Dobrushin domain used in the proof of Proposition~\ref{lem:perco}; the medial edges of the contour $\mathcal{C}'$ are bold. }
\label{fig:Omega_R}
\end{center}
\end{figure}
\subsection{Bounds on the scale-to-scale connection probability in a half-plane with free boundary conditions}
The goal of this section is the following result, which is a refinement and an extension of the previous proposition.
\begin{proposition}
For every $1\le q<2$, there exists $c=c(q)>0$ such that for every $r\le R$,
\begin{equation}
\phi_\mathbb{H}^0[A_1(r,R)]\ge c(r/R)^{1/2-c}.
\end{equation}
For every $2<q<4$, there exists $c=c(q)>0$ such that for every $r\le R$,
\begin{equation}
\phi_\mathbb{H}^0[A_1(r,R)]\le \tfrac1c(r/R)^{1/2+c}.
\end{equation}
\end{proposition}
\begin{figure}
\begin{center}
\includegraphics[width = 0.85\textwidth]{stairs.pdf}
\caption{The domains $\Omega$ (above) and $\Omega'$ (below) used for $q<2$ and $q>2$, respectively, with $k = 2$.
Outside of the depicted section, $\Omega'$ is equal to the strip $\mathbb{Z} \times [-2R,2R]$. }
\label{fig:stair}
\end{center}
\end{figure}
\begin{proof}
We start with the case $1\le q<2$. We improve on the proof of Proposition~\ref{prop:one_arm1/2} by choosing a better domain.
Fix some integer $k \ge 1$ (which will be chosen later independently of $r,R$) and integers $r \le R$.
Consider the domain $\Omega_{R,r,k}=\Omega$ defined as the subset of the strip $\mathbb Z\times [0,2R]$ composed of vertices above the two half-lines $(0,-r )+ie^{\pm i\alpha}\mathbb{R}_+$ where $\tan\alpha:=1/k$; see Fig.~\ref{fig:stair}.
We consider the Dobrushin boundary conditions $0/1$ with $a=(kr,0)$ and $b=(-kr,0)$, which are wired on the segment $(ba):=\mathbb{Z}\cap\partial\Omega$, and free elsewhere.
This domain is an approximation of a trapezoid (reflected across the horizontal axis).
Index the vertices of $\Omega$ that lie strictly inside $\mathbb{Z}_+\times[0,2R]$ and that have at most three neighbours in $\Omega$ as $x_1,\dots,x_{2kR}$, from bottom to top, as in Fig.~\ref{fig:stair}.
This indexing ensures that for each $n =0,\dots, 2R-1$, $x_{kn+1},\dots,x_{k(n+1)}$ form a horizontal segment at height $n+1$.
Using \eqref{eq:rel_vertex} for $\Omega$ and its vertical symmetry, after an appropriate change of phase, we find
\begin{align*}
\sum_{x\in\mathbb{Z}\times\{2R\}}\phi_{\Omega}^{0/1}[x\longleftrightarrow (ba)]
=c_q^{(ba)}\Big(1+\!\!\!\sum_{u\in (ba)^*}\phi_{\Omega}^{0/1}[u\stackrel{*}\longleftrightarrow (ab)^*]\Big)
+\sum_{i=1}^{2kR}c_q(i)\phi_{\Omega}^{0/1}[x_i\longleftrightarrow (ba)],
\end{align*}
with constants $c_q^{(ba)} >0$ and $c_q(i)$ for $i \geq 1$ given by
\begin{equation}
c_q(i):=\begin{cases}
c_q^+&\text{ if }k\text{ does not divide }i,\\
c_q^-&\text{ if }k\text{ divides }i,
\end{cases}
\end{equation}
with $c_q^+>0$ and $c_q^-<0$.
We omit the details of this computation as it is similar to those previously performed. Note however that the signs of $c_q^+$ and $c_q^-$ are due to $q$ being strictly smaller than $2$.
Now, choose $k=k(q)$ large enough (independent of $r$ and $R$) that for every $n$,
\begin{align*}
\sum_{i=kn+1}^{k(n+1)}c_q(i)\phi_{\Omega}^{0/1}[x_i\leftrightarrow (ba)]
\ge 0.
\end{align*}
This may be done since, due to Theorem~\ref{thm:sRSW},
the probability that $x_{k(n+1)}$ is connected to $(ba)$ is much smaller than the probability that one of the $x_i$ with $kn<i<k(n+1)$ is.
Using this fact and Theorem~\ref{thm:sRSW}, we obtain that
\begin{align*}
CR\pi_1^+(R) \phi_{\Omega}^{0/1}[(ba)\longleftrightarrow \mathbb{Z}\times\{R\}]
&\ge \sum_{x\in\mathbb{Z}\times\{2R\}}\phi_{\Omega}^{0/1}[x\longleftrightarrow (ba)]\\
&\ge c_q^{(ba)}\Big(1+\!\!\!\sum_{u\in (ba)^*}\phi_{\Omega}^{0/1}[u\stackrel{*}\longleftrightarrow (ab)^*]\Big)\\
&\ge c \sum_{\ell=1}^r\pi_1^+(\ell),
\end{align*}
for constants $c,C >0$ independent of $r$ and $R$.
Finally, a standard application of Theorem~\ref{thm:sRSW} implies the existence of $c' = c'(k)>0$ such that
\[
\phi_{\Omega}^{0/1}[(ba)\leftrightarrow \mathbb{Z}\times\{R\}]\le c'(r/R)^{c'}\pi_1^+(r,R).
\]
Combined with the above, this implies that
\[
C'R\pi_1^+(R)\pi_1^+(r,R)(r/R)^{c'}\ge \sum_{\ell=1}^r\pi_1^+(\ell)\ge r\pi_1^+(r).
\]
The claim follows by quasi-multiplicativity.
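Explicitly (a sketch of this last step): quasi-multiplicativity gives $\pi_1^+(R)\le C\,\pi_1^+(r)\,\pi_1^+(r,R)$, so the display above yields
\[
\pi_1^+(r,R)^2\ge \tfrac{1}{CC'}\,(r/R)^{1-c'},\qquad\text{hence}\qquad \pi_1^+(r,R)\ge c\,(r/R)^{1/2-c'/2},
\]
which matches the claimed lower bound after renaming the constant in the exponent.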
\bigbreak
We now turn to the case $2<q< 4$.
We proceed in a similar fashion, but using a slightly different domain.
Define the domain $\Omega_{R,r,k}'=\Omega'$ obtained as the reflection with respect to the horizontal axis of $\mathbb{Z}\times[-2R,2R]\setminus\Omega_{R,r,k}$; see Fig.~\ref{fig:stair}.
A similar reasoning to before, this time using that the contribution on the lateral sides is negative for $k= k(q)$ large enough, implies that
\begin{align*}
\sum_{x\in\mathbb{Z} \times \{2R\}}\phi_{\Omega'}^{0/1}[x\longleftrightarrow (ba)]
\le c_q^{(ba)}\Big(1+\!\!\!\sum_{u\in (ba)^*}\phi_{\Omega'}^{0/1}[u\stackrel{*}\longleftrightarrow (ab)^*]\Big).
\end{align*}
In this new domain, Theorem~\ref{thm:sRSW} provides a constant $c = c(k) > 0$ such that
$$ \phi_{\Omega'}^{0/1}[(ba)\leftrightarrow \mathbb{Z}\times\{R\}]\ge c(R/r)^{c}\pi_1^+(r,R).$$
The same reasoning as above yields positive constants $c', C', C''$ such that
\begin{align*}
c'R\pi_1^+(R) (R/r)^{c}\pi_1^+(r,R)
\le \sum_{x\in\mathbb{Z} \times \{2R\}}\phi_{\Omega'}^{0/1}[x\longleftrightarrow (ba)]
\le C' \sum_{\ell=1}^r\pi_1^+(\ell)
\le C'' r\pi_1^+(r),
\end{align*}
with the last inequality due to Lemma~\ref{lem:mn}.
The proof follows by quasi-multiplicativity.
\end{proof}
\section{Properties of any sub-sequential limit}\label{sec:6}
In this section, we describe properties of sub-sequential limits of the family of cluster boundaries of the critical random-cluster model.
\subsection{Existence of sub-sequential limits}
To start, we recall the tightness criterion of \cite{AizBur99} for families of interfaces, formulated here for the random-cluster measure.
The criterion may be shown to hold using the pre-existing crossing estimate~\eqref{eq:RSW}.
Let $\Omega$ be an open subset of the plane, and define $\Omega_\delta$ as the subgraph of $\delta\mathbb{Z}^2$ induced by the edges included in $\Omega$.
Let $\phi_{\Omega_\delta}^0$ and $\phi_{\Omega_\delta}^1$ be the critical random-cluster measures on $\Omega_\delta$ with free and wired boundary conditions respectively. Also, let $\mathcal F_\delta$ be the collection of interfaces between the primal and dual clusters in $\Omega_\delta$.
\begin{theorem}[\cite{AizBur99}]\label{thm:tightness}
Fix $i \in \{0,1\}$ and $q \in [1,4]$.
Suppose that for each $k\geq 2$ there exist constants $C(k)>0$ and $\lambda(k)>0$, with $\lambda(k)$ tending to infinity with~$k$ such that,
for all $\delta > 0$ and any annulus $\Lambda_R(x)\setminus\Lambda_r(x) \subset \Omega$ with $\delta\le r\le R\le 1$,
\begin{equation}\tag{{\bf H1}}\label{eq:H1}
\phi_{\Omega_\delta}^i[\Lambda_R(x)\setminus\Lambda_r(x)\text{ is traversed by $k$ separate paths of }\mathcal F_\delta]\le C(k)(\tfrac rR)^{\lambda(k)}.
\end{equation}
Then the random variables $(\mathcal F_\delta)_{\delta > 0}$ form a tight family for the Hausdorff metric on collections of loops (see \cite{AizBur99} for a definition).
Moreover, there exists $c_{16}>0$ such that any sub-sequential limit of the variables above (for the convergence in distribution)
is supported on collections of loops which have Hausdorff dimension between $1+c_{16}$ and $2-c_{16}$.
\end{theorem}
As mentioned above, it is a consequence of~\eqref{eq:RSW} that~\eqref{eq:H1} is satisfied for both $i=0$ and $i = 1$, and any $q \in [1,4]$.
Indeed, for $\Lambda_R(x)\setminus\Lambda_r(x)$ to be traversed by $k$ separate crossings of $\mathcal F_\delta$,
a $k$-alternating arm event needs to occur in the annulus around $x$.
It is standard to deduce from~\eqref{eq:RSW} that the probability of such an event is bounded as required,
uniformly in $r,R$ and the boundary conditions on the annulus.
\begin{remark}
A close inspection of the proof of \cite{AizBur99} shows that it suffices to have the existence of $k$ such that $\lambda(k)>2$.
We deduce from~\eqref{eq:SIX-ARM} that $k=6$ works in our setting.
See \cite{KemSmi12} for alternative criteria that are implied by Theorem~\ref{thm:sRSW}.
\end{remark}
\begin{remark}
The argument of \cite{AizBur99} shows that the interfaces are naturally fractal.
One implication of this fact is the existence of $c_{17}>0$ such that
\begin{equation}\label{eq:2-arm bound}
\phi_{\mathbb{Z}^2}[A_{10}(r,R)]\ge c_{17}(r/R)^{1-c_{17}}.
\end{equation}
\end{remark}
In the next sections we derive from Theorem~\ref{thm:sRSW} properties relating to sub-sequential limits for the family of interfaces.
\subsection{Large clusters touch the boundary many times}
In this section, we improve on Theorem~\ref{thm:sRSW} to show that large clusters touch the boundary of a domain at all scales and in many places.
To simplify the statements and illustrate this slightly informal claim, we choose the context of $R$-centred domains and formulate the result as follows.
\begin{proposition}\label{prop:polynomially many}
For $1\le q<4$, there exists $c_{18}>0$ such that for every $R\ge r\ge 1$ and every $R$-centred domain,
\begin{equation}\label{eq:polynomially many}
\phi_\mathcal{D}^0[\Lambda_R\text{ is connected in $\Lambda_{9R}$ to $(R/r)^{c_{18}}$ $r$-boxes intersecting }\partial\mathcal{D}]\ge c_{18}.
\end{equation}
\end{proposition}
There are several ways of obtaining this result.
One is to use an argument involving exploration, along with the fact that $p(R)$ is uniformly bounded away from $0$ (see Section~\ref{sec:idea}).
Here, we take a more direct approach based on our proof of Proposition~\ref{prop:renormalization}.
\begin{figure}
\begin{center}
\includegraphics[width = 0.5\textwidth]{polynomially_many.pdf}
\caption{When the $R$-seeds $S$ and $S'$ are each connected to $k$ $r $-boxes intersecting $\partial\mathcal{D}$ within the box of radius $9R$ around them,
and when $\Lambda_R$ is connected to circuits around both $S$ and $S'$, then $\Lambda_{R}$ is connected to at least $2k$ $r $-boxes intersecting $\partial\mathcal{D}$.
Indeed, the boxes associated to $S$ are disjoint from those associated to $S'$ because the two seeds were chosen far from each other. }
\label{fig:polynomially_many}
\end{center}
\end{figure}
\begin{proof}
We will use the same notation as in the proof of Proposition~\ref{prop:renormalization}.
First, we claim that the quantity
$$
p_{r ,k}(R):=\inf_{\mathcal{D} \text{ $R$-centred}}\phi_\mathcal{D}^0[\Lambda_R\text{ is connected in $\Lambda_{9R}$ to $k$ $r $-boxes intersecting }\partial\mathcal{D}]
$$
satisfies
\begin{equation}\label{eq:ahgg}
p_{r ,k}(R)\ge c_2M(r,R)\min\{p_{r ,k}(r),(\tfrac rR)^2\} \qquad \text{for all $R \geq 20r$}.
\end{equation}
Indeed, the same proof as for $p(R)$ applies,
with $F_S$ replaced by the event $F_S(r ,k)$ that $S$ is connected in $\Lambda_{9R}$ to $k$ $r $-boxes intersecting $\partial\mathcal{D}$.
Second, we claim that there exists $c > 0$ independent of $R$, $r $ and $k$, such that
\begin{align}\label{eq:ahg}
p_{r ,2k}(20R)\ge c\, p_{r ,k}(R)^2.
\end{align}
To see this, consider a $20R$-centred domain $\mathcal{D}$ and two $R$-seeds $S$ and $S'$ that are at a distance at least $20R$ from each other,
but within distance $30R$ of $\Lambda_R$.
If the events $\mathrm{Circ}_S\cap E_S\cap F_S(r ,k)\cap \mathrm{Circ}_{S'}\cap E_{S'}\cap F_{S'}(r ,k)$ occur,
then $\Lambda_{20R}$ is connected in $\Lambda_{120R}$ to at least $2k$ $r $-boxes intersecting $\partial\mathcal{D}$
(see Fig.~\ref{fig:polynomially_many} and its caption for more details).
Using the FKG inequality~\eqref{eq:FKG}, the crossing estimates~\eqref{eq:wRSW}, the definition of $p_{r ,k}(R)$,
and then taking the infimum over $20R$-centred domains, the above leads to~\eqref{eq:ahg}.
Proceeding as in the proof of Proposition~\ref{prop:crucial_exist},
we may choose a constant $\lambda$ independent of $R$, $r $ and $k$,
large enough that
\begin{align}\label{eq:Delta2}
p_{r ,k}(\lambda R)\ge 2\min\{ p_{r ,k}(R), \lambda^{-2}\}\qquad \text{ for all $R \geq 20r$}.
\end{align}
Moreover, due to~\eqref{eq:wRSW},
we may assume that $\lambda$ is also such that $\inf_r p_{r ,1}(20r ) \geq \lambda^{-2}$.
Suppose now that
$p_{r ,k}(R) \geq \lambda^{-2}$ for some $r$, $R$ and $k$.
Then $j:= \lceil 2 \log_2 \lambda/c\rceil $ applications of~\eqref{eq:Delta2} followed by one application of~\eqref{eq:ahg} yield
\begin{align*}
p_{r ,2k}(20 \lambda^j R)\ge \min\{2^{j} p_{r ,2k}(20 R), 2\lambda^{-2}\} \ge 2^j \, c \lambda^{-4}\ge \lambda^{-2}.
\end{align*}
The above together with the bound on $p_{r ,1}(20r)$ implies the existence of $c' > 0$
such that $\displaystyle \inf_{r, R} p_{r , (R/r )^{c'}}(R) > 0$, which is the desired conclusion.
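To see how the polynomial number of boxes arises (a rough sketch): starting from $p_{r ,1}(20r)\ge\lambda^{-2}$, each application of the displayed step multiplies the scale by $20\lambda^{j}$ and doubles $k$ while preserving the lower bound $\lambda^{-2}$; after $s\asymp \log(R/r)/\log(20\lambda^{j})$ steps one reaches scale of order $R$ with $k=2^{s}\ge (R/r)^{c'}$, for some $c'>0$ depending only on $\lambda$ and $j$.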
\end{proof}
\subsection{Large clusters touch each other}
Theorem~\ref{thm:tightness} implies that sub-sequential limits $\mathcal{F}$ of collections of loops $\mathcal{F}_\delta$ exist,
but does not guarantee that the loops of $\mathcal{F}$ touch each other (as is expected).
If the macroscopic loops of $\mathcal{F}_\delta$ are shown to touch each other, then the same follows for those of $\mathcal{F}$.
However, the opposite is not true; it may be that the loops of $\mathcal{F}$ touch each other,
while the macroscopic loops of $\mathcal{F}_\delta$ only come within a mesoscopic distance of one another (as is expected when $q = 4$).
The self-touching property for $\mathcal{F}_\delta$ is useful for
\begin{itemize}
\item obtaining the full scaling limit of discrete interfaces (we refer to \cite{AizBur99,KemSmi16} for examples);
\item applying \cite{MilSheWer17} to derive, for instance for $q$ equal to 2 or 3, the convergence of interfaces in the $q$-state Potts model from the convergence of the interfaces in the random-cluster model by using a continuous version of the Edwards-Sokal coupling where clusters of the CLE($\kappa$) are colored in one of $q$ colors.
\end{itemize}
Below we show that macroscopic clusters (or equivalently macroscopic loops of $\mathcal{F}_\delta$) do touch each other with high probability.
We illustrate this informal statement by three results that we believe could prove useful.
Other similar results may be obtained from Theorem~\ref{thm:sRSW} if needed.
We insist on the fact that $q < 4$ is necessary here (see Section~\ref{sec:q=4}).
Call a {\em chain of clusters} any sequence of distinct clusters $\mathbf C_1,\dots, \mathbf C_k$
with the property that, for each $1\leq j < k$, there exists a closed edge connecting $\mathbf C_j$ to $\mathbf C_{j+1}$.
We say that such a chain {\em connects} two sets of vertices $A$ and $B$ if $\mathbf C_1$ intersects $A$ and $\mathbf C_k$ intersects $B$.
\newcommand{{\rm Rect}}{{\rm Rect}}
For $N \geq 1$ and $\ell >0$, let ${\rm Rect} = {\rm Rect}(N,\ell) = [0,\ell N] \times [0,N]$ be the rectangle of aspect ratio $\ell$ and size $N$.
\begin{theorem}[Crossings of rectangles by chains of large clusters]\label{thm:self-touching}
For $\alpha \geq 0 $ and $K\geq 1$, let $\mathcal{G}(K,\alpha,N,\ell)$ be the event
that there exists a chain of at most $K$ clusters of $\omega \cap {\rm Rect}(N,\ell)$
connecting the left and right sides of ${\rm Rect}(N,\ell)$, all of which have a diameter at least~$\alpha N$.
Then, for every $\varepsilon,\ell > 0$, there exist $K \geq 1$ and $\alpha >0$ such that for every $N\ge1$,
$$ \phi_{\mathbb{Z}^2}[\mathcal{G}(K,\alpha,N,\ell)] \geq 1 - \varepsilon.$$
\end{theorem}
This theorem is expected to fail for $q=4$:
in this case the shortest chain crossing the rectangle should contain either one or a logarithmic number of clusters.
Below, we give two consequences of the theorem, closer to the informal statements announced.
\begin{corollary}[Large clusters are connected by chains of large clusters]\label{cor:chain}~
Write $\mathcal{H}(K, \alpha, \delta ,N)$ for the event that any two clusters $\mathbf C$, $\mathbf C'$ of $\omega\cap \Lambda_N$
of diameter at least $\delta N$ are connected by a chain of at most $K$ clusters, each of diameter at least $\alpha N$.
Then, for every $\varepsilon,\delta > 0$, there exist $K \geq 1$ and $\alpha >0$ such that for every $N\ge1$,
$$ \phi_{\mathbb{Z}^2}[\mathcal{H}(K,\alpha,\delta,N)] \geq 1 - \varepsilon.$$
\end{corollary}
\begin{corollary}[Neighbouring large clusters touch each other]\label{cor:clusters_touch}~
Write $\mathcal{F}(\alpha, \delta ,N)$
for the event that there exist clusters $\mathbf C$, $\mathbf C'$ of $\omega \cap \Lambda_N$
of diameter at least $\delta N$ and such that $1< {\rm dist}(\mathbf C,\mathbf C') \leq \alpha N$.
Then, for every $\varepsilon,\delta > 0$, there exists $\alpha >0$ such that for every $N\ge1$,
\begin{align}\label{eq:clusters_touch}
\phi_{\mathbb{Z}^2}[\mathcal{F}(\alpha,\delta,N)] < \varepsilon.
\end{align}
\end{corollary}
We start with the proof of the theorem, which is based on the following two steps.
First, we will show that the event $\mathcal{G}(K,0,N,\ell)$, that there exists a chain of at most $K$ clusters crossing ${\rm Rect}$,
regardless of their diameter, occurs with high probability.
Then, we will show that in any such chain, the clusters are actually large.
The first step is contained in the following lemma.
\begin{lemma}\label{lem:bounded hamming}
For every $\varepsilon>0$ and $\ell > 0$, there exists $ K \geq1$ such that for every $N \ge 1$,
$$ \phi_{\mathbb{Z}^2}[\mathcal{G}(K,0,N,\ell)] \geq 1 - \varepsilon.$$
\end{lemma}
In other words, the above states that the Hamming distance to the crossing of ${\rm Rect}$ is bounded by $K$ with high probability.
\begin{proof}
Fix $\varepsilon$, $\ell$ and $N$.
For $k\geq 0$ let $\mathcal{R}_k$ be the set of vertices connected to the left side of ${\rm Rect}$ by a path containing at most $k$ closed edges.
If $\mathcal{R}_k$ does not intersect the right side of ${\rm Rect}$,
let $\mathcal{D}_k$ be the connected component of ${\rm Rect} \setminus \mathcal{R}_k$ that contains the right side of ${\rm Rect}$.
Since the configuration inside $\mathcal{D}_k$ does not depend on the states of edges in $\mathcal{R}_k$,
the measure induced by $\phi_{\mathbb{Z}^2}[\cdot| \mathcal{R}_k]$ inside $\mathcal{D}_k$ dominates $\phi_{\mathcal{D}_k}^0$.
View $\mathcal{D}_k$ as a quad with the arc $(ab)$ being the right side of ${\rm Rect}$
and the arc $(cd)$ being the boundary of $\mathcal{D}_k$ (see Fig.~\ref{fig:hamming}).
Using Theorem~\ref{thm:sRSW} and the fact that $\ell_{\mathcal{D}_k}[(ab),(cd)]\le \ell$,
we find
\begin{align*}
\phi_{\mathbb{Z}^2}[ \mathcal{C}(\mathcal{D}_k) \,| \, \mathcal{R}_k] \geq \phi_{\mathcal{D}_k}^0[ \mathcal{C}(\mathcal{D}_k)]\geq \eta(\ell).
\end{align*}
Finally, observe that if $\mathcal{C}(\mathcal{D}_k)$ occurs, then so does $\mathcal{G}(k+1,0,N,\ell)$. Thus
\begin{align*}
\phi_{\mathbb{Z}^2}[\mathcal{G}(k+1,0,N,\ell)\,|\,\mathcal{G}(k,0,N,\ell)^c] \geq \eta(\ell).
\end{align*}
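Iterating this bound from $k=1$ yields
\begin{align*}
\phi_{\mathbb{Z}^2}[\mathcal{G}(k,0,N,\ell)^c] \leq (1-\eta(\ell))^{k-1} \qquad \text{for every } k \geq 1.
\end{align*}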
We may therefore fix $K \geq 0$ depending only on $\varepsilon$ and $\eta(\ell)$
such that $\phi_{\mathbb{Z}^2}[ \mathcal{G}(K,0,N,\ell)] \geq 1- \varepsilon $ for every $N\ge1$.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[width = 0.49\textwidth]{hamming.pdf}\hspace{0.01\textwidth}
\includegraphics[width = 0.49\textwidth]{6arms2.pdf}
\caption{{\em Left:} The sets $\mathcal{R}_0$ and $\mathcal{R}_1$ in dark and light grey, respectively.
The connected component of the complement containing the right side of ${\rm Rect}$ is $\mathcal{D}_1$.
Viewed as a quad, there is a positive probability that it is crossed horizontally, which would induce $\mathcal{G}(2,0,N,\ell)$.
{\em Right:} a configuration with $k=3$; the clusters $\mathbf C_0,\dots,\mathbf C_3$ are depicted.
The edges $e_1,e_2, e_3$ are marked by red circles (their choice is not unique).
The boxes $\Lambda_{r}(z)$, $\Lambda_{mr}(z)$ and $\Lambda_{nr}(z)$ are marked in blue; notice the arms of different types between them in red.}
\label{fig:hamming}
\end{center}
\end{figure}
The second step in the proof of Theorem~\ref{thm:self-touching} is provided by the lemma below. Some notation is required.
For $R\ge r\ge1$ and $\sigma \in \{0,1\}^j$, the quarter plane arm event between radii $r$ and $R$, denoted $A_{\sigma}^{++}(r,R)$,
is defined as $A_{\sigma}^+(r,R)$, with the arms being restricted to the quarter plane $\mathbb{Z}_+^2$.
For $K\geq 0$, an arm in $\Lambda_R\setminus\Lambda_r$ of type $1$ (resp.~type $0$) with $K$ defects
is a path crossing $\Lambda_R\setminus\Lambda_r$ which contains at most $K$ closed (resp.~open) edges.
For $K\geq 1$ and $\sigma$ of length $j$, define $A_{\sigma}(K;r,R)$ as the event that there exist
$j$ disjoint arms with $K$ defects in total, from the inner to the outer boundary of $\Lambda_R\setminus\Lambda_r$,
which are of type $\sigma_1,\dots, \sigma_j$, when indexed in counterclockwise order.
Define the half-plane and quarter-plane arm events with defects, written $A^+_\sigma(K;r,R)$ and $A^{++}_\sigma(K;r,R)$, in the same way.
The following lemma is a straightforward consequence of~\eqref{eq:SIX-ARM},
the crossing estimates~\eqref{eq:wRSW} and the quasi-multiplicativity~\eqref{eq:MULTIPLICATIVITY};
see \cite[Prop.~17]{Nol08} for a proof in the case of percolation which adapts readily to our case.
Note that the sequence in the first equation is not the same as in~\eqref{eq:SIX-ARM}, but the same bound may be proved without difficulty.
\begin{lemma}\label{lem:arm_defects}
There exist $c_{19},C_{19}>0$ such that for every $K$ and every $R\ge r\ge1$,
\begin{align*}
\phi_{\mathbb{Z}^2}[A_{100100}(K;r,R)] & \leq C_{19}[1 + \log(R/r)]^K \cdot (r/R)^{2+c_{19}},\\
\phi_{\mathbb{Z}^2}[A_{10}^+(K;r,R)] & \leq C_{19}[1 + \log(R/r)]^K \cdot (r/R)^2,\\
\phi_{\mathbb{Z}^2}[A_{10}^{++}(K;r,R)] & \leq C_{19}[1 + \log(R/r)]^K \cdot (r/R)^{1+c_{19}}.
\end{align*}
\end{lemma}
We are now in a position to prove Theorem~\ref{thm:self-touching}.
\begin{proof}[Proof of Theorem~\ref{thm:self-touching}]
Fix $\varepsilon,\ell > 0$ and $N$. For clarity, we will assume $\ell \geq 1$; the same proof applies for $\ell <1$.
By Lemma~\ref{lem:bounded hamming}, we may choose $K = K(\varepsilon,\ell) \geq 1$ such that
$\mathcal{G}(K,0,N,\ell)$ occurs with probability at least $1 - \varepsilon$.
Henceforth we assume $K$ fixed as above, and focus on configurations $\omega$ in $\mathcal{G}(K,0,N,\ell)$.
Our goal is to prove that $\omega \in \mathcal{G}(K,\alpha,N,\ell)$ with high probability for some $\alpha >0$.
If $\omega$ contains a horizontal crossing of ${\rm Rect}$, then the cluster containing the crossing has diameter at least $\ell N$
and $\omega\in \mathcal{G}(K,\alpha,N,\ell)$ for any $0 < \alpha \leq \ell$.
Next, we focus on the situation where $\omega$ does not contain a horizontal crossing of ${\rm Rect}$.
Let $e_1,\dots, e_k$ be a minimal set of edges such that $\omega\cup\{e_1,\dots, e_k\}$ contains a horizontal crossing of ${\rm Rect}$.
Write $e_i = (u_i,v_i)$ with $v_i$ connected to $u_{i+1}$ in $\omega \cap {\rm Rect}$ for all $1 \leq i < k$,
and $u_1$ and $v_k$ connected to the left and right sides of ${\rm Rect}$, respectively.
Write $\mathbf C_i$ for the cluster of $v_i$ in $\omega \cap {\rm Rect}$ and $\mathbf C_{0}$ for the cluster of $u_1$.
By the minimality of $e_1,\dots, e_k$, the clusters $\mathbf C_0,\dots, \mathbf C_k$ are all distinct.
We start by analysing $\mathbf C_j$ with $1 \leq j < k$.
Fix such a value $j$ and let us assume that $\|v_j - u_{j+1}\| \leq \alpha N$ for some small constant $\alpha >0$ to be chosen later
(here $\|\cdot\|$ denotes the $L^\infty$ norm).
Write $r = \lfloor \alpha N \rfloor$ and assume for convenience that $N /r \in \mathbb{N}$ and $\ell N /r \in \mathbb{N}$.
By our assumption on the distance between $v_j$ and $u_{j+1}$, there exists an $r$-box $\Lambda_r(z)$ containing both $v_j$ and $u_{j+1}$.
Let $m r$ be the distance between $z$ and $\partial {\rm Rect}$
(recall that $z \in r\mathbb{Z}^2$ and that $N/r \in \mathbb{N}$, hence this distance is indeed an integer multiple of $r$, as the notation suggests).
Let $n r$ be the distance from $z$ to the three sides of $\partial {\rm Rect}$ furthest from $z$.
Then, we claim that in $\tilde\omega:=\omega\cup\{e_1,\dots, e_{j-1}, e_{j+2}, \dots, e_k\}$ there exist\footnote{
Let us justify (a)--(c).
Let $C_{\text{left}}$ and $C_{\text{right}}$ be the clusters in $\tilde\omega$ of $u_{j}$ and $v_{j+1}$, respectively.
Then $C_{\text{left}}$ intersects the left side of ${\rm Rect}$ and $C_{\text{right}}$ the right side.
Write $\partial C_{\text{left}}$ for the paths of dual-open edges separating $C_{\text{left}}$ from ${\rm Rect} \setminus C_{\text{left}}$.
Define $\partial C_{\text{right}}$ in the same way.
By the minimality of $e_1,\dots, e_k$, the boundaries $\partial C_{\text{left}}$ and $\partial C_{\text{right}}$ of the two clusters are disjoint
(since we are considering the edge-boundaries of ${\bf C}_j$ and ${\bf C}_{j+2}$, and an intersection would imply the existence of a shortcut).
We start by explaining (a): both clusters $C_{\text{left}}$ and $C_{\text{right}}$ and their boundaries intersect $\Lambda_r$ and $\partial \Lambda_{mr}$.
It follows that each cluster contains a primal open-path between $\Lambda_r$ and $\partial \Lambda_{mr}$
and the boundaries of the two clusters contain two dual-open paths between $\Lambda_r$ and $\partial \Lambda_{mr}$ each.
We move on to (b): suppose (as in Fig.~\ref{fig:hamming}) that $z$ is closest to the left side of ${\rm Rect}$.
Then $C_{\text{right}}$ and its boundary intersect both $\Lambda_{mr}$ and $\partial \Lambda_{nr}$;
as a consequence $C_{\text{right}}$ contains a path of open edges and its boundary contains two disjoint paths
of dual-open edges between $\Lambda_{mr}$ and $\partial \Lambda_{nr}$.
The same holds when $z$ is closest to the right side of ${\rm Rect}$.
If $z$ is closest to the top or bottom of ${\rm Rect}$, then $C_{\text{left}}$ and $C_{\text{right}}$ contain one arm of type $1$ each,
and their boundaries an arm of type $0$ each. Hence we deduce that the half-plane arm event with arms of types $1001$ occurs.
This is obviously contained in the half-plane arm event with arms of types $101$.
Finally we show (c):
The two sides closest to $z$ are necessarily adjacent. Hence we may suppose (as in Fig.~\ref{fig:hamming}) that they are the left and bottom ones.
Then, $C_{\text{right}}$ contains an arm of type $1$ from $\Lambda_{nr}$ to the right side of ${\rm Rect}$, contained in the quarter-plane $\mathbb{Z}_+^2$,
and its boundary contains an arm of type $0$ from $\Lambda_{nr}$ to either the top or the right side of ${\rm Rect}$, contained in the same quarter-plane.
We deduce that arms as in (a)--(c) also exist in $\omega$, but with $K$ defects at most.
}:
\begin{itemize}
\item[(a)] six arms in the annulus $\Lambda_{mr}(z)\setminus \Lambda_r(z)$ of types $100100$;
\item[(b)] three arms in the half-space annulus $(\Lambda_{nr}(z)\setminus \Lambda_{mr}(z)) \cap {\rm Rect}$
of types $101$ (if the side of ${\rm Rect}$ closest to $z$ is either the left or right one) or $010$ (otherwise);
\item[(c)] two arms in the quarter-plane annulus ${\rm Rect} \setminus \Lambda_{nr}(z)$ of types $10$ or $01$,
from $\Lambda_{nr}(z)$ to the two sides of ${\rm Rect}$ furthest from $z$.
\end{itemize}
Write $E(z)$ for the event that arms as in (a)--(c) exist, with at most $K$ defects. The mixing property~\eqref{eq:mix2} and Lemma~\ref{lem:arm_defects} give that
\begin{align*}
\phi_{\mathbb{Z}^2}[E(z)]
&\leq C \phi_{\mathbb{Z}^2}[A_{100100}(K;r,mr)] \phi_{\mathbb{Z}^2}[A_{101}^{+}(K;mr,nr)]\, \phi_{\mathbb{Z}^2}[A_{10}^{++}(K;nr,N)] \\
&\leq C' [1+\log (N/r)]^{3K}(1/m)^{2 + c_{19}} (m/n)^{2} (nr/N)^{1 + c_{19}} .
\end{align*}
Now, observe that for any $m\leq n\leq N/r$,
there exist at most eight points $z \in r \mathbb{Z}^2 \cap {\rm Rect}$ at a distance $mr$ from $\partial {\rm Rect}$
and a distance $nr$ to the three sides of ${\rm Rect}$ furthest from $z$.
Applying a union bound, it follows that
\begin{align*}
\phi_{\mathbb{Z}^2}\big[\bigcup_{z}E(z)\big]
& \leq 8C' [1+\log (N/r)]^{3K} \sum_{1 \leq m \leq n \leq N/r} (1/m)^{2 +c_{19}} (m/n)^{2} (nr/N)^{1 + c_{19}}\\
&\leq 8C' [1+\log (N/r)]^{3K} (r/N)^{c_{19}},
\end{align*}
where the union is over all $z \in r \mathbb{Z}^2 \cap {\rm Rect}$ and the last inequality is obtained through a straightforward computation.
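For concreteness, this computation runs as follows (a sketch, valid for $c_{19}<1$):
\begin{align*}
\sum_{1 \leq m \leq n \leq N/r} (1/m)^{2 +c_{19}} (m/n)^{2} (nr/N)^{1 + c_{19}}
&= (r/N)^{1+c_{19}} \sum_{n \leq N/r} n^{c_{19}-1} \sum_{m \leq n} m^{-c_{19}}\\
&\leq C\,(r/N)^{1+c_{19}} \sum_{n \leq N/r} n^{c_{19}-1}\cdot n^{1-c_{19}} \ \leq\ C\,(r/N)^{c_{19}}.
\end{align*}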
Suppose now that $\alpha$ (and hence $r/N$) is chosen small enough such that the above is smaller than $\varepsilon$;
notice that the choice of $\alpha$ depends on $K$, but that $K$ only depends on $\varepsilon$ and $\ell$, not on $N$.
Then,
\begin{align}\label{eq:HK}
\phi_{\mathbb{Z}^2}[\mathcal{G}(K,0,N,\ell) \cap \bigcap_{z}E(z)^c ] \geq 1- 2\varepsilon.
\end{align}
Moreover, on the event above, each ${\bf C}_j$ with $1 \leq j < k$ has diameter at least $\alpha N$,
since it contains two points $v_j$ and $u_{j+1}$ at a distance at least $\alpha N$ from each other.
At this stage, we should also exclude that the diameters of ${\bf C}_0$ and ${\bf C}_k$ are small.
We do this below through a similar argument as for the diameters of ${\bf C}_1,\dots, {\bf C}_{k-1}$.
Suppose that the diameter of ${\bf C}_0$ is smaller than $r$.
Then, there exists $z \in \{0\}\times r\mathbb{Z}$ so that $u_1 \in \Lambda_r(z)$.
Let $nr$ denote the distance from $z$ to the top and bottom of ${\rm Rect}$.
The same analysis as above shows that $\omega$ contains
\begin{itemize}
\item[(b')] three arms in the half-space annulus $(\Lambda_{nr}(z)\setminus \Lambda_{r}(z)) \cap {\rm Rect}$ of types $010$ with at most $K$ defects;
\item[(c')] two arms, with at most $K$ defects, in the quarter-plane annulus ${\rm Rect} \setminus \Lambda_{nr}(z)$ of types $01$ or $10$,
from $\Lambda_{nr}(z)$ to the two sides of ${\rm Rect}$ furthest from $z$.
\end{itemize}
We recognise above the event $E(z)$ for $z$ on the left boundary of ${\rm Rect}$.
The same analysis applies when the diameter of ${\bf C}_k$ is smaller than $r$.
In conclusion, the event in~\eqref{eq:HK} guarantees the occurrence of $\mathcal{G}(K,\alpha,N,\ell)$, and the bound in~\eqref{eq:HK} implies the result.
\end{proof}
Finally, we prove the two corollaries.
\begin{proof}[Proof of Corollary~\ref{cor:chain}]
Fix $\varepsilon, \delta > 0$ and $N$.
We may assume $N$ larger than some threshold, and for simplicity we will consider $\delta N$ and $1/\delta$ to be integers.
Partition $\Lambda_N$ into strips $S_j = [-N,N] \times [j\delta N, (j+1)\delta N]$ with $-1/\delta \leq j <1/\delta$.
Each strip $S_j$ is a translate of ${\rm Rect}(\delta N, 2/\delta)$.
Let $\alpha$ and $K$ be such that
$$\phi_{\mathbb{Z}^2}[\mathcal{G}(K,\alpha/\delta ,\delta N,2/\delta)] \geq 1 - \varepsilon\delta/4.$$
By Theorem~\ref{thm:self-touching}, such values of $\alpha > 0$ and $K\geq 0$ exist, and only depend on $\varepsilon$ and $\delta$.
Write $\mathcal{G}_h$ for the event that $\mathcal{G}(K,\alpha/\delta,\delta N,2/\delta)$ occurs in every strip $S_j$, and $\mathcal{G}_v$ for the rotation by $\pi/2$ of $\mathcal{G}_h$.
Then, due to our choice of $\alpha$ and $K$ and a union bound over the $4/\delta$ strips and their rotations,
\begin{align}\label{eq:Ghv}
\phi_{\mathbb{Z}^2}[\mathcal{G}_h \cap \mathcal{G}_v] \geq 1 - \varepsilon.
\end{align}
Moreover, we claim that if $\mathcal{G}_h \cap \mathcal{G}_v$ occurs, then any two clusters in $\Lambda_N$ of diameter at least $2 \delta N$
are connected by a chain of at most $2K$ clusters of diameter at least $\alpha N$.
Indeed, any cluster of diameter at least $2 \delta N$ contains a vertical crossing of a strip $S_j$,
or a horizontal crossing of the rotation by $\pi/2$ of a strip $S_j$.
As such, it is contained in one of the chains of clusters crossing horizontally the strips $S_j$, or vertically their rotations.
Finally, if $\mathbf C$ and $\mathbf C'$ are members of two such chains, we can exhibit a chain of clusters connecting $\mathbf C$ to $\mathbf C'$
by following a chain crossing some $S_j$ horizontally, then one crossing vertically the rotation of some $S_{j'}$.
By construction, the chain thus obtained contains only clusters of diameter at least $\alpha N$ and at most $2K$ of them.
In conclusion,~\eqref{eq:Ghv} implies that
$\phi_{\mathbb{Z}^2}[\mathcal{H}(2K,\alpha,2\delta,N)] \geq 1 - \varepsilon. $
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:clusters_touch}]
Fix some $\delta >\alpha > 0$ and $N \geq 1$. Assume for simplicity that $r := \alpha N$ and $N/r=1/\alpha$ are integers.
By the same analysis as in the proof of Theorem~\ref{thm:self-touching},
if $\mathcal{F}(\alpha, \delta,N)$ occurs, then there exists a $r$-box $\Lambda_r(z)$ that intersects two clusters $\mathbf C$ and $\mathbf C'$
of diameters at least $\delta N$ and at a distance at least $2$ from each other.
Write $m r$ for the distance between $z$ and $\partial \Lambda_N$.
Then we claim that $\omega$ contains
\begin{itemize}
\item[(a)] six arms in the annulus $\Lambda_{mr}(z)\setminus \Lambda_r(z)$ of types $100100$;
\item[(b)] four arms in the half-space annulus $(\Lambda_{\delta N}(z)\setminus \Lambda_{mr}(z)) \cap \Lambda_N$ of types $1001$.
\end{itemize}
Indeed, the two primal arms are provided by $\mathbf C$ and $\mathbf C'$ and the dual ones by their disjoint boundaries.
Following the proof of Theorem~\ref{thm:self-touching}, there exists $C > 0$ independent of $\alpha,\delta$ or $N$ such that
\begin{align*}
\phi_{\mathbb{Z}^2}[\mathcal{F}(\alpha, \delta,N)]
&\leq C (\alpha/\delta)^{c_{19}}.
\end{align*}
Thus, for $\varepsilon,\delta > 0$, in order to obtain~\eqref{eq:clusters_touch},
it suffices to choose $\alpha$ small enough for the above to be smaller than $\varepsilon$.
\end{proof}
\label{sec1}
The flow polytope $\mathcal{F}_G$ associated to a directed acyclic graph $G$ is the set of all flows $f:E(G) \rightarrow \mathbb{R}_{\geq 0}$ of size one. Flow polytopes are fundamental objects in combinatorial optimization \cite{schrijver}, and in the past decade they were also uncovered in representation theory \cite{bv, mm}, the study of the space of diagonal harmonics \cite{lmm, tesler}, and the study of Schubert and Grothendieck polynomials \cite{pipe1, toric}. In this paper we establish the deep connection between flow polytopes and generalized permutahedra and use this connection to prove that for certain permutations, the supports of Schubert polynomials as well as the homogeneous components of Grothendieck polynomials are integer points of generalized permutahedra.
A natural way to analyze a convex polytope is to dissect it into simplices. The relations of the subdivision algebra, developed in a series of papers \cite{root1, root2, prod}, encode dissections of a family of flow (and root) polytopes (see Section \ref{sec2} for details). The key to connecting flow polytopes and generalized permutahedra lies in the study of the dissections of flow polytopes obtained via the subdivision algebra:
\medskip
\noindent \textit{(1) How are the dissections of a flow polytope obtained via the subdivision algebra related to each other?}
\medskip
\noindent In Theorem \ref{theoremA} we give a full characterization of the left-degree sequences (Definition \ref{def:ld}) of any dissection of a flow polytope obtained via the subdivision algebra, and we show that while the dissections themselves are different their left-degree sequences are the same. That the left-degree sequences do not depend on the dissection was previously proved in special cases by Escobar and the first author \cite{pipe1}, and independently from the authors, Grinberg \cite{grinberg} recently showed it for arbitrary graphs in his study of the subdivision algebra. Our characterization of the left-degree sequences of any reduction tree of any graph serves as the cornerstone of the rest of the work in this paper.
\medskip
Since by Theorem \ref{theoremA} the left-degree sequences are an invariant of the underlying flow polytope and do not depend on the choice of dissection, it is natural to ask:
\medskip
\noindent \textit{(2) What is the significance of the left-degree sequences associated to a flow polytope $\mathcal{F}_G$?}
\medskip
\noindent The answer to this question is both inspiring and revealing. In Theorem \ref{theoremB}, we prove that left-degree sequences of $\mathcal{F}_G$ with fixed sums are exactly lattice points of generalized permutahedra, which were introduced by Postnikov in his beautiful paper \cite{genperms}. Moreover, we show that the left-degree polynomial $L_G(\bm{t})$ (Definition \ref{ldpolynomialdefinition}) has polytopal support (Definition \ref{polsup}).
\medskip
In earlier work of Escobar and the first author \cite{pipe1}, it was shown that some left-degree polynomials are Grothendieck polynomials. This brings us to:
\medskip
\noindent \textit{(3) What does the answer to (2) imply about Schubert and Grothendieck polynomials?}
\medskip
\noindent In Theorem \ref{theoremC}, we conclude that for all permutations $1\pi'$ where $\pi'$ is dominant, the Grothendieck polynomial $\mathfrak{G}_{1\pi'}(\bm{x})$ is a weighted integer-point transform of its Newton polytope, with all weights nonzero. Moreover, the homogeneous components of $\mathfrak{G}_{1\pi'}(\bm{x})$ are weighted integer-point transforms of their Newton polytopes, which are all generalized permutahedra. For the homogeneous component corresponding to the Schubert polynomial $\mathfrak{S}_{1\pi'}(\bm{x})$, something more is true: it equals the integer-point transform of its Newton polytope, which is a generalized permutahedron. Theorem \ref{theoremC} implies in particular that the recent conjectures of Monical, Tokcan, and Yong \cite[Conjecture 5.1 \& 5.5]{newtonalgcom} are true for permutations $1\pi'$, where $\pi'$ is a dominant permutation.
\medskip
The outline of this paper is as follows. Sections \ref{sec2} covers the necessary background. Sections \ref{sec3}, \ref{sec4} and \ref{sec5} answer questions (1), (2) and (3) from above respectively. For ease of reading Sections \ref{sec3}, \ref{sec4} and \ref{sec5} are phrased for simple graphs. In Section \ref{sec6} we show that our techniques extend to generalize all results to all graphs.
\section{Background information}
\label{sec2}
In this section, we summarize definitions, notations, and results that we use later. Throughout this paper, by \textbf{graph} we mean a loopless directed graph where multiple edges are allowed, as described below. Although we sometimes refer to edges by their endpoints, we keep in mind that $E(G)$ is a multiset. We also adopt the convention of viewing each element of a multiset as being distinct, so that we may speak of subsets, though we will use the word submultiset interchangeably to highlight the multiplicity. Due to this convention, all unions in this paper are assumed to be disjoint multiset unions.
For any integers $m$ and $n$, we will frequently use the notation $[m,n]$ to refer to the set $\{m,m+1,\ldots, n\}$ and $[n]$ to refer to the set $[1,n]$.\\
\noindent \textbf{Flow polytopes.} Let $G$ be a loopless graph on vertex set $[0,n]$ with edges directed from smaller to larger vertices. For each edge $e$, let $\mathrm{in}(e)$ denote the smaller (initial) vertex of $e$ and $\mathrm{fin}(e)$ the larger (final) vertex of $e$. Imagine fluid moving along the edges of $G$. At vertex $i$ let there be an external inflow of fluid $a_i$ (outflow of $-a_i$ if $a_i<0$), and call $\bm{a}=(a_0,\,\ldots,\,a_n)$ the \textbf{netflow vector}. Formally, a \textbf{flow} on $G$ with netflow vector $\bm{a}$ is an assignment $f:E(G)\to \mathbb{R}_{\geq 0}$ of nonnegative values to each edge such that fluid is conserved at each vertex. That is, for each vertex $i$
\[ \sum_{\mathrm{in}(e)=i}{f(e)} - \sum_{\mathrm{fin}(e)=i}{f(e)} = a_i.\]
The \textbf{flow polytope} $\mathcal{F}_G(\bm{a})$ is the collection of all flows on $G$ with netflow vector $\bm{a}$. Alternatively, let $M_G$ denote the incidence matrix of $G$, that is let the columns of $M_G$ be the vectors $e_i-e_j$ for $(i,j)\in E(G)$, $i<j$, where $e_i$ is the $(i+1)$-th standard basis vector in $\mathbb{R}^{n+1}$. Then,
\[\mathcal{F}_G(\bm{a})= \{f\in \mathbb{R}^{\#E(G)}_{\geq 0}:\, M_Gf=\bm{a} \}. \]
From this perspective, note that the number of integer points in $\mathcal{F}_G(\bm{a})$ is exactly the number of ways to write $\bm{a}$ as a nonnegative integral combination of the vectors $e_i-e_j$ for edges $(i,j)$ in $G$, $i<j$, that is the \textbf{Kostant partition function} $K_G(\bm{a})$. For brevity, we write $\mathcal{F}_G:=\mathcal{F}_G(1,0,\ldots , 0,-1)$, and we refer to $\mathcal{F}_G$ as the flow polytope of $G$, since in this paper our primary focus is on studying these particular flow polytopes.
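Since the lattice-point count above is a finite computation, it can be carried out mechanically. The following minimal Python sketch (our illustration, not part of the paper; the function name and the encoding of edges as pairs are ours) evaluates $K_G(\bm a)$ by recursing over the edges, bounding the flow on each edge by the partial sums of the residual netflow across the cuts it crosses:
\begin{verbatim}
# Python. A sketch (not from the paper): K_G(a) counts the ways to write
# a as a nonnegative integer combination of the vectors e_i - e_j over
# the edges (i, j) of G, i.e. the lattice points of F_G(a).
from functools import lru_cache

def kostant(edges, netflow):
    edges = tuple(edges)

    @lru_cache(maxsize=None)
    def count(k, a):
        if k == len(edges):
            return 1 if all(x == 0 for x in a) else 0
        i, j = edges[k]
        # Every unit of flow on (i, j) crosses each cut between positions
        # i and j - 1, and the total flow of the remaining edges across
        # the cut at p equals the partial sum a_0 + ... + a_p, so the
        # flow on this edge is bounded by the smallest such partial sum.
        bound = min(sum(a[:p + 1]) for p in range(i, j))
        total = 0
        for f in range(max(bound, -1) + 1):
            b = list(a)
            b[i] -= f
            b[j] += f
            total += count(k + 1, tuple(b))
        return total

    return count(0, tuple(netflow))

# Example: edges (0,1), (1,2), (0,2) with netflow (1, 0, -1): the unit
# of flow goes either through vertex 1 or directly along (0,2).
print(kostant([(0, 1), (1, 2), (0, 2)], (1, 0, -1)))  # prints 2
\end{verbatim}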
The following milestone result giving the volume of flow polytopes was shown by Postnikov and Stanley in unpublished work:
\begin{theorem}[Postnikov-Stanley]
\label{pstheorem}
Given a loopless connected graph $G$ on vertex set $\{0,1,\ldots,n\}$, let $d_i=\mathop{\mathrm{indeg}}_G(i)-1$ for $i \in [n-1]$, where $\mathop{\mathrm{indeg}}_G(i)$ is the number of edges incoming to vertex $i$ in $G$. Then, the normalized volume of the flow polytope of $G$ is
\[ \mathrm{Vol} \,\,\mathcal{F}_{G} = K_{G} \left (0,\,d_1,\, \ldots ,\, d_{n-1}, \, -\sum_{i=1}^{n-1}{d_i} \right ). \]
\end{theorem}
Baldoni and Vergne \cite{bv} generalized this result for flow polytopes with arbitrary netflow vectors. Theorem \ref{pstheorem} beautifully connects the volume of the flow polytope of any graph to an evaluation of the Kostant partition function. We note that since the number of integer points of a flow polytope is already given by a Kostant partition function evaluation, the volume of any flow polytope is given by the number of integer points of another.
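As a sanity check of Theorem \ref{pstheorem} on a small example (ours, chosen for illustration): let $G$ on $\{0,1,2\}$ have edge multiset $\{(0,1),(0,1),(1,2),(1,2),(0,2)\}$. Then $d_1=\mathop{\mathrm{indeg}}_G(1)-1=1$ and
\[ \mathrm{Vol}\,\,\mathcal{F}_G = K_G(0,\,1,\,-1)=2, \]
the two lattice points being the unit flows on either copy of $(1,2)$. Directly, $\mathcal{F}_G$ is integrally equivalent to $\{(a,b,t)\in\mathbb{R}^3:\, 0\le a,b\le t\le 1\}$, whose normalized volume is $3!\int_0^1 t^2\,dt=2$.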
Recall that two polytopes $P_1\subseteq \mathbb{R}^{k_1}$ and $P_2\subseteq \mathbb{R}^{k_2}$ are \textbf{integrally equivalent} if there is an affine transformation $T:\mathbb{R}^{k_1}\to \mathbb{R}^{k_2}$ that is a bijection $P_1\to P_2$ and a bijection $\mathop{\mathrm{aff}}(P_1)\cap \mathbb{Z}^{k_1}\to \mathop{\mathrm{aff}}(P_2)\cap \mathbb{Z}^{k_2}$. Integrally equivalent polytopes have the same face lattice, volume, and Ehrhart polynomial. We write $P_1 \equiv P_2$ to denote integral equivalence.
While simple to prove, the following lemma is important. We leave its proof to the reader. For the rest of the paper, given a graph $G$ and a set $S$ of its edges, we use the notation $G/S$ to denote the graph obtained from $G$ by contracting the edges in $S$ (and deleting loops) and we use the notation $G\backslash S$ to denote the graph obtained from $G$ by deleting the edges in $S$. For a set $V$ of vertices of $G$, we also use the notation $G\backslash V$ to denote the graph obtained from $G$ by deleting the vertices in $V$ and all edges incident to them. When $S$ or $V$ consists of just one element, we simply write $G/e$ or $G\backslash v$.
\begin{lemma}
\label{contraction}
Let $G$ be a graph on $[0,n]$. Assume vertex $j$ has only one outgoing edge $e$ and netflow $a_j\geq0$. If $e$ is directed from $j$ to $k\in[n]$, then
\[\mathcal{F}_G(a_0,\dots, a_n) \mbox{ and } \mathcal{F}_{G/e}(a_0,\ldots, a_{j-1}, a_{j+1},a_{j+2},\ldots, a_{k-1}, a_k+a_j,a_{k+1},\ldots, a_n)\] are integrally equivalent. An analogous result holds if $j$ has only one incoming edge and $a_j\leq 0$.
\end{lemma}\medskip
\noindent \textbf{Dissections of flow polytopes.}
For graphs with a special source and sink, there is a systematic way to dissect the flow polytope $\mathcal{F}_{\widetilde{G}}$ studied in \cite{prod}. Let $G$ be a graph on $[0,n]$, and define $\widetilde{G}$ on $[0,n]\cup \{s,t\}$ with $s$ being the smallest vertex and $t$ the biggest vertex by setting $E(\widetilde{G})= E(G)\cup \{(s,i),(i,t):\,i\in[0,n] \}$. The systematic dissections can be expressed in the language of the subdivision algebra or equivalently in terms of reduction trees \cite{root1, root2, prod}. We use the language of reduction trees in this paper.
Let $G_0$ be a graph on $[0,n]$ with edges $(i,j)$ and $(j,k)$ for some $i<j<k$. By a \textbf{reduction} on $G_0$, we mean the construction of three new graphs $G_1$, $G_2$ and $G_3$ on $[0,n]$ given by:
\begin{align}
E(G_1)&=E(G_0)\backslash \{(j,k)\}\cup \{(i,k)\}\nonumber \\
E(G_2)&=E(G_0)\backslash \{(i,j)\}\cup \{(i,k)\} \label{reducing} \\
E(G_3)&=E(G_0)\backslash \{(i,j),\,(j,k)\}\cup \{(i,k)\} \nonumber
\end{align}
We say $G_0$ \textbf{reduces} to $G_1$, $G_2$ and $G_3$. We also say that the above reduction is at vertex $j$, on the edges $(i,j)$ and $(j,k)$.
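For example, if $G_0$ is the path on $[0,2]$ with $E(G_0)=\{(0,1),(1,2)\}$, then the unique possible reduction, at vertex $1$ on the edges $(0,1)$ and $(1,2)$, produces
\[E(G_1)=\{(0,1),(0,2)\},\qquad E(G_2)=\{(1,2),(0,2)\},\qquad E(G_3)=\{(0,2)\},\]
and none of $G_1$, $G_2$, $G_3$ admits any further reduction.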
\begin{proposition}
\label{subdivisionlemma}
Let $G_0$ be a graph on $[0,n]$ which reduces to $G_1$, $G_2$ and $G_3$ as above. Then for each $m\in[3]$, there is a polytope $Q_m$ integrally equivalent to $\mathcal{F}_{\widetilde{G}_m}$ such that $Q_1$ and $Q_2$ subdivide $\mathcal{F}_{\widetilde{G}_0}$ and intersect in $Q_3$. That is, the polytopes $Q_1$, $Q_2$, and $Q_3$ satisfy
\[\mathcal{F}_{\widetilde{G}_0} = Q_1 \cup Q_2 \mbox{ with } Q_1^o\cap Q_2^o= \emptyset \mbox{ and } Q_1\cap Q_2= Q_3.\]
Moreover, $Q_1$ and $Q_2$ have the same dimension as $\mathcal{F}_{\widetilde{G}_0}$ and $Q_3$ has dimension one less.
\end{proposition}
\begin{proof}
Let $r_1$ and $r_2$ denote the edges of $G_0$ from $i$ to $j$ and from $j$ to $k$ respectively that were used in the reduction. Viewing $\mathbb{R}^{\#E(\widetilde{G}_0)}$ as functions $f:E(\widetilde{G}_0)\to \mathbb{R}$, cut $\mathcal{F}_{\widetilde{G}_0}$ with the hyperplane $H$ defined by the equation $f(r_1)=f(r_2)$. Let $Q_1$ be the intersection of $\mathcal{F}_{\widetilde{G}_0}$ with the positive half-space $f(r_1)\geq f(r_2)$, let $Q_2$ be the intersection of $\mathcal{F}_{\widetilde{G}_0}$ with the negative half-space $f(r_1)\leq f(r_2)$, and let $Q_3$ be the intersection of $\mathcal{F}_{\widetilde{G}_0}$ with the hyperplane $H$.
See Figure \ref{subdivlemmaproofpic} for an illustration of the integral equivalence between $Q_m$ and $\mathcal{F}_{\widetilde{G}_m}$. Notice that since we are doing the reductions on the edges of $G_0$ (as opposed to on the edges incident to the source or sink in $\widetilde{G}_0$), it follows that the hyperplane $H$ meets $\mathcal{F}_{\widetilde{G}_0}$ in its interior, giving the claims on the dimensions of $Q_m$, $m \in [3]$. \end{proof}
\begin{figure}
\includegraphics[]{Pictures/subdivlemmaproof}
\caption{An illustration of the integral equivalence between $Q_m$ and $\mathcal{F}_{\widetilde{G}_m}$ for $m\in[3]$ used in Proposition \ref{subdivisionlemma}.}
\label{subdivlemmaproofpic}
\end{figure}
Iterating this subdivision process produces a dissection of $\mathcal{F}_{\widetilde{G}_0}$ into simplices. This process can be encoded using a reduction tree. A \textbf{reduction tree} of $G$ is constructed as follows. Let the root node of the tree be labeled by $G$. If a node has any children, then it has three children obtained by performing a reduction on that node and labeling the children with the graphs defined in (\ref{reducing}). Continue this process until the graphs labeling the leaves of the tree cannot be reduced. See Figure \ref{reductiontree} for an example.
Fix a reduction tree $\mathcal{R}(G)$ of $G$. Let $L$ be a graph labeling one of the leaves in $\mathcal{R}(G)$. Lemma \ref{contraction} implies that $\mathcal{F}_{\widetilde{L}}$ is a simplex, so the flow polytopes of the graphs labeling the leaves of $\mathcal{R}(G)$ dissect $\mathcal{F}_{\widetilde{G}}$ into simplices. All dissections we consider in this paper will be dissections into simplices. By \textbf{full-dimensional leaves} of $\mathcal{R}(G)$, we mean the leaves $L$ with $\#E(L)=\#E(G)$. By \textbf{lower-dimensional leaves} we mean all other leaves $L$ of $\mathcal{R}(G)$. Note that the full-dimensional leaves correspond to top-dimensional simplices in the dissection of $\mathcal{F}_{\widetilde{G}}$, and the lower-dimensional leaves index intersections of the top-dimensional simplices. Since all simplices above are unimodular, it follows that:
\begin{corollary}
The normalized volume of $\mathcal{F}_{\widetilde{G}}$ equals the number of full-dimensional leaves in any reduction tree of $G$. Moreover, the number of leaves with a fixed number of edges is independent of the reduction tree.
\end{corollary}
\begin{figure}[]
\centering
\includegraphics[scale=1]{Pictures/ReductionTreeFull.pdf}
\caption{A reduction tree for a graph on three vertices. The edges involved in each reduction are shown in bold. The left-degree sequences of the leaves are shown in blue.}
\label{reductiontree}
\end{figure}\medskip
\noindent \textbf{Left-degree sequences.} Let $G$ be a graph on $[0,n]$, and let $\mathcal{R}(G)$ be a reduction tree of $G$. Denote by $\mathop{\mathrm{indeg}}_G(i)$ the number of edges directed into vertex $i$. For each leaf $L$ of $\mathcal{R}(G)$, consider the \textbf{left-degree sequence} $\left (\mathop{\mathrm{indeg}}_{L}(1),\,\mathop{\mathrm{indeg}}_{L}(2),\,\ldots,\, \mathop{\mathrm{indeg}}_{L}(n) \right )$. By \textbf{full-dimensional sequences} we will mean left-degree sequences of full-dimensional leaves of $\mathcal{R}(G)$. The following definition is central to this paper.
\begin{definition} \label{def:ld}
Denote by $\mathop{\mathrm{LD}}(G)$ the multiset of left-degree sequences of leaves in a reduction tree of $G$.
\end{definition}
Although the actual leaves of a reduction tree are dependent on the individual reductions performed, we prove in Theorem \ref{theoremA} that $\mathop{\mathrm{LD}}(G)$ is independent of the particular reduction tree considered.
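To see this independence in action on small examples, one can record the left-degree sequences of the leaves produced by the sketch above; shuffling the edge list changes which reducible pair is found first, hence the reduction tree, while the resulting multiset stays the same. The sketch below (again illustrative only) reuses \texttt{reduce\_once}, \texttt{leaves} and \texttt{Counter} from the previous sketch.
\begin{verbatim}
import random

def LD(edges, n):
    # Multiset of left-degree sequences of the leaves of one reduction
    # tree of the graph on [0,n] with the given edge list.
    def left_degrees(leaf):
        indeg = [0] * (n + 1)
        for (_, j) in leaf:
            indeg[j] += 1
        return tuple(indeg[1:])
    return Counter(left_degrees(L) for L in leaves(edges))

E = [(0, 1), (0, 2), (1, 2)]
reference = LD(E, 2)
for _ in range(10):
    random.shuffle(E)                # changes the reduction tree
    assert LD(E, 2) == reference     # the multiset LD(G) does not
\end{verbatim}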
\section{Triangular arrays and left-degree sequences}
\label{sec3}
In this section, we expand the technique described in \cite{prod} that characterized left-degree sequences of full-dimensional leaves in a specific reduction tree of a graph. We give a characterization of the left-degree sequences of all leaves of this reduction tree, not just the full-dimensional ones. This enables us to relate the left-degree sequences to generalized permutahedra in Section \ref{sec4} and to use left-degree sequences and generalized permutahedra to show in Section \ref{sec5} that the Schubert and Grothendieck polynomials have polytopal support. The main theorem of this section is the following. The independence of $\mathop{\mathrm{LD}}(G)$ from the reduction tree was first proved by Grinberg \cite{grinberg} in his study of the subdivision algebra.
\begin{letteredtheorem}
\label{theoremA}
For any graph $G$ on $[0,n]$, the multiset of left-degree sequences $\mathop{\mathrm{LD}}(G)$ in any reduction tree of $G$ equals the multiset of first columns of the arrays in $\mathrm{Sol}_{G}(F)$ as $F$ ranges over all subsets $F\subseteq E(G\backslash 0)$; this multiset is also denoted by $\mathop{\mathrm{InSeq}}(\mathcal{T}(G))$. In particular, $\mathop{\mathrm{LD}}(G)$ is independent of the choice of reduction tree.
\end{letteredtheorem}
\medskip
For simplicity, throughout this section we restrict to the case where $G$ is a simple graph on the vertex set $[0,n]$. The set $\mathrm{Sol}_{G}(F)$ is defined in Definition \ref{solg} for simple graphs. We address the general case in Section \ref{sec6}, where we also prove Theorem~\ref{theoremA}.
We start by generalizing \cite[Lemma 3]{prod} to include the descriptions of the lower-dimensional leaves arising from reductions performed at a special vertex $v$. The proof is a straightforward generalization of that of \cite[Lemma 3]{prod} illustrated in Figure \ref{cornerstonelemmaexample}.
The key to the proof is the \textbf{special reduction order}, whereby we always perform a reduction on the longest edges possible that are incident to the vertex at which we are reducing (the length of an edge being the absolute value of the difference of its vertex labels).
We leave the details of the proof to the interested reader.
\begin{lemma}
\label{cornerstonelemma}
Assume $G$ has a distinguished vertex $v$ with $p$ incoming edges and one outgoing edge $(v,u)$. If we perform all reductions possible which involve only edges incident to $v$ in the special reduction order, then we obtain graphs $H_i$, $i\in [p+1]$, and $K_j$, $j\in[p]$, with $(\mathop{\mathrm{indeg}}_{H_i}(v),\mathop{\mathrm{indeg}}_{H_i}(u))=(p+1-i,\,i)$ and $(\mathop{\mathrm{indeg}}_{K_j}(v),\mathop{\mathrm{indeg}}_{K_j}(u))=(p-j,\,j)$.
\end{lemma}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.8]{Pictures/SpecialReductionTree.pdf}
\end{center}
\caption{The graphs $H_i$ and $K_j$ of Lemma \ref{cornerstonelemma}.}
\label{cornerstonelemmaexample}
\end{figure}
We now construct a specific reduction tree $\mathcal{T}(G)$ and characterize the left-degree sequences of its leaves. Denote by $I_i$ the set of incoming edges to vertex $i$ in $G$. Let $V_i$ be the set of vertices $k$ with $(k,i)\in I_i$, and let $G[0,i]$ be the restriction of $G$ to the vertices $[0,i]$. For any reduction tree $\mathcal{R}(G)$, by $\mathop{\mathrm{InSeq}}(\mathcal{R}(G))$ we mean the multiset of left-degree sequences of the leaves of $\mathcal{R}(G)$. Since we will build $\mathcal{T}(G)$ inductively from $\mathcal{T}(H)$ for smaller graphs $H$, it is convenient to let $\mathop{\mathrm{InSeq}}^n(\mathcal{R}(H))$ denote the multiset $\mathop{\mathrm{InSeq}}(\mathcal{R}(H))$ with each sequence padded on the right with zeros to have length $n$.
We proceed using the following algorithm, analogous to the one described in \cite{prod}:
\begin{itemize}
\item For the base case, define the reduction tree $\mathcal{T}(G[0,1])$ to be the single leaf $G[0,1]$.
Hence, \[\mathrm{InSeq}(\mathcal{T}(G[0,1]))=\{(\mathop{\mathrm{indeg}_G}(1)) \}.\]
\item Having built $\mathcal{T}(G[0,i])$, construct the reduction tree $\mathcal{T}(G[0,i+1])$ from $\mathcal{T}(G[0,i])$ by appending the vertex $i+1$ and the edges $I_{i+1}$ to all graphs in $\mathcal{T}(G[0,i])$ and then performing reductions at each vertex in $V_{i+1}$ on all graphs corresponding to the leaves of $\mathcal{T}(G[0,i])$ in the special reduction order as described below.
\item Let $V_{i+1}=\{i_1< i_2<\cdots < i_k\}$ and let $(s_1,\ldots, s_{n})$ be one of the sequences in $\mathrm{InSeq}^{n}(\mathcal{T}(G[0,i]))$. Applying Lemma \ref{cornerstonelemma} to each of the vertices $i_1,\ldots,i_k$, we see that the leaves of $\mathcal{T}(G[0,i+ 1])$ which are descendants of the graph with $n$-left-degree sequence $(s_1,\ldots, s_{n})$ in $\mathcal{T}(G[0,i])$ will have $n$-left-degree sequences exactly given by
\[
(s_1,\ldots, s_{n})+ v^{i+1}[i_1]+\cdots+v^{i+1}[i_k]
\]
\medskip
\noindent where $v^{i+1}[i_l]\in S_1(i_l)\cup S_2(i_l)$ and $S_1$, $S_2$ are given by:
\medskip
\begin{align*}
S_1(i_l)&=\{(c_1,\ldots,\, c_{n}):\, c_i=0 \mbox{ for } i\notin \{i_l,i+1 \},\, c_{i_l}=1-s, \mbox{ and } c_{i+1}=s \mbox{ for } s\in [s_{i_l}+1]\}, \\
S_2(i_l)&=\{(c_1,\ldots,\, c_{n}):\, c_i=0 \mbox{ for } i\notin \{i_l,i+1 \},\, c_{i_l}=-s, \mbox{ and } c_{i+1}=s \mbox{ for } s\in [s_{i_l}]\}.
\end{align*}
\end{itemize}
\medskip
\begin{definition}
\label{specialreductiontree}
For a simple graph $G$ on $[0,n]$, denote by $\mathcal{T}(G)$ the specific reduction tree constructed using the algorithm described above.
\end{definition}
\begin{definition}
\label{arrdefinition}
To each leaf $L$ of $\mathcal{T}(G)$, associate the triangular array of numbers $\mathop{\mathrm{Arr}}(L)$ given by
\[
\begin{tabular}{llllll}
$a_{n,\,1}$ & $a_{n-1,\,1}$ & $\cdots $&$ a_{3,\,1}$&$ a_{2,\,1}$&$a_{1,\,1}$\\
$a_{n,\,2}$ &$ a_{n-1,\,2}$ &$ \cdots $&$ a_{3,\,2}$&$a_{2,2}$& \\
\hspace{2ex}$\vdots$&\hspace{2ex}$\vdots$&$\Ddots$&& & \\
$a_{n,\,n-1}$&$a_{n-1,\, n-2}$&&&&\\
$a_{n,\, n}$&&&&&
\end{tabular}
\]
\noindent where $(a_{i,\, 1},\, a_{i,\, 2}, \ldots, a_{i,\,i})$ is the left-degree sequence of the leaf of $\mathcal{T}(G[0,i])$ preceding (or equaling if $i=n$) $L$ in the construction of $\mathcal{T}(G)$.
\end{definition}
\begin{theorem}[\cite{prod}, Theorem 4]
\label{arrayconstraints}
The arrays $\mathop{\mathrm{Arr}}(L)$ for full-dimensional leaves $L$ of $\mathcal{T}(G)$ are exactly the nonnegative integer solutions in the variables $\{a_{i,\,j}:\, 1\leq j\leq i\leq n \}$ to the constraints:
\begin{itemize}
\item $a_{1,\,1}= \#E(G[0,1])$
\item $a_{i,\,j}\leq a_{i-1,\,j} \mbox{ if } (j,\,i)\in E(G)$
\item $a_{i,\,j}=a_{i-1,\,j}$ \mbox{ if } $(j,\,i)\notin E(G)$
\item $a_{i,\,i}=\#E(G[0,i]) - \sum_{k=1}^{i-1}{a_{i,\,k}}$
\end{itemize}
\end{theorem}
\medskip
\begin{example}
\label{triangulararrayofconstraintsexample}
If $G$ is the graph on vertex set $[0,4]$ with \newline $E(G)=\{(0,1),\, (0,2),\,(1,2),\, (2,3),\,(2,4),\,(3,4) \}$, then from Theorem \ref{arrayconstraints} we obtain the constraints:
\begin{align*}
&0\leq a_{4,\,1}=a_{3,\,1}= a_{2,\,1}\leq a_{1,\,1}=1\\
&0\leq a_{4,\,2}\leq a_{3,\,2}\leq a_{2,\,2}=3-a_{2,\,1}\\
&0\leq a_{4,\,3}\leq a_{3,\,3}= 4-a_{3,\,1}-a_{3,\,2}\\
&0\leq a_{4,\,4}=6-a_{4,\,1}-a_{4,\,2}-a_{4,\,3}
\end{align*}
The solutions to these constraints yield the full-dimensional left-degree sequences
\\$(a_{4,\,1},a_{4,\,2},a_{4,\,3},a_{4,\,4})$ of $G$.
\end{example}
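Since the constraints of Example \ref{triangulararrayofconstraintsexample} leave only a handful of free values, they can be enumerated by brute force. The following sketch (illustrative only) lists the resulting full-dimensional sequences $(a_{4,\,1},a_{4,\,2},a_{4,\,3},a_{4,\,4})$.
\begin{verbatim}
solutions = []
for a21 in range(2):                    # a41 = a31 = a21 <= a11 = 1
    a22 = 3 - a21
    for a32 in range(a22 + 1):          # 0 <= a32 <= a22
        a33 = 4 - a21 - a32             # using a31 = a21
        for a42 in range(a32 + 1):      # 0 <= a42 <= a32
            for a43 in range(a33 + 1):  # 0 <= a43 <= a33
                a44 = 6 - a21 - a42 - a43
                if a44 >= 0:
                    solutions.append((a21, a42, a43, a44))
print(len(solutions), "full-dimensional left-degree sequences")
\end{verbatim}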
Given a graph $G$, we write the constraints specified in Theorem \ref{arrayconstraints} in the form shown in Example \ref{triangulararrayofconstraintsexample} and call them the \textbf{triangular constraint array} of $G$. We proceed by generalizing triangular constraint arrays to encode the lower-dimensional leaves of $\mathcal{T}(G)$ as well.
\begin{definition}
\label{solg}
Denote by $\mathop{\mathrm{Tri}}_G(\emptyset)$, or when the context is clear, by $\mathop{\mathrm{Tri}}(\emptyset)$, the triangular constraint array of $G$. For each subset $F\subseteq E(G\backslash 0)$ (recall that $G$ is simple in this section), define a constraint array $\mathop{\mathrm{Tri}}(F)$ by modifying $\mathop{\mathrm{Tri}}(\emptyset)$ as follows: for each $(j,\,i)\in F$ and each ordered pair $(m,j)$ with $n\geq m\geq i$, replace each occurrence of $a_{m,\,j}$ by $a_{m,\,j}+1$ and add 1 to the constant at the leftmost edge of row $j$.
Denote by $\mathop{\mathrm{Sol}}_G(F)$, or when the context is clear, by $\mathop{\mathrm{Sol}}(F)$, the collection of all integer solution arrays to the constraints $\mathop{\mathrm{Tri}}(F)$.
\end{definition}
\begin{example}
\label{generalizedarrayexample}
With $G$ as in Example \ref{triangulararrayofconstraintsexample} and $F=\{(2,3),\, (2,4),\,(3,4) \}$, we have
\[ \mathop{\mathrm{Tri}}(F):
\begin{tabular}{l}
0$\leq a_{4,\,1}=a_{3,\,1}= a_{2,\,1}\leq a_{1,\,1}=1$\\
2$\leq a_{4,\,2}+2\leq a_{3,\,2}+1\leq a_{2,\,2}=3-a_{2,\,1}$\\
1$\leq a_{4,\,3}+1\leq a_{3,\,3}= 3-a_{3,\,1}-a_{3,\,2}$\\
0$\leq a_{4,\,4}=3-a_{4,\,1}-a_{4,\,2}-a_{4,\,3}$
\end{tabular}
\]
\end{example}
\noindent The characterization of $\mathop{\mathrm{InSeq}}(\mathcal{T}(G))$ given in the construction of $\mathcal{T}(G)$ implies the following theorem.
\begin{theorem}
\label{multisetequality}
The leaves of $\mathcal{T}(G)$ are in bijection with the multiset union of solutions to the arrays $\mathop{\mathrm{Tri}}(F)$, that is
\[\{\mathop{\mathrm{Arr}}(L):\, L \mbox{ is a leaf of }\mathcal{T}(G)\} = \bigcup_{F\subseteq E(G\backslash 0)} \mathrm{Sol}_G(F).\]
\noindent In particular, $\mathop{\mathrm{InSeq}}(\mathcal{T}(G))$ is the (multiset) image of the right-hand side under the map that takes a triangular array to its first column $\left (a_{n,\,1}, \ldots, a_{n,\, n}\right)$.
\end{theorem}
Since Theorem \ref{theoremA} shows that $\mathop{\mathrm{InSeq}}(\mathcal{R}(G)) = \mathop{\mathrm{LD}}(G)$ for any reduction tree $\mathcal{R}(G)$ of $G$, we can now state the following important definition.
\begin{definition}
\label{ldsequencesfromF}
For any $F\subseteq E(G\backslash 0)$, denote by $\mathop{\mathrm{LD}}(G,F)$ the submultiset of $\mathop{\mathrm{LD}}(G)$ consisting of sequences occurring as the first column of an array in $\mathop{\mathrm{Sol}}(F)$.
\end{definition}
As a consequence of Theorem \ref{multisetequality}, \[\mathop{\mathrm{InSeq}}(\mathcal{T}(G))=\bigcup_{F\subseteq E(G\backslash 0)}\mathop{\mathrm{LD}}(G,F). \]
Combinatorially, we can think of $\mathop{\mathrm{LD}}(G,F)$ in the following way. Construct the reduction tree $\mathcal{T}(G)$ of $G$. Take any graph $H$ appearing as a node of $\mathcal{T}(G)$. Let $H$ have descendants $H_1$, $H_2$ and $H_3$ in $\mathcal{T}(G)$ obtained by the reduction on edges $(i,j)$ and $(j,k)$ in $H$ with $i<j<k$, so that $H_3$ has edge set $\left(E(H)\backslash \{(i,j), (j,k)\}\right) \cup \{(i,k)\}$. Label the edge in $\mathcal{T}(G)$ between $H$ and $H_3$ by $(j,k)$. To each leaf $L$ of $\mathcal{T}(G)$, associate the set of all labels on the edges of the unique path from $L$ to the root $G$ of $\mathcal{T}(G)$. The left-degree sequences of leaves assigned a set $F$ in this manner are exactly the elements of the multiset $\mathop{\mathrm{LD}}(G,F)$.
\begin{figure}[b]
\includegraphics[scale=.9]{Pictures/TriangularArrayTable.pdf}
\caption{A small example demonstrating Theorem \ref{multisetequality}. In general, $\mathop{\mathrm{Sol}}_G(F)$ will be empty for many $F$.}
\end{figure}
To understand the multisets $\mathop{\mathrm{Sol}}(F)$ and $\mathop{\mathrm{LD}}(G,F)$, we study the constraint arrays $\mathop{\mathrm{Tri}}(F)$. We begin by investigating the case where $G=K_{n+1}$ is the complete graph on $[0,n]$. Given $F\subseteq E(K_{n+1}\backslash 0)$, consider the numbers
\begin{align}
\label{fijnumbers}
f_{i,\,j}=\#\{(j,\,k)\in F:\, k\leq i\}.
\end{align}
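These counts are immediate to compute. For instance, the sketch below (with a helper function \texttt{f} of our own naming) reproduces the row constants and shifts appearing in Example \ref{generalizedarrayexample}.
\begin{verbatim}
def f(F, i, j):
    # f_{i,j} = #{(j,k) in F : k <= i}
    return sum(1 for (a, k) in F if a == j and k <= i)

F, n = {(2, 3), (2, 4), (3, 4)}, 4
for j in range(1, n + 1):
    print("row", j, "constant:", f(F, n, j),
          " shifts:", [f(F, i, j) for i in range(j, n + 1)])
\end{verbatim}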
Observe that for each $F\subseteq E(K_{n+1}\backslash 0)$, $\mathop{\mathrm{Tri}}(F)$ is obtained from $\mathop{\mathrm{Tri}}(\emptyset)$ by replacing $a_{i,j}$ in $\mathop{\mathrm{Tri}}(\emptyset)$ by $a_{i,\,j}+f_{i,\,j}$ and replacing the 0 in the leftmost spot of row $j$ by $f_{n,\,j}$. Also note that $f_{j,\,j}=0$ for each $j$. Modify $\mathop{\mathrm{Tri}}(F)$ to obtain a new constraint array denoted $A_{K_{n+1}}(F)$ with the same solutions by subtracting $f_{n,\, j}$ from each term in row $j$ for each $j$, so that the leftmost column becomes all zeros.
For notational compactness, let $b_{i,\,j}=a_{i,\,j}+f_{i,\,j}$. $A_{K_{n+1}}(F)$ is given by
\begin{align*}
&0\leq b_{n,\,1}-f_{n,\,1}\leq \cdots \leq b_{2,\,1} -f_{n,\,1}\leq b_{1,\,1}-f_{n,\,1}= \#E(K_{n+1}[0,1])-f_{n,\,1}\\
&0\leq b_{n,\,2}-f_{n,\,2}\leq \cdots \leq b_{2,\,2}-f_{n,\,2}= \#E(K_{n+1}[0,2])-f_{n,\,2}-b_{2,\,1} \\
&\phantom{0}\phantom{\leq}\vdots \mbox{\hspace{7ex}}\vdots\mbox{\hspace{7ex}}\Ddots \\
&0\leq b_{n,\,n} -f_{n,\,n}= \#E(K_{n+1})-f_{n,\,n}-\displaystyle\sum_{k=1}^{n-1}{b_{n,\,k}}
\end{align*}
Note that the real solution set in variables $\{a_{i,\,j}\}$ to $A_{K_{n+1}}(F)$ is a polytope in $\mathbb{R}^{\binom{n+1}{2}}$. We first show that it is a flow polytope. For any constraint array $A$, denote by $\mathop{\mathrm{Poly}}(A)$ the \textbf{polytope defined by the inequalities in $A$}.
\begin{lemma}
\label{arraysgiveflowpolytopes}
Let $K_{n+1}$ be the complete graph on $[0,n]$. Fix $F\subseteq E(K_{n+1}\backslash 0)$ and let $Q$ be the polytope $Q=\mathop{\mathrm{Poly}}(A_{K_{n+1}}(F))$.
Then, there exists a graph denoted $\mathop{\mathrm{Gr}}(K_{n+1})$ and a netflow vector $\bm{a}_{K_{n+1}}^F$ such that $Q$ is integrally equivalent to $\mathcal{F}_{\mathop{\mathrm{Gr}}(K_{n+1})}\left(\bm{a}_{K_{n+1}}^F\right)$.
\end{lemma}
\begin{proof}
For $\{(i,\,j):\, 1\leq j<i\leq n \}$, we introduce slack variables $z_{i,\,j}$ to convert the inequalities in $A_{K_{n+1}}(F)$ into equations $Y_{i,\,j}$ via
\[
Y_{i,\,j}:\begin{cases}
a_{i,\,j}+f_{i,\,j}+z_{i,\,j}=a_{i-1,\,j}+f_{i-1,\,j} & \text{ if } i>j\\
\displaystyle \sum\limits_{k=1}^{i}{\left(a_{i,\,k}+f_{i,\,k}\right)} = \#E(K_{n+1}[0,i])& \text{ if } i=j.
\end{cases}
\]
\noindent Define an equivalent system of equations $\{Z'_{i,\,j} \}$ by setting
\[
Z'_{i,\,j}:\begin{cases}
Y_{i,\,j} & \text{ if } i>j \mbox{ or } i=j=1\\
Y_{i,\,j} - Y_{i-1,\,j-1} - \displaystyle \sum\limits_{k=1}^{j-1}{Y_{j,\,k}} & \text{ if } i=j>1.
\end{cases}
\]
We then modify each equation $Z'_{i,\,j}$ by rearranging negated terms to get equations $Z_{i,\,j}$ given by
\[
Z_{i,\,j}:\begin{cases}
a_{i,\,j}+z_{i,\,j}=a_{i-1,\,j}+f_{i-1,\,j}-f_{i,\,j} & \text{ if } i>j\\
a_{i,\,j}=\mathop{\indeg_{K_{n+1}}}(1) &\text{ if } i=j=1\\
a_{i,\,j} = \mathop{\indeg_{K_{n+1}}}(j)+ \displaystyle \sum\limits_{k=1}^{j-1}{z_{j,\,k}} & \text{ if } i=j>1
\end{cases}
\]
\noindent where we use that $\#E(K_{n+1}[0,j])-\#E(K_{n+1}[0,j-1])=\mathop{\indeg_{K_{n+1}}}(j)$.
\\
\noindent We now construct the graph $\mathop{\mathrm{Gr}}(K_{n+1})$. Let the vertices of $\mathop{\mathrm{Gr}}(K_{n+1})$ be
\[\{v_{i,\,j}:\, 1\leq j\leq i\leq n \}\cup \{v_{n+1,\,n+1}\}\]
with the ordering $v_{1,\,1}<v_{2,\,1}<\cdots < v_{n,\,1}<v_{2,\,2}<\cdots<v_{n,\,n}<v_{n+1,\,n+1}$.
\noindent Let the edges of $\mathop{\mathrm{Gr}}(K_{n+1})$ be labeled suggestively by the flow variables $a_{i,j}$ and $z_{i,j}$.
Set $E(\mathop{\mathrm{Gr}}(K_{n+1}))=E_a\cup E_z$ where
\begin{align*}
&E_a \mbox{ consists of edges }a_{i,\,j}:v_{i,\,j}\to v_{i+1,\,j} \mbox{ for } 1\leq j \leq i\leq n \mbox{ and }\\
&E_z \mbox{ consists of edges }z_{i,\,j}:v_{i,\,j}\to v_{i,\,i} \mbox{ for } 1\leq j < i\leq n
\end{align*}
and we take indices $(n+1,\,j)$ to refer to $(n+1,\,n+1)$.\\
\noindent To define the netflow vector $\bm{a}_{K_{n+1}}^F$, we assign netflow $\mathop{\indeg_{K_{n+1}}}(j)$ to vertices $v_{j,\,j}$ with $j<n+1$, we assign netflow
\[-\#E(K_{n+1})+\sum_{k=1}^{n-1}{f_{n,\, k}}\]
to $v_{n+1,\,n+1}$, and we assign netflow $f_{i-1,\,j}-f_{i,\,j}$ to each remaining vertex $v_{i,\,j}$.
\\
\noindent The netflow vector $\bm{a}_{K_{n+1}}^F$ is given by reading each row of the triangular array
\[
\begin{tabular}{lllll}
$f_{n-1,\,1}-f_{n,\,1}$ & $f_{n-2,\,1}-f_{n-1,\,1} $&$\hspace{3ex}\cdots $&$f_{1,\,1}-f_{2,\,1}$&$ \mathop{\indeg_{K_{n+1}}}(1)$\\
$f_{n-1,\,2}-f_{n,\,2}$ &$ \hspace{5ex} \cdots $&$f_{2,\,2}-f_{3,\,2} $&$\mathop{\indeg_{K_{n+1}}}(2)$& \\
\hspace{7ex}$\vdots$&$\Ddots$&&& \\
$\mathop{\indeg_{K_{n+1}}}(n)$&&&&\\
\end{tabular}
\]
\noindent right to left starting with the first row, moving top to bottom, and then appending $-\#E(K_{n+1})+\sum_{k=1}^{n-1}{f_{n,\, k}}$ at the end.
\medskip
\noindent By construction, the flow equation at vertex $v_{i,j}$ in $\mathop{\mathrm{Gr}}(K_{n+1})$ is exactly the equation $Z_{i,j}$ for $(i,\,j)\neq (n+1,\,n+1)$. At $v_{n+1,\,n+1}$, the flow equation is $Y_{n,\,n}$, which follows from the equations $Z_{i,\,j}$ and adds no additional restrictions.
\end{proof}
\begin{figure}
\includegraphics[]{Pictures/GraphforK4ArrayDouble.pdf}
\caption{Two drawings of the graph $\mathop{\mathrm{Gr}}(K_{n+1})$ of Lemma \ref{arraysgiveflowpolytopes}. The drawing on the right has the netflow vector $\bm{a}_{K_{n+1}}^\emptyset$.}
\label{completegraph4demonstration}
\end{figure}
We now generalize Lemma \ref{arraysgiveflowpolytopes} to any simple graph $G$ on $[0,n]$. Note that for $F\subseteq E(G\backslash 0)$, $\mathop{\mathrm{Tri}}_G(F)$ can be obtained from $\mathop{\mathrm{Tri}}_{K_{n+1}}(F)$ by turning certain inequalities into equalities and changing all occurrences of $\#E(K_{n+1}[0,j])$ to $\#E(G[0,j])$ for each $j$. In the language of the proof of Lemma \ref{arraysgiveflowpolytopes}, this amounts to setting $z_{i,\,j}=0$ whenever $(j,\,i)\notin E(G)$. Relative to the graph $\mathop{\mathrm{Gr}}(K_{n+1})$, this is equivalent to deleting the edges labeled $z_{i,\,j}$ for $(j,\,i)\notin E(G)$. Thus, we have the following extension of the construction given in the proof of Lemma \ref{arraysgiveflowpolytopes}.
\begin{definition}
\label{tagsimplefinalversion}
For a simple graph $G$ on $[0,n]$ define a graph $\mathop{\mathrm{Gr}}(G)$ on vertices
\[\{v_{i,\,j}:\, 1\leq j\leq i\leq n \}\cup \{v_{n+1,\,n+1}\}\]
ordered $v_{1,\,1}<v_{2,\,1}<\cdots < v_{n,\,1}<v_{2,\,2}<\cdots<v_{n,\,n}<v_{n+1,\,n+1}$ and with edges
$E_a\cup E_z$ where
\begin{align*}
&E_a \mbox{ consists of edges }a_{i,\,j}:v_{i,\,j}\to v_{i+1,\,j} \mbox{ for } 1\leq j \leq i\leq n \mbox{ and }\\
&E_z \mbox{ consists of edges }z_{i,\,j}:v_{i,\,j}\to v_{i,\,i} \mbox{ for } (j,\,i)\in E(G\backslash 0).
\end{align*}
For any $F\subseteq E(G\backslash 0)$, we define a netflow vector $\bm{a}_G^F$ for $\mathop{\mathrm{Gr}}(G)$ by reading each row of the triangular array
\[
\begin{tabular}{lllll}
$f_{n-1,\,1}-f_{n,\,1}$ & $f_{n-2,\,1}-f_{n-1,\,1} $&$\hspace{3ex}\cdots $&$f_{1,\,1}-f_{2,\,1}$&$ \mathop{\mathrm{indeg}_G}(1)$\\
$f_{n-1,\,2}-f_{n,\,2}$ &$ \hspace{5ex} \cdots $&$f_{2,\,2}-f_{3,\,2} $&$\mathop{\mathrm{indeg}_G}(2)$& \\
\hspace{7ex}$\vdots$&$\Ddots$&&& \\
$\mathop{\mathrm{indeg}_G}(n)$&&&&\\
\end{tabular}
\]
\noindent right to left starting with the first row, moving top to bottom, and then appending \\$-\#E(G)+\sum_{k=1}^{n-1}{f_{n,\, k}}$ at the end, where $f_{i,\,j}=\#\{(j,\,k)\in F:\, k\leq i\}$.
\end{definition}
\begin{proposition}
\label{tapisflowpolytope}
Let $G$ be a simple graph on $[0,n]$ and $F\subseteq E(G\backslash 0)$. Then,
$\mathop{\mathrm{Poly}}(\mathop{\mathrm{Tri}}_G(F))$ is integrally equivalent to $\mathcal{F}_{\mathop{\mathrm{Gr}}(G)}(\bm{a}_G^F)$. In particular, the multiset of solutions $\mathop{\mathrm{Sol}}(F)$ to $\mathop{\mathrm{Tri}}(F)$ consists precisely of the projections of integral flows on $\mathop{\mathrm{Gr}}(G)$ with netflow $\bm{a}_G^F$ onto the edges labeled $\{a_{i,\,j}\}$.
\end{proposition}
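As a sanity check on Definition \ref{tagsimplefinalversion} and Proposition \ref{tapisflowpolytope}, the following sketch (reusing the helper \texttt{f} above; all other names are ours) assembles the edges of $\mathop{\mathrm{Gr}}(G)$ and the netflow vector $\bm{a}_G^F$ for a small example and verifies that the netflow entries sum to zero, as any netflow vector must.
\begin{verbatim}
def Gr(E, n, F):
    # Edges of Gr(G) and the netflow vector a_G^F, following the
    # definition above; vertices are the pairs (i,j) and (n+1,n+1).
    nxt = lambda i, j: (i + 1, j) if i < n else (n + 1, n + 1)
    Ea = [((i, j), nxt(i, j)) for i in range(1, n + 1)
                              for j in range(1, i + 1)]
    Ez = [((i, j), (i, i)) for (j, i) in E if 1 <= j < i]
    indeg = lambda v: sum(1 for (_, w) in E if w == v)
    net = {(j, j): indeg(j) for j in range(1, n + 1)}
    for j in range(1, n):
        for i in range(j + 1, n + 1):
            net[(i, j)] = f(F, i - 1, j) - f(F, i, j)
    net[(n + 1, n + 1)] = -len(E) + sum(f(F, n, k) for k in range(1, n))
    return Ea + Ez, net

E = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
edges, net = Gr(E, 4, {(2, 3)})
assert sum(net.values()) == 0     # netflow entries must sum to zero
\end{verbatim}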
\begin{example}
Let $G$ be the graph on vertex set $[0,4]$ with edge set \newline $E(G)=\{(0,1),\, (0,2),\,(1,2),\, (2,3),\,(2,4),\,(3,4) \}$ and $F=\{(2,\,3)\}$. The graph $\mathop{\mathrm{Gr}}(G)$ and netflow $\bm{a}_G^F$ are:\\
\begin{figure}[ht]
\includegraphics[scale=.9]{Pictures/ContractionExample.pdf}
\end{figure}
Observe that contracting the edges $\{a_{1,\,1},\, a_{2,\,1},\, a_{3,\,1},\, a_{2,\,2},\, a_{3,\,2},\, a_{3,\,3} \}$ yields the graph below,
\begin{figure}[ht]
\includegraphics[scale=1]{Pictures/ContractionExample2.pdf}
\end{figure}
\noindent which is exactly $\widetilde{G}\backslash \{s,\,0\}$. The next result shows that this occurs in general.
\end{example}
\medskip
For a graph $G$ and a subset $F\subseteq E(G\backslash 0)$, view $F$ as a subgraph of $G$ on the same vertex set. Note that for each $j$,
\[f_{n,\,j}=\#\{(j,\,k)\in F:\, k\leq n\}=\mathop{\mathrm{outdeg_F}}(j)\]
and the number
\[-\#E(G)+\sum_{k=1}^{n-1}{f_{n,\, k}}\]
appearing as the last entry of $\bm{a}_G^F$ equals $-\#E(G\backslash F)$.
\begin{theorem}
\label{isomorphism}
Let $G$ be a simple graph on $[0,n]$ and $F\subseteq E(G\backslash 0)$. Then, the flow polytopes
\[\mathcal{F}_{\mathop{\mathrm{Gr}}(G)}\left (\bm{a}_G^F \right ) \mbox{ and } \mathcal{F}_{\widetilde{G}\backslash\{s,0\}}\left (d_1^F,d_2^F,\cdots, d_n^F, -\#E(G\backslash F) \right )\] are integrally equivalent, where $d_j^F=\mathop{\mathrm{indeg}}_G(j)-\mathop{\mathrm{outdeg_F}}(j)$ for $j\in[n]$.
\end{theorem}
\begin{proof}
First, note that in $\mathop{\mathrm{Gr}}(G)$, the edges $\{a_{i,j}:\, i<n\}$ are each the only edges incoming to their target vertex. Contracting these edges via Lemma \ref{contraction} identifies vertices $v_{i,\,j}$ and $v_{i',\,j}$. Label the representative vertices $v_{j,\,j}$ by $j$ for $j\in [n]$ and $v_{n+1,\,n+1}$ by $t$. The remaining edges are
\[z_{i,j}:j\to i \mbox{ for } (j, i)\in E(G) \mbox{ and } a_{n,\,j}: j\to t \mbox{ for } j\in [n],\]
\noindent which, are exactly the edges of $\widetilde{G}- \{s,0\}$.
\\
\noindent Viewing the netflow vector $\bm{a}_G^F$ as the array
\[
\begin{tabular}{lllll}
$f_{n-1,\,1}-f_{n,\,1}$ & $f_{n-2,\,1}-f_{n-1,\,1} $&$\hspace{3ex}\cdots $&$f_{1,\,1}-f_{2,\,1}$&$ \mathop{\mathrm{indeg}_G}(1)$\\
$f_{n-1,\,2}-f_{n,\,2}$ &$ \hspace{5ex} \cdots $&$f_{2,\,2}-f_{3,\,2} $&$\mathop{\mathrm{indeg}_G}(2)$& \\
\hspace{7ex}$\vdots$&$\Ddots$&&& \\
$\mathop{\mathrm{indeg}_G}(n)$&&&&\\
$-\#E(G\backslash F),$&&&&
\end{tabular}
\]
Lemma \ref{contraction} implies the entries of the netflow vector after contracting are given by reading the sums of each row from top to bottom.
\end{proof}
Recall from Definition \ref{ldsequencesfromF} that $\mathop{\mathrm{LD}}(G,F)$ is the multiset of left-degree sequences occurring as the first column $\left (a_{n,\,1}, \ldots, a_{n,\, n}\right)$ of an array in $\mathop{\mathrm{Sol}}(F)$.
\begin{corollary}
\label{flows}
Let $G$ be a simple graph on $[0,n]$ and $F\subseteq E(G\backslash 0)$. If $\bm{b}_G^F$ is the vector \[\bm{b}_G^F=\left (\mathop{\mathrm{indeg}_G}(1)-\mathop{\mathrm{outdeg_F}}(1), \ldots , \, \mathop{\mathrm{indeg}_G}(n)-\mathop{\mathrm{outdeg_F}}(n), -\#E(G\backslash F) \right )\]
and $\psi$ is the map that takes a flow on $\widetilde{G}\backslash \{s,0\}$ to the tuple of its values on the edges $\{(j,t):\,j\in[n] \}$, then
$\mathop{\mathrm{LD}}(G,F)$ equals the (multiset) image under $\psi$ of all integral flows on $\widetilde{G}\backslash \{s,0\}$ with netflow vector $\bm{b}_G^F$.
\noindent In particular, $\mathop{\mathrm{LD}}(G,F)$ is in bijection with integral flows on $\widetilde{G}\backslash \{s, 0\}$ with netflow $\bm{b}_G^F$.
\end{corollary}
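Corollary \ref{flows} is effective: for small graphs, $\mathop{\mathrm{LD}}(G,F)$ can be computed by enumerating integral flows directly. The sketch below is one naive way to do so (all names are ours): since every edge of $\widetilde{G}\backslash\{s,0\}$ goes from an earlier vertex to a later one, the flows can be enumerated by distributing, at each vertex in order, the available supply over its outgoing edges.
\begin{verbatim}
from collections import Counter

def compositions(total, parts):
    # All ways to write `total` as an ordered sum of `parts` nonnegatives.
    if parts == 1:
        yield (total,)
    else:
        for first in range(total + 1):
            for rest in compositions(total - first, parts - 1):
                yield (first,) + rest

def integral_flows(vertices, edges, b):
    # Nonnegative integral flows on a graph whose edges (u,v) have u
    # before v in `vertices`; b[v] = outflow(v) - inflow(v).  Yields
    # tuples of flow values parallel to `edges`.
    out = {v: [k for k, e in enumerate(edges) if e[0] == v]
           for v in vertices}
    def rec(i, inflow, flow):
        if i == len(vertices):
            yield tuple(flow)
            return
        v = vertices[i]
        supply = b[v] + inflow[v]
        if not out[v]:
            if supply == 0:
                yield from rec(i + 1, inflow, flow)
        elif supply >= 0:
            for comp in compositions(supply, len(out[v])):
                inflow2, flow2 = dict(inflow), list(flow)
                for k, fe in zip(out[v], comp):
                    inflow2[edges[k][1]] += fe
                    flow2[k] = fe
                yield from rec(i + 1, inflow2, flow2)
    yield from rec(0, {v: 0 for v in vertices}, [0] * len(edges))

# LD(G,F) for G on [0,4] with the edges below and F = {(2,3)}:
n, E = 4, [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
F, t = {(2, 3)}, 5                      # t plays the role of the sink
inner = [e for e in E if e[0] >= 1]     # the edges of G\0
edges = inner + [(j, t) for j in range(1, n + 1)]
verts = list(range(1, n + 1)) + [t]
indeg = lambda v: sum(1 for (_, w) in E if w == v)
outF = lambda v: sum(1 for (u, _) in F if u == v)
b = {j: indeg(j) - outF(j) for j in range(1, n + 1)}
b[t] = -(len(E) - len(F))               # = -#E(G\F)
LDGF = Counter(fl[len(inner):] for fl in integral_flows(verts, edges, b))
\end{verbatim}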
We note that the preceding result implies a formula for the Ehrhart polynomial of flow polytopes of graphs with special source and sink vertices. In particular, a special case of Theorem \ref{pstheorem} follows readily.
\begin{theorem}
\label{pstheoremaltproof}
Let $G$ be a simple graph on $[0,n]$ and let $d_i=\mathop{\mathrm{indeg}}_G(i)$. Then, the normalized volume of the flow polytope on $\widetilde{G}$ is
\begin{equation} \label{eq:vol} \mathrm{Vol} \,\,\mathcal{F}_{\widetilde{G}} = K_{\widetilde{G}\backslash \{s,0\}} \left (d_1,\, \ldots ,\, d_n, \, -\# E(G) \right ). \end{equation}
Moreover, the Ehrhart polynomial of $\mathcal{F}_{\widetilde{G}}$ is
\begin{equation} \label{eq:ehr}
\operatorname {Ehr}(\mathcal{F}_{\widetilde{G}},t) = (-1)^{d}\sum_{i=0}^{d} (-1)^i \left (\mbox{\hspace{1.5ex}}\sum_{\mathclap{\substack{F\subseteq E(G\backslash 0)\\ \#F=d-i}}} K_{\widetilde{G}\backslash \{s,0\}} \left (\bm{b}_G^F \right ) \right ) {t+i \choose i}, \end{equation} where $\bm{b}_G^F= (\mathop{\mathrm{indeg}_G}(1)-\mathop{\mathrm{outdeg_F}}(1), \ldots , \, \mathop{\mathrm{indeg}_G}(n)-\mathop{\mathrm{outdeg_F}}(n), -\#E(G\backslash F))$ and $d=\#E(\widetilde{G})-\#V(\widetilde{G})+1$ is the dimension of $\mathcal{F}_{\widetilde{G}}$.
\end{theorem}
\begin{proof}
From the dissection of $\mathcal{F}_{\widetilde{G}}$ obtained via the reduction tree $\mathcal{T}(G)$, it follows that $\mathrm{Vol} \,\,\mathcal{F}_{\widetilde{G}}$ is the number of full-dimensional left-degree sequences. By Corollary \ref{flows}, these are in bijection with the integer points in the flow polytope $\mathcal{F}_{\widetilde{G}\backslash \{s,0\}}\left (d_1,\ldots, d_{n}, -\#E(G) \right )$, proving \eqref{eq:vol}.
To prove \eqref{eq:ehr} note that $\mathcal{F}_{\widetilde{G}}^\circ= \bigsqcup _{\sigma^\circ \in D_{\mathcal{T}(G)}} \sigma^\circ$, where $D_{\mathcal{T}(G)}$ is the set of open simplices corresponding to the leaves of the reduction tree $\mathcal{T}(G)$. Then,
$ \operatorname {Ehr}(\mathcal{F}_{\widetilde{G}}^\circ, t)=\sum _{\sigma^\circ \in D_{\mathcal{T}(G)}} \operatorname {Ehr}({\sigma^\circ}, t)$. Since all simplices in $D_{\mathcal{T}(G)}$ are unimodular, it follows that for a $k$-dimensional simplex $\sigma^\circ \in D_{\mathcal{T}(G)}$, $ \operatorname {Ehr}({\sigma^\circ}, t)=\operatorname {Ehr}({\Delta^\circ}, t)$, where $\Delta$ is the standard $k$-simplex. By \cite[Theorem 2.2]{br} $\operatorname {Ehr}({\Delta^\circ}, t)= {t-1 \choose k}.$
Thus,
$ \operatorname {Ehr}(\mathcal{F}_{\widetilde{G}}^\circ, t)=\sum_{i=0}^{\infty} f_i {t-1 \choose i},$ where $f_i$ is the number of $i$-simplices in $D_{\mathcal{T}(G)}$. If we let $d=\#E(\widetilde{G})-\#V(\widetilde{G})+1$, which is the dimension of $\mathcal{F}_{\widetilde{G}}$, then for $i\in [0,d]$,
\[f_i=\sum_{\mathclap{\substack{F\subseteq E(G\backslash 0)\\ \#F=d-i}}} \#\mathop{\mathrm{LD}}(G, F).\]
Corollary \ref{flows} then implies
\[f_i= \sum_{\mathclap{\substack{F\subseteq E(G\backslash 0)\\ \#F=d-i}}} K_{\widetilde{G}\backslash \{s,0\}} \left (\bm{b}_G^F \right ) \text{ for } i \in [0,d].\]
Therefore,
\[ \operatorname {Ehr}(\mathcal{F}_{\widetilde{G}}^\circ, t)=\sum_{i=0}^{d} \left (\mbox{\hspace{1.5ex}}\sum_{\mathclap{\substack{F\subseteq E(G\backslash 0)\\ \#F=d-i}}} K_{\widetilde{G}\backslash \{s,0\}} \left (\bm{b}_G^F \right )\right ) {t-1 \choose i}.\]
\noindent From the Ehrhart-Macdonald reciprocity \cite[Theorem 4.1]{br} \[\operatorname {Ehr}(\mathcal{F}_{\widetilde{G}},t)=(-1)^{d} \operatorname {Ehr}(\mathcal{F}_{\widetilde{G}}^{\circ},-t),\]
it follows that
\begin{align*}
\operatorname {Ehr}(\mathcal{F}_{\widetilde{G}},t)&=(-1)^{d}\sum_{i=0}^{d} \left (\mbox{\hspace{1.5ex}}\sum_{\mathclap{\substack{F\subseteq E(G\backslash 0)\\ \#F=d-i}}} K_{\widetilde{G}\backslash \{s,0\}} \left (\bm{b}_G^F \right )\right ) {-t-1 \choose i}\\
&=(-1)^{d}\sum_{i=0}^{d} (-1)^i \left (\mbox{\hspace{1.5ex}}\sum_{\mathclap{\substack{F\subseteq E(G\backslash 0)\\ \#F=d-i}}} K_{\widetilde{G}\backslash \{s,0\}} \left (\bm{b}_G^F \right ) \right ) {t+i \choose i}.
\end{align*}
\end{proof}
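For small examples, \eqref{eq:vol} can be evaluated with the flow enumerator from the sketch following Corollary \ref{flows}, since the Kostant partition function is by definition a count of integral flows.
\begin{verbatim}
# Reusing integral_flows, verts, edges, indeg, t, n and E from the
# previous sketch (the set F plays no role in the volume):
b0 = {j: indeg(j) for j in range(1, n + 1)}
b0[t] = -len(E)
volume = sum(1 for _ in integral_flows(verts, edges, b0))
print("normalized volume of the flow polytope of G-tilde:", volume)
\end{verbatim}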
\section{Newton polytopes of left-degree polynomials}
\label{sec4}
In this section, we study the Newton polytopes of polynomials $L_G(\bm{t})$ built from left-degree sequences (see Definition \ref{ldpolynomialdefinition}). We first show that each of these polynomials has polytopal support (Definition \ref{polsup}). Then, we investigate the Newton polytopes of their homogeneous components and certain homogeneous subcomponents and prove that these Newton polytopes are generalized permutahedra. We can summarize some of our results as:
\begin{letteredtheorem}
\label{theoremB}
Let $G$ be a graph on $[0,n]$. Then the left-degree polynomial $L_G(\bm{t})$ has polytopal support, and the Newton polytope of each homogeneous component $L_G^k(\bm{t})$ of $L_G(\bm{t})$ of degree $\#E(G)-k$ is a generalized permutahedron.
\end{letteredtheorem}
\medskip
Theorems \ref{newtonofldpolynomial}, \ref{cap} and \ref{ldhomogeneouspiecesnewton} imply Theorem \ref{theoremB}, and also contain a lot more detail regarding the players in Theorem~\ref{theoremB}.
\begin{definition} \label{polsup}
Recall that for a polynomial $f=\displaystyle \sum_{\alpha\in \mathbb{Z}_{\geq 0}^n}{c_{\alpha}}\bm{t}^{\alpha}$, the \textbf{Newton polytope} is
\[\mathrm{Newton}(f)=\mathop{\mathrm{Conv}}\left( \left \{ \alpha:\, c_{\alpha}\neq 0 \right \} \right).\]
\noindent We say the polynomial $f$ has \textbf{polytopal support} if $c_{\alpha}\neq 0$ for every lattice point $\alpha\in \mathrm{Newton}(f)$, that is, if the integer points of $\mathrm{Newton}(f)$ are exactly the exponents of monomials appearing in $f$ with nonzero coefficients.
\end{definition}
The question of when a polynomial has polytopal support is a very natural one, and has recently been investigated for various polynomials from algebra and combinatorics by Monical, Tokcan and Yong in \cite{newtonalgcom}, who refer to this notion as the SNP property (saturated Newton polytope property).
Recall from Definition \ref{ldsequencesfromF} that for a simple graph $G$ and a subset $F\subseteq E(G\backslash 0)$, $\mathop{\mathrm{LD}}(G,F)$ denotes the submultiset of $\mathop{\mathrm{LD}}(G)$ consisting of sequences occurring as the first column of an array in $\mathop{\mathrm{Sol}}(F)$. Just as in Section \ref{sec3}, for the remainder of this section we add the simplifying assumption that $G$ has no multiple edges. All of the results of this section are also valid for graphs with multiple edges, with similar proof and notation modifications to those described in Section \ref{sec6}.
\begin{definition}
\label{ldpolynomialdefinition}
Let $G$ be a graph on $[0,n]$. For $\alpha\in\mathop{\mathrm{LD}}(G)$, let $\mathrm{codim}(\alpha)=\#E(G)-\sum_{i=1}^{n}{\alpha_i}$. Define the \textbf{left-degree polynomial} $L_G(\bm{t})$ in variables $\bm{t}=(t_1,t_2,\ldots, t_n )$ by
\begin{align*}
L_G(\bm{t})=\sum_{\alpha \in \mathop{\mathrm{LD}}(G)}(-1)^{\mathrm{codim}(\alpha)}\bm{t}^{\alpha}.
\end{align*}
Similarly, for $F\subseteq E(G\backslash 0)$, define $L_{G,F}(\bm{t})$ by
\begin{align*}
L_{G,F}(\bm{t})=\sum_{\alpha \in \mathop{\mathrm{LD}}(G,F)}(-1)^{\mathrm{codim}(\alpha)}\bm{t}^{\alpha}=\sum_{\alpha \in \mathop{\mathrm{LD}}(G,F)}(-1)^{\#F}\bm{t}^{\alpha}.
\end{align*}
\end{definition}
Note that the $(-1)^{\mathrm{codim}(\alpha)}$ in Definition \ref{ldpolynomialdefinition} has no effect on the Newton polytope. It is present so the definition of the left-degree polynomial agrees with the definition of right-degree polynomials utilized in \cite{pipe1} that we address in Section \ref{sec5}.
\noindent Restating Theorem \ref{multisetequality} in terms of left-degree sequences gives the multiset union decomposition
\[\mathop{\mathrm{LD}}(G)=\bigcup_{F\subseteq E(G\backslash 0)}\mathop{\mathrm{LD}}(G,F). \]
Relative to Newton polytopes, this implies
\begin{align}
\mathrm{Newton}(L_G(\bm{t})) = \mathop{\mathrm{Conv}}\left (\bigcup_{F\subseteq E(G\backslash 0)}\mathrm{Newton}\left (L_{G,F}(\bm{t})\right ) \right ).
\end{align}
We first study the polytope $\mathrm{Newton}(L_G(\bm{t}))$ and then the component pieces $\mathrm{Newton}\left (L_{G,F}(\bm{t})\right )$. To start, we define a new constraint array.
\begin{definition}
Let $G$ be a simple graph on $[0,n]$. Proceed as follows:
\begin{itemize}
\item Start with the triangular constraint array $\mathop{\mathrm{Tri}}_G(\emptyset)$ of $G$ as in Theorem \ref{arrayconstraints}.
\item Replace the zero on the left of row $j$ by $y_{n,\,j}+y_{n-1,\,j}+\cdots + y_{j+1,\,j}$ for $j\in [n-1]$, so the zero on the left in row $n$ is left unchanged.
\item For each $(i,\,j)$ with $n\geq i>j\geq 1$, replace all occurrences of $a_{i,\,j}$ in the array by $a_{i,\,j}+\sum_{k=j+1}^i y_{k,\,j}$.
\item For every $(j,i)\notin E(G\backslash 0)$, set $y_{i,j}=0$ throughout.
\end{itemize}
We refer to this array as the \textbf{augmented constraint array of} $G$ and view it as having variables $a_{i,\,j}$ and $y_{i,\,j}$ subject to the additional constraints that for all $1\leq j<i\leq n$,
\[0\leq y_{i,\,j}\leq 1.\]
\end{definition}
\begin{example}
If $G$ is the graph on vertex set $[0,4]$ with \newline $E(G)=\{(0,1),\, (0,2),\,(1,2),\, (2,3),\,(2,4),\,(3,4) \}$, then we start with the constraints:
\begin{align*}
0&\leq a_{4,\,1}=a_{3,\,1}= a_{2,\,1}\leq a_{1,\,1}=1\\
0&\leq a_{4,\,2}\leq a_{3,\,2}\leq a_{2,\,2}=3-a_{2,\,1}\\
0&\leq a_{4,\,3}\leq a_{3,\,3}= 4-a_{3,\,1}-a_{3,\,2}\\
0&\leq a_{4,\,4}=6-a_{4,\,1}-a_{4,\,2}-a_{4,\,3}
\end{align*}
After performing the modifications, we arrive at:
\begin{align*}
y_{2,\,1}&\leq a_{4,\,1}+y_{2,\,1}=a_{3,\,1}+y_{2,\,1}= a_{2,\,1}+y_{2,\,1}\leq a_{1,\,1}=1\\
y_{4,\,2}+y_{3,\,2}&\leq a_{4,\,2}+y_{4,\,2}+y_{3,\,2}\leq a_{3,\,2}+y_{3,\,2}\leq a_{2,\,2}=3-a_{2,\,1}-y_{2,\,1}\\
y_{4,\,3}&\leq a_{4,\,3}+y_{4,\,3}\leq a_{3,\,3}= 4-a_{3,\,1}-y_{2,\,1}-a_{3,\,2}-y_{3,\,2}\\
0&\leq a_{4,\,4}=6-a_{4,\,1}-y_{2,\,1}-a_{4,\,2}-y_{4,\,2}-y_{3,\,2}-a_{4,\,3}-y_{4,\,3}
\end{align*}
\end{example}
\begin{theorem}
\label{newtonofldpolynomial}
Let $A$ denote the augmented constraint array of $G$ and $\mathop{\mathrm{Poly}}(A)$ the polytope defined by the real-valued solutions to $A$ with the additional constraints $0\leq y_{i,\,j}\leq 1$ for all $i$ and $j$ with $1\leq j<i\leq n$. If $\rho$ is the projection that maps a solution of $A$ to its values $(a_{n,\,1},\,\ldots,\,a_{n,\,n})$, then
\begin{align*}
\mathrm{Newton}(L_G(\bm{t}))=\rho \left ( \mathop{\mathrm{Poly}}(A) \right ).
\end{align*}
Furthermore, each integer point in the right-hand side is in $\mathop{\mathrm{LD}}(G)$, so $L_G$ has polytopal support.
\end{theorem}
For the proof of Theorem \ref{newtonofldpolynomial} and later Theorem \ref{ldpolynomialhomogeneousnewton}, it is convenient to replace $\mathop{\mathrm{Poly}}(A)$ by an integrally equivalent flow polytope using the proof techniques from Lemma \ref{arraysgiveflowpolytopes} and Theorem \ref{isomorphism}. Begin with the case where $G$ is a complete graph. By introducing slack variables $z_{i,\,j}$ for the inequalities in the augmented constraint array (not $0\leq y_{i,\,j}\leq 1$), we get equations $Y_{i,\,j}$ given by
\begin{align*}
Y_{i,\,j}:\begin{cases}
a_{i,\,j}+y_{i,\,j}+z_{i,\,j}=a_{i-1,\,j} &\mbox{ if } i>j\\
a_{i,\,j}=\#E(G[0,1]) &\mbox{ if } i=j=1\\
\displaystyle\sum_{k=1}^{i}{a_{i,\,k}}+ \sum_{m=2}^{i}{\sum_{k=1}^{m-1}{y_{m,\,k}}} = \#E(G[0,i]) &\mbox{ if } i=j>1
\end{cases}
\end{align*}
Applying the exact same transformation used in the proof of Lemma \ref{arraysgiveflowpolytopes}, we get equivalent equations $Z_{i,\,j}$ given by
\begin{align*}
Z_{i,\,j}:
\begin{cases}
a_{i,\,j}+y_{i,\,j}+z_{i,\,j}=a_{i-1,\,j} &\mbox{ if } i>j\\
a_{i,\,j}=\mathop{\mathrm{indeg}_G}(1) &\mbox{ if } i=j=1\\
a_{i,\,j} = \mathop{\mathrm{indeg}_G}(i)+ \displaystyle \sum\limits_{k=1}^{i-1}{z_{i,\,k}} &\mbox{ if } i=j>1
\end{cases}
\end{align*}
To move from the complete graph to any simple graph, just set $y_{i,\,j}=0$ and $z_{i,\,j}=0$ whenever $(j,i)\notin E(G)$. We can realize the solutions to the $Z_{i,\,j}$ as points in a flow polytope of some graph. However, to account for the additional restrictions $0\leq y_{i,\,j}\leq 1$, we view it as a \textbf{capacitated flow polytope}. This is for convenience and is not mathematically significant since any capacitated flow polytope is integrally equivalent to an uncapacitated flow polytope \cite[Lemma 1]{cap}.
\begin{definition}
\label{augmentedarraygraph}
Define the \textbf{augmented constraint graph} $\mathop{\mathrm{Gr}^{\mathrm{aug}}}(G)$ to have vertex set $\{v_{i,\,j}:\, 1\leq j\leq i\leq n \}\cup \{v_{n+1,\,n+1}\}$
with the ordering $v_{1,\,1}<v_{2,\,1}<\cdots < v_{n,\,1}<v_{2,\,2}<\cdots<v_{n,\,n}<v_{n+1,\,n+1}$ and edge set
$E_a\cup E_z \cup E_y$
\noindent labeled by the variables $a_{i,\,j}$, $z_{i,\,j}$, and $y_{i,\,j}$ respectively, where
\begin{align*}
E_a&\mbox{ consists of edges }a_{i,\,j}:v_{i,\,j}\to v_{i+1,\,j}\mbox{ for } 1\leq j \leq i \leq n, \\
E_z&\mbox{ consists of edges }z_{i,\,j}:v_{i,\,j}\to v_{i,\,i}\mbox{ for }(j,\, i)\in E(G\backslash 0) ,\\
E_y&\mbox{ consists of edges }y_{i,\,j}:v_{i,\,j} \to v_{n+1,\,n+1}\mbox{ for } (j,\, i)\in E(G\backslash 0),
\end{align*}
and we take indices $(n+1,\,j)$ to refer to $(n+1,\,n+1)$.
Define a netflow vector $\bm{a}_G^{\mathrm{aug}}$ by reading each row of the array
\[
\begin{tabular}{lllll}
$0$ & $0 $&$\hspace{3ex}\cdots $&$0$&$ \mathop{\mathrm{indeg}_G}(1)$\\
$0$ &$ 0\hspace{5ex} \cdots $&$0 $&$\mathop{\mathrm{indeg}_G}(2)$& \\
\hspace{7ex}$\vdots$&$\Ddots$&&& \\
$\mathop{\mathrm{indeg}_G}(n)$&&&&\\
$-\#E(G)$&&&&
\end{tabular}
\]
from right to left and reading the rows from top to bottom.
\end{definition}
Denote by $\mathcal{F}_{\mathop{\mathrm{Gr}^{\mathrm{aug}}}(G)}^c\left (\bm{a}_G^{\mathrm{aug}}\right )$ the capacitated flow polytope of the graph $\mathop{\mathrm{Gr}^{\mathrm{aug}}}(G)$ with netflow $\bm{a}_G^{\mathrm{aug}}$ and with the capacity constraints $0\leq y_{i,\,j}\leq 1$ for all $1\leq j<i\leq n$. By construction, the points in $\mathcal{F}_{\mathop{\mathrm{Gr}^{\mathrm{aug}}}(G)}^c\left (\bm{a}_G^{\mathrm{aug}}\right )$ are exactly the solutions to the augmented constraint array of $G$.
\begin{definition}
\label{moddingofG}
Similar to Theorem \ref{isomorphism}, contracting the edges $\{a_{i,\,j}:\, 1\leq j\leq i<n \}$ of $\mathop{\mathrm{Gr}^{\mathrm{aug}}}(G)$ and relabeling the representative vertices $v_{n,\,j}$ by $j$ and $v_{n+1,\,n+1}$ by $t$, we obtain a graph called the \textbf{augmented graph of $G$}. This graph is denoted $\mathop{G^{\mathrm{aug}}}$ and is defined on vertices $[n]\cup \{t\}$ with labeled edges $E_a\cup E_z\cup E_y$ where
\begin{align*}
E_a&\mbox{ consists of edges }a_{n,\,j}:j\to t\mbox{ for } j\in [n]; \\
E_z&\mbox{ consists of edges }z_{i,\,j}:j\to i\mbox{ for }(j,\, i)\in E(G\backslash 0);\\
E_y&\mbox{ consists of edges }y_{i,\,j}:j \to t\mbox{ for } (j,\, i)\in E(G\backslash 0).
\end{align*}
\end{definition}
\begin{example}
For $G$ the complete graph on $[0,3]$, the graphs $\mathop{\mathrm{Gr}^{\mathrm{aug}}}(G)$ and $\mathop{G^{\mathrm{aug}}}$ are shown below.
\end{example}
\includegraphics[scale=.8]{Pictures/ModGraphDouble.pdf}
\bigskip
Before proceeding to the proof of Theorem \ref{newtonofldpolynomial}, recall that
\[
\bm{b}_G^F=\left (\mathop{\mathrm{indeg}_G}(1)-\mathop{\mathrm{outdeg_F}}(1),\,\ldots,\, \mathop{\mathrm{indeg}_G}(n)-\mathop{\mathrm{outdeg_F}}(n),-\#E(G\backslash F) \right )
\]
for any $F\subseteq E(G\backslash 0).$ Denote by $ \mathcal{F}^c_{\mathop{G^{\mathrm{aug}}}}\left (\bm{b}_G^\emptyset\right )$ the capacitated flow polytope of the graph ${\mathop{G^{\mathrm{aug}}}}$ with netflow $\bm{b}_G^\emptyset$ and the capacity constraints $0\leq y_{i,\,j}\leq 1$ for all $1\leq j<i\leq n$.
\begin{proof}[Proof of Theorem \ref{newtonofldpolynomial}]
By the constructions of Definitions \ref{augmentedarraygraph} and \ref{moddingofG}, we have integral equivalences of capacitated flow polytopes
\[\mathop{\mathrm{Poly}}(A)\equiv \mathcal{F}_{\mathop{\mathrm{Gr}^{\mathrm{aug}}}(G)}^c\left (\bm{a}_G^{\mathrm{aug}}\right ) \equiv \mathcal{F}^c_{\mathop{G^{\mathrm{aug}}}}\left (\bm{b}_G^\emptyset\right ). \]
Thus, it suffices to prove
\begin{align*}
\mathrm{Newton}(L_G(\bm{t}))=\psi \left ( \mathcal{F}^c_{\mathop{G^{\mathrm{aug}}}}\left (\bm{b}_G^\emptyset\right ) \right ),
\end{align*}
where $\psi$ is the projection that takes a flow on $\mathcal{F}^c_{\mathop{G^{\mathrm{aug}}}}\left (\bm{b}_G^\emptyset\right )$ to its values on the edges labeled $\{a_{n,\,j}:\,j\in [n]\}$. This is accomplished in Theorem \ref{cap}. \end{proof}
\begin{theorem} \label{cap} For $G$ a graph on $[0,n]$, the Newton polytope of the left-degree polynomial $L_G(\bm{t})$ and the capacitated flow polytope $ \mathcal{F}^c_{\mathop{G^{\mathrm{aug}}}}\left (\bm{b}_G^\emptyset\right )$ satisfy
\begin{align*}
\mathrm{Newton}(L_G(\bm{t}))=\psi \left ( \mathcal{F}^c_{\mathop{G^{\mathrm{aug}}}}\left (\bm{b}_G^\emptyset\right ) \right ),
\end{align*} where $\psi$ is the projection that takes a flow on $\mathcal{F}^c_{\mathop{G^{\mathrm{aug}}}}\left (\bm{b}_G^\emptyset\right )$ to its values on the edges labeled $\{a_{n,\,j}:\,j\in [n]\}$.
\end{theorem}
\begin{proof}
Let $\alpha\in \mathop{\mathrm{LD}}(G,F)$ for $F\subseteq E(G\backslash 0)$. Consider the set of integer flows on $\mathop{G^{\mathrm{aug}}}$ such that each edge $y_{i,\,j}$ has flow 1 if $(j,i)\in F$ and zero otherwise. By the construction of $\mathop{G^{\mathrm{aug}}}$, these are in bijection with the integer flows on $\widetilde{G}\backslash \{s,0\}$ with netflow vector $\bm{b}_G^F$, which in turn are in bijection with $\mathop{\mathrm{LD}}(G,F)$ (Corollary \ref{flows}). Thus $\alpha$ is the projection of a capacitated flow on $\mathop{G^{\mathrm{aug}}}$ with netflow $\bm{b}_G^\emptyset$.
Conversely, let $\alpha=(\alpha_1,\,\ldots, \alpha_n)\in \psi \left ( \mathcal{F}^c_{\mathop{G^{\mathrm{aug}}}}\left (\bm{b}_G^\emptyset\right ) \right ) $ be an integer point. Then, there exists some flow $f$ (not necessarily integral) on $\mathop{G^{\mathrm{aug}}}$ with netflow $\bm{b}_G^\emptyset$ having the integer values $\alpha_j$ on the edges $(j,t)$. If we remove these edges and modify the netflow vector accordingly, the new flow polytope we get is the (integrally capacitated) flow polytope of a graph with an integral netflow vector. Any such polytope has integral vertices \cite[Theorem 13.1]{schrijver}. Thus, we can choose $f$ to be an integral flow.
Since the edges labeled $y_{i,\,j}$ are constrained between 0 and 1, $f$ takes value 0 or 1 on these edges. If we let $F=\{(j,i)\in E(G\backslash 0): f \mbox{ takes value 1 on the edge labeled by } y_{i,\,j} \}$, then $f$ induces a flow on $\widetilde{G}\backslash\{s,0\}$ with netflow vector $\bm{b}_G^F$, so $\alpha\in \mathop{\mathrm{LD}}(G,F)$.
\end{proof}
We now analyze the component polytopes $\mathrm{Newton}(L_{G,F}(\bm{t}))$ and show that they are generalized permutahedra. We first briefly recall the relevant definitions from \cite{genperms}.
A \textbf{generalized permutahedron} is a deformation of the usual permutahedron obtained by parallel translation of the facets. Generalized permutahedra are parameterized by real numbers $\{z_I\}_{I\subseteq [n]}$ with $z_\emptyset=0$ and satisfying the supermodularity condition \[z_{I\cup J}+z_{I\cap J}\geq z_{I}+z_{J} \mbox{ for any } I,J\subseteq [n].\]
For a choice of parameters $\{z_I\}_{I\subseteq [n]}$, the associated generalized permutahedron $P_n^z\left(\{z_I \}\right)$ is defined by
\[P_n^z(\{z_I \})=\left \{ \bm{t}\in\mathbb{R}^n:\, \sum_{i\in I}{t_i}\geq z_I \mbox{ for } I\neq [n], \mbox{ and } \sum_{i=1}^{n}{t_i}=z_{[n]} \right \}. \]
There is a subclass of generalized permutahedra given by Minkowski sums of dilations of the faces of the standard $(n-1)$-simplex. For $I\subseteq [n]$, let $\Delta_I=\mathop{\mathrm{Conv}}(\{e_i:\, i\in I \})$, where $e_i$ is the $i$th standard basis vector in $\mathbb{R}^n$ and $\Delta_{\emptyset}$ is the origin. Given a set $\{y_I\}_{I\subseteq [n]}$ of nonnegative real numbers with $y_\emptyset=0$, denote by $P_n^y(\{y_I \})$ the polytope
\[P_n^y(\{y_I \})= \sum_{I\subseteq [n]}{y_I \Delta_I}. \]
\begin{proposition}[\cite{genperms}, Proposition 6.3]
\label{mobiusrelation}
For nonnegative real numbers $\{y_I\}_{I\subseteq [n]}$, the polytope $P_n^y(\{y_I \})$ is a generalized permutahedron $P_n^z(\{z_I \})$ with $z_I=\sum_{J\subseteq I}{y_J}$.
\end{proposition}
\medskip
\noindent We now return to left-degree polynomials. For $F\subseteq E(G\backslash 0)$, recall the numbers $f_{i,\,j}$ given by
\[f_{i,\,j}=\#\{(j,\,k)\in F:\, k\leq i\}. \]
By Corollary \ref{flows} (Theorem $\ref{finalflows}$ for the general case), $\mathop{\mathrm{LD}}(G,F)$ is in bijection with integral flows on the graph $\widetilde{G}\backslash\{s,0\}$ with the netflow vector $\bm{b}_G^F$ defined by
\begin{align*}
\bm{b}_G^F= (\mathop{\mathrm{indeg}_G}(1)-\mathop{\mathrm{outdeg_F}}(1),\dots,\mathop{\mathrm{indeg}_G}(n)-\mathop{\mathrm{outdeg_F}}(n),-\#E(G\backslash F))
\end{align*}
via projection onto the edges $(i,t)$. To each $I\subseteq[n]$, associate the integer $z_I^F$ given by
\begin{align}
\label{parameters}
z_I^F=\min \left \{\sum_{i\in I}f(i,t):\, f \mbox{ is a flow on } \widetilde{G}\backslash\{s,0\} \mbox{ with netflow vector $\bm{b}_G^F$} \right \}.
\end{align}
\begin{definition}
For a collection of vertices $I$ of a graph $G$, define the outdegree $\mathop{\mathrm{outdeg_G}}(I)$ to be the number of edges from vertices in $I$ to vertices not in $I$.
\end{definition}
Note that the parameters $z_I^F$ of (\ref{parameters}) satisfy the supermodularity condition since $z_{I}^F=z_{I'}^F$ where $I'$ is the largest subset of $I$ satisfying $\mathop{\mathrm{outdeg}}_G(I')=0$.
Our goal is to show that
\[\mathrm{Newton}(L_{G,F}(\bm{t}))=P_n^z\{z_I^F\}_{I\subseteq [n]}. \]
The proof relies on the following fact about flow polytopes, which readily follows from the max-flow min-cut theorem.
\begin{lemma}
\label{feasibleflowproblem}
Let $G$ be a graph on $[0,n]$ and $\bm{\alpha}=(\alpha_0,\,\ldots,\,\alpha_n)\in \mathbb{R}^{n+1}$. Then $\mathcal{F}_G(\bm{\alpha})$ is nonempty if and only if \begin{align}
\label{existenceofaflow}
\sum_{i=0}^{n}{\alpha_i}=0 \mbox{ and }
\sum_{i\in S}{\alpha_i}\leq 0 \mbox{ for all } S\subseteq [0,n] \mbox{ with } \mathop{\mathrm{outdeg_G}}(S)=0.
\end{align}
\end{lemma}
\begin{proof}
Observe that the conditions \eqref{existenceofaflow} are necessary in order for $\mathcal{F}_G(\bm{\alpha})$ to be nonempty. We now show they are also sufficient. For this,
we rephrase the problem as a max-flow problem on another graph. Let $G'=\left(V(G)\cup \{s,t\},\ E(G)\cup \{(s, i) \mid i \in [0,n],\ \alpha_i>0\}\cup \{(i, t) \mid i \in [0,n],\ \alpha_i<0\}\right)$ with edges directed from smaller to larger vertices, where $s$ is the smallest and $t$ is the largest vertex. Let the upper capacity of the edges $(s, i)$, with $i \in [0,n], \alpha_i>0$, be $\alpha_i$ and the upper capacity of the edges $(i, t)$, with $i \in [0,n], \alpha_i<0$, be $-\alpha_i$. All edges have a lower capacity of $0$ and the edges also belonging to $G$ all have the upper capacity $\sum_{i \in [0,n], \alpha_i>0}\alpha_i$. If the maximum flow on $G'$ saturates the edges incident to $s$ (equivalently, to $t$) then $\mathcal{F}_G(\bm{\alpha})$ is nonempty. We thus proceed to show that if $\bm{\alpha}$ satisfies \eqref{existenceofaflow} with the given $G$, then the maximum flow on $G'$ saturates the edges incident to $s$. In other words, if $\bm{\alpha}$ satisfies \eqref{existenceofaflow} with the given $G$, then the value of the maximum flow on $G'$ is $\sum_{i \in [0,n], \alpha_i>0}\alpha_i$.
Recall that by the max-flow min-cut theorem \cite[Theorem 10.3]{schrijver} the maximum value of an $s-t$ flow on $G'$ subject to the above capacity constraints equals the minimum capacity of an $s-t$ cut in $G'$. For the cut $(\{s\}, V(G)\backslash \{s\})$ the capacity is $\sum_{i \in [0,n], \alpha_i>0}\alpha_i$, and we show that this is the minimum capacity of an $s-t$ cut in $G'$. If the cut contains any edge not incident to $s$ or $t$, then the capacity of that edge is already $\sum_{i \in [0,n], \alpha_i>0}\alpha_i$. On the other hand, if the cut does not contain any edge not incident to $s$ or $t$, the partition of vertices is of the form $(\{s\}\cup S, S^c\cup \{t\}),$ where $S\subseteq [0,n]$ with $\mathop{\mathrm{outdeg_G}}(S)=0$ and $S^c=[0,n]\backslash S$. Thus, by \eqref{existenceofaflow} we have $\sum_{i\in S}{\alpha_i}\leq 0$. The capacity of the cut $(\{s\}\cup S, S^c\cup \{t\})$ is $\sum_{i \in S^c, (s,i) \in G'}\alpha_i-\sum_{i \in S, (i,t) \in G'} \alpha_i$. Note that $\sum_{i \in S^c, (s,i) \in G'}\alpha_i-\sum_{i \in S, (i,t) \in G'} \alpha_i \geq \sum_{i \in [0,n], \alpha_i>0}\alpha_i$ since it is equivalent to $0\geq \sum_{i \in S, \alpha_i>0}\alpha_i+\sum_{i \in S, (i,t) \in G'} \alpha_i=\sum_{i\in S}{\alpha_i}$. In other words, the capacity of any cut is at least $\sum_{i \in [0,n], \alpha_i>0}\alpha_i$, and we saw that this is achieved. Thus, the value of the maximum flow on $G'$ is $\sum_{i \in [0,n], \alpha_i>0}\alpha_i$, as desired.
\end{proof}
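Condition \eqref{existenceofaflow} involves only finitely many subsets, so on small examples it can be verified by brute force. A minimal sketch (exponential in the number of vertices, for illustration only; the name \texttt{feasible} is ours):
\begin{verbatim}
from itertools import combinations

def feasible(n, E, alpha):
    # Decide nonemptiness via the lemma: alpha must sum to zero, and
    # over every S in [0,n] with outdeg_G(S) = 0 the entries of alpha
    # must sum to a nonpositive number.
    if sum(alpha) != 0:
        return False
    for r in range(1, n + 2):
        for S in combinations(range(n + 1), r):
            Sset = set(S)
            if all(u not in Sset or v in Sset for (u, v) in E):
                if sum(alpha[i] for i in S) > 0:
                    return False
    return True
\end{verbatim}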
\begin{theorem}
\label{genpermtheoremzI}
For a simple graph $G$, $F\subseteq E(G\backslash 0)$, and $\{z_I^F\}$ the parameters defined by (\ref{parameters}), $\mathrm{Newton}(L_{G,F}(\bm{t}))$ is the generalized permutahedron
\[\mathrm{Newton}(L_{G,F}(\bm{t}))=\mathop{\mathrm{Conv}}(\mathop{\mathrm{LD}}(G,F))=P_n^z\{z_I^F\}_{I\subseteq [n]}.\]
Furthermore, each integer point of $P_n^z\{z_I^F\}$ is in $\mathop{\mathrm{LD}}(G,F)$, so $L_{G,F}(\bm{t})$ has polytopal support.
\end{theorem}
\begin{proof}
Since $\mathop{\mathrm{LD}}(G,F)$ equals the projection of integral flows on $\widetilde{G}\backslash\{s,0\}$ with netflow $\bm{b}_G^F$ onto the edges $\{(j,t)\}_{j\in[n]}$, $\mathop{\mathrm{Conv}}(\mathop{\mathrm{LD}}(G,F))\subseteq P_n^z\{z_I^F\}.$\\
\noindent For the reverse direction, let $\bm{d}$ denote the truncation of $\bm{b}_G^F$ by its last entry, that is let $\bm{d}=(d_1,\dots,d_n)$ where \[d_i=\mathop{\mathrm{indeg}_G}(i)-\mathop{\mathrm{outdeg_F}}(i).\]
We must show that for each point $\bm{x}=(x_1,\ldots,x_n)\in P_n^z\{z_I^F\}$, the assignment $a_{n,\,j}=x_j$ in $\widetilde{G}\backslash\{s,0\}$ can be extended to a flow on $\widetilde{G}\backslash\{s,0\}$. This is equivalent to showing \[\mathcal{F}_{G\backslash 0}(\bm{d}-\bm{x})\neq \emptyset.\]
By Lemma \ref{feasibleflowproblem}, it suffices to note that
\[\sum_{i\in S}{d_i-x_i}\leq 0 \mbox{ for all } S\subseteq [n] \mbox{ with } \mathop{\mathrm{outdeg_G}}(S)=0.\]
However, since $\mathop{\mathrm{outdeg}}_G(S)=0$, we have
\[\sum_{i\in S}{x_i}\geq z_S^F= \sum_{i\in S}{d_i}. \]
\end{proof}
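Because flow polytopes of graphs with integral netflow vectors have integral vertices, the minimum in (\ref{parameters}) may be taken over integral flows only. Reusing the flow enumerator and the data from the sketch after Corollary \ref{flows}, together with \texttt{combinations} from the previous sketch, the parameters $z_I^F$ can be tabulated as follows (a hypothetical continuation, illustrative only).
\begin{verbatim}
# Continuation of the earlier flow-enumeration sketch:
proj = [fl[len(inner):] for fl in integral_flows(verts, edges, b)]
zF = {I: min(sum(fl[j - 1] for j in I) for fl in proj)
      for r in range(1, n + 1)
      for I in combinations(range(1, n + 1), r)}
print(zF[(3, 4)], zF[(1, 2, 3, 4)])
\end{verbatim}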
We further show that $\mathrm{Newton}(L_{G,F}(\bm{t}))$ can be written as $P_n^y\{y_I\}$ for some parameters $y_I$. Let $L=\{J \subseteq [n]:\, \mathop{\mathrm{outdeg}}_G(J)=0 \}$. Then $L$ is a lattice; consider the set $Q$ of join-irreducible elements of $L$. For $J\subseteq [n]$, define
\begin{align}
\label{convLDtypeY}
y_J^F=\begin{cases}
\mathop{\mathrm{indeg}_G}(k)-\mathop{\mathrm{outdeg_F}}(k) &\mbox{ if } J\in Q \mbox{, $J$ covers $J'$ in $L,$}\,\, J\backslash J'=\{k\}\\
0 &\mbox{ if } J\notin Q
\end{cases}
\end{align}
\begin{proposition}
\label{typey}
$P_n^y\{y_J^F \}=P_n^z\{z_I^F\}$
\end{proposition}
\begin{proof}
Note that $z_I^F=z_{I_1}^F$ where $I_1$ is the largest element of $L$ contained in $I$. Thus, writing $b_k$ for the $k$th entry of $\bm{b}_G^F$,
\[z_I^F=z_{I_1}^F= \sum_{k\in I_1}{b_k} = \sum_{\mathclap{\substack{J\in Q\\ J\subseteq I_1}}}{y_J^F} =\sum_{J\subseteq I}{y_J^F}. \]
Apply Proposition \ref{mobiusrelation}.
\end{proof}
From (\ref{convLDtypeY}), we can read off the $\{y_I^F\}$ decomposition of $\mathrm{Newton}(L_{G,F}(\bm{t}))$. Let $\delta(i)$ denote all the vertices of $G$ that can be reached from $i$ by an increasing path (including $i$ itself). Then,
\begin{align}
\label{ldpolytopeydescription}
\mathrm{Newton}(L_{G,F}(\bm{t}))= \sum_{i=1}^n{\left(\mathop{\mathrm{indeg}_G}(i)-\mathop{\mathrm{outdeg_F}}(i)\right) \Delta_{\delta(i)} }.
\end{align}
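The sets $\delta(i)$ are plain reachability sets, so the summands of (\ref{ldpolytopeydescription}) can be read off by a depth-first search. A short sketch on the running example's data, reusing $n$, \texttt{E}, \texttt{indeg} and \texttt{outF} from the sketch after Corollary \ref{flows}:
\begin{verbatim}
def delta(E, i):
    # Vertices of G reachable from i by an increasing path, with i.
    reach, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for (a, w) in E:
            if a == u and w not in reach:
                reach.add(w)
                stack.append(w)
    return tuple(sorted(reach))

for i in range(1, n + 1):
    print("summand:", indeg(i) - outF(i), "* Delta_", delta(E, i))
\end{verbatim}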
\begin{example}
For a simple graph $G$, recall that the transitive closure of $G$ is the simple graph formed by adding edges $(i,j)$ to $E(G)$ whenever the vertices $i$ and $j$ are connected by an increasing path in $G$. If $G$ is a simple graph on $[0,n]$ such that the transitive closure of $G\backslash\{0\}$ is complete, then for each $F\subseteq E(G\backslash 0)$,
\[\mathrm{Newton}(L_{G,F}(\bm{t})) = \Pi_n\left (\mathop{\mathrm{indeg}_G}(1)-\mathop{\mathrm{outdeg_F}}(1), \ldots,\, \mathop{\mathrm{indeg}_G}(n)-\mathop{\mathrm{outdeg_F}}(n)\right )
\]
\noindent where $\Pi_n(\bm{x})$ is the Pitman-Stanley polytope as defined in \cite{sppolytope}, but shifted up one dimension in affine space, that is
\begin{align*}
\Pi_n(\bm{x})&=\left \{\bm{t}\in \mathbb{R}^n_{\geq 0}:\, \sum_{p=1}^{k}{t_p} \leq \sum_{p=1}^{k}{x_p} \mbox{ for } k\in [n-1],\, \mbox{ and } \sum_{p=1}^{n}{t_p} = \sum_{p=1}^{n}{x_p} \right \} \\
&=x_n\Delta_{\{n\}}+x_{n-1}\Delta_{\{n-1,\,n\}}+\cdots+x_1\Delta_{[n]}.
\end{align*}
\end{example}
\begin{proposition}
If $T$ is a tree on $[0,n]$, then $\mathrm{Newton}(L_{T,F}(\bm{t}))$ is a simple polytope.
\end{proposition}
\begin{proof}
By the Cone-Preposet Dictionary for generalized permutahedra (\cite{genpermfaces}, Proposition 3.5), it is enough to show that each vertex poset $Q_v$ is a tree-poset, that is, its Hasse diagram has no cycles. To show this, let $I\subseteq [n]$ and consider the normal fan $N(\Delta_I)$ of the simplex $\Delta_I$. By (\ref{ldpolytopeydescription}), the normal fan of $\mathrm{Newton}(L_{T,F}(\bm{t}))$ is the common refinement of the normal fans $N(\Delta_I)$.
Thus, a maximal cone of the normal fan of $\mathrm{Newton}(L_{T,F}(\bm{t}))$ is given by an intersection of maximal cones in each $N(\Delta_I)$ for $I=\delta(j)$, $j\in [n]$, $\mathop{\mathrm{indeg}}_T(j)>0$. A maximal cone in $N(\Delta_I)$ gives the vertex poset relations $x_i>x_j$ for all $j\in I$ and any chosen $i\in I$. Thus, relations in the Hasse diagram of a vertex poset lift to undirected paths in $T$.
If some $Q_v$ has a cycle $C$, then we can lift the relations to get two different paths in $T$ between two vertices. The union of these paths contains a cycle, contradicting that $T$ is a tree.
\end{proof}
The Newton polytopes of the homogeneous components of $L_G(\bm{t})$ are also generalized permutahedra.
\begin{definition}
For each $k\geq 0$ let $L_G^k(\bm{t})$ denote the degree $\#E(G)-k$ homogeneous component of $L_G(\bm{t})$, that is
\begin{align*}
L_G^k(\bm{t})=\sum_{\mathclap{\substack{F\subseteq E(G\backslash 0)\\
\#F =k}}}{L_{G,F}(\bm{t})}
\end{align*}
\end{definition}
For a simple graph $G$ on $[0,n]$, the proof of Theorem \ref{newtonofldpolynomial} showed that the augmented graph $\mathop{G^{\mathrm{aug}}}$ of Definition \ref{moddingofG} has the property that the projection of integral flows on $\mathop{G^{\mathrm{aug}}}$ with netflow
\[\bm{b}_G^{\emptyset}=\left (\mathop{\mathrm{indeg}_G}(1),\,\ldots,\, \mathop{\mathrm{indeg}_G}(n),-\#E(G) \right )\]
and capacity constraints $0\leq y_{i,\,j}\leq 1 $ for all $1\leq j<i\leq n$ onto the edges labeled $a_{n,\,j}$ for $j \in [n]$ is exactly $\mathop{\mathrm{LD}}(G)$. The following construction is a variation on this theme designed so its integral flows will only project to left-degree sequences whose entries have a particular sum.
\begin{definition}
Given a simple graph $G$ on $[0,n]$ and $k\geq 0$, let $G^{(k)}$ be the graph on $[1,n+1]\cup \{t\}$ with labeled edges $E_a\cup E_z\cup E_y$ where
\begin{align*}
E_a&\mbox{ consists of edges }a_{n,\,j}:j\to t\mbox{ for } j\in [n]; \\
E_z&\mbox{ consists of edges }z_{i,\,j}:j\to i\mbox{ for }(j,\, i)\in E(G\backslash 0);\\
E_y&\mbox{ consists of edges }y_{i,\,j}:j \to n+1\mbox{ for } (j,\, i)\in E(G\backslash 0).
\end{align*}
The flow polytope $\mathcal{F}_{G^{(k)}}^c(\bm{b}_G^{(k)} )$ is the flow polytope of
$G^{(k)}$ with netflow vector $\bm{b}_G^{(k)} = (\mathop{\mathrm{indeg}_G}(1),\,\ldots,\, \mathop{\mathrm{indeg}_G}(n),\, -k,\, k-\#E(G))$ and capacities $1$ on the edges $y_{i,\,j}$.
\end{definition}
\begin{example}
For $G$ the complete graph on $[0,3]$, $G^{(k)}$ is shown below alongside $\mathop{G^{\mathrm{aug}}}$ for comparison.\\
\includegraphics[]{Pictures/kGraphDouble.pdf}
\end{example}
Note that capacitated integral flows on $G^{(k)}$ with netflow $\bm{b}_G^{(k)}$ are in bijection with capacitated integral flows on $\mathop{G^{\mathrm{aug}}}$ with netflow $\bm{b}_G^\emptyset$ where exactly $k$ edges $y_{i,\,j}$ have flow 1, and the bijection preserves the values on the edges $\{a_{n,\,j}:j\in[n] \}$.
\begin{theorem}
\label{ldpolynomialhomogeneousnewton}
For $k\geq 0$, if $\psi$ is the projection that takes a flow on $\mathcal{F}_{G^{(k)}}^c\left (\bm{b}_G^{(k)}\right )$ to the tuple of its values on the edges labeled $a_{n,\,j}$ for $j$ in $[n]$, then
\begin{align*}
\mathrm{Newton}\left(L^k_G(\bm{t})\right)=\psi \left ( \mathcal{F}_{G^{(k)}}^c\left (\bm{b}_G^{(k)}\right ) \right ).
\end{align*}
Furthermore, each integer point in the right-hand side is a left-degree sequence with components that sum to $\#E(G)-k$, so $L_G^k$ has polytopal support.
\end{theorem}
\begin{proof}
Let $\alpha$ be an integer point in $\mathrm{Newton}\left(L^k_G(\bm{t})\right)$, so $\alpha \in \mathop{\mathrm{LD}}(G,F)$ for some $F\subseteq E(G\backslash 0)$ with $\#F=k$. Then, $\alpha$ corresponds to a capacitated integral flow on $\mathop{G^{\mathrm{aug}}}$ with netflow $\bm{b}_G^\emptyset$, which in turn corresponds to a capacitated integral flow on $G^{(k)}$ with netflow $\bm{b}_G^{(k)}$ that $\psi$ takes to $\alpha$.
Conversely, let $\alpha$ be an integer point in $\psi \left ( \mathcal{F}_{G^{(k)}}^c\left (\bm{b}_G^{(k)}\right ) \right )$. Lift $\alpha$ to an integral flow $f$ on $G^{(k)}$. The flow $f$ corresponds to an integral flow on $\mathop{G^{\mathrm{aug}}}$, so if $F=\{(j,i):\, y_{i,\,j}=1 \mbox{ in } f \}$, then $\#F=k$ and $\alpha\in\mathop{\mathrm{LD}}(G,F)$.
\end{proof}
As in the proof of Theorem \ref{genpermtheoremzI}, for $k\geq 0$ and $I\subseteq [n]$, define parameters $z_I^{(k)}$ by
\begin{align}
\label{homogeneousparameters}
z_I^{(k)}=\min \left \{\sum_{i\in I}f(i,t):\, f \mbox{ is a flow on } G^{(k)} \mbox{ with netflow vector $\bm{b}_G^{(k)}$} \right \}.
\end{align}
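For small graphs, the parameters $z_I^{(k)}$ can be checked by brute force: the constraint matrix of a flow polytope is totally unimodular, so the minimum in (\ref{homogeneousparameters}) is attained at an integral flow. The following sketch (our own illustration in Python, included only for concreteness; the edge-list encoding and the helper name are ours) enumerates the integral flows of a small acyclic network and minimizes over them.
\begin{verbatim}
from itertools import product

def integer_flows(edges, netflow, cap=None):
    # Enumerate nonnegative integer flows on a small acyclic network.
    # edges: list of (tail, head); netflow[v] = (flow out of v) - (flow into v);
    # cap[e]: optional upper bound for the flow on edge number e.
    cap = cap or {}
    supply = sum(b for b in netflow.values() if b > 0)  # bounds any edge flow
    for f in product(*(range(cap.get(e, supply) + 1)
                       for e in range(len(edges)))):
        excess = dict(netflow)
        for (u, v), fe in zip(edges, f):
            excess[u] -= fe   # fe units leave u ...
            excess[v] += fe   # ... and arrive at v
        if all(x == 0 for x in excess.values()):
            yield f

# Toy example: minimize the flow on the edge (1, t).
edges = [(1, 't'), (2, 't'), (1, 2)]
netflow = {1: 2, 2: 1, 't': -3}
print(min(f[0] for f in integer_flows(edges, netflow)))  # -> 0
\end{verbatim}
For $z_I^{(k)}$ itself, one would take the graph $G^{(k)}$ with netflow $\bm{b}_G^{(k)}$ and capacities $1$ on the edges $y_{i,\,j}$, and minimize $\sum_{i\in I}f(i,t)$ over the enumerated flows.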
\begin{theorem}
\label{ldhomogeneouspiecesnewton}
For $k\geq 0$ and $\{z_I^{(k)}\}$ the parameters defined by (\ref{homogeneousparameters}), $\mathrm{Newton}(L_G^k(\bm{t}))$ is the generalized permutahedron
\[\mathrm{Newton}(L_G^k(\bm{t}))=P_n^z\{z_I^{(k)}\}_{I\subseteq [n]}.\]
Furthermore, each integer point of $P_n^z\{z_I^{(k)}\}$ is a left-degree sequence, so $L_G^k(\bm{t})$ has polytopal support. Additionally, if $G$ is an acyclic graph, then $L_G^0(\bm{t})$ is the integer-point transform of its Newton polytope.
\end{theorem}
\begin{proof}
The proof of the first two statements is analogous to that of Theorem \ref{genpermtheoremzI}.
To prove the third statement, we must show that if $G$ is an acyclic graph, all nonzero coefficients of $L_G^0$ are 1. It follows from Corollary \ref{flows} (Theorem \ref{finalflows}) that $\mathop{\mathrm{LD}}(G,\emptyset)$ equals the multiset of projections of integral flows on $\widetilde{G}\backslash\{s,0\}$ with the netflow vector $\bm{b}_G^\emptyset$. Then, the multiplicity of any particular $\alpha\in \mathop{\mathrm{LD}}(G,\emptyset)$ is the number of flows on $G\backslash 0$ with netflow $\bm{b}_G^\emptyset-\alpha$. However, acyclic graphs admit at most one flow for any given netflow vector, so every element of $\mathop{\mathrm{LD}}(G,\emptyset)$ has multiplicity 1. This implies all coefficients in $L_G^0$ are 0 or 1.
\end{proof}
Theorems \ref{newtonofldpolynomial} and \ref{ldhomogeneouspiecesnewton} imply:
\begin{corollary}
\label{hyperpl}
Given a graph $G$ on the vertex set $[0,n]$ with $m$ edges, we have that
\[\mathrm{Newton}(L_G(\bm{t})) \cap \{(x_1, \ldots, x_n)\in \mathbb{R}^n \mid \sum_{i=1}^n x_i=m-k\}=P_n^z\{z_I^{(k)}\}_{I\subseteq [n]},\]
for the parameters $\{z_I^{(k)}\}$ given in (\ref{homogeneousparameters}).
\end{corollary}
\begin{proof}
We have that $\mathrm{Newton}(L_G(\bm{t})) \cap \{(x_1, \ldots, x_n)\in \mathbb{R}^n \mid \sum_{i=1}^n x_i=m-k\}=\mathrm{Newton}(L_G^k(\bm{t})),$ which by Theorem \ref{ldhomogeneouspiecesnewton} equals $P_n^z\{z_I^{(k)}\}_{I\subseteq [n]}$.
\end{proof}
Theorems \ref{pstheoremaltproof} and \ref{ldhomogeneouspiecesnewton} imply:
\begin{corollary}
\label{latticeptenum}
If $G$ is an acyclic graph on $[0,n]$, then the normalized volume of the flow polytope of $\widetilde{G}$ is
\begin{align*}
\mathrm{Vol} \,\,\mathcal{F}_{\widetilde{G}} = \operatorname{Ehr}(P_G^0, 1),
\end{align*}
where $P_G^0:=\mathrm{Newton}(L_G^0(\bm{t}))$ is the generalized permutahedron specified in Theorem \ref{ldhomogeneouspiecesnewton}.
\end{corollary}
Corollary \ref{latticeptenum} is of the same flavor as the following beautiful result of Postnikov; for the details of the terminology used in this theorem, refer to \cite{genperms}.
\begin{theorem}\cite[Theorem 12.9]{genperms} \label{thm:post} For a bipartite graph $G$, the normalized volume of the root polytope $Q_G$ is
\begin{equation*}
\label{eq:Qvol} \mathrm{Vol}\, Q_G = \operatorname{Ehr}(P_G^{-}, 1),
\end{equation*}
where $P_G^{-}$ is the trimmed generalized permutahedron.
\end{theorem}
Root polytopes and flow polytopes are closely related, as can be seen by contrasting the techniques and results in the papers \cite{root1, root2, prod, mm, genperms}. It is thus reasonable to expect that Corollary \ref{latticeptenum} and Theorem \ref{thm:post} are related mathematically. We invite the interested reader to investigate their relationship.
\section{Newton polytopes of Schubert and Grothendieck polynomials}
\label{sec5}
In this section, we discuss the connection between left-degree sequences, Schubert polynomials, and Grothendieck polynomials discovered in \cite{pipe1} and relate it to their Newton polytopes. Our main theorem is as follows:
\begin{letteredtheorem}
\label{theoremC}
Let $\pi\in S_{n+1}$ be of the form $\pi=1\pi'$ where $\pi'$ is a dominant permutation of $\{2,3,\ldots, n+1\}$. Then the Grothendieck polynomial $\mathfrak{G}_{\pi}$ has polytopal support and the Newton polytope of each homogeneous component of $\mathfrak{G}_{\pi}$ is a generalized permutahedron. In particular, the Schubert polynomial $\mathfrak{S}_{\pi}$ has polytopal support and $\mathrm{Newton}(\mathfrak{S}_{\pi})$ is a generalized permutahedron. Moreover, $\mathfrak{S}_{\pi}$ is the integer-point transform of its Newton polytope.
\end{letteredtheorem}
Theorem \ref{theoremC} implies that the recent conjectures of Monical, Tokcan, and Yong \cite[Conjectures 5.1 and 5.5]{newtonalgcom} are true for permutations $1\pi'$, where $\pi'$ is a dominant permutation. The following conjecture, discovered jointly with Alex Fink, is a strengthening of \cite[Conjecture 5.5]{newtonalgcom}. We have verified it for all $\pi\in S_n$ with $n\leq 8$.
\begin{conjecture} The Grothendieck polynomial $\mathfrak{G}_{\pi}$ has polytopal support and the Newton polytope of each homogeneous component of $\mathfrak{G}_{\pi}$ is a generalized permutahedron.
\end{conjecture}
Since \cite{pipe1} uses right-degree sequences and right-degree polynomials instead of their left-degree counterparts, we will adopt this convention throughout this section. To simplify notation, all graphs in this section will be on the vertex set $[n+1]$. Note the following easy relation between right-degree and left-degree sequences.
Given a graph $G$ on vertex set $[n+1]$, let $G^*$ be the mirror image of the graph $G$ with vertex set shifted to $[0,n]$. More formally, let $G^*$ be the graph on vertices $[0,n]$ with edges
\[E(G^*)=\{(n+1-j,\,n+1-i) :\, (i,j) \in E(G) \}.\]
The right-degree sequences of $G$ are exactly the left-degree sequences of $G^*$ read backwards. We can then define the \textbf{right-degree multiset} $\mathop{\mathrm{RD}}(G)$ as the multiset of right-degree sequences of leaves in any reduction tree of $G$, and $\mathop{\mathrm{RD}}(G,\emptyset)$ as the submultiset of sequences whose components sum to $\#E(G)$ (notation consistent with $\mathop{\mathrm{LD}}(G,F)$ in Definition \ref{ldsequencesfromF}).
\begin{definition}
For any graph $G$ on $[n+1]$, define the \textbf{right-degree polynomial} $R_G$ by
\[
R_G(t_1,\,t_2,\,\ldots,\,t_{n}) = L_{G^*}(t_{n}, t_{n-1}, \ldots, t_1)=\sum_{\alpha\in \mathop{\mathrm{RD}}(G)} (-1)^{\mathrm{codim}(\alpha)} t_1^{\alpha_1} t_2^{\alpha_2} \ldots t_{n}^{\alpha_{n}}
\]
where $\mathrm{codim}(\alpha)=\#E(G)-\sum_{i=1}^{n}{\alpha_i}$.
\noindent For $k\geq 0$, let $R_G^k(\bm{t})$ denote
the degree $\#E(G)-k$ homogeneous component of $R_G(\bm{t})$.
Define the reduced right-degree polynomial $\widetilde{R}_G$ as follows: if $\{v_{i_1},\ldots, v_{i_k} \}$ are the vertices of $G$ with positive outdegree, then $R_G$ is a polynomial in $t_{i_1},\ldots, t_{i_{k}}$. Obtain $\widetilde{R}_G$ by relabeling the variable $t_{i_{m}}$ as $t_m$ for each $m$. Note that $R_G^0$ (resp. $\widetilde{R}^0_G$) is the top homogeneous component of $R_G$ (resp. $\widetilde{R}_G$), and is given by
\[R_G^0(t_1,\,\ldots,\,t_{n})=\sum_{\alpha\in \mathop{\mathrm{RD}}(G,\emptyset)} t_1^{\alpha_1} t_2^{\alpha_2} \ldots t_{n}^{\alpha_{n}}.\]
\end{definition}
The following statement collects the right-degree analogues of Theorem \ref{newtonofldpolynomial} and Theorem \ref{ldhomogeneouspiecesnewton} from the previous section.
\begin{theorem}
\label{rdanaloguecollection}
Let $G$ be a graph on $[n+1]$. Then, $R_G(\bm{t})$ has polytopal support, and the Newton polytope of each homogeneous component $R_G^k$ is a generalized permutahedron. Additionally, if $G$ is an acyclic graph, then $R_G^0(\bm{t})$ is the integer-point transform of its Newton polytope.
\end{theorem}
Recall that for a polytope $P \subseteq \mathbb{R}^m$, the \textbf{integer-point transform} of $P$ is
\[
L_P(x_1, \ldots, x_m)=
\sum_{p \in P\cap \mathbb{Z}^m} \bm{x}^{p}.
\]
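For instance, for the segment $P = [0,2] \subseteq \mathbb{R}^1$ we have $L_P(x_1) = 1 + x_1 + x_1^2$; in particular, $L_P(1,\ldots,1)$ counts the lattice points of $P$, which is the quantity $\operatorname{Ehr}(P,1)$ appearing in Corollary \ref{latticeptenum}.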
We now recall the definition of pipe dreams of a permutation and the characterization of Schubert and Grothendieck polynomials in terms of pipe dreams.
\begin{definition}
A \textbf{pipe dream} for $\pi\in S_{n+1}$ is a tiling of an $(n+1)\times (n+1)$ matrix with two tiles, crosses and elbows, such that
\begin{enumerate}
\item all tiles in the weak south-east triangle are elbows, and
\item if we write $1,2,\ldots, n+1$ on the top and follow the strands (ignoring second crossings among the same strands), they come out on the left and read $\pi$ from top to bottom.
\end{enumerate}
A pipe dream is \textbf{reduced} if no two strands cross twice.
\end{definition}
\begin{figure}[h]
\centering
(Pipe dream diagram for $\pi = 2143$ on a $4\times 4$ grid: columns labeled $1,2,3,4$ on top, rows reading $2,1,4,3$ on the left; the first row carries crosses in columns 1 and 3, and all other tiles are elbows.)
\caption{A reduced pipe dream for $\pi=2143$. All tiles not shown are elbows.}\end{figure}
For $\pi\in S_{n+1}$ let $\mathrm{PD}(\pi)$ denote the collection of all pipe dreams of $\pi$ and $\mathrm{RPD}(\pi)$ the collection of all reduced pipe dreams of $\pi$. For $P\in \mathrm{PD}(\pi)$, define the weight of $P$ by
\[wt(P)=\prod_{(i,j)\in {\rm cross}(P)}t_i.\]
Recall that for any $\pi\in S_{n+1}$, the Grothendieck polynomial $\mathfrak{G}_\pi$ can be represented in terms of pipe dreams of $\pi$ by:
\begin{align*}
\mathfrak{G}_\pi(t_1,\ldots, t_{n}) = \sum_{P\in \mathrm{PD}(\pi)}{wt(P)}
\end{align*}
and the Schubert polynomial $\mathfrak{S}_\pi$ is the lowest degree homogeneous component of the Grothendieck polynomial:
\[\mathfrak{S}_\pi(t_1,\,\ldots,\,t_{n})=\sum_{P\in \mathrm{RPD}(\pi)}{wt(P)}.\]
In \cite{pipe1}, it is proved that $\mathop{\mathrm{RD}}(T)$ is independent of the reduction tree for $T$ a tree, and the following connection to Grothendieck polynomials is shown.
\begin{theorem}[\cite{pipe1}, Theorem 5.3]
\label{relatingrdandschub}
Let $\pi\in S_{n+1}$ be of the form $\pi=1\pi'$ where $\pi'$ is a dominant permutation of $\{2,3,\ldots, n+1\}$. Then, there is a tree $T(\pi)$ and nonnegative integers $g_i=g_i(\pi)$ such that
\[\widetilde{R}_{T(\pi)}(\bm{t}) =\left (\prod_{i=1}^{n}{t_i^{g_i}} \right )\mathfrak{G}_{\pi}(t_1^{-1},\,\ldots,\, t_{n}^{-1}). \]
\noindent Explicitly, if $C(\pi)$ denotes the set $\mathrm{core}(\pi)\cup \{(1,1)\}$, then $g_i(\pi)$ is the number of boxes in column $i$ of $C(\pi)$.
\end{theorem}
\noindent In terms of Newton polytopes, Theorem \ref{relatingrdandschub} implies
\begin{align*}
\mathrm{Newton}\left(\mathfrak{G}_{\pi}\right )=\varphi\left (\mathrm{Newton}\left(\widetilde{R}_{T\left(\pi\right )}\left(\bm{t}\right )\right)\right )
\end{align*}
and
\begin{align*}
\mathrm{Newton}\left(\mathfrak{S}_{\pi}\right )=\varphi\left (\mathrm{Newton}\left(\widetilde{R}^0_{T\left(\pi\right )}\left(\bm{t}\right )\right)\right )
\end{align*}
\noindent where $\varphi$ is the integral equivalence
\[(x_1,\ldots,x_{n}) \mapsto \left (g_1-x_1,\ldots,g_{n}-x_{n}\right ).\]
\begin{proof}[Proof of Theorem \ref{theoremC}]
By Theorem \ref{rdanaloguecollection}, right-degree polynomials $R_G(\bm{t})$ have polytopal support. Since $\mathrm{Newton}\left(\widetilde{R}_{T\left(\pi\right )}\right )$ is the image of $\mathrm{Newton}\left(R_{T\left(\pi\right )}\right )$ by a projection forgetting coordinates that are always zero, it follows from Theorem \ref{relatingrdandschub} that $\mathfrak{G}_{\pi}$ has polytopal support.
Theorem \ref{rdanaloguecollection} and Theorem \ref{relatingrdandschub} also yield that each homogeneous component of $\mathfrak{G}_{\pi}$ has polytopal support and that their Newton polytopes are generalized permutahedra. In particular, this holds for the Schubert polynomial. Since by \cite{pipe1} the Schubert polynomial of $\pi=1\pi'$, where $\pi'$ is a dominant permutation, has coefficients $0$ and $1$, the last statement also follows.
\end{proof}
From the proof of Theorem \ref{relatingrdandschub} in \cite{pipe1}, one can infer the following new transition rule for Schubert polynomials of permutations of the form $1\pi'$ with $\pi'$ dominant.
\begin{lemma}(\textbf{Transition rule for Schubert polynomials.}) Let $\pi\in S_{n+1}$ be of the form $\pi=1\pi'$ with $\pi'$ a dominant permutation of $\{2,\ldots,n+1\}$. Let $\pi'$ have diagram given by the partition $\lambda(\pi')=(\lambda_1,\cdots, \lambda_z)$ with $\lambda_z=k$. For $0\leq l \leq k$, let $w_l$ be the permutation on $\{2,\ldots,n+1\}$ whose diagram is the partition $(\lambda_1-(k-l), \ldots, \lambda_{z-1}-(k-l))$. Then
\[
\mathfrak{S}_{\pi}(\bm{x})= \sum_{l=0}^{k}{\left (\prod_{m=1}^{l}{x_m} \right ) \left (\prod_{p=l+2}^{k+1}{x_p^z} \right ) \mathfrak{S}_{1w_l}(\bm{x}_{\phi_l})}
\]
where $\bm{x}=(x_1,\,x_2,\,\ldots)$, $\bm{x}_{\phi_l}=(x_{\phi_l(1)},x_{\phi_l(2)},\ldots)$, and $\phi_l(i)=\begin{cases}
i &\mbox{ if } i\leq l+1\\
i+k-l &\mbox{ if } i\geq l+2
\end{cases}$
\end{lemma}
We illustrate the above transition rule in the following example.
\begin{example}
Let $\pi=14523$. Then, $\pi'=4523$, so $\lambda(\pi')=(2,2)$. For $0\leq l\leq 2$, the permutation $w_l$ has diagram given by the partition $(l)$. These permutations are $w_0=2345$, $w_1=3245$, and $w_2=3425$.
Hence, the terms in the transition rule are
\begin{align*}
&(1)(x_2^2x_3^2)\mathfrak{S}_{1w_0}(x_1,x_4,x_5,x_6) = x_2^2x_3^2 \\
&(x_1)(x_3^2)\mathfrak{S}_{1w_1}(x_1,x_2,x_4,x_5) = x_1^2 x_3^2 + x_1 x_2 x_3^2\\
&(x_1x_2)(1)\mathfrak{S}_{1w_2}(x_1,x_2,x_3,x_4) = x_1^2 x_2^2 + x_1^2 x_2 x_3 + x_1 x_2^2 x_3.
\end{align*}
\noindent Adding these terms together gives the expected polynomial
\begin{align*}
\mathfrak{S}_{\pi}(x_1,x_2,x_3,x_4) = x_1^2 x_2^2 + x_1^2 x_2 x_3 + x_1 x_2^2 x_3 + x_1^2 x_3^2 + x_1 x_2 x_3^2 + x_2^2 x_3^2.
\end{align*}
\end{example}
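The transition rule can be cross-checked against the classical divided-difference recursion $\mathfrak{S}_{ws_i}=\partial_i \mathfrak{S}_w$ for $w(i)>w(i+1)$, starting from $\mathfrak{S}_{w_0}=x_1^{n-1}x_2^{n-2}\cdots x_{n-1}$. The following short sketch (our own illustration in Python using \texttt{sympy}; it is not the pipe-dream method of \cite{pipe1}, and the function name is ours) implements this recursion and reproduces the polynomial above.
\begin{verbatim}
import sympy as sp

def schubert(perm):
    # Schubert polynomial of perm (one-line notation in S_n), computed by
    # applying divided differences along a reduced word of perm^{-1} * w_0
    # to the staircase monomial S_{w_0} = x1^{n-1} x2^{n-2} ... x_{n-1}.
    n = len(perm)
    x = sp.symbols(f'x1:{n + 1}')
    f = sp.prod([x[i] ** (n - 1 - i) for i in range(n)])
    inv = [0] * n
    for pos, val in enumerate(perm):
        inv[val - 1] = pos + 1
    u = inv[::-1]                 # one-line notation of perm^{-1} * w_0
    while u != sorted(u):         # strip descents until u is the identity
        i = next(j for j in range(n - 1) if u[j] > u[j + 1])
        u[i], u[i + 1] = u[i + 1], u[i]
        fi = f.subs({x[i]: x[i + 1], x[i + 1]: x[i]}, simultaneous=True)
        f = sp.cancel((f - fi) / (x[i] - x[i + 1]))   # apply partial_{i+1}
    return sp.expand(f)

print(schubert((1, 4, 5, 2, 3)))  # the six monomials displayed above
\end{verbatim}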
\section{Left-degree sequences are invariants of the graph}
\label{sec6}
In this section we generalize the results of Section \ref{sec3} to any graph $G$, not necessarily simple. Similar accommodations can be made to generalize Sections \ref{sec4} and \ref{sec5}. We also prove Theorem \ref{theoremA}, which characterizes the left-degree sequences of the leaves of a reduction tree of $G$ and shows that they are independent of the choice of reduction tree, and are therefore an invariant of $G$ itself. To deal with multiple edges in $E(G)$, we view each element of $E(G)$ as being distinct. Formally, we may think of assigning a distinguishing number to each copy of a multiple edge. In this way, we may speak of subsets $F\subseteq E(G\backslash 0)$ in the usual sense.
For $G$ any graph on the vertex set $[0,n]$, we can still construct the reduction tree $\mathcal{T}(G)$ using the same algorithm as before in Definition \ref{specialreductiontree}. As in the case of simple graphs, the leaves of this specific reduction tree can be encoded as solutions of certain constraint arrays. The key is a generalized version of Lemma \ref{cornerstonelemma} that allows multiple incoming and outgoing edges at vertex $v$. This generalization is derived in the same way and is no harder, only more technical. The arrays we obtain are no longer necessarily triangular; rather, they may be staggered. This is explained below and demonstrated in Examples \ref{generalizedtriarrayexample} and \ref{ex:F}. We leave the proofs to the interested reader; they are straightforward generalizations of those in the previous section.\\
\noindent \textbf{Triangular arrays $\mathop{\mathrm{Tri}}_G(\emptyset)$ for arbitrary $G$.} For the case of full-dimensional degree sequences, replace each $a_{i,\,j}$ by $a_{i,\,j}^{(1)}$ in Definition \ref{arrdefinition} and Theorem \ref{arrayconstraints}, and add variables $a_{i,\,j}^{(k)}$ with $k>1$ for each additional copy of the edge $(j,i)$ appearing in $G$. When there are $k>1$ copies of the edge $(j,i)\in E(G)$, also replace $a^{(1)}_{i,\,j}\leq a^{(1)}_{i-1,\,j}$ in the constraint array by $a^{(1)}_{i,\,j}\leq a^{(2)}_{i,\,j}\leq \cdots \leq a^{(k)}_{i,\,j}\leq a^{(1)}_{i-1,\,j}$. The following example demonstrates these changes.
\begin{example}
\label{generalizedtriarrayexample}
Following Example \ref{triangulararrayofconstraintsexample}, if $G$ is the graph on vertex set $[0,4]$ with \[E(G)=\{(0,1),\, (0,1),\, (0,2),\,(1,2),\,(1,2),\, (2,3),\,(2,4),\,(3,4),\,(3,4) \},\] we obtain the constraints:
\begin{align*}
&0\leq a_{4,\,1}^{(1)}=a_{3,\,1}^{(1)}= a_{2,\,1}^{(1)}\leq a_{2,\,1}^{(2)}\leq a^{(1)}_{1,\,1}=2\\
&0\leq a_{4,\,2}^{(1)}\leq a_{3,\,2}^{(1)}\leq a_{2,\,2}^{(1)}=5-a^{(1)}_{2,\,1}\\
&0\leq a_{4,\,3}^{(1)}\leq a_{4,\,3}^{(2)} \leq a_{3,\,3}^{(1)}= 6-a_{3,\,1}^{(1)}-a_{3,\,2}^{(1)}\\
&0\leq a_{4,\,4}^{(1)}=9-a_{4,\,1}^{(1)}-a_{4,\,2}^{(1)}-a_{4,\,3}^{(1)}
\end{align*}
\end{example}\medskip
\noindent\textbf{Triangular arrays $\mathop{\mathrm{Tri}}_G(F)$ for arbitrary $G$.} Similarly, we can encode all left-degree sequences by introducing the arrays $\mathop{\mathrm{Tri}}(F)$ used in Theorem \ref{multisetequality}. To do this we view $E(G)$ as a multiset, so we formally view each copy of a multiple edge $(j,i)$ as a distinct element. Let $F$ vary over subsets of $E(G\backslash 0)$, and define $\mathop{\mathrm{Tri}}_G(F)$ from (the general version of) $\mathop{\mathrm{Tri}}_G(\emptyset)$ as before using the numbers $f_{i,\,j}$ of (\ref{fijnumbers}) and treating each $a_{i,\,j}^{(m)}$ identically for different $m$.
\begin{example} \label{ex:F}
With $G$ as in Example \ref{generalizedtriarrayexample} and $F=\{(1,2),\,(1,2),\,(2,3)\}$, the array $\mathop{\mathrm{Tri}}(F)$ is given by
\begin{align*}
&2\leq a_{4,\,1}^{(1)}+2=a_{3,\,1}^{(1)}+2= a_{2,\,1}^{(1)}+2\leq a_{2,\,1}^{(2)}+2\leq a^{(1)}_{1,\,1}=2\\
&1\leq a_{4,\,2}^{(1)}+1\leq a_{3,\,2}^{(1)}+1\leq a_{2,\,2}^{(1)}=3-a^{(1)}_{2,\,1}\\
&0\leq a_{4,\,3}^{(1)}\leq a_{4,\,3}^{(2)} \leq a_{3,\,3}^{(1)}= 3-a_{3,\,1}^{(1)}-a_{3,\,2}^{(1)}\\
&0\leq a_{4,\,4}^{(1)}=6-a_{4,\,1}^{(1)}-a_{4,\,2}^{(1)}-a_{4,\,3}^{(1)}
\end{align*}
\end{example}
Using the definition of $\mathop{\mathrm{Tri}}_G(F)$ for arbitrary graphs $G$, we can extend the definitions of $\mathop{\mathrm{Sol}}_G(F)$ and $\mathop{\mathrm{LD}}(G,F)$ from simple graphs to arbitrary graphs $G$.
As in Proposition \ref{tapisflowpolytope}, for each $F\subseteq E(G\backslash 0)$ the polytope $\mathop{\mathrm{Poly}}(\mathop{\mathrm{Tri}}_G(F))$ is integrally equivalent to the flow polytope of a graph $\mathop{\mathrm{Gr}}(G)$, a straightforward generalization of Definition \ref{tagsimplefinalversion}. The proofs of Theorem \ref{isomorphism} and its corollaries then go through with minor changes. In particular, we have the following crucial result.
\begin{theorem}
\label{finalflows}
Let $G$ be a graph on $[0,n]$, $\rho$ be the map that takes a triangular array in any $\mathop{\mathrm{Sol}}_G(F)$ to its first column $\left(a_{n,\,1}^{(1)}, \ldots, a_{n,\, n}^{(1)}\right)$, and $\psi$ be the map that takes a flow on $\widetilde{G}\backslash \{s,0\}$ to the tuple of its values on the edges $\{(j,t):\, j\in[n] \}$. For $F\subseteq E(G\backslash 0)$, recall the netflow vector
\[\bm{b}_G^F=\left (\mathop{\mathrm{indeg}_G}(1)-\mathop{\mathrm{outdeg_F}}(1), \ldots , \, \mathop{\mathrm{indeg}_G}(n)-\mathop{\mathrm{outdeg_F}}(n), -\#E(G\backslash F) \right ).\]
Then for each $F\subseteq E(G\backslash 0)$,
\[\mathop{\mathrm{LD}}(G,F)=\rho\left(\mathop{\mathrm{Sol}_G}(F) \right)=\psi\left(\mathcal{F}_{\widetilde{G}\backslash\{s,0\}}\left(\bm{b}_G^F \right)\cap \mathbb{Z}^{\#E(\widetilde{G}\backslash\{s,0\})} \right) , \mbox{ so } \]
\begin{align*}
\mathop{\mathrm{InSeq}}\left(\mathcal{T}(G)\right)&=\bigcup_{F\subseteq E(G\backslash 0)}{\mathop{\mathrm{LD}}(G,F)}\\
&=\bigcup_{F\subseteq E(G\backslash 0)}{\rho\left(\mathop{\mathrm{Sol}_G}(F) \right)}\\
&=\bigcup_{F\subseteq E(G\backslash 0)}{\psi\left(\mathcal{F}_{\widetilde{G}\backslash\{s,0\}}\left(\bm{b}_G^F \right)\cap \mathbb{Z}^{\#E(\widetilde{G}\backslash\{s,0\})} \right)}
\end{align*}
\end{theorem}
\label{finalflowsproofedition}
In the proof of Theorem \ref{theoremA} below, it will be more convenient to use an equivalent formulation of Theorem \ref{finalflows}:
Instead of considering flows on $\widetilde{G}\backslash\{s,0\}$ with netflow vector $\bm{b}_G^F$, consider flows on $\widetilde{G}\backslash\{s\}$ with netflow vector $(0,\bm{b}_G^F)$, where
\[(0,\bm{b}_G^F)=\left (0,\mathop{\mathrm{indeg}_G}(1)-\mathop{\mathrm{outdeg_F}}(1), \ldots , \, \mathop{\mathrm{indeg}_G}(n)-\mathop{\mathrm{outdeg_F}}(n), -\#E(G\backslash F) \right ).\]
\medskip
Next, we use Theorem \ref{finalflows} to prove that for all graphs $G$ on $[0,n]$, $\mathop{\mathrm{LD}}(G)$ depends only on $G$ and not on the choice of reduction tree of $G$ as stated in Theorem \ref{theoremA}.
Before proceeding with the proof, we first recall the relevant notation introduced previously. For a graph $G$ on $[0,n]$, let $\mathcal{R}(G)$ be any reduction tree of $G$ and $\mathcal{T}(G)$ the specific reduction tree whose leaves are encoded by the arrays $\mathop{\mathrm{Sol}}_G(F)$ (constructed in Definition \ref{specialreductiontree}). Recall that $\mathop{\mathrm{InSeq}}(\mathcal{R}(G))$ denotes the multiset of left-degree sequences of the leaves of $\mathcal{R}(G)$. Since $\mathop{\mathrm{LD}}(G)$ was defined as the multiset of left-degree sequences of leaves in any reduction tree of $G$, to show this definition is valid it suffices to prove that $\mathop{\mathrm{InSeq}}(\mathcal{R}(G))=\mathop{\mathrm{InSeq}}(\mathcal{T}(G))$.
\medskip
\noindent \textit{Proof of Theorem \ref{theoremA}.}
We proceed by induction on the maximal depth of a reduction tree of $G$. For the base case, the only reduction tree possible is the single leaf $G$. For the induction, perform a single reduction on $G$ using fixed edges $r_1=(i,j)$ and $r_2=(j,k)$ with $i<j<k$ to get graphs $G_1$, $G_2$, and $G_3$, with notation as in (\ref{reducing}). Note that we are selecting particular edges $r_1$ and $r_2$ even if there are multiple edges $(i,j)$ or $(j,k)$. Let $r_3$ denote the new edge $(i,k)$ in $G_m$ for each $m \in [3]$. Let $\mathcal{R}(G_m)$ be the reduction tree of $G_m$, $m \in [3]$, induced from $\mathcal{R}(G)$ by restriction to the node labeled by $G_m$ and all of its descendants.
By the induction assumption, $\mathop{\mathrm{InSeq}}(\mathcal{R}(G_m))$ is exactly $\mathop{\mathrm{InSeq}}(\mathcal{T}(G_m))$, so
\[\mathop{\mathrm{InSeq}}(\mathcal{R}(G))=\bigcup_{m\in [3]}{\mathop{\mathrm{InSeq}}(\mathcal{R}(G_m))}=\bigcup_{m\in [3]}{\mathop{\mathrm{InSeq}}(\mathcal{T}(G_m))}. \]
Thus, we need to show that
\begin{align}
\label{T(G)union}
\bigcup_{m\in [3]}\mathop{\mathrm{InSeq}}(\mathcal{T}(G_m))=\mathop{\mathrm{InSeq}}(\mathcal{T}(G))
\end{align}
regardless of the choice of $r_1$ and $r_2$. Now, if $\rho$ is the map that takes an array to its first column, then Theorem \ref{finalflows} implies the disjoint union decompositions
\[\mathop{\mathrm{InSeq}}\left(\mathcal{T}(G) \right)=\bigcup_{F\subseteq E(G\backslash 0)}\rho\left(\mathop{\mathrm{Sol}_G}(F) \right),
\]
and for each $m\in[3]$,
\[\mathop{\mathrm{InSeq}}\left(\mathcal{T}(G_m) \right)=\bigcup_{F\subseteq E(G_m\backslash 0)}\rho\left(\mathop{\mathrm{Sol}_{G_m}}(F) \right).
\]
Thus, to prove (\ref{T(G)union}), it suffices to show
\begin{align}
\label{identification}
\bigcup_{F\subseteq E(G\backslash 0)}\rho\left(\mathop{\mathrm{Sol}_G}(F) \right)=\bigcup_{m\in [3]}{\bigcup_{F\subseteq E(G_m\backslash 0)}\rho\left(\mathop{\mathrm{Sol}_{G_m}}(F) \right)}.
\end{align}
To show (\ref{identification}), to each $F\subseteq E(G\backslash 0)$ we associate a tuple $(F_m)_{m \in I(F, r_1, r_2)}$ with $I(F, r_1, r_2)\subseteq [3]$ and $F_m\subseteq E(G_m\backslash 0)$ for $m\in [3]$, such that each subset of any $E(G_m\backslash 0)$ appears in exactly one tuple and, for each $F\subseteq E(G\backslash 0)$,
\[\rho\left(\mathop{\mathrm{Sol}_G}(F) \right)=\bigcup_{m \in I(F, r_1, r_2)}{\rho\left(\mathop{\mathrm{Sol}_{G_m}}(F_m) \right)}. \]
By Theorem \ref{finalflows}, we verify the equivalent condition
\[\psi\left(\mathcal{F}_{\widetilde{G}\backslash\{s\}}\left(0,\bm{b}_G^F \right)\cap \mathbb{Z}^{\#E(\widetilde{G}\backslash\{s\})} \right)=
\bigcup_{m \in I(F, r_1, r_2)}{\psi\left(\mathcal{F}_{\widetilde{G}_m\backslash\{s\}}\left(0,\bm{b}_{G_m}^F \right)\cap \mathbb{Z}^{\#E(\widetilde{G}_m\backslash\{s\})} \right)}. \]
To make the notation more compact, let $H=\widetilde{G}\backslash\{s\}$ and $H_m=\widetilde{G_m}\backslash\{s\}$ for $m\in [3]$. We proceed in several cases depending on $F$, $r_1$, and $r_2$. In each case, the argument is very similar to the proof of Proposition \ref{subdivisionlemma}.
\medskip
\noindent {\it I. Suppose that $r_1$ is not incident to vertex $0$}. The following four cases cover this situation.
\smallskip
\noindent \emph{Case 1:} $r_1, r_2\notin F$: Associate to $F$ the tuple $(F_1,F_2)$ with
\[F_1=F \mbox{ and } F_2=F.\]
Let $h$ be an integral flow on $H$ with netflow vector $(0,\bm{b}_G^F)$. Depending on the flow values $h(r_1)$ and $h(r_2)$, we define an integral flow on $H_1$ or $H_2$ with netflow $(0,\bm{b}_{G_m}^{F_m})$ having the same image under $\psi$.
\begin{itemize}
\item If $h(r_1)\geq h(r_2)$, define $h_1$ on $H_1$ with netflow $\bm{b}_{G_1}^{F_1}$ by
\begin{align*}
h_1(e)=
\begin{cases}
h(r_2) &\mbox{ if } e=r_3\\
h(r_1)-h(r_2) &\mbox{ if } e=r_1 \\
h(e) &\mbox{ otherwise }
\end{cases}
\end{align*}
\item If $h(r_1)< h(r_2)$, define $h_2$ on $H_2$ with netflow $\bm{b}_{G_2}^{F_2}$ by
\begin{align*}
h_2(e)=
\begin{cases}
h(r_1) &\mbox{ if } e=r_3\\
h(r_2)-h(r_1)-1 &\mbox{ if } e=r_2 \\
h(e) &\mbox{ otherwise }
\end{cases}
\end{align*}
\end{itemize}
For the inverse map, given integral flows $h_m$ on $H_m$ with netflow $\bm{b}_{G_m}^{F_m}$ for $m\in[2]$, define flows $h^{(m)}$ on $H$ by
\begin{align*}
h^{(1)}(e)=
\begin{cases}
h_1(r_1)+h_1(r_3) &\mbox{ if } e=r_1\\
h_1(r_3) &\mbox{ if } e=r_2 \\
h_1(e) &\mbox{ otherwise }
\end{cases}
\mbox{ and \hspace{2ex}}
h^{(2)}(e)=
\begin{cases}
h_2(r_3) &\mbox{ if } e=r_1\\
h_2(r_2)+h_2(r_3)+1 &\mbox{ if } e=r_2 \\
h_2(e) &\mbox{ otherwise }
\end{cases}
\end{align*}
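A direct check shows that these assignments are mutually inverse; for instance, $h^{(1)}(r_1) = h_1(r_1) + h_1(r_3) = \left( h(r_1) - h(r_2) \right) + h(r_2) = h(r_1)$ and $h^{(1)}(r_2) = h_1(r_3) = h(r_2)$, and similarly for $h^{(2)}$. Since only the values on $r_1$, $r_2$, and $r_3$ change, the image under $\psi$ is preserved.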
\medskip
\noindent \emph{Case 2:} $r_1\in F,$ $r_2\notin F$: Associate to $F$ the tuple $(F_1,F_2)$ with
\[F_1= F\backslash \{r_1\}\cup \{r_3\}\mbox{ and } F_2= F\backslash \{r_1\}\cup \{r_3\}. \]
Use the same maps on flows given in Case 1.
\medskip
\noindent \emph{Case 3:} $r_1\notin F,$ $r_2\in F$: Associate to $F$ the tuple $(F_1,F_2,F_3)$ with
\[F_1= F\backslash \{r_2\}\cup \{r_1\},\mbox{ } F_2=F,\mbox{ and } F_3=F\backslash \{r_2\}. \]
Let $h$ be an integral flow on $H$ with netflow vector $(0,\bm{b}_G^{F})$. Depending on the flow values $h(r_1)$ and $h(r_2)$, we define an integral flow on one of the $H_m$, $m\in [3]$, with netflow $(0,\bm{b}_{G_m}^{F_m})$ having the same image under $\psi$.
\begin{itemize}
\item If $h(r_1)> h(r_2)$, define $h_1$ on $H_1$ with netflow $\bm{b}_{G_1}^{F_1}$ by
\begin{align*}
h_1(e)=
\begin{cases}
h(r_2) &\mbox{ if } e=r_3\\
h(r_1)-h(r_2)-1 &\mbox{ if } e=r_1 \\
h(e) &\mbox{ otherwise }
\end{cases}
\end{align*}
\item If $h(r_1)< h(r_2)$, define $h_2$ on $H_2$ with netflow $\bm{b}_{G_2}^{F_2}$ by
\begin{align*}
h_2(e)=
\begin{cases}
h(r_1) &\mbox{ if } e=r_3\\
h(r_2)-h(r_1)-1 &\mbox{ if } e=r_2 \\
h(e) &\mbox{ otherwise }
\end{cases}
\end{align*}
\item If $h(r_1)= h(r_2)$, define $h_3$ on $H_3$ with netflow $\bm{b}_{G_3}^{F_3}$ by
\begin{align*}
h_3(e)=
\begin{cases}
h(r_1) &\mbox{ if } e=r_3\\
h(e) &\mbox{ otherwise }
\end{cases}
\end{align*}
\end{itemize}
Given integral flows $h_m$ on $H_m$ with netflows $\bm{b}_{G_m}^{F_m}$ for $m\in[3]$, construct the inverse map by defining flows $h^{(m)}$ on $H$ for $m\in[3]$. Let $h^{(2)}$ be the same as in Case 1, and define
\begin{align*}
h^{(1)}(e)=
\begin{cases}
h_1(r_1)+h_1(r_3)+1 &\mbox{ if } e=r_1\\
h_1(r_3) &\mbox{ if } e=r_2 \\
h_1(e) &\mbox{ otherwise }
\end{cases}
\mbox{ and \hspace{2ex}}
h^{(3)}(e)=
\begin{cases}
h_3(r_3) &\mbox{ if } e=r_1\\
h_3(r_3) &\mbox{ if } e=r_2 \\
h_3(e) &\mbox{ otherwise }
\end{cases}
\end{align*}
\medskip
\noindent \emph{Case 4:} $r_1,r_2\in F$: Associate to $F$ the tuple $(F_1,F_2,F_3)$ with
\[F_1=F\backslash \{r_2\}\cup \{r_3\},\mbox{ } F_2= F\backslash \{r_1\}\cup \{r_3\},\mbox{ and } F_3= F\backslash \{r_1,\,r_2\}\cup \{r_3\}. \]
Use the maps on flows given in Case 3.
\medskip
A straightforward check shows that every $F\subseteq E(G_m\backslash 0)$ for $m\in [3]$ is reached exactly once by {\it cases 1-4}.\\
\smallskip
\noindent {\it II. Suppose that $r_1$ is incident to vertex $0$}. The following two cases cover this situation.
\smallskip
\noindent \emph{Case 1':} $r_2\notin F$: Associate to $F$ the tuple $(F_1,F_2)$ with
\[F_1=F \mbox{ and } F_2= F. \]
Use the maps on flows given in Case 1.
\medskip
\noindent \emph{Case 2':} $r_2\in F$: Associate to $F$ the tuple $(F_2,F_3)$ with
\[F_2=F \mbox{ and } F_3=F\backslash \{r_2\}. \]
Use the maps on flows for $H_2$ and $H_3$ given in Case 3.
\smallskip
A straightforward check shows that every $F\subseteq E(G_m\backslash 0)$ for $m\in [3]$ is reached exactly once by {\it cases 1'-2'}.
\qed
\section*{Acknowledgements} We thank Bal\'azs Elek, Alex Fink and Allen Knutson for inspiring conversations.
|
2,869,038,155,061 | arxiv | \section*{Acknowledgment} I would like to thank Michele Arzano for his helpful comments
and for bringing \cite{Alsing:2004ig} to my attention. This research
was supported in part
by research projects N202 081 32/1844 and NN202318534 and
Polish Ministry of Science and Higher Education grant 182/N-QGG/2008/0
|
2,869,038,155,062 | arxiv | \section{Introduction}
Microtubule (MT) dynamics is essential for many cellular processes, such as the
positioning and separation of chromosomes in mitosis \cite{McIntosh2002}, or
maintenance of cell polarity and cell shape \cite{Siegrist2007}.
An important feature, which enables MTs to exert pulling and pushing forces in
these cellular processes, is their dynamic instability, which is the stochastic
switching of MTs between states of growth by polymerization and states
of fast shrinkage by depolymerization \cite{Mitchison1984}.
Switching from growth into shrinkage happens in catastrophe events, whose
mechanism and triggers are not completely understood on the molecular level,
but they are associated with a loss of the GTP-cap by hydrolysis within
the MT \cite{Carlier1984,Walker1991} (see Refs.\ \cite{Howard2009,VanHaren2019}
for reviews).
Hydrolysis is strongly
coupled to the mechanics of the MT, as
is clearly seen in the curling of MT protofilaments
into a \enquote{ram's horn} conformation after the catastrophe and during the
shrinking phase \cite{Mandelkow1991}.
The loss of the stabilizing GTP-cap triggers a release of binding energy and
stored mechanical energy in the tubular MT structure.
Therefore, shrinkage following a catastrophe is more than simple
depolymerization of the MT;
it is rather a rupture or crack propagation process
between protofilaments, which releases chemical
and mechanical energy while it propagates towards the minus end.
The energy released during shrinking has biological functions and
can be employed to exert
pulling forces onto kinetochores during separation of MTs in mitosis
\cite{McIntosh2010}.
The curling of hydrolyzed protofilaments into a ram's horn structure
shows that
GDP-tubulin dimers have a bent conformation
\cite{Mandelkow1991,Mueller-Reichert1998,Downing1998,Nogales2006}.
Tubulin dimers assembled within the MT body are
in a straight conformation, on the other hand \cite{Nogales1999}.
Hydrolysis of tubulin dimers embedded in a straight MT causes mechanical
strains in the tubular structure because the surrounding MT lattice
prevents these GDP-tubulin dimers from assuming their preferred bent
conformation.
This mechanical strain
is released in a catastrophe via the rupture of lateral bonds.
There are different models explaining
how the mechanical strain is increased by hydrolysis or
how lateral bonds are weakened by hydrolysis such that the strained MT
becomes more prone to catastrophes.
The first cryo-electron microscopy (EM) studies showed
blunt tips for growing MTs but curved tips for shrinking MTs
\cite{Mandelkow1991},
suggesting that GTP-protofilaments are straight while
GDP-protofilaments are curved.
Later evidence from cryo-EM showed
that GTP-protofilaments are also curved, but significantly
less
than GDP-protofilaments \cite{Mueller-Reichert1998}.
The \emph{allosteric model} is based on the assumption that
hydrolysis of a tubulin dimer changes the dimer
conformation from a rather
straight GTP-conformation to a bent GDP-conformation.
This model was employed in almost all previous MT simulation models
that consider MT mechanics
\cite{Molodtsov2005,VanBuren2005,Coombes2013,Mueller2014,Zakharov2015,Jain2015}.
The \emph{lattice model}, on the other hand,
is based on evidence from X-ray and cryo-EM structures
\cite{Buey2006,Rice2008,Alushin2014,Manka2018}
and simulations \cite{Ayoub2015,Fedorov2019}
that
also GTP-tubulin dimers assume a bent conformation and
that
hydrolysis rather affects the lateral and longitudinal dimer
interaction energies.
It is supported by recent experimental observations that both
growing and shrinking MTs have
bent protofilament ends \cite{McIntosh2018}.
Ref.\ \cite{McIntosh2018} also
presents first simulation results
with a lattice model.
But there is also recent evidence from
molecular dynamics (MD) simulation pointing in a different
direction and supporting an intermediate model, where hydrolysis
affects interactions but also
lowers GDP-tubulin flexibility \cite{Igaev2018}.
If hydrolysis weakens lateral interaction energies,
it makes the structure more prone to a catastrophe.
While in the allosteric model, the mechanical strain in the structure
is increased by hydrolysis, in the lattice model, the mechanical
strain that the MT structure can tolerate is reduced by hydrolysis.
In both models, the result
is an increased propensity for lateral bonds to rupture. Therefore,
chemomechanical MT models with explicit bond rupture are
a necessity to reproduce catastrophes.
We build on existing modelling approaches based on the allosteric model
\cite{Molodtsov2005,VanBuren2005,Coombes2013,Mueller2014,Zakharov2015,Jain2015}
and include lateral bond rupture as explicit stochastic events with
force-dependent rates, which
can give important clues about how catastrophes
are triggered in the MT structure.
The influence of tubulin dimer hydrolysis onto the mechanics of the MT lattice
suggests that, vice versa, mechanical forces and torques acting on tubulin
dimers via strains in the tubular structure could also affect hydrolysis rates,
an effect which has been explored only in Ref.\ \cite{Mueller2014} previously.
Although this interplay is plausible from a mechanochemistry
point of view, experimental
verification on the dimer level is extremely difficult and
not yet possible. We can, however, employ chemomechanical
MT models to explore and suggest possible implications for
the dynamic instability.
The coupling between chemical events -- namely polymerization events,
dimer hydrolysis, bond rupture -- and
mechanical forces because of
conformational changes due to these
chemical events, is a characteristic of MTs and
requires chemomechanical MT models on the dimer level in order to develop a
microscopic understanding of their dynamic instability including catastrophe and
rescue events \cite{Zakharov2016}.
In this respect, chemomechanical models go beyond a phenomenological description
of MT dynamics in a four-parameter model based on growth and shrinking
velocities and phenomenological catastrophe and rescue
rates \cite{Dogterom1993}.
The challenge for microscopic chemomechanical models is to include all chemical
events as stochastic processes, to perform conformational relaxation governed by
MT mechanics following each chemical event, and, eventually, to also include the
feedback of mechanical forces within the MT onto reaction rates of the chemical
events.
We present a stochastic chemomechanical MT model on the dimer level.
Our model includes
(i) a mechanical model of the MT containing lateral elastic bonds between
tubulin monomers in neighboring protofilaments and a harmonic bending energy
between tubulin monomers with a nonzero equilibrium angle after hydrolysis
(allosteric model),
(ii) stochastic addition and removal of tubulin dimers,
(iii) explicit stochastic lateral bond rupture and bond formation; the bond
rupture rate is coupled to the mechanical stress state of the bond and thus via
elastic interactions within the MT lattice also to the other bonds,
(iv) stochastic hydrolysis of dimers with a rate that can also couple to
the mechanical bending stress in the dimer.
The stochastic kinetics (ii)-(iv) is handled by a Gillespie algorithm and after
each stochastic event, a mechanical energy minimization mimicking the
relaxational dynamics of the structure is applied to the MT.
In order to parameterize our model, we will focus on the simplified scenarios of
a growing MT consisting of GTP-tubulin only and a shrinking MT consisting of
GDP-tubulin only.
In both cases, we can neglect hydrolysis (iv);
in the growing GTP-MT, we can also neglect mechanics, which is generated
by hydrolysis.
In the presence of mechanics and hydrolysis, repeated catastrophe and rescue
events are obtained and will be described and analyzed.
One problem in chemomechanical MT models is the computational effort associated
with the mechanical relaxation.
We investigate in detail, which level of computational effort is necessary in
our model to obtain a sufficient mechanical relaxation following each chemical
event, on the one hand, and which simplifications can be taken to assure a
finite simulation time for growing MTs, on the other hand.
This will allow us to
simulate arbitrarily long growing MTs at fixed computational speed.
Our chemomechanical model has to be compared to previous modelling approaches,
which include the mechanics of the MT
\cite{Molodtsov2005,VanBuren2005,Coombes2013,Mueller2014,Zakharov2015,Jain2015}:
\begin{itemize}
\item Refs.\ \cite{VanBuren2005,Coombes2013} employ the allosteric
model for dimer bending
and include stochastic addition and removal of dimers.
Hydrolysis is random.
Mechanical energy minimization is performed only
locally on randomly selected dimers.
Lateral bond rupture is not implemented as explicit
stochastic process but only included using a threshold
energy criterion.
\item The models in Refs.\ \cite{Molodtsov2005,Jain2015}
focus on mechanics and do not include dimer addition and removal.
They are
also based on the allosteric model but consider
fixed hydrolysis states.
In Ref.\ \cite{Molodtsov2005},
the lateral bond energy landscape is harmonic around a minimum
but includes an energy barrier and a dissociated, i.e., ruptured
state.
Global energy minimization gives the final state of the static
structure.
\item In Ref.\ \cite{Zakharov2015}, the stochastic kinetics is added
to a mechanical model similar to \cite{Molodtsov2005}.
Here, the mechanical relaxation and lateral bond rupture is
performed using Brownian dynamics (which include thermal
fluctuations) with small time steps (equivalent to
$2\times 10^7$ minimization steps), which is only applied to 300
tubulin dimers at the plus end.
Stochastic addition of dimers and removal by rupture of lateral
and longitudinal bonds is included.
The rupture of lateral bonds happens by
activation over the bond energy barrier, the longitudinal
rupture by a threshold criterion.
Hydrolysis is random and stochastic with a rate that is
independent of mechanics.
\item Ref.\ \cite{Mueller2014} is also based on the allosteric model.
Lateral bond rupture is possible using a threshold criterion.
Mechanical energy minimization was performed globally.
There is no addition
or removal of dimers, but hydrolysis is included.
In a first attempt to include a coupling of the hydrolysis rate
to mechanical forces, the hydrolysis kinetics remained
deterministic, however, with the most probable hydrolysis event
determined by mechanical forces.
In the present paper, we will add addition and removal of dimers
and a fully stochastic hydrolysis kinetics.
\end{itemize}
Our chemomechanical model has also to be compared to previous purely chemical
modelling approaches on the dimer level but without explicit mechanical model
\cite{VanBuren2002,Piette2009,Margolin2011,Margolin2012,Li2014}.
These models include attachment and detachment of tubulin dimers; some of these
models \cite{Margolin2011,Margolin2012,Li2014} also include lateral bond rupture
and are thus able to produce crack-like catastrophe events.
Crack-like catastrophe events are, however, triggered by adjusting chemical
rupture rates rather than including MT mechanics.
The model by Margolin \emph{et al.}~\cite{Margolin2012} has successfully reproduced
features of the experimentally observed MT dynamic instability
\cite{Mahserejian2019} but relies on a heuristic tuning of simulation
parameters.
\section{Materials and Methods}
\subsection{Microtubule structure and energy}
\label{sec:3d_model}
Our MT model is formulated on the dimer level.
The base units of the model are alpha- and beta-tubulin monomers.
In our model, we represent each monomer as cylinder with radius
$r_\text{t} = \SI{2}{\nano\meter}$ and height $\ell_\text{t} = \SI{4}{\nano\meter}$
(see \autoref{tab:geometric_parameters}).
Alpha- and beta-tubulin monomers form unbreakable tubulin dimers, which are
arranged head-to-tail into protofilaments.
13 protofilaments form a 13$\_$3 MT, i.e., a MT with
a helical shift of 3 tubulin monomer lengths per turn.
\begin{table}[h!]
\centering
\caption{
Geometric parameters of our MT model.
}
\label{tab:geometric_parameters}
\begin{tabular}{@{}lll}
\hline
Parameter & Symbol & Value \\
\hline
mean MT radius & $R_\text{MT}$ & \SI{10.5}{\nano\meter} \\
\hline
tubulin monomer radius & $r_\text{t}$ & \SI{2}{\nano\meter} \\
\hline
tubulin monomer length & $\ell_\text{t}$ & \SI{4}{\nano\meter} \\
\hline
helical shift between protofilaments & $\Delta z_\text{h}$ & \SI{0.92}{\nano\meter} \\
\hline
rest length of lateral springs & $s_0$ & \SI{1.47}{\nano\meter} \\
\hline
straight equilibrium bending angle & $\Delta \theta_0$ & \SI{0}{\degree} \\
\hline
curved equilibrium bending angle & $\Delta \theta_0$ & \SI{11}{\degree} \\
\hline
\end{tabular}
\end{table}
For the remainder of this paper, we will use triples $(p,d,t)$ to address
specific tubulin monomers within the MT with
$p \in \{ 1, 2, \dots, 13 \}$ as protofilament
number, $d \in \{ 1, 2, \dots, d(p) \}$ as tubulin layer (with
$d = 1$ denoting the minus end and $d = d(p)$ denoting the plus end of
the protofilament $p$), and
$t \in \{ 1, 2 \}$ denoting the tubulin monomer within the
dimer with $t = 1$ for the alpha- and $t = 2$ for the beta-tubulin
monomers.
For simplicity, we assume periodicity in $p$ (i.e., $p = 0 \equiv 13$ and
$p = 14 \equiv 1$) and combined periodicity in $d$ and $t$ (i.e.,
$(p,d,3) \equiv (p,d+1,1)$ and $(p,d,0) \equiv (p,d-1,2)$).
We will also generally refer to the lateral neighbors of tubulin monomer
$(p,d,t)$ using $(p \pm 1,d,t)$ even though at the seam, lateral neighbors
differ in all three indices.
The MT is straight and oriented along the $z$-axis with the positive
$z$-direction pointing to the plus end.
Vectors $\vec{m}(p,d,t)$ and $\vec{p}(p,d,t)$
point to the lower (minus end) and upper (plus end)
circular bases of the tubulin monomer $(p,d,t)$.
The direction vector
\begin{equation}
\vec{d}(p,d,t)
= \vec{p}(p,d,t) - \vec{m}(p,d,t)
= \ell_\text{t} \begin{pmatrix}
\cos \phi(p) \sin \theta(p,d,t) \\
- \sin \phi(p) \sin \theta(p,d,t) \\
\cos \theta(p,d,t)
\end{pmatrix}
\end{equation}
with length $\ell_\text{t}= \SI{4}{\nano\meter}$ points from $\vec{m}(p,d,t)$
to $\vec{p}(p,d,t)$ and is
specified using spherical coordinates, i.e., azimuthal and
polar angles, see \autoref{fig:model_geometry}(A).
The polar angle $\theta(p,d,t)$
is the only degree of freedom of each
monomer, because we assume that monomers can only be displaced in radial
direction, i.e., all azimuthal angles are fixed to
$\phi(p) = 2 \pi (p - 1) / 13$.
As both alpha- and beta-tubulin have their polar angles as a degree of freedom,
the model supports intra- and inter-dimer curling \cite{Wang2005}.
\begin{figure}[!ht]
\centering
\includegraphics{figure1-crop.pdf}
\caption{
(A) Schematic illustration of the different vectors with the
origin $O$. (The vertical gaps between tubulin cylinders are for
illustration purposes only.) (B) Bending angles between the
tubulin monomer direction vectors.
}
\label{fig:model_geometry}
\end{figure}
At the minus end of the MT, each protofilament $p$ starts with
an alpha-tubulin monomer; these monomers are arranged in a circle with
mean MT radius $R_\text{MT} = \SI{10.5}{\nano\meter}$ and with an
offset $z(p,1,1)=3 \ell_\text{t}(p - 1) / 13$ in $z$-direction, such that
the seam is
between the 13th and the 1st protofilament.
The protofilament length that will be used to
calculate the growth and shrinkage
velocities is the maximum $z$-coordinate $\ell_\text{max}(p)$ of all tubulin
monomers within the protofilament (see Supplementary Material
for more details).
The MT length is given by the average
\begin{equation}
\ell_\text{MT}
= \frac{1}{13} \sum_{p = 1}^{13} \ell_\text{max}(p) .
\label{eq:average_microtubule_length}
\end{equation}
Every tubulin monomer has four interaction points: two in longitudinal direction
and two in lateral direction.
The longitudinal bond between alpha- and beta-tubulin monomers of the same dimer
is considered unbreakable but the orientation of this junction can change via
the beta-tubulin's polar angle $\theta(p,d,2)$.
In contrast, the longitudinal bond between adjacent tubulin monomers of
different dimers can break and is modeled via the bond energy $\Delta G_\text{long}^{0*}$ (where
the \enquote{0} refers to it being a standard energy \cite{VanBuren2004} and the
asterisk to the fact that it also includes the entropic cost of
\enquote{immobilization} \cite{VanBuren2002}).
The lateral interaction points are located at the edge of the upper base (see
\autoref{fig:model_geometry}(A)).
If there is a lateral bond between tubulin monomer $(p,d,t)$ and its neighbor in
the $(p+1)$-th protofilament, the bond is modeled as a harmonic spring with base
energy $\Delta G_\text{lat}^{0}$:
\begin{equation}
E_\text{lat}(p,d,t)
= \Delta G_\text{lat}^{0} + \frac{1}{2} k_\text{lat} \left( | \vec{s}(p,d,t) | - s_0 \right)^2
\end{equation}
with the spring constant $k_\text{lat}$ of the bond and
the vector $\vec{s}(p,d,t)$
connecting the lateral interaction points;
$s_0 \simeq \SI{1.47}{\nano\meter}$ is the
rest length of the spring (see \cite{Mueller2014} and also consider the helical
shift between two neighboring tubulin monomers of $3 \ell_\text{t} / 13$).
Lateral bonds at the seam are assumed to have the same mechanical
properties as other lateral bonds, based on evidence
that they do not constitute a weaker bond \cite{Alushin2014,Harris2018}.
Additionally, there is a lateral repulsion term between neighboring tubulin
monomers (regardless of whether they are bonded or not) to ensure a cylindrical
form \cite{Mueller2014}:
\begin{equation}
E_\text{rep}(p,d,t)
= k_\text{rep} \left( | \vec{p}(p,d,t) - \vec{p}(p+1,d,t) |
- 2 r_\text{t} \right)^{-12}.
\end{equation}
The bending of monomer junctions is described by a harmonic
potential with bending constant $\kappa$:
\begin{equation}
E_\text{bend}(p,d,t)
= \frac{1}{2} \kappa \left( \Delta \theta(p,d,t)
- \Delta \theta_0(p,d,t) \right)^2 .
\label{eq:Ecurl}
\end{equation}
The bending angle $\Delta \theta(p,d,t) = \theta(p,d,t) - \theta(p,d,t-1)$
(see \autoref{fig:model_geometry}(B)) is calculated with
the neighboring monomer in
the minus direction (using the periodicity convention in $d$ and $t$,
$(p,d,0) \equiv (p,d-1,2)$), and $\Delta \theta_0(p,d,t)$ is its equilibrium
value.
For hydrolyzed beta-tubulin monomers and for alpha-tubulin monomers on top of a
hydrolyzed beta-tubulin (and for the first alpha-tubulin monomers of each
protofilament if the beta-tubulin in the same dimer is hydrolyzed), we use a
rest angle $\Delta \theta_0(p,d,t) = \SI{11}{\degree}$ in order to reproduce the
experimentally measured radius of curvature of \SI{21}{\nano\meter}
corresponding to an angle of \SI{22}{\degree} per dimer for a GDP-protofilament
curling into the ram's horn configuration
\cite{Mueller-Reichert1998,ElieCaille2007}.
Otherwise (for an unhydrolyzed beta-tubulin monomer or an alpha-tubulin monomer
on top of an unhydrolyzed beta-tubulin monomer), we assume a straight
equilibrium configuration with $\Delta \theta_0(p,d,t) = \SI{0}{\degree}$.
This choice of rest angles implements the allosteric model, where
GTP-hydrolysis leads to bending of tubulin dimers.
Our mechanical MT model is defined by the total energy
\begin{equation}
E_\text{MT}
= \sum_{p = 1}^{13} \sum_{d = 1}^{d(p)} \left( \Delta G_\text{long}^{0*}
+ \sum_{t = 1}^2 \Bigl[ E_\text{lat}(p,d,t) + E_\text{rep}(p,d,t)
+ E_\text{bend}(p,d,t) \Bigr] \right),
\label{eq:total_microtubule_energy}
\end{equation}
where $E_\text{lat}(p,d,t)$ only contributes if there is a lateral bond between
tubulin monomers $(p,d,t)$ and $(p+1,d,t)$ and $E_\text{rep}(p,d,t)$ only contributes
if tubulin monomer $(p,d,t)$ has a lateral partner $(p+1,d,t)$.
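For concreteness, the bending term (\ref{eq:Ecurl}) along a single protofilament can be evaluated as in the following minimal sketch (our own Python illustration, not published simulation code; variable names are ours, and the junction of the first monomer at the minus end is omitted for simplicity).
\begin{verbatim}
import numpy as np

KAPPA = 149.0                # bending constant in kBT per rad^2 (see Table 2)
THETA0 = np.deg2rad(11.0)    # curved rest angle per monomer junction

def protofilament_bending_energy(theta, beta_is_gdp):
    # theta: polar angles of the monomers of one protofilament, ordered
    #        alpha, beta, alpha, beta, ... from the minus end.
    # beta_is_gdp[k]: True if the beta-tubulin of dimer k is hydrolyzed.
    E = 0.0
    for j in range(1, len(theta)):
        # The junction at monomer j is curved iff the governing beta-tubulin
        # (that of dimer (j-1)//2) is hydrolyzed: this covers both a GDP
        # beta-tubulin itself and the alpha-tubulin sitting on top of it.
        theta0 = THETA0 if beta_is_gdp[(j - 1) // 2] else 0.0
        E += 0.5 * KAPPA * (theta[j] - theta[j - 1] - theta0) ** 2
    return E

# A straight GDP-protofilament of 3 dimers: 5 junctions, each frustrated
# by 11 degrees, store about 13.7 kBT of bending energy.
print(protofilament_bending_energy(np.zeros(6), [True, True, True]))
\end{verbatim}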
There are four free parameters in our mechanical MT model (see
\autoref{tab:parameters}):
the longitudinal bond energy $\Delta G_\text{long}^{0*}$, the lateral bond energy $\Delta G_\text{lat}^{0}$, the
lateral spring constant $k_\text{lat}$, and the bending constant $\kappa$.
For the repulsion constant $k_\text{rep}$, we use the same value
$k_\text{rep} = \SI{e-6}{\square\radian \nano\meter\tothe{12}} \kappa$
that was found previously to ensure the overall cylindrical shape of the MT
while contributing only a small portion to the MT energy \cite{Mueller2014}.
\begin{table}[h!]
\centering
\caption{
Free parameters of our MT model and the \enquote{standard set}
of their values that we will focus on in the rest of the paper.
}
\label{tab:parameters}
\begin{tabular}{@{}lll}
\hline
Parameter & Symbol & standard set of values \\
\hline
longitudinal bond energy & $\Delta G_\text{long}^{0*}$ & \SI{-9.3}{\ensuremath{\mathit{k}_\text{B} \mathit{T}}} \\
\hline
lateral bond energy & $\Delta G_\text{lat}^{0}$ & \SI{-1.58}{\ensuremath{\mathit{k}_\text{B} \mathit{T}}} \\
\hline
lateral spring constant & $k_\text{lat}$ &
\SI{100}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per \nano\meter\squared} \\
\hline
bending constant & $\kappa$ &
\SI{149}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per \radian\squared} \\
\hline
pseudo-first-order polymerization rate & $k_+$ &
\SI[per-mode=reciprocal]{4}{\per\micro\molar \per\second} \\
\hline
lateral bond formation attempt rate & $k_\text{att}$ &
\SI[per-mode=reciprocal]{258}{\per\second} \\
\hline
constant hydrolysis rate & $k_\text{hydr}$ &
\SIrange[per-mode=reciprocal]{0.1}{0.5}{\per\second} \\
\hline
base hydrolysis rate & $k_\text{hydr}^0$ &
\SIrange[per-mode=reciprocal]{1}{5}{\per\second} \\
\hline
\end{tabular}
\end{table}
In the simulation model, we do not use this mechanical energy to calculate
forces for a microscopic dynamics such as Brownian dynamics on the dimer level
(as opposed to \cite{Zakharov2015}).
We rather assume that mechanical relaxation dynamics is fast compared to
chemical changes in the MT due to tubulin attachment and detachment, bond
rupture and formation, or hydrolysis.
The slowest mechanical process is relaxation of bending modes
of protofilaments governed by small restoring bending moments.
The basic time scale for this process can be estimated as
$\tau \sim \eta \ell_\text{t}^3/\kappa$ \cite{Kroy1997}, where $\eta \sim
\SI{e-3}{\pascal\second}$ is the
viscosity of water. This gives
$\tau \sim \SI{e-10}{\second}$, which is orders of magnitude smaller
than typical time scales of seconds for chemical events. Therefore,
even longer protofilaments relax fast compared to chemical
changes.
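As an explicit check of this estimate: with $\eta \sim \SI{e-3}{\pascal\second}$, $\ell_\text{t} = \SI{4}{\nano\meter}$, and $\kappa = \SI{149}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per\radian\squared} \approx \SI{6.1e-19}{\joule}$ at room temperature, we obtain $\tau \sim \eta \ell_\text{t}^3 / \kappa \approx \SI{e-3}{\pascal\second} \times \SI{6.4e-26}{\cubic\meter} / \SI{6.1e-19}{\joule} \approx \SI{e-10}{\second}$.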
There is additional evidence from Brownian dynamics
that bending mode relaxation is
also much faster than immobilization in cryo-EM \cite{Ulyanov2021}.
Therefore, we perform a quasi-instantaneous energy minimization
of \eqref{eq:total_microtubule_energy} between these chemical simulation steps.
This is the computationally more efficient strategy to achieve mechanical
relaxation.
The rates of all chemical simulation events themselves determine the dynamics of
the MT and are handled by a Gillespie algorithm as explained in more detail
below.
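For completeness, one step of such a Gillespie scheme can be sketched as follows (a generic Python illustration of the standard algorithm, not the actual simulation code; the event labels in the comment are hypothetical): draw an exponentially distributed waiting time from the total rate and select one event with probability proportional to its rate.
\begin{verbatim}
import random

def gillespie_step(rates):
    # rates: dict mapping each currently possible event to its rate (1/s).
    # Returns the chosen event and the elapsed waiting time.
    total = sum(rates.values())
    assert total > 0, "no event possible"
    dt = random.expovariate(total)       # waiting time ~ Exp(total)
    r = random.uniform(0.0, total)       # event chosen with prob. rate/total
    acc = 0.0
    for event, k in rates.items():
        acc += k
        if r <= acc:
            return event, dt
    return event, dt                     # guard against round-off

# e.g. rates = {("attach", p): k_on, ("rupture", p, d, t): k_rup, ...}
\end{verbatim}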
\subsection{Chemical simulation events}
\label{sec:simulation_events}
To simulate the dynamics of MTs, we include attachment of individual GTP-tubulin
dimers and detachment of (laterally unbonded) tubulin dimers or whole (laterally
unbonded) protofilament segments at
the plus end, as well as lateral bond rupture and formation, and hydrolysis of
tubulin dimers as stochastic chemical events into the simulation;
\autoref{fig:simulation_events}(A) summarizes the different possible
events and the associated rates.
\begin{figure}[!ht]
\centering
\includegraphics{figure2-crop}
\caption{
(A) Schematic illustration of the different simulation events
with their rates.
Dashed lateral bonds can be formed with rate $k_\text{form}$, thin
solid lateral bonds can rupture with rate $k_\text{rup}$, and thick
bonds cannot rupture.
\enquote{T} and \enquote{D} correspond to the hydrolysis state
of beta-tubulin of the dimers.
(B) If the black tubulin dimer in layer $d$ is affected by an
event and $d_\text{cutoff} = 2$ was used, all of the gray (and the
black) tubulin dimers are used for energy minimization.
}
\label{fig:simulation_events}
\end{figure}
\subsubsection{Attachment and detachment}
At the plus end of each protofilament, a GTP-tubulin dimer can attach with an
on-rate
\begin{equation}
k_\text{on}
= k_+ c_\text{tub}
\label{eq:kon}
\end{equation}
where $k_+$ is the pseudo-first-order polymerization rate and $c_\text{tub}$ is the
concentration of free GTP-tubulin dimers.
The on-rate is assumed to be
independent of the hydrolysis state of the protofilament end.
For depolymerization, we assume that a tubulin dimer at the plus end can only
detach if it has no lateral bonds.
We also allow for detachment of whole protofilament segments starting from an
interior dimer ($d < d(p)$) if the whole segment has no lateral bonds.
Laterally unbounded dimers or segments can detach with a rate
\begin{equation}
k_\text{off}
= k_+ c_0 \exp \left( \Delta G_\text{long}^{0*} \right)
\label{eq:depolymerization_rate}
\end{equation}
as given by Kramers theory with the longitudinal standard bond energy $\Delta G_\text{long}^{0*}$
(including the entropic cost of \enquote{immobilization}) and the standard
concentration $c_0 = \SI{1}{\molar}$ \cite{VanBuren2004}.
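With the standard parameter set of \autoref{tab:parameters}, this gives $k_\text{off} = \SI[per-mode=reciprocal]{4}{\per\micro\molar\per\second} \times \SI{1}{\molar} \times e^{-9.3} \approx \SI[per-mode=reciprocal]{365}{\per\second}$.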
This approach differs from other models
\cite{VanBuren2002,VanBuren2005,Gardner2011}, where tubulin dimers can detach
regardless of whether they have lateral bonds or not.
In such models, if a tubulin dimer still has lateral bonds, its detachment rate
decreases exponentially with the energy of these bonds.
In our model, we rather include lateral bond rupture and formation as separate
stochastic events into the simulation (similarly to the purely chemical models
in \cite{Margolin2011,Margolin2012,Li2014}); bond rupture can then be followed
by detachment of laterally unbounded dimers or protofilament segments.
Bond rupture enables dimer detachment and is necessary prior to a catastrophe;
vice versa, bond reformation is necessary for a rescue event.
Therefore, it is essential to also include the process of bond formation
into the model.
Moreover, it has been observed in MD simulations in \cite{Kononova2014}
that lateral tubulin bonds can easily reform.
The restriction that only laterally unbonded dimers can detach also causes an
indirect increase of the effective off-rate if the last dimers of a
protofilament are hydrolyzed because this tends to create stretched bonds, which
rupture more easily.
\subsubsection{Zipper-like lateral bond rupture and bond formation}
We assume that bond rupture between protofilaments starts from the plus end and
proceeds by a rupture front monomer by monomer towards the minus end;
likewise, bonds can be reformed only monomer by monomer towards the
plus end in a zipper-like fashion.
As a result, there is always a rupture front between two neighboring
protofilaments such that all lateral bonds above the front toward the plus end
are ruptured and all lateral bonds below the front toward the minus end are
intact.
If tubulin monomer $(p,d,t-1)$ has a lateral bond with its neighbor in
protofilament $p+1$ but the tubulin monomer on top of it, $(p,d,t)$, has no
lateral bond with this neighbor, the rupture front can recede towards the plus
end, and tubulin monomer $(p,d,t)$ can form a bond with rate
\begin{equation}
k_\text{form}
= k_\text{att}
\end{equation}
with the attempt rate $k_\text{att}$.
Vice versa, if the bond at $(p,d,t)$ is intact and bond $(p,d,t+1)$ is broken,
the rupture front can advance towards the minus end by rupturing this bond with
a rate
\begin{equation}
k_\text{rup}
= k_\text{att} \exp \left( \Delta G_\text{lat}^{0} + \Delta G_\text{mech} \right)
\label{eq:krup}
\end{equation}
which contains a chemical bond energy $\Delta G_\text{lat}^{0}$ and a mechanical energy
$\Delta G_\text{mech}$, which accounts for the weakening of the lateral bond due to mechanical
strain in the bond and enters according to Bell theory
\cite{Bell1978,Evans1997}.
In our model, $\Delta G_\text{mech}$ is due to the stretching of the Hookean springs
representing the lateral bonds so that $\Delta G_\text{mech} = F_\text{lat} \ell_\text{rup}$, where
$F_\text{lat} = - \partial E_\text{lat} / \partial |\vec{s}(p,d,t)|$ is the force currently
acting on the lateral bond and $\ell_\text{rup}$ is the characteristic bond rupture
length.
We define $\ell_\text{rup}$ as the length increase of the lateral bond from
its rest length $s_0$ at which the stretching energy of the spring cancels the
bond energy:
\begin{equation}
\ell_\text{rup}
= \sqrt{\frac{- 2 \Delta G_\text{lat}^{0}}{k_\text{lat}}} .
\label{eq:lrup}
\end{equation}
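To make the interplay of \eqref{eq:krup} and \eqref{eq:lrup} concrete, a
minimal C++ sketch (hypothetical names; energies in units of
$\ensuremath{\mathit{k}_\text{B} \mathit{T}}$, lengths in \si{\nano\meter})
evaluates the rupture rate of a lateral bond at a given extension:
\begin{verbatim}
#include <cmath>

// Minimal sketch (hypothetical names): lateral bond rupture rate from
// Eqs. (krup) and (lrup). Energies in k_B T, lengths in nm, k_lat in
// k_B T/nm^2; "stretch" is the current bond extension |s| - s_0.
double k_rup(double k_att,     // attempt rate in 1/s
             double dG_lat,    // chemical lateral bond energy, < 0
             double k_lat,     // lateral spring constant
             double stretch) { // bond extension in nm
    double l_rup   = std::sqrt(-2.0 * dG_lat / k_lat); // Eq. (lrup)
    double F_lat   = k_lat * stretch;  // magnitude of Hookean bond force
    double dG_mech = F_lat * l_rup;    // Bell-theory weakening
    return k_att * std::exp(dG_lat + dG_mech);         // Eq. (krup)
}
// For dG_lat = -1.58 k_B T and k_lat = 100 k_B T/nm^2,
// l_rup is approximately 0.18 nm.
\end{verbatim}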
\subsubsection{Hydrolysis without and with mechanical feedback}
Lastly, GTP in beta-tubulin monomers can hydrolyze into GDP via a random (or
scalar) hydrolysis rule, meaning that almost every GTP-tubulin dimer in the MT
can hydrolyze with a fixed rate $k_\text{hydr}$
regardless of the hydrolysis state of its
longitudinal neighbor (a dependence on the neighbor's state would constitute a
vectorial hydrolysis rule).
The \enquote{almost} in the previous sentence refers to the finding that the
polymerization of the tubulin dimer $(p,d)$ and thus the formation of a
longitudinal bond between beta-tubulin $(p,d-1,2)$ and alpha-tubulin $(p,d,1)$
catalyzes the hydrolysis reaction in beta-tubulin $(p,d-1,2)$
\cite{Nogales1999}.
As a consequence, only GTP-tubulin dimers that ever had another tubulin dimer on
top of them can be hydrolyzed in our model.
We also consider the possibility
that hydrolysis is mechanochemically coupled to the bending
strain \cite{Mueller2014}.
Then, the hydrolysis rate is modulated
\begin{equation}
k_\text{hydr}(p,d)
= k_\text{hydr}^0 \exp \left( - \Delta E_\text{hydr}(p,d) \right)
\label{eq:mechanics_hydrolysis_rate}
\end{equation}
with a dimer-specific change $\Delta E_\text{hydr}(p,d)$ in the energy barrier height
of the hydrolysis reaction, which
depends on the bending state of dimer $(p,d)$.
Because this bending state also depends via lateral bonds on the bending states
in all neighboring dimers, and because the bending state of all neighboring
dimers strongly depends on their hydrolysis state, the hydrolysis dynamics
becomes effectively non-random but depends on the hydrolysis state of the
neighbors.
The basis for our assumption of a tubulin dimer-specific mechanochemical
hydrolysis rate is to
view the equilibrium bending angle $\Delta \theta_0$ of a dimer
as the reaction coordinate
for hydrolysis, which can be described by an energy profile
$F_\text{hydr}(\Delta \theta_0)$.
$F_\text{hydr}(\Delta \theta_0)$ has two local minima corresponding to the straight
conformation with $\Delta \theta_0 = \SI{0}{\degree}$ and the curved
conformation with $\Delta \theta_0 = \SI{11}{\degree}$ and a
rate-limiting energy barrier of unknown height $\Delta F_\text{hydr}^\text{barrier}$ in between.
We propose that hydrolysis of a tubulin dimer is eased if its
actual bending angle $\Delta \theta$ is closer to
the equilibrium angle $\Delta \theta_0= \SI{11}{\degree}$ in the hydrolyzed
state.
We model this dependency by adding a dimer-specific bending
energy contribution
$E_\text{hydr}(\Delta \theta_0)$ to $F_\text{hydr}(\Delta \theta_0)$,
which
changes the energy barrier height from $\Delta F_\text{hydr}^\text{barrier}$ to
$\Delta F_\text{hydr}^\text{barrier} + \Delta E_\text{hydr}(p,d)$, see \autoref{fig:mechanical_hydrolysis}.
$\Delta F_\text{hydr}^\text{barrier}$ can be
absorbed into the constant rate $k_\text{hydr}^0$
so that only $\Delta E_\text{hydr}(p,d)$ remains in the Arrhenius factor in
\eqref{eq:mechanics_hydrolysis_rate}.
\begin{figure}[ht!]
\centering
\includegraphics{figure3}
\caption{
Schematic hydrolysis energy
landscape with two local minima corresponding to the straight
conformation ($\Delta \theta_0 = \SI{0}{\degree}$) and the bent
conformation ($\Delta \theta_0 = \SI{11}{\degree}$) and an energy
barrier $\Delta F_\text{hydr}^\text{barrier}$ at $\Delta \theta_0 = \SI{5.5}{\degree}$
between them.
$\Delta F_\text{hydr}^\text{release}$ is the energy released by hydrolysis.
The dashed line represents the modified energy landscape due to
the dimer-dependent contribution $E_\text{hydr}(\Delta \theta_0)$.
}
\label{fig:mechanical_hydrolysis}
\end{figure}
To calculate the change in the energy barrier height $\Delta E_\text{hydr}(p,d)$, we
now consider
the total MT energy in \eqref{eq:total_microtubule_energy}
as a function of the hydrolysis
reaction coordinate $\Delta \theta_0$ while keeping all polar angles
$\{ \theta(p,d,t) \}$ fixed.
We
simply assume that the energy barrier is centered between the minima at
$\Delta \theta_0^\text{barrier} = \SI{5.5}{\degree}$, resulting in
\begin{equation}
\Delta E_\text{hydr}
= E_\text{MT}(\Delta \theta_0 = \SI{5.5}{\degree})
- E_\text{MT}(\Delta \theta_0 = \SI{0}{\degree}) .
\label{eq:DEhydr_abstract}
\end{equation}
Because hydrolysis of tubulin dimer $(p,d)$ affects the rest bending angles of
beta-tubulin monomer $(p,d,2)$ and alpha-tubulin monomer $(p,d+1,1)$
and the rest bending angles only affect the bending energies \eqref{eq:Ecurl},
we finally obtain
\begin{align}
\Delta E_\text{hydr}(p,d)
&= \frac{1}{2} \kappa \Bigl[ (\Delta \theta(p,d,2) - \SI{5.5}{\degree})^2
- \Delta \theta^2(p,d,2) + \nonumber\\
&\qquad\qquad
(\Delta \theta(p,d+1,1) - \SI{5.5}{\degree})^2
- \Delta \theta^2(p,d+1,1) \Bigr]\nonumber\\
&= \frac{1}{2} \kappa \Bigl[ -(\Delta \theta(p,d,2)+\Delta \theta(p,d+1,1))
\cdot \SI{11}{\degree} +2\cdot(\SI{5.5}{\degree})^2 \Bigr]
\label{eq:mechanics_hydrolysis_rate2}
\end{align}
so that only a local bending energy change has to be calculated.
As a result, tubulin monomers in the MT lattice with larger bending angles
$\Delta \theta(p,d,t)$ tend to hydrolyze preferentially.
For the terminal tubulin dimer of a protofilament $(p,d(p))$,
the $d+1$-term in
\eqref{eq:mechanics_hydrolysis_rate2} is missing because tubulin monomer
$(p,d(p)+1,1)$ does not exist.
This results in an overall smaller energy barrier and, thus, a higher
hydrolysis rate of the terminal tubulin dimer.
We also see that the base hydrolysis rate $k_\text{hydr}^0$ in
\eqref{eq:mechanics_hydrolysis_rate} is not the hydrolysis rate for a perfectly
straight MT ($\Delta \theta(p,d,t) = \SI{0}{\degree}$ for all tubulin monomers)
because there is
still the constant contribution $\kappa(\SI{5.5}{\degree})^2$ to the energy
barrier in \eqref{eq:mechanics_hydrolysis_rate2}
that reduces the hydrolysis
rate.
As these terms are proportional to the bending constant $\kappa$, which is
varied during parameter determination, we cannot
simply absorb them into the constant factor $k_\text{hydr}^0$.
We note that for almost all GTP-tubulin dimers in the
GDP-body of the MT, we will typically find \emph{negative} bending angles; these
dimers bend inward in order to allow the longitudinal GDP-dimer neighbors to
further bend outwards.
For such negative bending angles the hydrolysis rate is reduced according to
\eqref{eq:mechanics_hydrolysis_rate2}.
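A minimal C++ sketch (hypothetical names; angles in radians, $\kappa$ in units
of $\ensuremath{\mathit{k}_\text{B} \mathit{T}}$ per \si{\radian\squared}) of the
barrier change \eqref{eq:mechanics_hydrolysis_rate2} and the modulated rate
\eqref{eq:mechanics_hydrolysis_rate} could look as follows:
\begin{verbatim}
#include <cmath>

// Minimal sketch (hypothetical names): change of the hydrolysis energy
// barrier, Eq. (mechanics_hydrolysis_rate2). kappa in k_B T/rad^2;
// bending angles in rad; the fixed angles 11 and 5.5 deg are converted.
const double kPi = 3.14159265358979323846;
double deg2rad(double deg) { return deg * kPi / 180.0; }

double dE_hydr(double kappa,
               double dtheta_beta,    // bending angle of monomer (p,d,2)
               double dtheta_alpha) { // bending angle of monomer (p,d+1,1)
    const double full = deg2rad(11.0), half = deg2rad(5.5);
    return 0.5 * kappa * (-(dtheta_beta + dtheta_alpha) * full
                          + 2.0 * half * half);
}

// Modulated hydrolysis rate, Eq. (mechanics_hydrolysis_rate):
double k_hydr(double k_hydr0, double dE) { return k_hydr0 * std::exp(-dE); }
\end{verbatim}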
In addition to the previous four free parameters
from the MT energy,
the simulation events add three additional
free parameters: the pseudo-first-order polymerization rate $k_+$, the attempt
rate $k_\text{att}$, and the hydrolysis rate $k_\text{hydr}$ (or $k_\text{hydr}^0$).
In total, there are now seven free parameters, which are listed in
\autoref{tab:parameters}.
\subsection{Simulation and parameter determination}
\label{sec:simulation}
The actual MT simulation (implemented in C++) works as follows:
\begin{enumerate}
\item Initially, a MT with $N_\text{GDP}$ GDP-tubulin dimers
followed by $N_\text{GTP}$ GTP
tubulin dimers per protofilament is constructed with
$\theta(p,d,t) = \SI{0}{\degree}$ for all $(p,d,t)$.
\item Using the tubulin monomers' polar angles $\{ \theta(p,d,t) \}$,
the MT's actual initial configuration is determined by
minimizing its mechanical energy.
Details on the minimization procedure will be discussed in the
next section.
\item For all of the events described in the previous section, a list
of possible events is determined and based on their rates $k_i$,
a \enquote{tentative} event time $t_i$ is calculated using
Gillespie's first reaction method \cite{Gillespie1976}:
\begin{equation}
t_i= \frac{1}{k_i} \ln \frac{1}{r}
\end{equation}
where $r$ is a uniformly distributed random number from $0$ to
$1$.
The event $i$ with the shortest event time $t_i$ is executed and
the simulation time is increased by $t_i$; a minimal sketch of this
selection step is given after this list.
\item Assuming fast mechanical relaxation, the MT's energy is
minimized after every event.
\item The simulation terminates if a protofilament is shorter than
two tubulin dimers.%
\footnote{To calculate shrinkage velocities in shrinkage simulations via a
simple linear fit, it has proven to be easier to
stop simulations if a protofilament still contains one tubulin
dimer instead of zero tubulin dimers as the last tubulin dimer
requires more time to depolymerize creating a \enquote{tail} in
the length-versus-time plot.
This time increase is due to lateral springs being stretched
less because there is no additional tubulin dimer below the
terminal tubulin dimer that would exert an additional bending
moment.
In practice, for determining parameters and when running full
simulations, this first layer at the minus end is irrelevant and
could be regarded as a \enquote{seed} on which the MT grows.
}
Otherwise we go back to the third step to determine
the next event.
\end{enumerate}
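The event selection in the third step can be illustrated by a minimal C++
sketch of Gillespie's first reaction method; the \texttt{Event} structure and
function names are hypothetical and stand in for the actual event bookkeeping:
\begin{verbatim}
#include <cmath>
#include <cstddef>
#include <limits>
#include <random>
#include <vector>

// Minimal sketch (hypothetical names) of Gillespie's first reaction
// method: draw a tentative time t_i = ln(1/r)/k_i for every possible
// event and select the event with the smallest t_i.
struct Event { double rate; /* plus event-specific data */ };

std::size_t next_event(const std::vector<Event>& events,
                       std::mt19937& rng, double& dt) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::size_t best = 0;
    dt = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < events.size(); ++i) {
        double r  = uni(rng);                          // r in [0,1)
        double ti = std::log(1.0 / r) / events[i].rate;
        if (ti < dt) { dt = ti; best = i; }
    }
    return best; // caller executes events[best], advances time by dt
}
\end{verbatim}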
There is a general agreement between different experiments
\cite{Mitchison1984,Walker1988,OBrien1990,Drechsel1992,Trinczek1993,Chretien1995,Pedigo2002}
that the MT growth velocity $v_\text{gro}$ increases linearly with the tubulin dimer
concentration $c_\text{tub}$ and that the shrinkage velocity $v_\text{shr}$ is independent of
$c_\text{tub}$.
We will use the results by Walker \emph{et al}~\cite{Walker1988}, which were measured
for $c_\text{tub} \in [ \SI{7.7}{\micro\molar}, \SI{15.5}{\micro\molar} ]$,
\begin{align}
v_\text{gro}(c_\text{tub})
&= \SI{0.33 +- 0.01}{\micro\meter \per\minute \per\micro\molar} c_\text{tub}
- \SI{1.59 +- 0.50}{\micro\meter \per\minute} ,
\label{eq:walker_growth} \\
v_\text{shr}
&= \SI{-27 +-1}{\micro\meter \per\minute} ,
\label{eq:walker_shrinkage}
\end{align}
and lead to an individual critical concentration
$c_\text{tub,c} = \SI{1.59}{\micro\meter\per\minute} / \SI{0.33}{\micro\meter\per\minute\per\micro\molar} \simeq \SI{5}{\micro\molar}$
(below which $v_\text{gro} < 0$).
To determine the values of the model parameters, we use a \enquote{divide and
conquer} approach \cite{VanBuren2002,VanBuren2005}.
First, we consider MT growth, where mechanics are assumed not to play a
significant role as protofilaments are not curling outward so that $\Delta G_\text{mech} = 0$.
Thus, we use a GTP-only MT ($N_\text{GDP} = 0$) and set $k_\text{lat} = 0$ and $\kappa = 0$ so
that the only free parameters left are $k_+$, $\Delta G_\text{long}^{0*}$,
$\Delta G_\text{lat}^{0}$, and $k_\text{att}$.
The goal of these simulations is to reproduce the measured growth velocity in
\eqref{eq:walker_growth} as function of the free tubulin dimer concentration
$c_\text{tub}$.
Secondly, we consider MT shrinkage, where mechanics are now assumed to play a
significant role, i.e., $k_\text{lat} > 0$ and $\kappa > 0$.
For a shrinking MT, we use $N_\text{GTP} = 0$, $N_\text{GDP} > 0$, and the parameter values
already determined by the growth simulations to reproduce the
shrinkage velocity in \eqref{eq:walker_shrinkage}.
In both cases, hydrolysis is ignored.
A schematic overview of the entire
parameter determination procedure can be found in
Figure S7 in the Supplementary Material.
Comparing the number of free parameters with the amount of experimental data, we
can already anticipate that we will not be able to determine one set of fixed
parameter values but can only constrain some parameter values if
the remaining parameter values are set to (arbitrary but reasonable)
values.
We will discuss this issue in more detail in the conclusion.
\subsection{Energy minimization}
\label{sec:energy_minimization}
In previous three-dimensional models, different energy minimization approaches
have been used.
VanBuren \emph{et al}~\cite{VanBuren2005} used a local minimization approach in which
they randomly selected individual tubulin dimers and then only locally minimized
with respect to the parameters of this dimer.
On average, each tubulin dimer was visited three times for minimization.
Zakharov \emph{et al}~\cite{Zakharov2015} employed a completely different approach by
explicitly modelling the stochastic motion of tubulin monomers in space using
Brownian dynamics (applied to the first 300 tubulin dimers at the plus end).
They solve Langevin equations every \SI{2e-10}{\second} while using
\SI{e-3}{\second} as the time step for the events in their simulation resulting
in $\mathcal{O}(\num{e7})$ dynamics steps between actual events.
Using a parallel implementation run on a supercomputer, their simulation took
more than a day to simulate \SI{1}{\second} of MT dynamics.
There are drawbacks for both approaches: a local energy minimization scheme
might not come close enough to a mechanically relaxed configuration, whereas a
full Brownian dynamics simulation is computationally very costly.
In this paper, we employ a systematic mechanical energy
minimization between each stochastic chemical simulation event.
We try to achieve a better mechanical energy relaxation than
VanBuren \emph{et al}~\cite{VanBuren2005} with significantly fewer computational steps
than Zakharov \emph{et al}~\cite{Zakharov2015}.
In our simulation, we use the Broyden--Fletcher--Goldfarb--Shanno (BFGS)
algorithm, a quasi-Newton method, provided by the GNU Scientific Library (GSL)
\cite{GSL} to minimize the total mechanical MT energy in
\eqref{eq:total_microtubule_energy} as a function of the polar angles
$\{ \theta(p,d,t) \}$.
If each protofilament in the simulated MT contains $N_\text{GDP} + N_\text{GTP}$ tubulin
dimers, there are a total of $26 (N_\text{GDP} + N_\text{GTP})$ polar angles and thus the same
number of minimization parameters.
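The following minimal C++ sketch illustrates such a BFGS minimization with the
GSL; the callbacks \texttt{mt\_energy}, \texttt{mt\_gradient}, and
\texttt{mt\_energy\_gradient} are hypothetical placeholders for the evaluation
of \eqref{eq:total_microtubule_energy} and its gradient, and the step size and
tolerances are illustrative values:
\begin{verbatim}
#include <cstddef>
#include <gsl/gsl_multimin.h>

// Hypothetical callbacks evaluating E_MT and its gradient with respect
// to the polar angles {theta(p,d,t)} packed into a gsl_vector.
double mt_energy(const gsl_vector* x, void* params);
void   mt_gradient(const gsl_vector* x, void* params, gsl_vector* g);
void   mt_energy_gradient(const gsl_vector* x, void* params,
                          double* f, gsl_vector* g);

// Minimal sketch: BFGS minimization of the MT energy with the GSL.
void minimize_mt_energy(gsl_vector* angles, size_t n, void* mt_state) {
    gsl_multimin_function_fdf F;
    F.n = n; F.f = mt_energy; F.df = mt_gradient;
    F.fdf = mt_energy_gradient; F.params = mt_state;

    gsl_multimin_fdfminimizer* m = gsl_multimin_fdfminimizer_alloc(
        gsl_multimin_fdfminimizer_vector_bfgs2, n);
    gsl_multimin_fdfminimizer_set(m, &F, angles, 0.01, 0.1);

    int status;
    size_t iter = 0;
    do {
        status = gsl_multimin_fdfminimizer_iterate(m);
        if (status) break;  // line search cannot improve further
        status = gsl_multimin_test_gradient(m->gradient, 1e-3);
    } while (status == GSL_CONTINUE && ++iter < 10000);

    gsl_vector_memcpy(angles, m->x);  // relaxed configuration
    gsl_multimin_fdfminimizer_free(m);
}
\end{verbatim}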
In realistic simulations,
MTs can stay in the growing phase for a very long time,
resulting in an unbounded increase in the number of minimization parameters
that drastically slows down the simulation.
In essence, the average time for one minimization step increases with the MT
length in this scenario making long-running simulations impossible.
To overcome this limitation, we will explore two possibilities to avoid having
a MT length-dependent number of minimization parameters:
\begin{enumerate}
\item restricting the number of minimization steps per energy
minimization to a small value but still
considering all minimization parameters (this approach is
similar to the strategy in \cite{VanBuren2005}),
\item restricting the number of minimization parameters by only
considering the tip of the MT but not restricting the number of
minimization steps.
\end{enumerate}
While the first strategy is easy to understand and implement, the second
needs further specifications in terms of how we define the tip of
the MT here.
If a certain event is executed that affects tubulin dimer $(p,d)$,
we include all layers starting from
$\max(0, d - d_\text{cutoff})$ into the mechanical energy minimization,
where $d_\text{cutoff}$ is a cutoff \emph{layer} distance; this restriction
exploits the fact that mechanical interactions within the MT have a finite
range.
Below, we will compare these approaches of restricted minimization with respect
to accuracy and speed
and find that
we obtain accurate energy minimization at a high simulation
speed by using the second approach and restricting the number of
minimization parameters with $d_\text{cutoff}=10$.
We can compare with the approaches of
Zakharov \emph{et al}~\cite{Zakharov2015} and
VanBuren \emph{et al}~\cite{VanBuren2005} in terms of the average
number of minimization steps between chemical events.
Zakharov \emph{et al}~\cite{Zakharov2015} use $\mathcal{O}(\num{e7})$
Brownian dynamics
steps between events and restrict
the number of simulation parameters to 300 tubulin dimers at the plus end.
With $d_\text{cutoff}=10$
we minimize on average with respect to a comparable number of
150 tubulin dimers at the plus end.
To compare the efficiency, we consider a single
quasi-Newton minimization step in our simulation to be equivalent to one time
step of their Brownian dynamics
(if we ignore the random thermal fluctuations in their Langevin equations,
they are basically using a gradient descent method).
We compare the event time $t_i$ divided by the number of
minimization steps after the execution of that event
to their Brownian dynamics time step of \SI{2e-10}{\second}.
For shrinking MTs, one minimization step corresponds to
$\mathcal{O}(\SI{e-5}{\second})$ of simulated time after polymerization events,
$\mathcal{O}(\SI{e-4}{\second})$ after depolymerization events, and
$\mathcal{O}(\SI{e-7}{\second})$ after lateral bond
events;
all of these time steps are orders of magnitude larger
than \SI{2e-10}{\second} and, thus, the simulation proceeds orders of
magnitude faster, while we still achieve an accurate energy minimization.
As a comparison with the \SI{1}{\second} of MT dynamics simulated in
more than a day in a parallel computation in Ref.\ \cite{Zakharov2015}, we
generally do not require more than a few hours for \SI{1}{\minute} of MT
dynamics (for a constant hydrolysis rate) using just a single CPU core.
VanBuren \emph{et al}~\cite{VanBuren2005} apply a local minimization procedure
and restrict minimization to, on average,
three minimizations
with respect to the parameters of each dimer.
Because one step of their algorithm minimizes with respect
to the parameters of a single tubulin dimer, a comparison to
our quasi-Newton minimization steps which minimize the MT energy
with respect to the parameters of, on average,
$\mathcal{O}(150)$ tubulin dimers
is not straightforward.
In addition, VanBuren \emph{et al}'s model also contains longitudinal springs so that
outward bending of single tubulin dimers as a consequence of local minimization
can be compensated by stretching the next longitudinal spring.
As our model does not contain such longitudinal springs, bending one tubulin
dimer causes the whole protofilament part above it to also bend
outwards, creating an effectively non-local, far-reaching interaction.
Consequently, we cannot implement a local minimization procedure
for comparison.
To make a qualitative comparison between the two approaches,
we assume that one minimization step of our BFGS algorithm, which
acts on average on $300$ parameters, i.e.,
$150$ tubulin dimers,
corresponds to $100$ single tubulin dimer minimizations
in the model of Ref.\ \cite{VanBuren2005} as they consider
three parameters per tubulin dimer.
Between chemical events, we perform on average $150$
BFGS minimization steps, which corresponds to $1.5 \times 10^4$
single tubulin dimer minimizations in Ref.\ \cite{VanBuren2005}.
Therefore, we apply the equivalent of $15000/150 = 100$
single tubulin minimizations to each of the 150 tubulin dimers
close to the plus tip on average
as compared to three single tubulin dimer minimizations
in the simulation model of Ref.\ \cite{VanBuren2005}.
Accordingly, we should achieve a more accurate mechanical energy relaxation.
We also
compared our chosen minimization method, the BFGS algorithm, against the
other multidimensional minimization algorithms using derivatives provided by GSL
\cite{GSL}, including the conjugate gradient method,
and found the BFGS
algorithm to perform better.
In particular, to fully minimize the initial configuration of a MT with
$N_\text{GDP} = 20$ and $N_\text{GTP} = 0$, BFGS only required about a third of the time
compared to the next best algorithm, a conjugate gradient method.
\section{Results}
\subsection{GTP-microtubule growth and model parameterization}
\label{sec:MTgrowth}
MT growth mainly depends on the four parameters $k_+$, $\Delta G_\text{long}^{0*}$, $\Delta G_\text{lat}^{0}$ and
$k_\text{att}$, because the growing MT tip mainly consists of straight GTP-tubulin
dimers.
Therefore, we consider growth of a GTP-only MT ($N_\text{GDP} = 0$) in the absence of
hydrolysis and set $k_\text{lat} = 0$ and $\kappa = 0$ so that the only free parameters
left are $k_+$, $\Delta G_\text{long}^{0*}$, $\Delta G_\text{lat}^{0}$, and $k_\text{att}$.
For $\koncVal{2}$ and $\koncVal{4}$, we scanned the parameter space
$(\Delta G_\text{long}^{0*}, \Delta G_\text{lat}^{0}, k_\text{att})$ in steps of $\Delta \Delta G_\text{long}^{0*} = \SI{0.2}{\ensuremath{\mathit{k}_\text{B} \mathit{T}}}$
to find parameter values that reproduce the
experimental growth velocity data of Walker \emph{et al}~in \eqref{eq:walker_growth}.
The growth velocity $v_\text{gro}$
for each simulation was determined by fitting $\ell_\text{MT}(t_\text{sim})$
with a linear function.
Experiments on MT growth show a linear dependence
$v_\text{gro}(c_\text{tub}) = a_\text{gro} c_\text{tub} + b_\text{gro} $ characterized by two parameters $a_\text{gro}$
and $b_\text{gro}$ from \eqref{eq:walker_growth}.
If simulations reproduce a linear dependence of $v_\text{gro}$ as a function of
$c_\text{tub}$, we can determine two of the three model parameters
$(\Delta G_\text{long}^{0*}, \Delta G_\text{lat}^{0}, k_\text{att})$ by fitting to the experimental data
\eqref{eq:walker_growth} for $a_\text{gro}$ and $b_\text{gro}$, i.e., two
experimental constraints fix two
model parameters as a function of the third parameter.
This will allow us to parameterize a one-dimensional sub-manifold (a line)
within the three-dimensional parameter space $(\Delta G_\text{long}^{0*}, \Delta G_\text{lat}^{0}, k_\text{att})$ where our
model agrees with experimental growth data.
This procedure is conceptually analogous to the approach of
VanBuren \emph{et al}~\cite{VanBuren2002}, but we work in a
higher-dimensional (three-dimensional)
space of model parameters.
As a result, we obtain a line in the three-dimensional parameter space,
which we parameterize by $\Delta G_\text{long}^{0*}$, i.e., for a given value of $\Delta G_\text{long}^{0*}$,
a value of $\Delta G_\text{lat}^{0}$
(see \autoref{fig:growth_plots}(A)) and a value of $k_\text{att}$ (see
\autoref{fig:growth_plots}(B)) is determined by the experimental growth data.
Afterwards, we will fix a particular value of $\Delta G_\text{long}^{0*}$ by the additional
requirement that the simulation should exhibit an as linear as possible
concentration dependence of the growth velocity $v_\text{gro}$ over a certain range of
tubulin concentrations $c_\text{tub}$ (see \autoref{fig:growth_plots}(D)) such that we
arrive at parameter sets $(\Delta G_\text{long}^{0*}, \Delta G_\text{lat}^{0}, k_\text{att})$ for $\koncVal{2}$ and
$\koncVal{4}$, see \autoref{tab:most_linear_parameter_values}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure4.pdf}
\caption{
(A) Lateral bond energy $\Delta G_\text{lat}^{0}$ as a function of the
longitudinal bond energy $\Delta G_\text{long}^{0*}$ from matching the
concentration-dependent growth velocity data from
Walker \emph{et al}~\cite{Walker1988}, see \eqref{eq:walker_growth}.
To compare our lateral bond energies (per tubulin monomer) to
other publications (lateral bond energy per tubulin
\emph{dimer}), the $y$-axis shows $2 \Delta G_\text{lat}^{0}$.
(The numbers behind Ref.\ \cite{VanBuren2002} refer
to their value of $k_+$.)
(B) Lateral bond attempt rate $k_\text{att}$ as a function of the
longitudinal bond energy $\Delta G_\text{long}^{0*}$ for our two values of $k_+$
from matching the concentration-dependent growth velocity data
from Walker \emph{et al}~\cite{Walker1988},
see \eqref{eq:walker_growth}.
(C) Relative occurrence of
different $d_\text{depoly}$ values for MT growth with
$\koncVal{4}$ and $c_\text{tub} = \SI{10}{\micro\molar}$.
The inset shows the average $d_\text{depoly}$ as a function of $\Delta G_\text{long}^{0*}$
for $\koncVal{4}$ and $c_\text{tub} = \SI{10}{\micro\molar}$ and also
for $c_\text{tub} = \SI{16}{\micro\molar}$.
(D) MT growth velocity $v_\text{gro}$ as a function of a larger
interval of free tubulin dimer concentration values $c_\text{tub}$ for
$\koncVal{4}$ and different longitudinal bond energies
$\Delta G_\text{long}^{0*}$.
We also plot \eqref{eq:walker_growth} from
the growth velocity data from Walker
\emph{et al}~\cite{Walker1988} over the larger concentration
interval.
}
\label{fig:growth_plots}
\end{figure}
\begin{table}[ht!]
\caption{
Growth parameter values that generate the most linear
dependence $v_\mathrm{gro}(c_\mathrm{tub})$.
}
\label{tab:most_linear_parameter_values}
\begin{tabular}{@{}lll}
\hline
$k_+$ (\si[per-mode=reciprocal]{\per\micro\molar \per\second}) & $2$ & $4$ \\
\hline
$\Delta G_\text{long}^{0*}$ (\si{\ensuremath{\mathit{k}_\text{B} \mathit{T}}}) & $-9.7$ & $-9.3$ \\
\hline
$\Delta G_\text{lat}^{0}$ (\si{\ensuremath{\mathit{k}_\text{B} \mathit{T}}}) & $-1.38$ & $-1.58$ \\
\hline
$k_\text{att}$ (\si[per-mode=reciprocal]{\per\second}) & $281$ & $258$ \\
\hline
\end{tabular}
\end{table}
The results in \autoref{fig:growth_plots}(A) show that the values
of $\Delta G_\text{long}^{0*}$ and
$\Delta G_\text{lat}^{0}$ depend only weakly on our chosen $k_+$ values.
\autoref{fig:growth_plots}(A) also shows that our data matches
results obtained in
\cite{VanBuren2002} (this data was later re-used in
\cite{VanBuren2005,Coombes2013,Ayaz2014}) but also differs from other results
\cite{Piette2009,Mickolajczyk2019}, which were all obtained by the
same approach of fitting
growth velocity data from Walker \emph{et al}~\cite{Walker1988} (or their own growth
data in \cite{Mickolajczyk2019}).
Kononova \emph{et al}~\cite{Kononova2014} obtained bond energies from
MD simulations of nano-indentation experiments;
their values are much larger in magnitude for both
types of bonds ($|\Delta G_\text{long}^{0*}| \sim |2\Delta G_\text{lat}^{0}| \sim \SI{25}{\ensuremath{\mathit{k}_\text{B} \mathit{T}}}$) and, thus, not shown in
\autoref{fig:growth_plots}(A).
Qualitatively, the measured dependencies of $\Delta G_\text{lat}^{0}$ and $k_\text{att}$ on $\Delta G_\text{long}^{0*}$ can
be understood as follows:
the weaker longitudinal bonds are, the more likely it is that a tubulin dimer
will depolymerize.
To get the same growth velocity, this decrease in
\enquote{longitudinal stability} has to be compensated by an increase in
\enquote{lateral stability}, either by stronger lateral bonds (making it less
likely that lateral bonds break and thereby enable depolymerization)
or by faster formation
of lateral bonds (to stabilize newly polymerized tubulin dimers).
\autoref{fig:growth_plots}(C) shows the number of tubulin dimers $d_\text{depoly}$ that
detach at once during depolymerization events.
For increasingly stronger longitudinal bonds and, thus,
weaker lateral bonds, multi-dimer
depolymerization becomes more relevant.
The data in the inset in \autoref{fig:growth_plots}(C) is also compatible
with
results in Ref.\ \cite{Margolin2012} obtained with a purely chemical model.
Until now, we only considered free tubulin dimer concentrations
$c_\text{tub} \in [ \SI{7}{\micro\molar}, \SI{16}{\micro\molar} ]$ to use similar
values as Walker \emph{et al}~\cite{Walker1988}, but there have also been
other measurements with a larger range of $c_\text{tub}$ values
\cite{Mitchison1984,OBrien1990,Chretien1995,Pedigo2002}.
In general, it is assumed that the growth velocity $v_\text{gro}$ increases linearly
with $c_\text{tub}$ for the whole MT just as the polymerization rate in \eqref{eq:kon}
increases linearly with $c_\text{tub}$ for individual protofilaments.
Theoretically, it has been shown that, for multistranded polymers,
lateral interactions
give rise to a non-linear relation between growth velocity and
monomer concentration \cite{Stukalin2004}. For MT growth, a non-linear
dependence on tubulin concentration was found in Ref.\ \cite{Piette2009}
using a two-dimensional model based on Ref.\ \cite{VanBuren2002}.
Over a larger range of $c_\text{tub}$ values, our simulations also exhibit a
non-linear relation between $v_\text{gro}$ and $c_\text{tub}$ depending
on the value of $\Delta G_\text{long}^{0*}$, as shown in
\autoref{fig:growth_plots}(D).
Data for different values of $\Delta G_\text{long}^{0*}$ (and correspondingly adjusted values of
$\Delta G_\text{lat}^{0}$ and $k_\text{att}$, see \autoref{fig:growth_plots}(A) and (B))
and the same value of $k_+$,
which were previously overlapping in the interval $c_\text{tub} \in [
\SI{7}{\micro\molar}, \SI{16}{\micro\molar} ]$, start to diverge over
a larger concentration interval.
While possible non-linear relations have been predicted theoretically,
the available experimental data show a linear
$v_\text{gro}(c_\text{tub})$ dependence over a large range of $c_\text{tub}$ values
\cite{Mitchison1984,OBrien1990,Chretien1995,Pedigo2002}.
Therefore, we determined the remaining free parameter value of $\Delta G_\text{long}^{0*}$ for the
two $k_+$ values from the condition that the concentration dependence of
$v_\text{gro}$ is as linear as possible up to \SI{50}{\micro\molar}.
To determine these values of $\Delta G_\text{long}^{0*}$, we ignored concentrations $c_\text{tub}$ below
the individual critical concentration (for which $v_\text{gro} < 0$) which violate our
fundamental assumption of a growing MT.
In summary, we find a triple $(\Delta G_\text{long}^{0*}, \Delta G_\text{lat}^{0}, k_\text{att})$
that fits the growth
velocity data from Walker \emph{et al}~\cite{Walker1988} and that gives a linear
concentration dependence over a wide tubulin concentration range for two
representative values of $k_+$.
\autoref{tab:most_linear_parameter_values} lists these parameter triples for
$\koncVal{2}$ and $\koncVal{4}$.
For a given $k_+$, these results fix four of the seven model parameters in
\autoref{tab:parameters} using experimental data on MT growth.
To address the parameters $\kappa$ and $k_\text{lat}$, we now turn to MT shrinkage.
\subsection{GDP-microtubule shrinkage and model parameterization}
As opposed to MT growth, MT shrinkage
also depends on the bending constant $\kappa$ and spring constant $k_\text{lat}$ as
protofilament curling and bond rupture become relevant processes for a
shrinking MT.
We consider a shrinking MT that initially only consists of GDP-tubulin dimers
($N_\text{GTP} = 0$, $N_\text{GDP} > 0$) with parameter values $k_+$, $\Delta G_\text{long}^{0*}$, $\Delta G_\text{lat}^{0}$,
and $k_\text{att}$ as already determined by the growth simulations and in the absence
of hydrolysis
(a shrinking, initially GDP-only MT acquires some GTP-dimers
by attachment but remains GDP-dominated).
To investigate shrinkage, MTs with $N_\text{GDP} = 20$ and $N_\text{GTP} = 0$ were used.
For each parameter set, 20 simulations were run to get an average shrinkage
velocity $v_\text{shr}$.
Experimental data on shrinking MTs show a shrinkage speed $v_\text{shr}$ that is
independent of the tubulin dimer concentration.
For each value of $k_+$, we should be able to determine one of the two
parameters $(\kappa, k_\text{lat})$ as a function of the other parameter by fitting
such that the experimental value of the shrinkage velocity is reproduced in
simulations (for parameters $\Delta G_\text{long}^{0*}$, $\Delta G_\text{lat}^{0}$, and $k_\text{att}$ fixed by the growth
velocity data).
We use the experimental shrinkage velocity of Walker \emph{et al}, see
\eqref{eq:walker_shrinkage}, for this fitting procedure.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure5-crop.pdf}
\caption{
Mechanical parameter values reproducing the experimentally
measured shrinkage velocity in \eqref{eq:walker_shrinkage} for
(A) $\koncVal{2}$ and (B) $\koncVal{4}$ and different values of
$\Delta G_\text{long}^{0*}$.
(C) Force on lateral bonds at rupture $F_\text{rup}$ as a function of
$k_\text{lat}$ for $\koncVal{2}$ with $\GlongVal{-9.5}$ and
$\koncVal{4}$ with $\GlongVal{-9.0}$, both at
$c_\text{tub} = \SI{10}{\micro\molar}$.
(D) Rupture energy $F_\text{rup} \ell_\text{rup}$ of lateral bonds as a function
of $k_\text{lat}$ for the same parameters as in (C).
(E) Shrinkage velocity $v_\text{shr}$ as a function of the free tubulin
dimer concentration $c_\text{tub}$ for $\koncVal{4}$,
$\GlongVal{-9.3}$, and different values of $k_\text{lat}$ and linear
fits $v_\text{shr}(c_\text{tub})$.
}
\label{fig:shrinkage_parameters}
\end{figure}
\autoref{fig:shrinkage_parameters}(A) and (B) show the values of $k_\text{lat}$ and
$\kappa$ for $\koncVal{2}$ and $\koncVal{4}$ and different values of $\Delta G_\text{long}^{0*}$
that reproduce the experimentally measured shrinkage velocity in
\eqref{eq:walker_shrinkage}.
All data points for each $\Delta G_\text{long}^{0*}$ fall on square root functions
\begin{equation}
\kappa(k_\text{lat})
= a_\text{shr} \sqrt{k_\text{lat}} + b_\text{shr} .
\end{equation}
This functional dependence can be understood qualitatively by considering the
mechanical contribution to the bond rupture rate \eqref{eq:krup},
$\exp(F_\text{lat} \ell_\text{rup})$, which, on average, should have the same value for all
mechanical parameter combinations to produce the same shrinkage velocity.
As the characteristic bond rupture length
in \eqref{eq:lrup} depends on $k_\text{lat}$ as
$\ell_\text{rup} \sim k_\text{lat}^{-1/2}$, the average lateral bond force at rupture
should depend on
$k_\text{lat}$ like $F_\text{rup} \sim k_\text{lat} \ell_\text{rup} \sim \sqrt{k_\text{lat}}$.
The lateral bond force $F_\text{lat}$ is a consequence of the lateral bonds stretching
as the tubulin monomers curl outward to decrease the bending force
$F_\text{bend} = \kappa \left( \Delta \theta(p,d,t) - \Delta \theta_0(p,d,t) \right)$,
which leads to $F_\text{rup} \sim F_\text{bend} \sim \kappa$ resulting in
$\kappa \sim \sqrt{k_\text{lat}}$ in accordance with
\autoref{fig:shrinkage_parameters}(A) and (B).
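In compact form, this scaling argument reads
\begin{equation*}
\Delta G_\text{mech} = F_\text{rup} \ell_\text{rup} \approx \text{const}, \quad
\ell_\text{rup} \propto k_\text{lat}^{-1/2}
\;\Rightarrow\;
F_\text{rup} \propto k_\text{lat}^{1/2}, \quad
F_\text{rup} \propto F_\text{bend} \propto \kappa
\;\Rightarrow\;
\kappa \propto \sqrt{k_\text{lat}} ,
\end{equation*}
consistent with the fitted square root functions up to the offset $b_\text{shr}$.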
\autoref{fig:shrinkage_parameters}(C) confirms that
the average force on lateral bonds
at rupture $\langle \Frup \rangle$ has the functional dependence
$\langle \Frup \rangle \sim \sqrt{k_\text{lat}}$ predicted by our above qualitative argument
($\langle \Frup \rangle$ and error bars
$\sigma_{F_\text{rup}}$ were determined by fitting normal distributions
to the histogram of the lateral bond rupture forces collected for 20 shrinkage
simulations per parameter set with $N_\text{GDP} = 20$).
Also,
the resulting mechanical contribution $F_\text{rup} \ell_\text{rup}$ for the exponential function
of the lateral bond rupture rate in \autoref{fig:shrinkage_parameters}(D)
is approximately constant as expected from our above argument.
As the experimentally measured shrinkage velocity $v_\text{shr}$ does not depend on the
free tubulin dimer concentration $c_\text{tub}$, we used constants to fit our
$v_\text{shr}(c_\text{tub})$ data.
In our simulations, however, the data show a linear dependence between $v_\text{shr}$ and
$c_\text{tub}$, as shown in \autoref{fig:shrinkage_parameters}(E),
corresponding to a slowing down of depolymerization.
This is caused by an increased probability for
intermediate addition of tubulin dimers and lateral bond formation between
them; these lateral bonds require additional time to rupture.
While this dependency of $v_\text{shr}$ on $c_\text{tub}$ will have a small influence on the
concrete value of the shrinkage velocity, we expect it to not have any
qualitative effect on the overall MT dynamics. At higher
tubulin concentrations, where the decrease of
$|v_\text{shr}(c_\text{tub})|$ would become significant, the catastrophe rates
decrease dramatically so that shrinking will rarely occur.
Comparing our results from \autoref{fig:shrinkage_parameters}(A) and (B) to
other results is not always directly possible due to different modelling
approaches, but most studies find that
$k_\text{lat} \ll \SI{1000}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per \nano\meter\squared}$ and
$\kappa \ll \SI{1000}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per \radian\squared}$
\cite{VanBuren2005,Sim2013,Driver2017}, with some exceptions
\cite{Deriu2007,Kononova2014}.
Previously, we used MD simulation data from
Grafm{\"u}ller \emph{et al}~\cite{Grafmueller2011}
to calculate the bending constant $\kappa$ \cite{Mueller2014}. Compared to
Ref.\ \cite{Mueller2014}, we have to adjust the calculation to consider both
inter-dimer and intra-dimer bending, resulting in
$\kappa \simeq \SI{50}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per \radian\squared}$.
MD simulations in \cite{Kononova2014}, on the other hand, give a persistence
length of individual protofilaments of
$L_\text{p} \simeq \SI{6}{\micro\meter}$, which corresponds to a
significantly larger value of
$\kappa \simeq \SI{1500}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per \radian\squared}$ for the bending constant.
This discrepancy cannot be resolved at present.
In the following, we use $\kappa = \SI{149}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per \radian\squared}$ together
with the corresponding value of
$k_\text{lat} = \SI{100}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per \nano\meter\squared}$
according to \autoref{fig:shrinkage_parameters}(B); these values are close
to the ones used in Ref.\ \cite{VanBuren2005}.
\subsection{Restricted energy minimization for efficient simulation}
\label{sec:restricted_energy_minimization}
Until now, energy minimization was restricted neither by a maximum number of
minimization steps nor by considering only a subset of tubulin dimers at
the MT tip; we therefore treat this unrestricted minimization as the
\enquote{gold standard} to which we compare the two restricted energy
minimization approaches described in Section \ref{sec:energy_minimization}.
We use the shrinkage velocity $v_\text{shr}$ as the observable by which we judge the
relevant cutoff values in the two approaches.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure6.pdf}
\caption{
(A) Shrinkage velocity $v_\text{shr}$ as a function of the maximum
number of minimization steps.
(B) Shrinkage velocity $v_\text{shr}$ as a function of the layer cutoff
distance $d_\text{cutoff}$ (where $d_\text{cutoff} = \infty$ means that no
cutoff was used).
20 simulations for each parameter set were run for both plots
and both used $\koncVal{4}$, $\GlongVal{-9.3}$,
$c_\text{tub} = \SI{10}{\micro\molar}$, $N_\text{GDP} = 20$, and different
values of $k_\text{lat}$.
}
\label{fig:restricted_energy_minimization}
\end{figure}
For restricting the number of quasi-Newton minimization steps,
\autoref{fig:restricted_energy_minimization}(A) shows that the acceptable maximum
number of minimization steps reproducing
$v_\text{shr} = \SI{-27}{\micro\meter\per\minute}$ depends on the chosen mechanical
parameters: the higher their values, the larger the energy and its
gradient.
A maximum number of minimization steps of around 100 should be an appropriate
value according to the results shown in
\autoref{fig:restricted_energy_minimization}(A).
The results in \autoref{fig:restricted_energy_minimization}(A) also
show that reducing the number of minimization steps by a factor
of 10 can lead to deviating shrinkage velocities. Therefore,
the improved energy relaxation that we obtain in comparison
to Ref.\ \cite{VanBuren2005} by applying the equivalent of one order
of magnitude more minimization steps should be relevant.
If minimization is restricted to a subset of minimization parameters at the tip
of the simulated MT, this subset is defined by the cutoff distance $d_\text{cutoff}$.
To have a maximum improvement in simulation speed, $d_\text{cutoff}$ should be as small
as possible.
It is evident from the data shown in
\autoref{fig:restricted_energy_minimization}(B)
that values $d_\text{cutoff} < 5$ have a
detectable influence on the shrinkage velocity.
We also ran some simulations with $N_\text{GDP} = 50$ and also for $\koncVal{2}$
(see Figure S8 in the Supplementary Material)
and based on all data, we choose $d_\text{cutoff} = 10$ as a
conservative value for the cutoff distance.
In summary, we are more confident in the second approach to only minimize the
MT tip where actual conformational changes happen, because for this subset, the
restricted energy is fully minimized.
Additionally, the first approach still has the issue of slowing down with an
increasing number of minimization parameters as all minimization parameters are
considered.
The second approach ensures that the number of minimization parameters does not
scale with the MT length but remains bounded, so that we can simulate
arbitrarily long growing MTs at a bounded computational cost per event.
In the first approach, the quality of the minimization will probably also
decline because the number of minimization parameters increases
while the number of minimization steps is kept constant.
Lastly, the first approach, in contrast to the second approach, does not
guarantee that the upper, i.e., the dynamic part of the MT is properly
minimized.
We also note that in the presence of
mechanical feedback onto hydrolysis, simulations take longer because
minimizations after hydrolysis events need to consider more tubulin dimers if
the hydrolyzed tubulin dimer is relatively deep in the MT lattice
(see Supplementary Material for more details).
\subsection{Full simulations exhibit repeated catastrophe and rescue events}
Based on the previous section on energy minimization, we use
$d_\text{cutoff} = 10$ for full simulations in which the initial MTs have both
a GDP body and a GTP cap, thus $N_\text{GDP} > 0$ and $N_\text{GTP} > 0$.
We now aim for realistic MT dynamics with
repeated phases of growth and shrinkage
in the same simulation and catastrophe and rescue events in between.
First, we only consider strictly random hydrolysis with a hydrolysis
rate $k_\text{hydr}$ that is independent of tubulin dimers' position or
mechanical forces and which is another unknown free parameter in our model.
Hydrolysis coupled to mechanics via \eqref{eq:mechanics_hydrolysis_rate} will be
considered later.
It poses a computational challenge for chemomechanical MT models to reach time
scales of MT dynamics where repeated catastrophe events occur at
realistic hydrolysis rates $k_\text{hydr}$ and tubulin dimer concentrations $c_\text{tub}$.
In Ref.\ \cite{Zakharov2015}, where mechanics was implemented via full Brownian
dynamics, only short time scales could be reached (although the Brownian
dynamics was applied to only 300 tubulin dimers at the plus end).
Therefore, they increased the hydrolysis rate from their \enquote{normal} value
of \SI[per-mode=reciprocal]{0.5}{\per\second} (based on the \SI{2}{\second}
delay between polymerization and phosphate release measured by \cite{Melki1996},
which is also used by \cite{Aparna2017}) into a range of
\SIrange[per-mode=reciprocal]{3}{11}{\per\second}
in order to trigger catastrophe
events within computationally accessible time scales.
They found a linear scaling of catastrophe rate with $k_\text{hydr}$ and employed a
linear extrapolation to obtain catastrophe rates for realistic hydrolysis rates
(see their Figure 3A).
In our simulations, we observe that increasing $k_\text{hydr}$ beyond a certain
($c_\text{tub}$-dependent) value leads to immediate MT shrinkage because the initial
cap quickly hydrolyzes; this can be interpreted as an
instantaneous catastrophe.
In such cases (like in \autoref{fig:full_simulation_klat100} for
$c_\text{tub} = \SI{7}{\micro\molar}$ and $\khydrVal{0.5}$), there is no real growth
phase based on which a catastrophe frequency could be determined.
For these hydrolysis rates, the individual critical concentration $c_\text{tub,c}$
(where $v_\text{gro} = 0$ is reached) has apparently increased above the given tubulin
concentration.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure7.pdf}
\caption{
The MT length $\ell_\text{MT}$ was measured as a function of the
simulation time $t_\text{sim}$ for 20
different simulations with $\koncVal{4}$, $\GlongVal{-9.3}$,
$k_\text{lat} = \SI{100}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per\nano\meter\squared}$, seven
different values of $c_\text{tub}$, and five different values of
$k_\text{hydr}$.
MT growth trajectories for three additional $c_\text{tub}$ values can
be found in Figure S9 in the Supplementary Material.
}
\label{fig:full_simulation_klat100}
\end{figure}
The experimental data on the hydrolysis rate
is limited, so that many publications determine the hydrolysis rate themselves
by matching simulation results with experimental data
\cite{VanBuren2002,Piette2009,Margolin2011,Margolin2012,Padinhateeri2012,Bowne-Anderson2013,Coombes2013,Piedra2016}.
There are, however, more direct measurements in \cite{Melki1996}.
In most models and also in measurements from \cite{Melki1996}, the (random)
hydrolysis rate is in the range of
\SIrange[per-mode=reciprocal]{0.1}{0.5}{\per\second} (Ref.\
\cite{VanBuren2002} uses a
relatively high value of \SI[per-mode=reciprocal]{0.95}{\per\second}).
We explore exactly this range of hydrolysis rates, see
\autoref{fig:full_simulation_klat100}.
\autoref{fig:full_simulation_klat100} shows MT growth curves (length vs.\ time)
over simulation times up to $t_\text{sim} = \SI{10}{\min}$ for several representative
tubulin concentrations and realistic hydrolysis rates.
MT growth trajectories as in \autoref{fig:full_simulation_klat100} for other
$k_\text{lat}$ values can be found in Figures S10, S11, S12, and S13 in the
Supplementary Material.
Simulations in \autoref{fig:full_simulation_klat100}
were started with $N_\text{GTP}=10$ and $N_\text{GDP}=20$, but results
are largely independent of the initial ratio $N_\text{GTP}/N_\text{GDP}$ (see, for example,
Figure S11 in the Supplementary Material).
Our chemomechanical MT model is computationally efficient such that
we can determine catastrophe and rescue rates as inverse
average growth and shrinking times between repeated
catastrophe and rescue events.
In the Supplementary Material, we explain the algorithm that
we used to identify catastrophe and rescue events and, thus,
growth and shrinking times from MT
simulation trajectories in detail.
The results are shown in
\autoref{fig:catastropheRescueRates}.
In comparison to typical experimental data
\cite{Walker1988,Gardner2011_kinesins},
this decrease of the catastrophe rate
with tubulin concentration seems too steep.
Current phenomenological models for the MT catastrophe rate as a function of
tubulin concentration can be found in
\cite{Flyvbjerg1996,Zelinski2013} and experimental data in
\cite{Walker1988,Janson2003};
compared to these data, the decrease of the catastrophe rate with GTP-tubulin
concentration $c_\text{tub}$ appears steeper in our simulations for all hydrolysis
rates $k_\text{hydr}=$ \SIrange[per-mode=reciprocal]{0.1}{0.5}{\per\second}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure8-crop.pdf}
\caption{
(A) Catastrophe rate $\omega_\text{cat}$ and (B) rescue rate
$\omega_\text{res}$
as a function of
GTP-tubulin concentration $c_\text{tub}$ and in comparison
with experimental data from Walker
\emph{et al}~\cite{Walker1988} and Janson
\emph{et al}~\cite{Janson2003}.
}
\label{fig:catastropheRescueRates}
\end{figure}
In the following, we will discuss two aspects of MT growth and catastrophes in
more detail, namely the dependence of growth velocity on hydrolysis rate and the
detailed dynamics within single catastrophe events, which become accessible
within a computational model and are impossible to address experimentally.
\subsection{Growth velocity decreases linearly with hydrolysis rate because
of cap structure}
\label{sec:vgro_of_ctub_with_hydrolysis}
So far, we parameterized the model by fitting the growth
velocity of GTP-only MTs, i.e., in the absence of hydrolysis to the
experimentally measured velocity in \eqref{eq:walker_growth}.
Hydrolysis reduces this growth velocity by increasing the probability of
GDP-dimers at the plus end.
This increases the rate of bond rupture because hydrolyzed
dimers tend to create stretched bonds which rupture more easily (there
is no direct increase of the off-rate for hydrolyzed GDP-dimers
in our model).
As only laterally unbonded dimers can detach, hydrolyzed
GDP-dimers at the plus end have an
effectively higher detachment rate.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure9.pdf}
\caption{
Growth velocity $v_\text{gro}$ as a function of (A) the free tubulin
dimer concentration $c_\text{tub}$ for different hydrolysis rates
$k_\text{hydr}$ and as a function of (B) the hydrolysis rate $k_\text{hydr}$
for different free tubulin dimer concentrations $c_\text{tub}$ in
comparison to the experimental data from \cite{Walker1988}. (C)
Average GTP-tubulin cap length
$\langle N_\text{cap} \rangle$ of protofilaments and
(D) fraction of protofilaments without a
GTP cap as a function of the hydrolysis rate $k_\text{hydr}$.
The standard set of parameters from \autoref{tab:parameters} was
used.
}
\label{fig:vgro_of_ctub_with_hydrolysis}
\end{figure}
The last row of \autoref{fig:full_simulation_klat100} indicates, and
\autoref{fig:vgro_of_ctub_with_hydrolysis}(B) shows explicitly,
that increasing the
hydrolysis rate decreases the growth velocity linearly,
although the growth reduction
mechanism is indirect via the increased probability of bond rupture for
hydrolyzed GDP-dimers.
Our model parameterization was such that we obtain the
experimentally measured growth velocities by Walker \emph{et al}~\cite{Walker1988}
at $\khydrVal{0}$ in \autoref{fig:vgro_of_ctub_with_hydrolysis}(B).
Nevertheless, \autoref{fig:vgro_of_ctub_with_hydrolysis}(A) shows that there is
still a linear relation between the free tubulin dimer concentration $c_\text{tub}$ and
the growth velocity $v_\text{gro}$
so that it is possible to re-adjust parameters to reproduce the
growth velocity in the presence of hydrolysis, once a particular hydrolysis
rate can be reliably selected.
Because both the dependence on tubulin concentration in
\autoref{fig:vgro_of_ctub_with_hydrolysis}(A)
remains linear and the reduction by
the hydrolysis rate in \autoref{fig:vgro_of_ctub_with_hydrolysis}(B)
is linear, we
also expect that the individual critical concentration (where $v_\text{gro} = 0$ is
reached) increases linearly with the hydrolysis rate beyond the value
$c_\text{tub,c} \simeq \SI{5}{\micro\molar}$ of Walker \emph{et al}~\cite{Walker1988}.
\autoref{fig:full_simulation_klat100} clearly shows that increasing $k_\text{hydr}$
actually increases the individual critical concentration $c_\text{tub,c}$.%
\footnote{
The individual critical concentration can be read off from
\autoref{fig:full_simulation_klat100} as the
concentration below which immediate MT
shrinkage sets in.
}
The mechanism of growth velocity reduction by hydrolysis can be further
elucidated by comparing the average GTP-tubulin
cap length $\langle N_\text{cap} \rangle$ of protofilaments (see
\autoref{fig:vgro_of_ctub_with_hydrolysis}(C)),
and the fraction of protofilaments
without a GTP-cap (see \autoref{fig:vgro_of_ctub_with_hydrolysis}(D)):
The higher the hydrolysis rate is, the smaller the GTP-cap and the higher the
fraction of cap-less protofilaments is.%
\footnote{
As the cap lengths shown in \autoref{fig:vgro_of_ctub_with_hydrolysis}(C) are
averaged over the whole duration of the simulations, these cap lengths also
average over growth and shrinkage phases.
As cap lengths are shorter during shrinkage than growth, the cap lengths in
\autoref{fig:vgro_of_ctub_with_hydrolysis}(C) can be regarded as a lower limit
on the average cap length during MT growth.
}
The increase in GDP-tubulin dimers depolymerizing from the protofilament tips
for higher hydrolysis rates is due to an increase in the probability of uncapped
protofilaments with the hydrolysis rate as shown in
\autoref{fig:vgro_of_ctub_with_hydrolysis}(D).
In Ref.\ \cite{Li2010}, dependencies $\langle N_\text{cap}\rangle
\propto \sqrt{c_\text{tub}/k_\text{hydr}}$
and $p(N_\text{cap}=0) \propto k_\text{hydr}/c_\text{tub}$ have been predicted, which are
in agreement with \autoref{fig:vgro_of_ctub_with_hydrolysis}(C) and (D).
\subsection{Detailed dynamics within single catastrophe and rescue events}
The chemomechanical model reproduces realistic MT dynamics including
catastrophe and rescue events.
\autoref{fig:full_simulations} shows typical MT growth paths featuring two
catastrophe events and a rescue event in subfigure (C).
Moreover, we observe \enquote{dips} in the growth path, i.e., short phases of
shrinking, which are similar to the \enquote{stutter} events that have been
observed in Ref.\ \cite{Mahserejian2019}.
Videos of these two simulations with two- and
three-dimensional representations of the MT structure
can be found in the Supplementary Material.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth]{figure10.pdf}
\caption{
Lengths of two MTs as a function of simulation time $t_\text{sim}$
with $\koncVal{4}$, $\GlongVal{-9.3}$,
$k_\text{lat} = \SI{100}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per\nano\meter\squared}$, and
(A) $c_\text{tub} = \SI{8}{\micro\molar}$ and $\khydrVal{0.1}$
and (C) $c_\text{tub} = \SI{9}{\micro\molar}$ and $\khydrVal{0.2}$.
The insets highlight parts of the trajectories of interest for
the dynamics and color-code the probability of the $\ell_\text{MT}(t_\text{sim})$
curve to stay quantitatively the same at the relevant point in
time if new simulations are started with the relevant
configuration as the initial configuration (for more details,
refer to the text).
(B) shows the two-dimensional representations of
certain MT tip configurations that are marked by arrows
in the insets of (A) and (C)
(configuration $4*$ has been shifted towards
the MT tip by 24
tubulin dimer lengths). The first protofilament is the
periodic image of $p = 13$ and the last protofilament is the
periodic image of $p = 1$.
Lateral bonds are represented by the thick black line between
protofilaments.
}
\label{fig:full_simulations}
\end{figure}
Using our computational model, we can systematically identify the point in a MT
growth path where a catastrophe becomes structurally unavoidable.
This allows us to search for typical catastrophe-triggering features in MT
growth.
To analyze how probable it is at specific points in the simulation of MT
dynamics that the MT continues a certain growth path, we chose two simulations
with at least one significant event (meaning a catastrophe, rescue, or a
\enquote{dip}/\enquote{stutter}) and took configurations around such events as
starting points for new simulations (similar to \cite{Margolin2012}).
In these new simulations, MTs were allowed to grow (or shrink) for a maximum of
\SI{60}{\second}, a sufficient amount of time to check if the new simulations
show dynamics similar to the original simulation around the significant event.
The MT growth trajectory shown in \autoref{fig:full_simulations}(A) has two
significant events: a dip at $t_\text{sim} = \SI{1.2}{\minute}$ and a catastrophe at
$t_\text{sim} = \SI{6.85}{\minute}$;
the trajectory
in \autoref{fig:full_simulations}(C) contains three significant events:
a dip at the
very beginning, a catastrophe at $t_\text{sim} = \SI{9.15}{\minute}$, and a rescue
at $t_\text{sim} = \SI{9.54}{\minute}$.
To determine whether newly run simulations with
starting points from the initial simulation
qualitatively follow the original simulation, we need
criteria to identify dips, catastrophes, or rescue events.
The exact criteria for these events in \autoref{fig:full_simulations}(A) and
(C) are stated in the Supplementary Material.
In short, in order to identify whether a new simulation reproduces
a catastrophe, we check after
a time of \SIrange{10}{15}{\second}
whether the MT is sufficiently short
that a catastrophe must have happened;
for a dip, we check whether the MT continued to grow without entering
a catastrophe; for a rescue, we check that the MT did not
completely vanish because it continued to shrink.
For each initial configuration, we ran 20 new simulations and calculated the
fraction of simulations that fulfilled these criteria.
These fractions are the probabilities for the original growth path at different
points in time, and they are shown color-coded in all the insets in
\autoref{fig:full_simulations}.
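As a concrete illustration, the resampling procedure can be condensed into the
following minimal Python sketch; \texttt{simulate} and \texttt{event\_criterion}
are hypothetical stand-ins for the actual chemomechanical simulation code and
for the event criteria of the Supplementary Material.
\begin{verbatim}
# Minimal sketch of the resampling analysis (assumed interface):
# `simulate` restarts the chemomechanical MT simulation from a stored
# configuration; `event_criterion` encodes the dip/catastrophe/rescue test.
def commitment_probability(config, simulate, event_criterion,
                           n_runs=20, t_max=60.0):
    """Fraction of restarted simulations reproducing the original event."""
    hits = sum(event_criterion(simulate(config, t_max=t_max, seed=s))
               for s in range(n_runs))
    return hits / n_runs
\end{verbatim}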
For both catastrophes and the rescue, the transition from a high probability
of staying in the current dynamic state to a high probability of switching
into the other dynamic state occurs within a few seconds.
In \autoref{fig:full_simulations}(A) and (C), we first observe
that catastrophes become practically unavoidable (red color code in (A.C) and
(C.C)) after a phase of relatively slow shrinking by
\SIrange{50}{100}{\nano\meter}; similar \enquote{transitional catastrophe}
behavior has been observed in Ref.\ \cite{Mahserejian2019}.
A dip, on the other hand, can only evade a catastrophe (yellow to red color code
in (A.D) and (C.D)) if the MT length shrinks by significantly less than
$\SI{50}{\nano\meter}$.
Because hydrolysis followed by straining and rupture of the lateral bonds is
required before a laterally unbonded dimer can detach, MT shrinking by
$\SI{50}{\nano\meter}$ suggests that roughly 6 dimer layers must hydrolyze in a
row to trigger a catastrophe.
This is, however, not sufficient to remove the entire GTP-cap.
The GTP-cap length averaged over all protofilaments is still $>1$ when the
catastrophe becomes unavoidable
(at points 3 in \autoref{fig:full_simulations}(A)
and 6 in \autoref{fig:full_simulations}(C), see also
Figure S1 in the Supplementary Material).
As the corresponding MT snapshot insets 3 and 6 reveal, the reason for this
discrepancy is the average over all protofilaments:
it appears that typically only a \enquote{nucleus} of three neighboring
protofilaments shrinks by more than 6 dimers, such that its GTP-cap is removed
and its ends reach into the GDP-body of the MT, when a catastrophe is triggered.
The MT snapshots in \autoref{fig:full_simulations}(B) also suggest that
rescue events require formation of a GTP-cap on almost all 13 protofilaments
(with an average cap length $\sim 4$)
such that nuclei of three neighboring uncapped GDP-protofilaments are avoided.
Further investigation of more catastrophe events will be necessary to definitively
deduce catastrophe- and rescue-triggering structural MT features.
\subsection{Hydrolysis coupled to mechanics changes the cap structure}
Finally, we test how a mechanical feedback onto the hydrolysis rate as
introduced in \eqref{eq:mechanics_hydrolysis_rate} and
\eqref{eq:mechanics_hydrolysis_rate2} changes the cap structure and
dynamic behavior.
In the presence of this mechanical feedback, tubulin dimers in the MT lattice
with larger bending angles tend to hydrolyze preferentially.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure11-crop.pdf}
\caption{
(A) Average actual hydrolysis rate $\langle k_\text{hydr} \rangle$ as a
function of the constant base hydrolysis rate $k_\text{hydr}^0$.
Comparison of (B) the average actual hydrolysis rate
$\langle k_\text{hydr} \rangle$ and (C) the porous cap length
$N_\text{pcap}$ as a function of the free tubulin dimer
concentration $c_\text{tub}$ for hydrolysis coupled to mechanics with
$\khydrNVal{1.5}$ and a constant hydrolysis rate of
$\khydrVal{0.25}$.
(D) shows the two-dimensional representations of
two MT tip configurations that are marked by arrows in (C) at
$t_\text{sim} = \SI{5}{\min}$.
The top and bottom protofilaments are periodic images of
$p = 13$ and $p = 1$, respectively.
Relative occurrence of GTP tubulin dimers as a function of the
dimer-based distance from the protofilament tip $d(p) - d$ for
(E) a constant hydrolysis rate of $\khydrVal{0.25}$ and (F)
hydrolysis being coupled to mechanics and $\khydrNVal{1.5}$.
(G) Average hydrolysis rate as a function of distance
$d(p) - d$ from the tip and (H)
the associated average bending angle
$\langle \Delta \tilde{\theta} \rangle$ for hydrolysis coupled
to mechanics and $\khydrNVal{1.5}$.
All plots are for $\koncVal{4}$, $\GlongVal{-9.3}$, and
$k_\text{lat} = \SI{100}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per\nano\meter\squared}$.
}
\label{fig:mechanical_hydrolysis_results}
\end{figure}
Overall,
we find a linear relation between $k_\text{hydr}^0$ and the average hydrolysis rate
$\langle k_\text{hydr} \rangle$ (see \autoref{fig:mechanical_hydrolysis_results}(A))
with $k_\text{hydr}^0 \gg \langle k_\text{hydr} \rangle$.
When comparing MT growth with hydrolysis
coupled to mechanics with average hydrolysis rate
$\langle k_\text{hydr} \rangle$ to MT growth with constant hydrolysis
rate $k_\text{hydr}$ (for example in
\autoref{fig:mechanical_hydrolysis_results}(D)-(F)),
we use \autoref{fig:mechanical_hydrolysis_results}(A)
to choose the base hydrolysis rate $k_\text{hydr}^0$ such that
$\langle k_\text{hydr} \rangle \approx k_\text{hydr}$.
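As a minimal sketch of this calibration step (with placeholder data standing in
for the simulation results of \autoref{fig:mechanical_hydrolysis_results}(A)),
the linear relation can be fitted and inverted as follows.
\begin{verbatim}
import numpy as np

# Placeholder (k_hydr^0, <k_hydr>) pairs; in practice taken from
# simulations as in panel (A).
k0    = np.array([0.5, 1.0, 1.5, 2.0])      # base rates (1/s)
k_avg = np.array([0.08, 0.17, 0.25, 0.33])  # measured averages (1/s)

slope = np.sum(k0 * k_avg) / np.sum(k0**2)  # linear fit through origin
k0_choice = 0.25 / slope                    # base rate for <k_hydr>=0.25/s
\end{verbatim}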
\autoref{fig:mechanical_hydrolysis_results}(B) shows the average
hydrolysis rate $\langle k_\text{hydr} \rangle$ as a function of the free tubulin dimer
concentration $c_\text{tub}$ for $\khydrNVal{1.5}$.
Here, we observe a pronounced nonlinear concentration dependence with a decrease
around the individual critical tubulin concentration
$c_\text{tub} \simeq \SI{10}{\micro\molar}$.
At the same concentration, also the porous cap length $N_\text{pcap}$ (see
\autoref{fig:mechanical_hydrolysis_results}(C)),
which is defined as the difference
between the number of tubulin dimers in a protofilament and the value of $d$ of
the first GTP-tubulin dimer counted from the minus end,
starts to increase.
As a result, the porous cap length for hydrolysis
coupled to mechanics is much longer compared to a
constant hydrolysis rate, even if the average effective hydrolysis rate is
roughly the same.
In the following, we argue that the reason
for this increase in porous cap length is a decrease of the
hydrolysis rate for GTP-dimers away from the tip.
Mechanical feedback gives rise to preferential hydrolysis at the tip,
i.e., the average hydrolysis rate $\langle k_\text{hydr}(x) \rangle$
(over all actually executed
hydrolysis events) is
larger for small layer distances $x \equiv d(p) - d$ from the tip,
as can be seen in \autoref{fig:mechanical_hydrolysis_results}(G).
This is in line with previous results in Ref.\ \cite{Mueller2014}
from a much simpler version of our model
with a deterministic hydrolysis kinetics and without dimer attachment and
detachment.
According to \eqref{eq:mechanics_hydrolysis_rate2}, GTP-tubulin dimers with
larger bending angles tend to hydrolyze preferentially.
If a straight GTP-dimer is bent inward ($\Delta \theta < \SI{0}{\degree}$),
its hydrolysis rate is reduced according to
\eqref{eq:mechanics_hydrolysis_rate2}; if it is bent outwards
($\Delta \theta > \SI{0}{\degree}$) the rate is increased.
From the hydrolysis rates shown in
\autoref{fig:mechanical_hydrolysis_results}(G),
it is possible to calculate the average bending angles using
\eqref{eq:mechanics_hydrolysis_rate} and
\eqref{eq:mechanics_hydrolysis_rate2},
\begin{equation}
\langle \Delta \tilde{\theta}(p,d) \rangle
= \frac{1}{\SI{11}{\degree}} \left[
\frac{1+\delta_{d,d(p)}}{\kappa}
\ln \left( \frac{k_\text{hydr}(p,d)}{k_\text{hydr}^0} \right)
+ (\SI{5.5}{\degree})^2 \right] .
\label{eq:avDelta}
\end{equation}
The results for these bending angles as a function
of the distance $x$ from the tip are shown in
\autoref{fig:mechanical_hydrolysis_results}(H).
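As a minimal numerical aid for evaluating \eqref{eq:avDelta}, the measured
average hydrolysis rates can be inverted for the average bending angles as in
the following sketch; the unit convention for $\kappa$ is an assumption of
this sketch.
\begin{verbatim}
import numpy as np

def avg_bending_angle(k_hydr, k0, kappa, terminal=False):
    """Invert Eq. (avDelta): average bending angle in degrees from the
    average hydrolysis rate; assumes kappa in units such that
    kappa * (angle in degrees)^2 is dimensionless."""
    delta = 1.0 if terminal else 0.0
    return ((1.0 + delta) / kappa * np.log(k_hydr / k0) + 5.5**2) / 11.0
\end{verbatim}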
Surprisingly, almost
all dimers are bent inwards ($\Delta \theta < \SI{0}{\degree}$)
on average, apart from dimers close to the tip.
We will try to
interpret these results in the following.
An isolated GTP-dimer
within the GDP-body can alleviate the bending stress of GDP-dimers
by bending inward ($\Delta \theta < \SI{0}{\degree}$), which allows longitudinally
neighboring GDP-dimers to bend outwards (such that
$\Delta \theta > \SI{0}{\degree}$)
resulting in an overall decrease of the MT energy
(see Figures S14 and S15 in the Supplementary
Material).
Therefore,
isolated GTP-dimers deep in the GDP-body hydrolyze with a reduced asymptotic
rate $\langle k_\text{hydr} \rangle_\infty\ll k_\text{hydr}$.
We also find that, for several consecutive GTP-dimers in the same protofilament,
GTP-dimers curl inward directly at the GDP/GTP interface resulting in a
reduced hydrolysis rate (see Supplemental Figure S15),
while GTP-dimers in the center of a GTP-island
are straight so that they have a higher hydrolysis rate than at the GDP/GTP
interfaces.
Effectively, this hydrolysis rate distribution within a GTP-island results in an
\enquote{anti-vectorial} hydrolysis mechanism with which GTP-islands are
hydrolyzed from the interior in contrast to vectorial hydrolysis where
hydrolysis happens at the GTP/GDP interfaces.
Also for GTP-dimers in layers closer to the MT tip,
other longitudinally close-by
GTP-dimers cooperate in alleviating bending stresses; then inward bending is
still preferred, but the inward bending angle becomes smaller.
This decrease in inward bending corresponds to an increase of the average
hydrolysis rates $\langle k_\text{hydr}(x) \rangle$ for GTP-dimers in these layers
compared to GTP-dimers buried deeper in the MT body
(see \autoref{fig:mechanical_hydrolysis_results}(G) and (H)).
For terminal tubulin dimers ($x = 0$), we observe
a hydrolysis rate $\langle k_\text{hydr}(x) \rangle$
\emph{higher} than $k_\text{hydr}$ (while it is equal
or lower than $k_\text{hydr}$ for all other layers $x > 0$).
Hydrolysis in the first layer is enhanced because
there are no tubulin dimers on top, such
that hydrolysis has to overcome a smaller energy barrier, as pointed
out previously (the $d+1$ term in
\eqref{eq:mechanics_hydrolysis_rate2} is absent, corresponding to the
$\delta_{d,d(p)}$ contribution in \eqref{eq:avDelta}).
As a result of the hydrolysis bias toward the tip, the spatial GTP-tubulin dimer
distribution also differs.
For concentrations at which the MTs keep growing on time scales of
several minutes ($c_\text{tub} \ge \SI{11}{\micro\molar}$ for the chosen parameters), a
constant hydrolysis rate leads to the expected exponential distribution of
GTP-dimers shown in \autoref{fig:mechanical_hydrolysis_results}(E)
as observed in \emph{in vivo} experiments \cite{Seetapun2012}.
An effective one-dimensional (or single protofilament) model similar to
\cite{Padinhateeri2012}, which calculates the probability of tubulin dimers
being GTP-tubulin dimers as a function of the polymerization rate $k_\text{on}$,
effective depolymerization rate $\tilde{k}_\text{off}$, and
hydrolysis rate $k_\text{hydr}$, matches the simulation results for concentrations at
which the MTs can be considered in a steady state of growth (see
Section 4 in the Supplementary Material).
We use an effective depolymerization rate $\tilde{k}_\text{off}$ instead of $k_\text{off}$,
because we map onto the depolymerization process
of a one-dimensional model so that $\tilde{k}_\text{off}$ includes
all effects from lateral bond formation and rupture and the actual
depolymerization process in the full model.
If hydrolysis is coupled to mechanics, the spatial distribution is only
exponential in its tail, has larger values at the MT tip, and
GTP-tubulin dimers can be found much deeper in the GDP-body
(see \autoref{fig:mechanical_hydrolysis_results}(F)).
These results reflect that the average hydrolysis rate
$\langle k_\text{hydr}(x) \rangle$ is decreasing towards the GDP-body and reaches a
small limiting value $\langle k_\text{hydr} \rangle_\infty\ll k_\text{hydr}$ for distances
$x = d(p)-d > 500$ away from the tip, which governs the exponential tail
(see \autoref{fig:mechanical_hydrolysis_results}(G)).
This can be rationalized by considering the probability $p_\text{GTP}(x)$ to
find a GTP-dimer at distance $x$ from the tip in a single protofilament
and continuum approximation.
The balance between attachment/detachment and hydrolysis leads to
\begin{equation}
0
= -(k_\text{on} - \tilde{k}_\text{off}) \frac{\text{d} p_\text{GTP}}{\text{d} x}
- \langle k_\text{hydr}(x) \rangle \,p_\text{GTP}(x)
\label{eq:pGTPx}
\end{equation}
in the stationary state, which results in
a sharp initial decrease of $p_\text{GTP}(x)$ because
$\langle k_\text{hydr}(0) \rangle$ is large at the tip but a much slower asymptotic
exponential decrease when
$\langle k_\text{hydr}(x) \rangle \approx \langle k_\text{hydr} \rangle_\infty\ll k_\text{hydr}$,
which explains the main features in
\autoref{fig:mechanical_hydrolysis_results}(F).
In Section 4 in the Supplementary Material,
we show that \eqref{eq:pGTPx}
describes simulations with a constant hydrolysis and with hydrolysis coupled to
mechanics equally well.
With $p_\text{GTP}(x)$, we can define an \enquote{average cap length} as
$\bar{\ell}_\text{cap} = \int_0^\infty \text{d}x\, p_\text{GTP}(x) x$.
This average cap length $\bar{\ell}_\text{cap}$ is longer if hydrolysis is coupled to
mechanics compared to a constant hydrolysis rate because $p_\text{GTP}(x)$ is much
greater for larger $x$
(see \autoref{fig:mechanical_hydrolysis_results}(E) and (F)).
As $\bar{\ell}_\text{cap} < N_\text{pcap}$, this increase in average cap length also
explains the increased porous cap length if hydrolysis is coupled to mechanics.
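These statements can be illustrated by a short numerical sketch that integrates
\eqref{eq:pGTPx} for an assumed hydrolysis rate profile (large at the tip, small
limiting value deep in the GDP-body) and evaluates the cap length integral;
all rate values below are illustrative placeholders.
\begin{verbatim}
import numpy as np

k_on, k_off = 6.0, 1.0            # placeholder rates (layers/s)
v = k_on - k_off                  # net tip velocity (layers/s)

def k_profile(x, k_tip=0.5, k_inf=0.02, x0=50.0):
    # decreasing average hydrolysis rate, as in panel (G)
    return k_inf + (k_tip - k_inf) * np.exp(-x / x0)

x = np.linspace(0.0, 3000.0, 30001)       # distance from tip (layers)
k = k_profile(x)
cum = np.concatenate(([0.0], np.cumsum(   # int_0^x k(x') dx' (trapezoid)
    0.5 * (k[1:] + k[:-1]) * np.diff(x))))
p_gtp = np.exp(-cum / v)                  # solution of Eq. (pGTPx)
cap_length = np.sum(x * p_gtp) * (x[1] - x[0])  # cap length as defined above
\end{verbatim}
The sharp initial decrease and the slow asymptotic exponential tail of
\texttt{p\_gtp} follow directly from the assumed rate profile.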
The relative increase of hydrolyzed GDP-dimers at the tip could make MTs more
prone for catastrophes and give rise to an increased catastrophe rate and,
eventually, a more realistic concentration dependence of catastrophe rates.
\autoref{fig:full_simulation_klat100_mechanical_hydrolysis}, however, shows that
this is not the case.
Instead, the same steep dependence on the (base) hydrolysis rate as in
\autoref{fig:full_simulation_klat100} persists.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure12.pdf}
\caption{
MT length $\ell_\text{MT}$ as a function of the simulation time $t_\text{sim}$
for 20 different simulations with
$\koncVal{4}$, $\GlongVal{-9.3}$,
$k_\text{lat} = \SI{100}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per\nano\meter\squared}$, three
different values of $c_\text{tub}$, and four different values of
$k_\text{hydr}^0$.
}
\label{fig:full_simulation_klat100_mechanical_hydrolysis}
\end{figure}
In comparison to the MT growth trajectories with a constant
hydrolysis rate shown in \autoref{fig:full_simulations},
\autoref{fig:full_simulations_mechanical_hydrolysis} shows an example of a
MT simulation in which the hydrolysis rate is coupled to mechanics.
To calculate the probabilities shown in the insets, the same criteria as for
\autoref{fig:full_simulations}(A) were used.
At first sight, these trajectories look similar to the corresponding
trajectories for a constant hydrolysis rate in \autoref{fig:full_simulations}(A).
There is, however, a significantly increased roughness of the trajectory during
the growth phase, which could be interpreted as increased occurrence of
\enquote{dips} or \enquote{stutter} events.
A high probability of stutter events has also been observed in Ref.\
\cite{Mahserejian2019}, which supports the existence of
a mechanochemical coupling in hydrolysis.
The catastrophe-triggering configuration of a \enquote{nucleus} of several
neighboring protofilaments shrinking by more than 6 dimers is also similar
as snapshots 4 and 5 in \autoref{fig:full_simulations_mechanical_hydrolysis}(B)
show.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure13.pdf}
\caption{
(A) Length of a MT $\ell_\text{MT}$ as a function of the simulation time
$t_\text{sim}$ with $\koncVal{4}$, $\GlongVal{-9.3}$,
$k_\text{lat} = \SI{100}{\ensuremath{\mathit{k}_\text{B} \mathit{T}} \per\nano\meter\squared}$,
$c_\text{tub} = \SI{9}{\micro\molar}$ and $\khydrNVal{1.5}$ with
hydrolysis being coupled to mechanics and (B) the
two-dimensional representations of certain MT tip
configurations that are marked by arrows in the inset (A.C)
(configuration $6*$ has been shifted towards
the MT tip by 21
tubulin dimer lengths).
}
\label{fig:full_simulations_mechanical_hydrolysis}
\end{figure}
\subsection{Dilution experiments}
In dilution experiments, the free tubulin dimer concentration
$c_\text{tub}$ is reduced to $c_\text{dil}\ll c_\text{tub}$ at a certain point in time
\cite{Voter1991,Walker1991,Duellberg2016}.
If the diluted concentration is sufficiently small or zero,
the GTP-cap stops growing by polymerization (and depolymerizes)
but continues to hydrolyze;
after a characteristic delay time $\Delta t_\text{delay}$,
the GTP-cap has vanished, a catastrophe is initiated, and the MT shrinks.
Thus, dilution experiments and their comparison to corresponding
dilution simulations can
give information on the hydrolysis rate.
Simulation results for the delay time are shown in \autoref{fig:dilution}.
In the Supplementary Material, we explain the algorithm that
we used to determine the delay time $\Delta t_\text{delay}$ from
MT simulation trajectories in detail.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\linewidth]{figure14-crop.pdf}
\caption{
Average post-dilution delay time $\langle \Delta t_\text{delay}\rangle$
as a function of
(A) the hydrolysis rate $k_\text{hydr}$ for $\ctubVal{16}$ and different
post-dilution GTP-tubulin dimer concentrations $c_\text{dil}$ and
(B) the pre-dilution GTP-tubulin dimer concentration $c_\text{tub}$ for
$c_\text{dil} = \SI{0}{\micro\molar}$ and different hydrolysis rates $k_\text{hydr}$.
The averaged data from Duellberg \emph{et al}~\cite{Duellberg2016}
specified the pre-dilution growth velocity,
which was converted to $c_\text{tub}$ for this plot.
(C) Average GTP-cap length $\langle N_\text{cap} \rangle$ at the time of dilution
$t_\text{dil}$ as a function of the delay time $\langle \Delta t_\text{delay}\rangle$ for
$\ctubVal{16}$ and different values of $c_\text{dil}$.
}
\label{fig:dilution}
\end{figure}
We expect the delay time to be proportional to the
GTP-cap length, $\Delta t_\text{delay} \propto \langle N_\text{cap}\rangle$, as
corroborated by \autoref{fig:dilution}(C) and
$\langle N_\text{cap}\rangle\propto
\sqrt{c_\text{tub}/k_\text{hydr}}$ according to
Section \ref{sec:vgro_of_ctub_with_hydrolysis}
(see \autoref{fig:vgro_of_ctub_with_hydrolysis}(C) and (D)) \cite{Li2010}.
This results in $\Delta t_\text{delay} \propto\sqrt{c_\text{tub}/k_\text{hydr}}$,
which is in qualitative agreement with our simulation data
in \autoref{fig:dilution}(A) and (B).
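The scaling ansatz can be checked by a one-parameter least-squares fit,
sketched below with placeholder values standing in for the simulated delay
times.
\begin{verbatim}
import numpy as np

# Placeholder (c_tub, k_hydr, delay) data standing in for simulations.
c_tub  = np.array([8.0, 12.0, 16.0, 16.0])   # uM
k_hydr = np.array([0.1, 0.1, 0.1, 0.4])      # 1/s
delay  = np.array([5.5, 6.8, 7.9, 4.0])      # s

x = np.sqrt(c_tub / k_hydr)
A = np.sum(x * delay) / np.sum(x**2)  # prefactor in delay = A*sqrt(c/k)
residuals = delay - A * x             # small residuals support the ansatz
\end{verbatim}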
The comparison with the experimental dilution data from Ref.\
\cite{Duellberg2016} in \autoref{fig:dilution}(B) shows that delay
times for a hydrolysis rate $\khydrVal{0.1}$
come close to the experimental data but appear to depend too steeply
on $c_\text{tub}$.
\section{Discussion}
We introduced, parameterized, and analyzed a chemomechanical model for MT
dynamics in which, in addition to polymerization (attachment of dimers),
depolymerization (detachment of dimers), and hydrolysis of dimers, the rupture
of lateral bonds between monomers in neighboring
protofilaments is explicitly modeled and coupled to the mechanics of the MT.
The basis for this coupling is the allosteric model according to which a
hydrolyzed dimer acquires a more bent configuration, which builds up mechanical
stress in the MT tubular structure via lateral bonds between dimers.
As many model parameters as possible have been determined from the
MT growth and shrinkage velocities measured experimentally by Walker
\emph{et al}~\cite{Walker1988}.
To determine the values of the model parameters, we use a \enquote{divide and
conquer} approach \cite{VanBuren2002,VanBuren2005}.
We used simulations of growing GTP-only MTs to parameterize longitudinal and
lateral bond energies $\Delta G_\text{long}^{0*}$ and $\Delta G_\text{lat}^{0}$ and the attempt rate $k_\text{att}$ for
lateral bond formation.
By requiring a linear concentration dependence of growth
velocity, we can fix all three parameter values for a given value of $k_+$.
We used simulations of shrinking GDP-only MTs to parameterize the bending
constant $\kappa$ and the spring constant $k_\text{lat}$ of the lateral bonds.
Here, we can only fix one of the two parameters.
Moreover, the hydrolysis rate $k_\text{hydr}$ is still a free parameter, for
which we use values in the range
\SIrange[per-mode=reciprocal]{0.1}{0.5}{\per\second} known from experiments
\cite{Melki1996}.
The general philosophy of a divide-and-conquer approach is the successive
fixation of simulation parameters by using first GTP-only growth, then GDP-only
shrinkage and, eventually, catastrophe frequencies or dilution to fix the
hydrolysis rate.
This successive fixation is, however, problematic, as the corresponding
experimental data is influenced by \emph{all} simulation parameters in general.
The problem becomes apparent when considering the hydrolysis rate: changes in
the hydrolysis rate also affect the growth rate over a wide concentration range
because hydrolyzed dimers have an effectively higher
detachment rate, see \autoref{fig:vgro_of_ctub_with_hydrolysis}(B).
Strictly speaking, all simulation parameters in \autoref{tab:parameters} must be
determined at once by fitting several experimental results
simultaneously instead of fixing them successively as in the divide-and-conquer
approach; alternatively, the divide-and-conquer approach has to be applied
iteratively several times until a self-consistent parameter set is found.
A simultaneous fixation of all parameters has been performed, for example, in
Ref.\ \cite{Piette2009} on a chemical model without bond rupture and, thus, with
only four parameters (on-rate, bond energies, and hydrolysis rate).
Future work on our model should include at least a re-adjustment of the
parameters once a hydrolysis rate is selected such that the growth velocity of
Walker \emph{et al}~\cite{Walker1988} is reproduced \emph{in the presence of
hydrolysis}.
If mechanical feedback onto hydrolysis is included, the model has to be
re-parameterized again, in principle.
Our simulation model handles all chemical events, i.e., dimer attachment and
detachment, bond rupture and formation, and hydrolysis using a Gillespie
algorithm. After each chemical event we relax the resulting MT structure
mechanically by mechanical energy minimization based on the assumption that
the microscopic mechanical dynamics is much faster than the chemical steps.
Therefore, mechanical energy minimization is the computationally most demanding
step in the simulation.
This is a common problem in all dimer-based chemomechanical MT models
\cite{VanBuren2005,Coombes2013,Zakharov2015}.
We address this problem by restricting the mechanical energy minimization to
a bounded number of MT degrees of freedom near the plus end.
We showed that restricting energy minimization to a depth of $d_\text{cutoff} = 10$
additional layers into the MT (in minus end direction)
from the point of the last chemical event
is an accurate and efficient choice.
Computational efficiency of this procedure is better than performing a
dedicated microscopic Brownian dynamics simulation \cite{Zakharov2015} and
better than random local energy minimization \cite{VanBuren2005,Coombes2013}
(for the same accuracy in energy minimization).
The restricted energy minimization strategy also ensures that the number of
minimization parameters does not scale with the MT length but remains bounded,
which assures that we can simulate arbitrarily long growing MTs at a fixed
minimal computational speed using our approach.
Simulations do not require more than a few hours for \SI{1}{\minute} of
MT dynamics (for a constant hydrolysis rate) using just a single CPU core.
Therefore, we can reach time scales of several minutes of MT dynamics
which is the time scale for repeated catastrophe events for concentrations above
the individual critical concentration, where the dynamic instability can occur.
We performed a first systematic analysis of catastrophe and rescue rates in
\autoref{fig:catastropheRescueRates}, which
indicates that the decrease of the catastrophe rate
with tubulin concentration is too steep compared to experimental data
\cite{Walker1988,Janson2003}.
It is also much steeper than simulation results of Ref.\ \cite{Zakharov2015} but
these results for the catastrophe rate relied on linear extrapolation from
unrealistically high hydrolysis rates
(\SIrange[per-mode=reciprocal]{3}{11}{\per\second})
down to realistic values
(\SIrange[per-mode=reciprocal]{0.1}{0.5}{\per\second}).
In the future, our computational model can also be used to measure
the dependence of catastrophe rates on MT lifetime \cite{Gardner2011_kinesins}.
Within our model, we could also study single catastrophe and rescue
events in detail, see \autoref{fig:full_simulations}.
The growth paths appear very similar to experimentally observed catastrophe
and rescue events.
Catastrophes typically feature an initial \enquote{transitional} phase of slow
shrinking by \SIrange{50}{100}{\nano\meter} as also observed in Ref.\
\cite{Mahserejian2019}.
Moreover, we observe \enquote{dips} in the growth paths resembling the
\enquote{stutter} events from Ref.\ \cite{Mahserejian2019}.
The most interesting results of chemomechanical models are possible statements
about the typical catastrophe-triggering configurations.
In this respect, our simulations indicate that a catastrophe could be triggered
by a \enquote{nucleus} of three neighboring protofilaments shrinking by more
than 6 dimers, such that its GTP-cap is removed and its ends reach into the
GDP-body of the MT.
To rescue a shrinking MT the GTP-cap has to be re-established on almost all 13
protofilaments such that nuclei of three neighboring uncapped GDP-protofilaments
are avoided.
This shows that mechanical correlations in the dynamics of protofilaments are
important in triggering catastrophe events.
This is an aspect which is absent in the calculation of catastrophe frequencies
based on simplified purely chemical models such as in Ref.\
\cite{Flyvbjerg1996}, where protofilaments are regarded as effectively
independent and uncorrelated.
Our model can achieve qualitative agreement with experimental
data on dilution experiments (see \autoref{fig:dilution}(C)) from Ref.\
\cite{Duellberg2016} for relatively low hydrolysis rates of
$\khydrVal{0.1}$, which is an indication that the catastrophe
mechanism is correctly captured by our chemomechanical model. This also
constrains the hydrolysis rate, which is still a free parameter
in our model, to lower values around $\khydrVal{0.1}$.
Finally, we
explored the consequences of a mechanochemical coupling in
the hydrolysis of tubulin dimers.
Because hydrolysis gives rise to bending of the GTP-dimers, we argue that
mechanical forces on a dimer that increase its bending angle should also
lead to higher hydrolysis rates, see \eqref{eq:mechanics_hydrolysis_rate} and
\eqref{eq:mechanics_hydrolysis_rate2}.
In the presence of mechanical feedback, hydrolysis gets a bias towards the
MT plus end which, in turn, also causes an increase in porous cap length.
At the same average hydrolysis rate, hydrolysis in the
immediate tip of the GTP-cap is more likely while
it is less likely in the remaining part of the
cap such that GTP-tubulin dimers can be found much deeper in the GDP-body,
see \autoref{fig:mechanical_hydrolysis_results}.
Individual catastrophe and rescue events (see
\autoref{fig:full_simulations_mechanical_hydrolysis})
look qualitatively similar in
the presence of mechanical feedback but the probability of \enquote{dips} or
\enquote{stutter} events is increased in agreement with Ref.\
\cite{Mahserejian2019}.
The coupling of hydrolysis to mechanics does not increase catastrophe rates
significantly such that the steep decrease of the catastrophe rate with tubulin
concentration persists.
The main problem of our model appears to be the steep decrease of catastrophe
rate with tubulin concentration, which could hint to a failure of basic
assumptions.
One possibility is that a direct effect of the hydrolysis state of the dimer
onto the off-rate (as also suggested by atomistic simulations
\cite{Grafmueller2013}) is relevant and not included in the model.
Another possibility is a failure of allosteric models in general.
The steep decline of catastrophe rates with tubulin concentration
gives a hint that MTs are structurally too stable for GTP-rich
caps. This might provide evidence for a shortcoming
of the underlying allosteric model, which inserts GTP-dimers
in a straight configuration that is more prone to
form stable lateral bonds than a curved configuration.
An alternative are so-called lattice models \cite{Buey2006,Rice2008},
according to which dimers are
always bent but hydrolysis affects lateral and longitudinal dimer interaction
energies.
A systematic comparison of allosteric and lattice models towards the resulting
concentration dependence of catastrophe rates within the framework provided here
could help decide which class of models is more appropriate.
So far, almost all chemomechanical modelling approaches were based on
the allosteric model
\cite{Molodtsov2005,VanBuren2005,Coombes2013,Mueller2014,Zakharov2015,Jain2015} but
recent experimental advancements in the analysis of the structure of
MT tips \cite{McIntosh2018} demonstrated that both
growing and shrinking MTs have
bent protofilament ends supporting similar earlier results
\cite{Hoog2011,Kukulski2011,Nawrotek2011,Pecqueur2012}. Additionally,
calculations using MT structures with different nucleotide content in the
beta-tubulin \cite{Manka2018} and all-atom MD
simulations of
GTP- and GDP-only MTs \cite{Ayoub2015,Fedorov2019}
revealed that hydrolysis weakens
lateral bonds and strengthens longitudinal bonds. Both aspects
support the lattice model for the influence of hydrolysis
on MT mechanics.
There is, however, also evidence from MD simulations for
intermediate models, where hydrolysis
affects interactions and also
leads to a much lower GDP-tubulin flexibility \cite{Igaev2018}.
Independent of these findings, our study based on the
allosteric model is valuable
for the following reasons:
(i)
In both the allosteric and the lattice model, catastrophes are
cascades of lateral bond rupture and in both models,
the bent shape of GDP-dimers is the dominating cause of mechanical
strain in the MT structure.
In the allosteric model, bending and mechanical strain
is directly generated by the hydrolysis of GTP-dimers,
whereas in the lattice model, the tubulin dimers are always bent
but hydrolysis weakens lateral bonds.
In both models, the result is an increased lateral bond rupture
rate of mechanically strained bonds after hydrolysis.
Therefore, an explicit modelling
approach for lateral bond rupture as a stochastic process under
force generated by the bending of GDP-dimers will also be important
in all future chemomechanical models based on the lattice model.
So far explicit stochastic models of lateral bond rupture
have only been included into two-dimensional models
lacking explicit mechanics \cite{Margolin2011,Margolin2012,Li2014}
or with heavy computational cost by explicitly simulating the Brownian
dynamics of dimers and bonds \cite{Zakharov2015}.
(ii)
The importance of lateral bond rupture
becomes particularly clear for shrinking MTs or MTs
entering a catastrophe. In these phases of the dynamic instability,
GDP-tubulin dimers are
significantly more relevant than GTP-tubulin dimers.
As GDP-dimers are bent in both models and this bending gives rise
to lateral bond stretching, we believe that both types
of models will display a very similar behavior in these phases.
The only difference in this scenario is
that in the lattice model, the lateral bond energy $\Delta G_\text{lat}^{0}$ in the rupture
rate \eqref{eq:krup} will depend on the nucleotide type of the bonded tubulin
monomers, which also makes $k_\text{rup}$ an explicit function of the nucleotide
state. Because the nucleotide state is predominantly GDP,
the results for properly parameterized models will be very similar.
(iii)
We also introduced a computationally efficient scheme to relax the
mechanical energy between chemical events, which can also be employed
in future chemomechanical lattice models.
Within the allosteric model, we
achieve a better mechanical energy relaxation than
previous models \cite{VanBuren2005}
with significantly fewer computational steps
than a full Brownian dynamics simulation requires \cite{Zakharov2015}.
(iv) The idea of a feedback of mechanical forces onto the
hydrolysis rate can also be applied in future chemomechanical lattice
models: if hydrolysis leads to a weakening of lateral bonds, one could
expect mechanical strains that favor weakening of lateral bonds
also to favor hydrolysis.
In the future, our model could be extended to also include regulating +TIP
proteins \cite{Akhmanova2008} for which different mechanisms of how they
influence MTs could be implemented.
Comparing the results of such simulations with experimental data could help
to develop a mechanistic picture of the action of these proteins.
Another future extension is MT polymerization under force
\cite{Dogterom1997,Dogterom2005}.
So far, polymerization under force has been investigated using chemical models
\cite{VanDoorn2000,Kolomeisky2001,Stukalin2004,Ranjith2009,Krawczyk2011};
the influence of an external force on the microscopic level, in particular on
the detailed dynamics of catastrophe events and on the catastrophe-triggering
configurations, is unknown.
\section{Funding}
We acknowledge funding from the German Research Foundation (DFG, www.dfg.de)
through Grant number KI 662/9-1.
The authors gratefully acknowledge computing time provided on
the Linux HPC cluster at Technical University Dortmund (LiDO3),
partially funded in the course of the Large-Scale Equipment
Initiative by the German Research Foundation (DFG) as project
271512359.
\bibliographystyle{naturemag-doi}
\section{Introduction}
The thermalization process, or entropy creation, of isolated quantum systems is a
long-standing but not well-understood problem.
Relevant systems include the early
universe, where the transition from a vacuum state to a thermalized
state occurs at the end of cosmic inflation, and the QCD matter created in the initial stage of relativistic heavy-ion
collisions, where thermal matter should be formed in a rather short time.
It is known that both systems are well described in the semiclassical approximation, and moreover
a chaotic behavior of the classical limit may play some role in the entropy production.
The present paper is concerned with the entropy production of an isolated quantum system for
which the semiclassical approximation is valid and the classical counterpart may show a chaotic behavior.
To describe entropy
in a pure quantum system,
one may of course adopt
the von Neumann entropy~\cite{vonNeumann} as quantum mechanical entropy given by
\begin{align}
S_\mathrm{vN}
=& - \mathrm{Tr}\left[\rho \log \rho\right]
\ ,\label{eq:vonNeumann}
\end{align}
where $\rho$ is the density matrix.
For a pure state, however, $\rho$ is idempotent, $\rho^2=\rho$,
implying that the eigenvalues of $\rho$ are 0 or 1,
and the von Neumann entropy is zero.
Even if we start from a mixed state, the
time evolution described by a unitary operator will never
lead to entropy growth.
On the other hand,
the entropy production in a rarefied gas
composed of classical or quantum mechanical particles can be well described by
an analog of the $H$ function of Boltzmann given in terms of the distribution function $f(q, p)$:
\begin{align}
S= - \int \frac{d^Dqd^Dp}{(2\pi\hbar)^D}\,f(q,p)\,\log f(q,p) .
\label{Eq:Boltzmann-H}
\end{align}
It is noteworthy
that a phase-space description is desirable
for making classical-quantum correspondence clear, and
even natural when the semiclassical approximation is valid.
The standard method for such a description is to use
the celebrated Wigner function~\cite{Wigner},
which is defined as a Wigner transform of the density matrix:
The Wigner function $\fW(q,p)$ can be regarded as a quasi phase-space
distribution function.
The use of the Wigner function as the phase space distribution function ($f(q,p)=\fW(q,p)$ in Eq.~\eqref{Eq:Boltzmann-H}),
however, has essential drawbacks:
First, the Wigner function is, actually, not a genuine distribution function;
$\fW$ can be negative, which
prevents us from calculating the entropy density according to Eq.~\eqref{Eq:Boltzmann-H}.
Second, the entropy defined by Eq.~\eqref{Eq:Boltzmann-H} given
in terms of the Wigner function $f_W$ does not grow in time,
because the Wigner transform only gives an equivalent description of the quantum system in terms of, say,
the $q$- or $p$-representation~\cite{Gro46,Moy49,Hillery84,Lee95,Curt14}.
Some coarse graining of the phase space is needed to describe an entropy production.
In a classical chaotic system,
two adjacent points in the phase space
depart from each other exponentially in time.
If the available phase space volume is limited,
the exponentially diffusing classical trajectories have to be folded
in a complicated manner in the phase space.
After a certain time starting from a localized phase space cell,
a given phase space cell of volume $(2\pi\hbar)^D$ consists of the mixture
of trajectories stemming from the initially
occupied localized cell
and vacant regions not yet visited.
Since we cannot distinguish the phase space points in a cell
due to the uncertainty principle,
it is reasonable to define a phase space distribution
as a smeared or coarse-grained function over the phase space cell.
We adopt the Husimi function $\fH(q, p)$~\cite{Husimi} as such a coarse-grained distribution function,
which is defined as the expectation value
of the density matrix with respect to a coherent state $\vert z \rangle$.
It is readily shown that $\fH(q, p)$
is {\em semi-positive definite}, $\fH \geq 0$,
and a coarse-grained function of the Wigner function,
as will be shown in a later section.
It is shown \cite{TS1985,Takahashi1989} that the Husimi function faithfully describes the characteristic properties of
the underlying classical system,
and has been utilized to identify
the chaotic remnants in quantum systems~\cite{TS1985, Takahashi1989, Sugita03,Sugita-Aiba2002}.
Thus a natural candidate of the quantum mechanical entropy is given by (\ref{Eq:Boltzmann-H})
with $f(q,p)$ being substituted by the Husimi function $f_H(q, p)$.
This entropy was introduced by Wehrl~\cite{Wehrl} and may be called the Wehrl entropy,
although he himself called it the classical entropy and failed to identify the distribution function
$\fH(q, p)$ with the Husimi function;
such an identification was made later~\cite{Anderson1993}.
We refer to the Wehrl entropy obtained by using the Husimi function
as the {\em Husimi-Wehrl (HW) entropy}~\cite{KMOS},
\begin{align}
S_\mathrm{HW}= - \int \frac{d^Dqd^Dp}{(2\pi\hbar)^D}\,\fH(q,p)\,\log \fH(q,p)\ .
\label{Eq:HWE}
\end{align}
It is worth mentioning that the HW entropy can be a good measure of
quantum entanglement for systems including quantum optical systems~\cite{MZ2004,AA2007}.
For a one-dimensional case,
there is a minimum of $S_\mathrm{HW}=1$~\cite{Lieb,Wehrl1979},
in contrast to the von Neumann entropy,
which takes $S_\mathrm{vN}=0$ in the ground state.
It is also shown that the HW entropy takes a value close to the von Neumann
entropy at high temperature, and its growth rate coincides
with the Kolmogorov-Sina\"{i} entropy
for the one-dimensional inverted harmonic oscillator~\cite{KMOS}.
A direct evaluation of the HW entropy is challenging even for
a quantum system with only a few degrees of freedom, because it involves a
high-dimensional integral over the phase space, quite apart from the cumbersome
high-precision calculation of the logarithm.
Nevertheless the HW entropy and its time evolution have been calculated for some quantum
systems~\cite{OtherHWE,TsaiMuller}. The equation of motion (EOM) of the Husimi function is
given in \cite{Takahashi1989}, which contains a term of the order $\hbar$, and thus has a more complicated form
than that of the Wigner function even in the semiclassical approximation; see below.
To solve the complicated EOM of the Husimi function,
a test-particle method was proposed
by Tsai and Muller~\cite{TsaiMuller},
where the evolution of the test particles are determined
to reproduce some of the moments.
As already mentioned, the semiclassical approximation
is suitable to reveal the effect of the chaotic nature
of the classical counter part.
It is noteworthy that the time evolution of the Wigner function
in the semiclassical approximation
where the $\mathcal{O}(\hbar^2)$ terms are ignored
is readily obtained by solving
the classical Hamilton equation;
quantum mechanical information such as the uncertainty relation
is encoded in the initial Wigner function,
provided that it is given as the Wigner transform
of the quantum density matrix.
The time evolution of the Husimi function
is given by smearing the time-evolved Wigner function
obtained in the semiclassical approximation.
This is the method we adopt in this article.
We shall show its efficiency and usefulness
in describing entropy production
using a couple of quantum mechanical systems
whose respective classical counterparts are known to be chaotic.
We propose two methods to evaluate
the time evolution of the Husimi-Wehrl entropy.
One is an adaptation of the usual test-particle method without recourse to
the moments of the distribution function.
The other is a sequential application of Monte-Carlo integration,
which we call the two-step Monte-Carlo method.
We shall demonstrate the characteristics of the two methods
by numerical calculations,
and show that the simultaneous application of the two methods ensures the
reliability of the results of the HW-entropy's time evolution.
It should be noted that these two methods are, in principle, applicable
to systems with large degrees of freedom such as quantum field theories.
The paper is organized as follows.
In Sec.~\ref{sec:review},
we summarize some basic ingredients of the Wigner and Husimi functions
together with the HW entropy.
In Sec.~\ref{sec:methods},
we introduce the two numerical methods to evaluate the HW entropy
in an efficient way.
In Sec.~\ref{sec:results},
the quantum mechanical models are introduced
and numerical results of the Husimi-Wehrl entropy are shown.
The final section is devoted to a brief summary and concluding remarks.
\section{Wigner function, Husimi function, and Husimi-Wehrl entropy}
\label{sec:review}
In this section, we briefly review quantum mechanical
phase space distribution functions,
Wigner~\cite{Wigner} and Husimi~\cite{Husimi} functions,
and the phase space expression of the entropy, Husimi-Wehrl entropy~\cite{Wehrl}.
While
we introduce Wigner and Husimi functions
in one-dimensional quantum mechanics
in Subsec.~\ref{subsec:WignerHusimi}
and \ref{subsec:SemiClassical},
extension to multi-dimensional cases is straightforward.
\subsection{Wigner and Husimi functions}
\label{subsec:WignerHusimi}
The Wigner function~\cite{Wigner} is defined as a Wigner transform
of the density matrix
\begin{align}
\fW(q,p,t)
=& \Wig{\rho}(q,p,t)
\nonumber\\
\equiv& \int d\eta\,
e^{-ip\eta/\hbar}
\VEV{q+\frac{\eta}{2}\mid\rho(t)\mid q-\frac{\eta}{2}}\ .
\label{Eq:Wigner}
\end{align}
While the Wigner function $\fW(q,p)$
can be regarded as a quasi phase space distribution function and
provides an intuitive picture of the phase space dynamics,
it is not semi-positive definite
and hence we cannot regard
$\fW(q,p)$ as the phase space probability density.
In order to overcome
the above drawbacks of the Wigner function,
Husimi introduced a Gaussian smeared Wigner function~\cite{Husimi},
known as the Husimi function,
\begin{align}
\fH(q,p)
=&\int \frac{dq' dp'}{\pi\hbar}
e^{-\Delta(q-q')^2/\hbar-(p-p')^2/\Delta\hbar}\,
\fW(q,p)
\ ,
\label{Eq:Husimi}
\end{align}
where $\Delta$ is an arbitrary width parameter
that gives the smearing manner in the phase space.
The Husimi function is defined also as the expectation value
of the density matrix with respect to a coherent state $\vert z \rangle$:
\beq
\fH(q, p)=\langle z \vert \rho \vert z \rangle,\quad z =(\Delta q+ip)/\sqrt{2\hbar \Delta},
\label{Eq:HusimiZ}
\eeq
for a one-dimensional case with $\Delta$ being an arbitrary constant.
Here the coherent state is given by
\beq
\vert{z}\rangle=
e^{z {a}^{\dag}-z^{\ast} {a}}\vert 0\rangle, \quad
{a}=(\Delta \hat{q}+i\hat{p})/\sqrt{2\hbar \Delta},
\eeq
where $\vert 0\rangle $ is the ground state; $\hat{a}\vert 0\rangle=0.$
It is readily shown that $\fH(q, p)$
is {\em semi-positive definite}, $\fH \geq 0$
by using Eq.~\eqref{Eq:HusimiZ};
$\fH=\left|\langle{z}\vert\psi\rangle\right|^2 \geq 0$
for a pure state $\vert{\psi}\rangle$,
and $\fH=\sum_i w_i \left|\langle{z}\vert\psi_i\rangle\right|^2 \geq 0$
for a mixed state specified by the density matrix
$\rho = \sum_i w_i \vert\psi_i\rangle\langle\psi_i\vert (w_i \geq 0)$.
The Husimi function $\fH(q,p)$ thus serves
as the probability density to observe the phase space
variables $(q,p)$ under a minimum wave packet $|z\rangle$.
Compared with the Wigner function,
the Husimi function is smooth and
the peak of the Husimi function often appears around the expectation value
of the position and momentum~\cite{Takahashi1989, Sugita}.
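As a minimal numerical illustration of Eq.~\eqref{Eq:Husimi}, the following
Python sketch smears the Wigner function of a minimal wave packet on a grid
(with $\hbar=\Delta=1$); the exact value at the packet center is $\fH=1$.
\begin{verbatim}
import numpy as np

hbar, Delta = 1.0, 1.0
q = np.linspace(-6, 6, 121)
p = np.linspace(-6, 6, 121)
Q, P = np.meshgrid(q, p, indexing="ij")
f_W = 2.0 * np.exp(-(Q**2 + P**2) / hbar)  # Wigner fn of the ground state

dq, dp = q[1] - q[0], p[1] - p[0]
def husimi(q0, p0):
    kern = np.exp(-Delta*(q0 - Q)**2/hbar - (p0 - P)**2/(Delta*hbar))
    return np.sum(kern * f_W) * dq * dp / (np.pi * hbar)

print(husimi(0.0, 0.0))  # ~1; exact result is exp(-(q0**2 + p0**2)/2)
\end{verbatim}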
\subsection{Time evolution in semiclassical approximation}
\label{subsec:SemiClassical}
The equation of motion (EOM) for the Wigner function $\fW$ is obtained
from the Wigner transform of the von Neumann equation for the density matrix,
$\partial\rho/\partial t=[H,\rho]/i\hbar$.
By applying the Wigner transform of the operator product,
$\Wig{(AB)}=\Wig{A}\exp(i\hbar(\overleftarrow{\nabla}_q \overrightarrow{\nabla}_p
-\overleftarrow{\nabla}_p \overrightarrow{\nabla}_q)/2)\Wig{B}$~\cite{Gro46,Moy49,Curt14},
commutators are replaced by Poisson brackets as
$\Wig{[A,B]}/i\hbar=\{A,B\}_\mathrm{PB}+\mathcal{O}(\hbar^2)$.
Thus the EOM for $\fW$ is given in terms of the Wigner transform $\Wig{H}$ of the
Hamiltonian $H$ as
\begin{align}
\frac{\partial\fW}{\partial t}
=&\left\{ \Wig{H}, \fW \right\}_\mathrm{PB}
+\mathcal{O}(\hbar^2)
\ .\label{Eq:EOMfW}
\end{align}
The Wigner transform $\Wig{H}$ of a Hamiltonian with the form of $H=p^2/2m + U(q)$
does not change its form.
We note that the $\mathcal{O}(\hbar^2)$ term in (\ref{Eq:EOMfW}) is proportional to
the third derivative of $\Wig{H}$ or $U$. Thus the EOM (\ref{Eq:EOMfW}) without
the $\mathcal{O}(\hbar^2)$ term turns out to be exact for some simple models such as
a (an inverted) harmonic oscillator.
The semiclassical EOM for $\fW$ is given by retaining the terms up to $\mathcal{O}(\hbar)$
in Eq.~\eqref{Eq:EOMfW}, which reads
\begin{align}
\frac{\partial\fW}{\partial t}
+\frac{\partial \Wig{H}}{\partial p}\,\frac{\partial \fW}{\partial q}
-\frac{\partial \Wig{H}}{\partial q}\,\frac{\partial \fW}{\partial p}
=0
\ .
\label{Eq:EOM}
\end{align}
We remark that the semiclassical EOM is exact for the linear systems mentioned above.
Equation \eqref{Eq:EOM} asserts that $\fW$ is constant along the classical
trajectory:
Let us see this. Let $(q(t; \bar{q}),\, p(t; \bar{p}))$ be
a solution of
the classical EOM, i.e., Hamilton's equation;
\begin{align}
\frac{dq}{dt}=&\frac{\partial \Wig{H}}{\partial p}
\ ,\quad
\frac{dp}{dt}=-\frac{\partial \Wig{H}}{\partial q}
\ ,\label{Eq:Canonical}
\end{align}
with an initial condition $(q(0)=\bar{q}, p(0)=\bar{p})$.
Then we have for $\fW(q(t; \bar{q}),\, p(t; \bar{p}),\, t)$,
\begin{align}
\frac{D\fW}{Dt}\equiv \frac{\partial\fW}{\partial t}
+\frac{dq}{dt}\frac{\partial\fW}{\partial q}
+\frac{dp}{dt}\frac{\partial\fW}{\partial p}
=0\ ,
\label{Eq:Wconst}
\end{align}
which implies that $\fW$ is time-independent;\,
$\fW(q(t; \bar{q}),\, p(t; \bar{p}),\, t)=\fW(\bar{q},\, \bar{p},\, 0)$.
Accordingly we have
\begin{align}
\fW(q,\, p,\, t)=\fW(q(-t; q),\, p(-t; p),\, 0).
\label{Eq:Wconst-1}
\end{align}
Thus
we can
obtain the semiclassical time evolution of the Wigner function
by solving the classical equation of motion.
Note that the quantum mechanical
effects are taken into account through the distribution of the initial value in the phase space
encoded in the Wigner function $\fW(q,\, p,\, 0)$ constructed
from the initial density matrix.
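In practice, Eq.~\eqref{Eq:Wconst-1} can be implemented by integrating
Hamilton's equations backward in time, e.g., with a fourth-order Runge-Kutta
scheme as in the following sketch; the gradients of $\Wig{H}$ are supplied by
the caller.
\begin{verbatim}
import numpy as np

def rk4_evolve(q, p, dHdq, dHdp, t, dt=1e-3):
    """Integrate dq/dt = dH/dp, dp/dt = -dH/dq from time 0 to t;
    t < 0 realizes the trace back of Eq. (Wconst-1)."""
    def deriv(q, p):
        return dHdp(q, p), -dHdq(q, p)
    h = np.sign(t) * dt
    for _ in range(int(abs(t) / dt)):
        k1q, k1p = deriv(q, p)
        k2q, k2p = deriv(q + 0.5*h*k1q, p + 0.5*h*k1p)
        k3q, k3p = deriv(q + 0.5*h*k2q, p + 0.5*h*k2p)
        k4q, k4p = deriv(q + h*k3q, p + h*k3p)
        q = q + h*(k1q + 2*k2q + 2*k3q + k4q)/6
        p = p + h*(k1p + 2*k2p + 2*k3p + k4p)/6
    return q, p

def f_W_at_t(q, p, t, f_W0, dHdq, dHdp):
    q0, p0 = rk4_evolve(q, p, dHdq, dHdp, -t)  # trace back to t = 0
    return f_W0(q0, p0)                        # Eq. (Wconst-1)
\end{verbatim}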
It is worth mentioning that the exact analytical solution of the time
evolution of $\fW$ can be obtained for some linear systems, including
a (stable) harmonic oscillator potential~\cite{Gro46,KMOS},
an inverted (unstable) harmonic oscillator potential~\cite{KMOS,OtherIHO},
and an external potential~\cite{KMOS}.
Then even the analytic form of the Husimi function $\fH(q,\,p,\,t)$ for these systems
is readily obtained \cite{KMOS} by the Gaussian smearing of $\fW(q,p,t)$,
which is easy to perform analytically.
We note here that one may obtain the time evolution of the Husimi function $\fH(q,\, p,\, t)$
by solving the EOM for $\fH(q,\, p,\, t)$,
which involves terms proportional to $\hbar$, and thus
has a more complicated structure than that for $\fW(q,\,p,\, t)$
even in the semiclassical approximation~\cite{Takahashi1989}.
If one sticks to solving the EOM for $\fH$ directly, some numerical method would be necessary.
A test-particle method is adopted as such a numerical method by Tsai and Muller~\cite{TsaiMuller},
where the time evolution of the test particles is determined
so as to reproduce some of the moments. We remark that there are some ambiguities in such an approach
inherent in the moment method.
In this work, we do not adopt this direct method for obtaining
the time evolution of the Husimi function $\fH(q,\,p,\,t)$.
We take advantage of the fact that the time evolution of the Wigner function $\fW(q,\,p,\,t)$
in this regime is obtained simply by solving the classical EOM, and obtain
$\fH(q,\,p,\,t)$ by
the Gaussian smearing of the thus obtained $\fW(q,\,p,\,t)$.
This strategy should be workable and natural when the semiclassical approximation
is meaningful.
The remaining task that we have to do for obtaining the Husimi function is
just the multi-dimensional integrations over the phase space with the Gaussian kernel for the smearing,
which should be feasible by standard methods such as the Monte-Carlo integration.
\subsection{Husimi-Wehrl entropy}
\label{subsec:Wehrl}
Since the Wigner function $\fW$ is merely the Weyl transform of the density matrix,
any observable is calculable in terms of $\fW$ in principle, and it is also the case with the
Husimi function $\fH$. A drawback of the $\fW$ is that it can have negative values, and hence
is not suitable for the calculation of entropy.
As is mentioned in Introduction and the previous subsection,
the Husimi function is, in contrast, a semi-positive definite
{\em coarse-grained} phase space distribution function
smeared by a minimum wave packet, and hence
a good candidate for
the phase space distribution $f(q,p)$ to evaluate the entropy of
a quantum system,
as the $H$ function of Boltzmann in the classical system,
Eq.~\protect\eqref{Eq:Boltzmann-H},
or equivalently the Husimi-Wehrl entropy given in Eq.~\eqref{Eq:HWE}~\cite{Wehrl}.
An explicit form of the HW entropy in terms of the Wigner function
is given by
substituting the $D$-dimensional extension of Eq.~\eqref{Eq:Husimi}
into Eq.~\eqref{Eq:HWE},
\begin{align}
S_\mathrm{HW}(t)
=&-
\int \frac{d^D q d^D p}{(2\pi \hbar)^D}
\int \frac{d^D q' d^D p'}{(\pi\hbar)^D}
e^{-\Delta(q-q')^2/\hbar-(p-p')^2/\Delta\hbar}\,
\fW(q',p',t)\nonumber\\
&\times \log \left[\int \frac{d^D q'' d^D p''}{(\pi\hbar)^D}
e^{-\Delta(q-q'')^2/\hbar-(p-p'')^2/\Delta\hbar}\,
\fW(q'',p'',t)\right]
\ .\label{Eq:HWE2}
\end{align}
One may now recognize some difficulty of the numerical evaluation of
the HW entropy: It involves repeated numerical integrations over the multi-dimensional phase space,
and in particular one of them appears as an argument of logarithm, which
turns out to be quite problematic in the Monte-Carlo integration.
\section{Numerical methods to analyze the semiclassical time evolution of Husimi-Wehrl entropy}
\label{sec:methods}
Here, two numerical methods are introduced to
calculate the time dependence of the HW entropy
as given by the Gaussian smearing of the Wigner function obtained
in the semiclassical approximation. Both methods are based on
an adaptation of
the Monte-Carlo integration over the phase-space.
We call the two methods the test-particle (TP)
and two-step Monte-Carlo (tsMC) methods, respectively.
In this section, we deal with the $D$-dimensional system described by the
Hamiltonian $H=H(q,\,p)$, where $q$ and $p$ denote $D$-dimensional vectors, i.e.,
$q=(q_1,\,q_2,\dots\, ,q_D)$ and $p=(p_1,\,p_2,\dots\, ,p_D).$
\subsection{Test-particle method}
In the test-particle method~\cite{TPplasma,Wong1982,TPtext,GuideBUU},
the Wigner function is represented as a sum of the delta functions,
\begin{align}
\fW(q,\,p,\,t)=&\frac{(2\pi\hbar)^D}{N_\mathrm{TP}}\sum_{i=1}^{N_\mathrm{TP}}
\delta^D(q-q_i(t))\,\delta^D(p-p_i(t))
\ ,\label{Eq:WignerTP}
\end{align}
with the initial function
\[
\fW(q,p,0)=\frac{(2\pi\hbar)^D}{N_\mathrm{TP}}\sum_{i=1}^{N_\mathrm{TP}}\delta^D(q-q_i(0))\,\delta^D(p-p_i(0)),
\]
where $N_{\rm TP}$ is the total number of the test particles,
and their coordinates are given by $(q_i(t),\, p_i(t))$.
The initial distribution of the test particles
$(q_i(0),\,p_i(0))$\, $(i=1,\,2,\dots,\,\NTP)$
is chosen so as to well sample that of $\fW(q,\, p,\,0)$:
Hence $\NTP$ is called the sampling number.
The time evolution of the coordinates
$(q_i(t),p_i(t))$
is determined by the EOM for $\fW(q,\,p,\,t)$, which is reduced to
the canonical equation of motion,
\begin{align}
\frac{dq_i}{dt}=&\frac{\partial \Wig{H}}{\partial p_i}
\ ,\quad
\frac{dp_i}{dt}=-\frac{\partial \Wig{H}}{\partial q_i}
\ ,\label{Eq:Canonical-D}
\end{align}
in the semiclassical approximation.
For the test-particle representation
of the Wigner function Eq.~\eqref{Eq:WignerTP},
the Husimi function is readily expressed as
\begin{align}
\fH(q,p,t)=&\frac{2^D}{\NTP} \sum_{i=1}^{\NTP}
e^{-\Delta (q-q_i(t))^2/\hbar-(p-p_i(t))^2/\Delta\hbar}
\ .\label{Eq:HusimiTP}
\end{align}
It is noteworthy that the
Husimi function here is a smooth function
in contrast to the corresponding Wigner function in Eq.~\eqref{Eq:WignerTP}.
Inserting the Wigner function \eqref{Eq:WignerTP} into Eq.~\eqref{Eq:HWE2},
the HW entropy in the test-particle method is
given as,
\begin{align}
S_\mathrm{HW}^\mathrm{(TP)}
=&-
\frac{1}{\NTP}
\sum_{i=1}^\NTP
\int \frac{d^Dqd^Dp}{(\pi\hbar)^D}\,
e^{-\Delta (q-q_i(t))^2/\hbar-(p-p_i(t))^2/\Delta\hbar}
\log \fH(q,p,t).
\end{align}
Now note that the integral over $(q,p)$ for each $i$ has support
only around the position of
the test particle $(q_i(t),\,p_i(t))$
due to the Gaussian function,
and then we can effectively perform the Monte-Carlo integration
as follows:
By generating a set of random numbers $(Q,P)_i$
with standard deviations of $\sqrt{\hbar/2\Delta}$ and $\sqrt{\hbar\Delta/2}$,
Monte-Carlo sampling point $(q,p)_i$ for each $i$ is obtained as
$(q,p)_i=(Q,P)_i+(q_i,p_i)$. Thus we reach the formula to be used
in the actual evaluation of the HW entropy in the test-particle method:
\begin{align}
S_\mathrm{HW}^\mathrm{(TP)}
\simeq&-
\frac{1}{\NMC\NTP}\sum_{k=1}^\NMC\sum_{i=1}^\NTP
\log\left[
\frac{2^D}{\NTP} \sum_{j=1}^{\NTP}
e^{-\Delta (Q_k+q_i(t)-q_j(t))^2/\hbar-(P_k+p_i(t)-p_j(t))^2/\Delta\hbar}
\right]
\ ,
\label{Eq:HWEtp}
\end{align}
where $\NMC$ denotes the number of Monte-Carlo samples $(Q,P)$.
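A compact Python sketch of Eq.~\eqref{Eq:HWEtp} (with $\hbar=\Delta=1$, so that
both standard deviations equal $\sqrt{1/2}$) reads as follows.
\begin{verbatim}
import numpy as np

def hw_entropy_tp(qs, ps, n_mc=500, seed=0):
    """Eq. (HWEtp) with hbar = Delta = 1; qs, ps: shape (N_TP, D)."""
    rng = np.random.default_rng(seed)
    n_tp, D = qs.shape
    s = 0.0
    for _ in range(n_mc):
        Q = rng.normal(scale=np.sqrt(0.5), size=D)  # shared shift (Q_k, P_k)
        P = rng.normal(scale=np.sqrt(0.5), size=D)
        dq = qs[:, None, :] + Q - qs[None, :, :]    # indices (i, j, D)
        dp = ps[:, None, :] + P - ps[None, :, :]
        f_H = 2.0**D / n_tp * np.exp(-(dq**2 + dp**2).sum(-1)).sum(-1)
        s -= np.log(f_H).sum() / (n_mc * n_tp)
    return s
\end{verbatim}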
\subsection{Two-step Monte-Carlo method}
The second method is a direct Monte-Carlo evaluation of the multi-dimensional integrals.
We rewrite Eq.~\eqref{Eq:HWE2} as
\begin{align}
S_\mathrm{HW}^\mathrm{(tsMC)}
=& -
\int \frac{d^DQd^DP}{(\pi\hbar)^D}\,e^{-\Delta Q^2/\hbar-P^2/\Delta\hbar}
\int \frac{d^Dqd^Dp}{(2\pi\hbar)^D}\,\fW(q,p,t)
\nonumber\\
&\times\log\left[
\int \frac{d^DQ'd^DP'}{(\pi\hbar)^D}\,e^{-\Delta (Q')^2/\hbar-(P')^2/\Delta\hbar}
\,\fW(q+Q+Q',p+P+P',t)
\right]
\nonumber\\
\simeq & -
\frac{1}{N_\mathrm{out}} \sum_{k=1}^{N_\mathrm{out}}
\log\left[
\frac{1}{N_\mathrm{in}} \sum_{l=1}^{N_\mathrm{in}}
\,\fW(q_k+Q_k+Q'_l,p_k+P_k+P'_l,t)
\right]
\nonumber\\
=&
-\left\langle
\log\left\langle{\fW(q+Q+Q',p+P+P',t)}\right\rangle_{Q'P'}
\right\rangle_{QPqp}
\ ,
\label{Eq:HWEtsMC}
\end{align}
where $(Q_k,P_k)$ and $(Q'_l,P'_l)$ are Gaussian random numbers
for the Monte-Carlo (MC) integration to compute the Husimi function $\fH(q,p)$.
For the $(q,p)$-integration, we generate MC samples $(q',p')$ at $t=0$ according to
the initial distribution, and obtain the corresponding phase space sample points
$(q(q',p',t),p(q',p',t))$ at $t$
by solving the canonical equation of motion.
Under the semiclassical approximation,
$\fW$ is constant along the classical trajectory and the Jacobian is unity,
$\partial(q(t),p(t))/\partial(q'(0),p'(0))=1$.
Then we can replace the integral over $(q,p)$ in the first line
of Eq.~\eqref{Eq:HWEtsMC}
with the integral at $t=0$ by using the initial distribution
and the Liouville theorem as,
\begin{align}
&\int \frac{d^Dqd^Dp}{(2\pi\hbar)^D}
\fW(q,p,t) g(q,p)
\nonumber\\
=&\int \frac{d^Dq'd^Dp'}{(2\pi\hbar)^D}
\fW(q',p',0)\, g(q(q',p',t),p(q',p',t))
\ ,
\end{align}
where $(q',p')$ are the phase space coordinates at $t=0$,
and $(q(q',p',t),p(q',p',t))$ are those at $t$ evolved from $(q',p')$.
The Wigner function at $t$ inside the $\log$ in Eq.~\eqref{Eq:HWEtsMC}
can be
obtained by tracing back the trajectory from $t$ to $t=0$,
as shown in Eq.~(\ref{Eq:Wconst-1}).
Equation~\eqref{Eq:HWEtsMC} contains
an MC integral of a function obtained by an MC integral;
we first generate $(q',p')$ at $t=0$ according to the distribution
$\fW(q,p,0)$ and $(Q,P)$ as Gaussian random numbers,
and then perform the MC integral in the log by generating MC samples $(Q',P')$.
We call this procedure {\em two-step Monte-Carlo} (tsMC).
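A minimal Python sketch of the tsMC procedure is given below; here \texttt{sample\_qp0}, \texttt{f\_w0} and \texttt{evolve} (forward/backward solution of the canonical equations) are user-supplied helpers assumed for the sketch:
\begin{verbatim}
import numpy as np

def hw_entropy_tsmc(sample_qp0, f_w0, evolve, n_out, n_in, t,
                    hbar=1.0, Delta=1.0, seed=None):
    """Two-step Monte-Carlo estimate of the HW entropy.
    sample_qp0 : callable(n) -> (q0, p0) sampled from f_W(q, p, 0)
    f_w0       : callable(q, p) -> initial Wigner function values
    evolve     : callable(q, p, t) -> phase-space point moved by time t
                 (a negative t traces the trajectory back to t = 0)
    """
    rng = np.random.default_rng(seed)
    q0, p0 = sample_qp0(n_out)          # outer samples at t = 0 ...
    q, p = evolve(q0, p0, t)            # ... propagated forward to time t
    sig_q = np.sqrt(hbar / (2.0 * Delta))
    sig_p = np.sqrt(hbar * Delta / 2.0)
    D = q.shape[1]
    total = 0.0
    for k in range(n_out):
        Q = rng.normal(0.0, sig_q, size=D)
        P = rng.normal(0.0, sig_p, size=D)
        Qp = rng.normal(0.0, sig_q, size=(n_in, D))
        Pp = rng.normal(0.0, sig_p, size=(n_in, D))
        qq, pp = q[k] + Q + Qp, p[k] + P + Pp
        # Liouville: f_W(., t) equals f_W(., 0) on the traced-back trajectory
        qb, pb = evolve(qq, pp, -t)
        total += np.log(np.mean(f_w0(qb, pb)))
    return -total / n_out
\end{verbatim}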
In the following sections, we
show the characteristic properties of the two methods and
demonstrate numerically how they work
using two-dimensional
quantum-mechanical systems.
\section{Numerical calculation of Husimi-Wehrl entropy in quantum Yang-Mills model}
\label{sec:results}
In this section, we show the numerical results of the HW entropy
in the ``quantum Yang-Mills'' system \cite{qYMref},
obtained by the two distinct methods, the TP and tsMC methods.
\subsection{Model Hamiltonian and setup of initial condition}
The Hamiltonian of the system
is given by
\begin{align}
H=\frac{1}{2m}(p_1^2+p_2^2)+\frac{1}{2}q_1^2q_2^2.
\label{qYMHamiltonian}
\end{align}
We have restricted ourselves to the two-dimensional case here.
The name ``quantum Yang-Mills (qYM)'' originates from the
fact that the spatially uniform Yang-Mills system
reduces to a $(0+1)$-dimensional system, i.e.,
a quantum mechanical system, whose Hamiltonian
is just given by Eq.~\eqref{qYMHamiltonian}.
We adopt the initial condition
given by a minimal wave packet centered at $(q_1, q_2, p_1, p_2)=(0,0,10,10)$,
\begin{align}
f_{\rm W}(q_1,q_2,p_1,p_2,t=0)
=4e^{-[q^2_1+q^2_2+(p_1-10)^2+(p_2-10)^2]/\hbar}.
\label{Eq:initcond}
\end{align}
This initial condition is also adopted in Ref.~\cite{TsaiMuller}.
In the following, we show numerical results calculated by using the TP
and tsMC methods.
We show the results in units with $m=1$ and
$\hbar=1$, and take $\Delta=1$ for the wave-packet width.
In the case of $\Delta \neq 1$, the smearing Gaussian is not symmetric in the $p$ and $q$ directions, but the results do not change qualitatively:
we have confirmed that the results with $\Delta=0.1$ and $10$ are
qualitatively
the same as those with $\Delta=1$.
\subsection{Numerical results with TP method}
\label{subsec:qYM-TP}
First,
we show the numerical results of the HW entropy in the qYM system
calculated in the TP method using Eq.~\eqref{Eq:HWEtp}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{./qYM-TP.eps}
\caption{Time dependence of the HW entropy by using the TP method in qYM,
with $N_{\rm TP}=100, 1000, 5000$ and $15000$, and $N_{\rm MC}=500$.
The arrow shows how the calculated HW entropy changes
as $N_{\rm TP}$ increases.}
\label{TP_QYMm}
\end{center}
\end{figure}
Figure \ref{TP_QYMm} shows the time evolution of the HW entropy
calculated in the TP method with the following test-particle numbers,
$N_{\rm TP}=100, 1000, 5000$ and $15000$.
The MC sample number is taken to be $N_{\rm MC}=500$.
The statistical errors are estimated from the standard deviation over the $\NMC$ samples.
We note that the calculated HW entropy at each $t$ tends to increase
with increasing $N_{\rm TP}$,
which is an artifact due to the finite number of test particles
$N_{\rm TP}$,
as discussed later.
Apart from tiny fluctuations,
all the calculations show that the HW entropy first increases
in time,
accompanied by a small oscillatory behavior;
its local maxima are seen around $t\simeq 0.5$ and $1.7$.
We note that a similar behavior is also seen in Ref.~\cite{TsaiMuller}.
Entropy evaluated by the TP method has an (unphysical) maximum depending
on $N_{\rm TP}$, which causes apparent saturation at large $t$ in Fig.~\ref{TP_QYMm}.
In fact, when the system is chaotic and the phase space volume is very large,
all the test particles will be so separated
from each other in the phase space at later time
that only the $i=j$ terms in Eq.~\eqref{Eq:HWEtp} will remain.
In this limiting case,
the HW entropy as given in Eq.~\eqref{Eq:HWEtp} is evaluated as follows:
\begin{align}
S_\mathrm{HW}^\mathrm{(TP)}
\to&-
\left\langle
\left[
\log\left(\frac{2^D}{\NTP}\right)
-\Delta Q^2/\hbar-P^2/\Delta\hbar
\right]
\right\rangle_{QP}
\nonumber\\
&=D-D\log{2}+\log{\NTP}
\ ,
\label{eq:limiting}
\end{align}
which gives the inevitable upper limit of $S_{\rm HW}^{({\rm TP})}$.
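For $D=2$, a one-line numerical check of Eq.~\eqref{eq:limiting} reproduces the limiting values quoted below:
\begin{verbatim}
import numpy as np
D = 2
for n_tp in (100, 1000, 5000, 15000):
    print(n_tp, D - D * np.log(2) + np.log(n_tp))
# -> 5.2, 7.5, 9.1 and 10.2 to one decimal place
\end{verbatim}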
In Appendix A, we
examine the HW entropy of an inverted harmonic oscillator,
for which $S_{\rm HW}$ can be calculated analytically and is
found to increase indefinitely.
At later times, $\SHW$ is underestimated with small $\NTP$ values
because of the upper limit discussed above.
By comparison,
$\SHW$ at early times is calculated precisely in the TP method,
as long as $\NTP$ is large enough for $\SHW$ to converge.
From the above argument,
$\SHW(t)$ would be obtained reliably
as an extrapolated value in the limit of $\NTP \to \infty$.
The extrapolation should be made in the $\NTP$ range
where the limiting value is larger than the HW entropy to be obtained.
The
limiting values are
$S_\mathrm{HW}^\mathrm{(TP)}=5.2, 7.5, 9.1$ and $10.2$
for $\NTP=100, 1000, 5000$ and $15000$, respectively.
The large-$t$ values found in Fig.~\ref{TP_QYMm} are close
to these limiting values for smaller $\NTP$, i.e.,
$\NTP = 100$ and $1000$.
Thus we see that
the saturation behavior seen for smaller values of $N_\mathrm{TP}$
may be an artifact of the TP method.
In contrast,
the large-$t$ values for $N_\mathrm{TP} = 5000$ and $15000$
in Fig.~\ref{TP_QYMm}
are well below the limiting values
($9.1$ and $10.2$),
and are thus free from the above-mentioned artifact;
they can be used to obtain the extrapolated value at $\NTP\to\infty$,
as discussed later in Subsec.~\ref{subsec:CompTP}.
Thus we conclude that
the entropy production of the ``quantum Yang-Mills'' system
can be well described with the use of HW entropy as calculated
with the TP method
with a sufficiently large number of test particles.
\subsection{Numerical results with tsMC method}
Next, we show the numerical results of the HW entropy in qYM
in the tsMC method
using the formula Eq.~\eqref{Eq:HWEtsMC}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{./qYM-tsMC.eps}
\caption{Time dependence of the HW entropy calculated by using the tsMC method.}
\label{MCPA_QYMm}
\end{center}
\end{figure}
Figure \ref{MCPA_QYMm} shows the time evolution of the HW entropy
calculated
in the tsMC method
with
the sample numbers
$N_{\rm in}=1200, 2400, 4800$ and $12000$.
$N_{\rm out}$ is taken to be the same as $N_{\rm in}$.
The errors
attached to $S_{\rm HW}$ in the
present figure are estimated only for the Monte-Carlo integral
outside the $\log$ in Eq.~\eqref{Eq:HWEtsMC};
those from the integral inside the $\log$ are not taken into account,
which causes an additional systematic error.
We see that the larger the value of $N_{\rm in}$, the smaller the HW entropy,
which is an opposite dependence on the sample number to that in the TP method.
Nevertheless the gross behavior of the time evolution of the HW entropy
is quite similar in the two methods apart from the tiny fluctuations:
after showing an oscillatory behavior in an initial short period,
it increases monotonically and
its growth rate decreases gradually.
More quantitative comparison of the two methods
will be presented in the next subsection.
\subsection{Comparison of the two methods}
\label{subsec:CompTP}
Figure~\ref{QYMmt10} shows the HW entropy at $t=10$
as a function of $N_{\rm TP}$ ($N_{\rm in}$)
in the TP (tsMC) method.
We fit a linear function $f(t)=at+b$
to the calculated $S_\mathrm{HW}(t)$ data
in the range $10-\Delta t\leq t \leq 10+\Delta t\ (\Delta t=1)$,
and adopt $f(t=10)$ as the HW entropy value at $t=10$.
This procedure provides a smoother curve
and reduces the errors coming from fluctuations
compared to directly using the raw data.
The HW entropy in the TP method
becomes larger with increasing $N_{\rm TP}$, as already mentioned:
at $t=10$,
$S_{\rm HW}\simeq 5.1$ for $N_{\rm TP}=100$
and $S_{\rm HW}\simeq 8.7$ for $N_{\rm TP}=15000$.
We also show
the fit results to the
data for larger samples, say $\NTP\ge 5000$, with the fit function,
\begin{align}
f(N)=a-\frac{b}{N^c}.
\label{eq:fitfunc}
\end{align}
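As an illustration of this extrapolation, a minimal Python fit sketch is given below; the data points are placeholder numbers standing in for the measured entropies, which in the actual analysis are read off from Fig.~\ref{QYMmt10}:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def f(N, a, b, c):                 # the fit function, Eq. (eq:fitfunc)
    return a - b / N**c

N_tp = np.array([2000., 5000., 10000., 15000.])  # placeholder sample numbers
S_hw = np.array([7.9, 8.4, 8.6, 8.7])            # placeholder S_HW(t=10)
popt, pcov = curve_fit(f, N_tp, S_hw, p0=(9.0, 10.0, 0.5))
print("extrapolated value a =", popt[0])
\end{verbatim}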
The value extrapolated to $N_{\rm TP}\rightarrow \infty$ is
$9.19\pm 0.10$.
When we use other fit functions such as
$f(N)=a-b/(N/c+1)$ and
$f(N)=a-b/N+c/N^2$,
the fit results differ with a standard deviation of $0.16$,
which should be considered as a systematic error.
Thus the HW entropy in the TP method is obtained as
\begin{align}
S_\mathrm{HW}^\mathrm{(TP)}(t=10)
=9.19 \pm 0.10~\mathrm{(stat.)} \pm 0.16~\mathrm{(syst.)}
\ .
\end{align}
With increasing $N_{\rm in}$,
the HW entropy calculated in the tsMC method decreases,
which is an opposite behavior to that in the TP method as noted before.
At $t=10$,
$S_{\rm HW}\simeq 13.2$ for $N_{\rm in}=1200$
and
$S_{\rm HW}\simeq 9.5$ for $N_{\rm in}=12000$.
We also show the fit results to the data.
We adopt Eq.~\eqref{eq:fitfunc} for the fit function.
From the fit results, the HW entropy in the tsMC method
is found to be
\begin{align}
S_\mathrm{HW}^\mathrm{(tsMC)}(t=10)
=9.01 \pm 0.21~\mathrm{(stat.)} \pm 0.06~\mathrm{(syst.)}
\ ,
\end{align}
where the central value and the statistical error are obtained
from the fit using Eq.~\eqref{eq:fitfunc},
and the systematic error is evaluated from the fits using several fit functions
as done in the TP method.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{./qYM-t10N.eps}
\caption{HW entropy in qYM at $t=10$ as a function of $\NTP$ ($\NIN$),
and its extrapolation to infinitely large $\NTP$ ($\NIN$)
in the TP (tsMC) method.
Filled circles (squares) show TP (tsMC) results,
and the solid (dashed) line shows a fit function to the TP (tsMC) results.
The dotted line shows the limiting value given by Eq.~\eqref{eq:limiting}.
The shaded areas show the extrapolated values
in the limit of $\NTP, \NIN \to \infty$.}
\label{QYMmt10}
\end{center}
\end{figure}
\subsection{Discussions}
The time evolutions of the HW entropy obtained in the TP and tsMC methods
are similar to each other:
the HW entropy increases with an oscillatory behavior in the early stage,
then shows a monotonic increase at
a decreasing rate.
The HW entropy at each $t$ in the TP method increases with $\NTP$,
while it decreases with increasing $\NIN$ in the tsMC method.
Thus we can infer that the true value of the HW entropy lies between
the results of the TP and tsMC methods.
Actually, the extrapolated values at $t=10$,
$S_\mathrm{HW}^\mathrm{(TP)}(t=10)=9.19\pm0.10\pm0.16$ at $\NTP\to\infty$
and $S_\mathrm{HW}^\mathrm{(tsMC)}(t=10)=9.01\pm0.21\pm0.06$ at $\NIN\to\infty$
in the TP and tsMC methods respectively,
are consistent with each other within the error.
These results are also
in agreement with the result in Ref.~\cite{TsaiMuller}.
Thus the two methods, the TP and tsMC methods,
give consistent results after the $N\rightarrow\infty$ extrapolation.
On the other hand, with finite $\NTP$ and $\NIN$,
they could give seemingly inconsistent results
depending on the dynamics.
Let us now take a deeper look at this issue.
In the tsMC method, the entropy seems to
keep increasing even at later times, in contrast to
the results in the TP method with finite $\NTP$
and in Ref.~\cite{TsaiMuller}.
The discrepancy may come from
the special shape of the potential:
there are two flat directions in the potential of the qYM system,
although their width tends to shrink at large distances.
Then the classical trajectory can keep growing along the flat directions,
which would cause an unlimited spreading of the Husimi function
and a permanent increase
of the HW entropy calculated in the semiclassical approximation.
(In the case of the TP method,
there exists a limiting value of the HW entropy depending on $\NTP$,
which gives rise to the apparent saturation of $S$ at large $t$.)
By comparison,
the exact energy spectrum of the qYM system is known to be purely discrete,
because the shrinking width of the flat directions leads to an increase of
the kinetic energy due to the uncertainty relation,
even though the volume of $\{(p,q)\,|\,H(p,q)\le E\}$
is infinite \cite{Simon1983}.
Note that the discrete spectrum implies that the wave functions of
the energy eigenstates are all bound.
Thus the corresponding Husimi function would not have support
at infinite distance due to this quantum effect,
and the HW entropy may not show the ever-increasing behavior
but instead saturate at a finite value.
This plausible conjecture can only be confirmed by a full quantum
calculation beyond the semiclassical approximation.
Such a calculation is beyond the scope of the present work and
is left for future work.
Instead, we shall take another model,
which is a modified version of the qYM one
free from flat directions in its potential.
\section{Modified quantum Yang-Mills model}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{./mqYM010-TP.eps}
\caption{Time dependence of the HW entropy by using the TP method in modified qYM.}
\label{TP_MQYMm}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{./mqYM010-tsMC.eps}
\caption{Time dependence of the HW entropy by using the tsMC method in modified qYM.}
\label{MCPA_MQYMm}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{./mqYM010-t10N.eps}
\caption{HW entropy in mqYM at $t=10$ as a function of $\NTP$ ($\NIN$),
and its extrapolation to infinitely large $\NTP$ ($\NIN$)
in the TP (tsMC) method.
Filled circles (squares) show TP (tsMC) results,
and the solid (dashed) line shows a fit function to the TP (tsMC) results.
The dotted line shows the limiting value given by Eq.~\eqref{eq:limiting}.
The shaded areas show the extrapolated values
in the limit of $\NTP, \NIN \to \infty$.}
\label{mQYMmt10}
\end{center}
\end{figure}
Let us consider the model in which
quartic potential terms are added to the qYM Hamiltonian:
\begin{align}
H=\frac{1}{2m}(p_1^2+p_2^2)+\frac{1}{2}g^2q_1^2q_2^2+\frac{\epsilon}{4}q_1^4+\frac{\epsilon}{4}q_2^4.
\label{Eq:H_mqYM}
\end{align}
We call the system ``modified quantum Yang-Mills (mqYM)''.
The system was studied in Refs.~\cite{Sugita-Aiba2002,Sugita03}
with $g^2<0$ in the context of chaos.
It is apparent that
there is no flat direction in the potential due to the quartic terms.
We take $g^2=1$ and $\epsilon=0.1$ in the Hamiltonian,
Eq.~\eqref{Eq:H_mqYM}.
The mqYM system is known to be integrable
for $\epsilon/g^2=1, 1/3$ and $\infty$~\cite{Baker:1995bp,Sugita-Aiba2002}.
Our choice of $\epsilon/g^2=0.1$ is well away from these integrable cases.
Since $\epsilon$ is not very large,
the HW entropy shows a similar behavior to that in qYM at early times,
as shown later.
In this section,
we shall calculate the HW entropy of the mqYM system in
the TP and tsMC methods.
The analyses are carried out in a similar way to those
for the qYM system.
In Figs.~\ref{TP_MQYMm} and \ref{MCPA_MQYMm},
we show the time evolution of the HW entropy in mqYM
calculated using the TP
($\NTP=500, 1000, 5000$ and $15000$ with $\NMC=500$)
and tsMC ($\NIN=600, 1200, 2400$ and $12000$) methods, respectively.
$N_{\rm out}$ is taken to be the same as $\NIN$ for tsMC.
The distribution function in Eq.~\eqref{Eq:initcond}
is used as the initial condition,
and the statistical errors are estimated from the standard deviation
over the $\NMC$ ($\NIN$) samples in the TP (tsMC) method,
as in the qYM cases.
Both of the calculated results show that
the HW entropy first increases with an oscillatory behavior
and tends to saturate at later times, $t \gtrsim 6$.
The later-time $\SHW$ values depend on the sample numbers
$\NTP$ and $\NIN$:
with increasing $\NTP$ ($\NIN$), the HW entropy increases (decreases)
in the TP (tsMC) method.
These features were also found in qYM. In contrast to qYM, however, it
should be noted
that $\SHW$ seems to saturate
in both the TP and tsMC methods in mqYM.
This may originate from the
finite phase-space volume where the Husimi function has support.
In Fig.~\ref{mQYMmt10},
we show the HW entropy at $t=10$ as a function of $\NTP$ or $\NIN$.
We fit a linear
function
to the calculated $S_\mathrm{HW}(t)$ results in the range $9<t<11$,
and adopt
$f(t=10)$ as the HW entropy value
at $t=10$.
In the TP method, $S_\mathrm{HW}^\mathrm{(TP)}(t=10)\simeq 6.4$ and $7.5$
for $\NTP=500$ and $15000$, respectively.
In tsMC, we find
$S_{\rm HW}^\mathrm{(tsMC)}(t=10)\simeq 9.4$
and $7.7$ for $\NIN=600$ and $12000$, respectively.
The extrapolated values of $\SHW$ at
$\NTP\to\infty$ and $\NIN\to\infty$
are found to be
\begin{align}
S_\mathrm{HW}^\mathrm{(TP)}(t=10)
=7.61\pm0.01\mathrm{(stat.)}\pm0.03\mathrm{(syst.)}\ ,
\\
S_\mathrm{HW}^\mathrm{(tsMC)}(t=10)
=7.53\pm0.01\mathrm{(stat.)}\pm0.04\mathrm{(syst.)}\ ,
\end{align}
in the TP and tsMC methods, respectively.
The central values and the statistical errors are obtained
from the fits using Eq.~\eqref{eq:fitfunc},
and the systematic errors are evaluated from the fits using several fit functions.
These two values are consistent with each other within the error.
This observation shows that the two methods, TP and tsMC,
are especially effective for potentials which bound the Husimi function
in a finite region.
Thus, we are confident of the validity of the two methods
in the mqYM system.
\section{Summary}
We have discussed entropy creation in isolated quantum systems
by using the Husimi-Wehrl entropy evaluated
in a semiclassical treatment.
The semiclassical treatment is known to be useful in systems
such as
inflation in the early universe
and the early stage of relativistic heavy-ion collisions.
These systems are expected to exhibit instabilities and/or chaoticity
in their classical counterparts;
then the smearing of the phase-space distribution by the minimal wave packet
causes entropy production in terms of the Wehrl entropy or the Boltzmann
$H$ function even in isolated quantum systems.
This is nothing but the Husimi-Wehrl entropy, the Wehrl entropy obtained
by using the Gaussian smeared Wigner function (Husimi function)
for the phase space distribution.
The semiclassical time evolution of the Husimi function is given
by solving a classical equation of motion and smearing with
a Gaussian packet.
Combining this semiclassical treatment
with the Monte-Carlo numerical integral technique,
we have developed two methods,
the test-particle (TP) method and the two-step Monte Carlo (tsMC) method.
We have
applied these two methods
to quantum mechanical systems in two dimensions,
the quantum Yang-Mills (qYM) and the modified quantum Yang-Mills (mqYM)
systems.
The classical counterparts of these systems are known to be chaotic.
We have demonstrated that the Husimi-Wehrl entropy obtained in the TP (tsMC)
method approaches the converged value from below (from above)
with increasing sample number, so that
we can
estimate the true value of the HW entropy.
We have further found that the results
of the TP and tsMC methods in the infinite sampling number limit
are consistent within the error.
Therefore, the simultaneous application of the two methods ensures
the reliability of the results of the Husimi-Wehrl entropy at a given time.
The extension of our methods to a multidimensional system
is
straightforward.
We expect that these methods are
useful in systems with many degrees of freedom
such as quantum field theory.
These methods are, in principle, applicable to higher-dimensional problems,
and we have confirmed that they actually work in three- and four-dimensional
systems. In higher dimensions, many more Monte-Carlo samples are needed to
obtain
statistically reliable results, and it would be necessary to make some
approximations
for practical purposes.
Work in this direction is in progress.
\section*{Acknowledgement}
We would like to thank Ayumu Sugita for a good lecture and useful suggestions.
This work was supported in part by
the Grants-in-Aid for Scientific Research from JSPS
(Nos.
20540265,
23340067,
24340054,
24540271,
15K0507
),
the Grants-in-Aid for Scientific Research on Innovative Areas from MEXT
(Nos. 23105713,
24105001, 24105008
),
and
by the Yukawa International Program for Quark-Hadron Sciences.
T.K. is supported by the Core Stage Back Up program in Kyoto University.
\label{intro}
In 1982, Kingman introduced a process called the {\em coalescent}. \nocite{kingman:1982} This process provides a simple and elegant description of the genealogical (family) relationships amongst a set of neutral genes in a randomly mating (biologists would say {\em panmictic}) population of constant size. Since that time, spurred on by the flood of DNA sequence data, considerable effort has been spent extending Kingman's coalescent to incorporate things like varying population size, natural selection and spatial (and genetic) structure of populations. Analytic results for these coalescent models can be very hard to obtain, but it is relatively easy, at least in principle, to simulate them and so they have become fundamental tools in sequence analysis. However, models of spatial structure have largely concentrated on subdivided populations and a satisfactory model for the ancestry of a population evolving in a two-dimensional spatial continuum has remained elusive. Our aim in this paper is to present the first rigorous investigation of a new model that addresses some of the difficulties of existing models for spatially extended populations while retaining some analytic tractability. The rest of this introduction is devoted to placing this research in context. The reader eager to skip straight to the model and a precise statement of our main results should proceed directly to Section \ref{model}.
Our concern here is with the extension of the coalescent to spatially structured populations. In this setting it is customary to assume that the population is subdivided into {\em demes} of (large) constant size, each situated at a vertex of a graph $G$, and model the genealogical trees using the {\em structured} coalescent. As we trace backwards in time, within each deme the ancestral lineages follow Kingman's coalescent, that is each pair of lineages merges (or {\em coalesces}) into a single lineage at a constant rate, but in addition lineages can migrate between demes according to a random walk on the graph $G$. The genealogical trees obtained in this way coincide with those for a population whose forwards in time dynamics are given by Kimura's stepping stone model (Kimura~1953) \nocite{kimura:1953} or, as a special case, if $G$ is a complete graph, by Wright's island model (Wright~1931). \nocite{wright:1931}
The stepping stone model is most easily described when the population consists of individuals of just two types, $a$ and $A$ say. It can be extended to incorporate selection, but let us suppose for simplicity that these types are selectively neutral. Labelling the vertices of the graph $G$ by the elements of the (finite or countable) set $I$ and writing $p_i$ for the proportion of individuals in deme $i$ of type $a$, say, we have
\begin{equation}
\label{stepstone model} dp_i(t)=\sum_{j\in I}m_{ji}\left(p_j(t)-p_i(t)\right)dt+\sqrt{\gamma p_i(t)\left(1-p_i(t)\right)}dW_i(t)
\end{equation}
where $\{W_i(t); t\geq 0\}_{i\in I}$ is a collection of independent Wiener processes, $\gamma$ is a positive constant and $\{m_{ij}\}_{i,j\in I}$ specifies the rates of a continuous time random walk on $G$. The graph $G$, chosen to caricature the spatial structure of the population, is typically taken to be $\mathbb Z^2$ (or its intersection with a two-dimensional torus) and then one sets $m_{ij}=\kappa\mathbf{1}_{\{\|i-j\|=1\}}$, corresponding to simple random walk.
Although the stepping stone model is widely accepted as a model for structured populations, in reality, many populations are not subdivided, but instead are distributed across a spatial continuum. Wright~(1943) and Mal\'ecot~(1948) \nocite{wright:1943} \nocite{malecot:1948} derived expressions for the probability of identity of two individuals sampled from a population dispersed in a two-dimensional continuum by assuming on the one hand that genes reproduce and disperse independently of one another, and on the other hand that they are scattered in a stationary Poisson distribution. However, these assumptions are incompatible (Felsenstein~1975, Sawyer \& Fleischmann~1979). \nocite{felsenstein:1975} \nocite{sawyer/fleischmann:1979} The assumption of independent reproduction will result in `clumping' of the population and some local regulation will be required to control the local population density.
A closely related approach is to assume that the genealogical trees can be constructed from Brownian motions which coalesce at an instantaneous rate given by a function of their separation. The position of the common ancestor is typically taken to be a Gaussian centred on the midpoint between the two lineages immediately before the coalescence event (although other distributions are of course possible). However, the coalescent obtained in this way does not exhibit {\em sampling consistency}. That is, if we construct the genealogical tree corresponding to a sample of size $n$ and then examine the induced genealogical tree for a randomly chosen subsample of size $k<n$, this will not have the same distribution as the tree we obtain by constructing a system of coalescing lineages directly from the subsample. The reason is that whenever one of the lineages in the subsample is involved in a coalescence event in the full tree it will jump. Furthermore, just as in Mal\'ecot's setting, there is no corresponding {\em forwards} in time model for the evolution of the population.
Barton et al.~(2002) extend the formulae of Wright and Mal\'ecot to \nocite{barton/depaulis/etheridge:2002} population models which incorporate local structure. The probability of identity is obtained from a recursion over timeslices of length $\Delta t$. Two related assumptions are made. First, the ancestral lineages of genes that are sufficiently well separated are assumed to follow independent Brownian motions (with an effective dispersal rate which will in general differ from the forwards in time dispersal rate) and their chance of coancestry in the previous timeslice is negligible. Second, it must be possible to choose $\Delta t$ sufficiently large that the changes in the population over successive timeslices are uncorrelated. (For general $\Delta t$ this will not be the case. The movements of ancestral lineages in one time step may be correlated with their movements in previous steps if, for example, individuals tend to disperse away from temporarily crowded clusters.) Over all but very small scales, the resulting probability of identity can be written as a function of three parameters: the {\em effective dispersal rate}, the {\em neighbourhood size} and the {\em local scale}. However the usefulness of this result is limited due to a lack of explicit models for which the assumptions can be validated and the effective parameters established. Moreover, as explained in Barton et al.~(2002), although one can in principle extend the formula to approximate the distribution of genealogies amongst larger samples of well-separated genes, additional assumptions need to be made if such genealogies are to be dominated by pairwise coalescence. If several genes are sampled from one location and neighbourhood size is small then multiple coalescence (by which we mean simultaneous coalescence of {\em three} or more lineages) could become significant.
Multiple merger coalescents have received considerable attention from mathematicians over the last decade. Pitman~(1999) and Sagitov~(1999) introduced what we now call \nocite{pitman:1999} \nocite{sagitov:1999} {\em $\Lambda$-coalescents}, in which more than two ancestral lineages can coalesce in a single event, but {\em simultaneous} coalescence events are not allowed. Like Kingman's coalescent, these processes take their values among partitions of $\mathbb N$ and their laws can be prescribed by specifying the restriction to partitions of $\{1,2,\ldots ,n\}$ for each $n\in\mathbb N$. For our purposes, the $\Lambda$-coalescent describes the ancestry of a population whose individuals are labelled by $\mathbb N$. Each block in the partition at time $t$ corresponds to a single ancestor at time $t$ before the present, with the elements of the block being the descendants of that ancestor. Tracing backwards in time, the evolution of the $\Lambda$-coalescent is as follows: if there are currently $p$ ancestral lineages, then each transition involving $j$ of the blocks merging into one happens at rate
\begin{equation}
\label{betas in the coalescent} \beta_{p,j}^{\Lambda}=\int_{[0,1]}u^{j-2}(1-u)^{p-j}\Lambda(du),
\end{equation}
and these are the only possible transitions. Here, $\Lambda$ is a finite measure on $[0,1]$. Kingman's coalescent corresponds to the special case $\Lambda=\delta_0$, the point mass at the origin.
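As a concrete illustration (our example, using the standard Beta-coalescent family, which is not needed in what follows): taking $\Lambda$ to be the Beta$(2-\alpha,\alpha)$ distribution with $0<\alpha<2$, the integral in Equation~(\ref{betas in the coalescent}) reduces to a ratio of Beta functions and can be evaluated directly:
\begin{verbatim}
from scipy.special import beta as B

def beta_coalescent_rate(p, j, alpha):
    """Rate beta_{p,j} for Lambda = Beta(2 - alpha, alpha):
    beta_{p,j} = B(j - alpha, p - j + alpha) / B(2 - alpha, alpha)."""
    return B(j - alpha, p - j + alpha) / B(2 - alpha, alpha)

# e.g. the rate at which a given triple out of 5 lineages merges
# in the Bolthausen-Sznitman case alpha = 1 (equals 1/12)
print(beta_coalescent_rate(5, 3, alpha=1.0))
\end{verbatim}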
\begin{remark}
More generally, one can consider processes with simultaneous multiple coalescence events. Such coalescents were obtained as the genealogies of suitably rescaled population models by M\"ohle \& Sagitov~(2001). Independently, Schweinsberg~(2000) obtained the same class of coalescents and characterised the possible rates of mergers in terms of a single measure $\Xi$ on an infinite simplex. Coalescents which allow {\em simultaneous} multiple mergers are now generally referred to as {\em $\Xi$-coalescents}. \nocite{mohle/sagitov:2001} \nocite{schweinsberg:2000}
\end{remark}
Kingman's coalescent can be thought of as describing the genealogy of a random sample from a Fleming-Viot process. In the same way, a $\Lambda$-coalescent describes the genealogy of a random sample from a generalised Fleming-Viot process. This process takes its values among probability measures on $[0,1]$. We shall describe it in terms of its generator, $\mathcal R$ acting on functions of the form
$$
F(\rho)=\int f(x_1,\ldots ,x_p)\rho(dx_p)\ldots\rho(dx_1),
$$
where $p\in\mathbb N$ and $f:[0,1]^p\rightarrow\mathbb R$ is measurable and bounded. First we need some notation. If $x=(x_1,\ldots ,x_p)\in [0,1]^p$ and $J\subseteq \{1,\ldots ,p\}$ we write
$$
x_i^J=x_{\min J}\mbox{ if }i\in J, \mbox{ and } x_i^J=x_i \mbox{ if }i\notin J, \ \ i=1,\ldots ,p.
$$
Then for $\Lambda$ a finite measure on $[0,1]$, a $\Lambda$-Fleming-Viot process has generator
$$
{\mathcal R}F(\rho)=\sum_{J\subseteq\{1,\ldots ,p\}, |J|\geq 2} \beta_{p,|J|}^{\Lambda}\int\left(f(x_1^J,\ldots ,x_p^J)-f(x_1,\ldots ,x_p)\right) \rho(dx_p)\ldots \rho(dx_1),
$$
where $\beta_{p,j}^{\Lambda}$ is defined in Equation~(\ref{betas in the coalescent}). When $\Lambda(\{0\})=0$, this can also be written
$$
{\mathcal R}F(\rho)=\int_{(0,1]}\int_{[0,1]} \Big(F\big((1-u)\rho+u\delta_k\big)- F(\rho)\Big) \rho(dk)u^{-2}\Lambda(du).
$$
(When $\Lambda(\{0\})>0$, one must add a second term corresponding to a classical Fleming-Viot process and somehow dual to the Kingman part of the $\Lambda$-coalescent.) In this case, an intuitive way to think about the process is to consider a Poisson point process on $\mathbb R_+\times (0,1]$ with intensity measure $dt\otimes u^{-2}\Lambda (du)$, which picks jump times and sizes for $\rho(t)$. At a jump time $t$ with corresponding jump size $u$, a type $k$ is chosen according to $\rho(t-)$, an atom of mass $u$ is inserted at $k$ and $\rho(t-)$ is scaled down by $(1-u)$ so that the total mass remains equal to one, i.e.,
\begin{equation}
\label{lambda fv} \rho(t)=(1-u)\rho(t-)+u\delta_k.
\end{equation}
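For the toy choice $\Lambda=\delta_{u_0}$ with $u_0\in (0,1]$ (so that the total jump rate $u_0^{-2}$ is finite), this jump dynamics is easy to simulate; the following sketch (ours, for illustration) keeps $\rho$ as a discrete measure, an assumption made purely for the implementation:
\begin{verbatim}
import numpy as np

def lambda_fv_jumps(types, weights, u0, t_max, seed=None):
    """Jump dynamics rho(t) = (1 - u0) rho(t-) + u0 delta_k
    for Lambda = delta_{u0}; jumps occur at rate u0**(-2)."""
    rng = np.random.default_rng(seed)
    types, weights = list(types), list(weights)
    t = 0.0
    while True:
        t += rng.exponential(u0**2)      # mean waiting time = 1 / rate
        if t > t_max:
            break
        k = rng.choice(types, p=np.asarray(weights) / sum(weights))
        weights = [(1.0 - u0) * w for w in weights]  # scale rho down ...
        types.append(k)                              # ... and insert an
        weights.append(u0)                           # atom of mass u0 at k
    return types, weights

# start from ten equally weighted types drawn uniformly from [0, 1]
ts, ws = lambda_fv_jumps(np.random.rand(10), [0.1] * 10, 0.3, 5.0)
\end{verbatim}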
The duality between $\Lambda$-coalescents and $\Lambda$-Fleming-Viot processes was first proved by Bertoin \& Le Gall~(2003). Their approach uses a correspondence between the $\Lambda$-coalescents and stochastic flows of bridges. The duality can also be understood via the Donnelly \& Kurtz~(1999) `modified lookdown construction' and indeed is implicit there. An explicit explanation can be found in Birkner et al.~(2005). \nocite{bertoin/legall:2003} \nocite{birkner/blath/capaldo/etal:2005} \nocite{donnelly/kurtz:1999}
In recent work (described briefly in Etheridge~2008), \nocite{etheridge:2008} Barton \& Etheridge have proposed a new class of consistent forwards and backwards in time models for the evolution of allele frequencies in a population distributed in a two-dimensional (or indeed $d$-dimensional) spatial continuum which, in the simplest setting, can be thought of as spatial versions of the $\Lambda$-Fleming-Viot and $\Lambda$-coalescent models (although we emphasize that these are not the same as the spatial $\Lambda$-coalescents considered by Limic \& Sturm~2006). They \nocite{limic/sturm:2006} share many of the advantages of the classical models for spatially structured populations while overcoming at least some of the disadvantages. The idea is simple. Just as in the $\Lambda$-Fleming-Viot process, reproduction events are determined by a Poisson point process but now, in addition to specifying a time and a value $u$, this process prescribes a region of space which will be affected by the event. In what follows, the region will be a ball with random centre and radius. Within that region the effect is entirely analogous to Equation~(\ref{lambda fv}).
This approach differs from existing spatial models in three key ways. First, density dependent reproduction is achieved by basing reproduction events on neighbourhoods (whose locations are determined by the Poisson point process), rather than on individuals. Second, the offspring of a single individual can form a significant proportion of the population in a neighbourhood about the parent, capturing the essentially finite nature of the local population size. Third, large scale extinction-recolonisation events are explicitly incorporated. This reflects the large scale fluctuations experienced by real populations in which the movement and reproductive success of many individuals are correlated. For example, climate change has caused extreme extinction and recolonisation events that dominate the demographic history of humans and other species (e.g. Eller et al.~2004). \nocite{eller/hawks/relethford:2004}
The spatial $\Lambda$-Fleming-Viot process, like its classical counterpart, can be obtained as a limit of individual based models. Those prelimiting models are discussed in Berestycki et al.~(2009). \nocite{berestycki/etheridge/hutzenthaler:2009} In the (backwards in time) spatial $\Lambda$-coalescent, ancestral lineages move around according to dependent L\'evy processes (in fact they will be compound Poisson processes), jumping whenever they are affected by a reproduction event. Two or more lineages can coalesce if they are all affected by the same reproduction event.
Our first aim here is to provide a precise mathematical description of the spatial $\Lambda$-Fleming-Viot process and the corresponding spatial $\Lambda$-coalescent model and address questions of existence and uniqueness. This is achieved through adapting the work of Evans~(1997). \nocite{evans:1997} The idea is to first construct the dual (backwards in time) process of coalescing L\'evy processes corresponding to a finite sample from the population at time zero, and then to use a functional duality to define the forwards in time model. The principal difference between our setting and that of Evans is that, in his work, ancestral lineages evolve {\em independently} until they meet.
The system of coalescing L\'evy processes that describes the genealogy of a sample from the population, mirrors the system of coalescing random walks that plays the same r\^ole for the stepping stone model. For systems of coalescing walks a number of studies have investigated conditions under which, when viewed on an appropriate timescale, and for sufficiently well-separated samples, the effect of the geographical structure of the population can be summarised as a single `effective' parameter and the system of coalescing lineages converges to Kingman's coalescent. One of the first works along these lines is due to Cox~(1989), who considers random walks on a torus $\mathbb T(L)\cap\mathbb Z^d$ of sidelength $L$ with the walks coalescing instantly on meeting. This corresponds to taking $G=\mathbb T(L)\cap\mathbb Z^d$ and $\gamma=\infty$ in Equation~(\ref{stepstone model}). He shows that if one starts walks from any finite number $n\in\mathbb N$ of points chosen independently and uniformly at random from $\mathbb T(L)\cap\mathbb Z^d$, then in suitable time units, as $L\rightarrow\infty$, the number of surviving lineages is determined by Kingman's coalescent. For two spatial dimensions, this analysis was extended by Cox \& Durrett~(2002) and Z\"ahle et al.~(2005) to random walks on $\mathbb T(L)\cap\mathbb Z^2$ with delayed coalescence (corresponding to $\gamma<\infty$). \nocite{cox/durrett:2002} \nocite{zahle/cox/durrett:2005} It is natural to ask whether similar results are true here. Our second aim then is to establish conditions under which the genealogy of a sample taken at random from a large torus will converge to a non-spatial coalescent. We shall concentrate on the most difficult, but also most biologically relevant, case of two spatial dimensions. If reproduction events only affect bounded neighbourhoods, then, not surprisingly, we recover a Kingman coalescent limit. However, we also consider the more general situation in which in addition to `small' events that affect only bounded neighbourhoods we allow `large' extinction-recolonisation events (see Section \ref{section result} for the precise setting). Unless these events affect a non-negligible proportion of the torus, on a suitable timescale, asymptotically we once again recover a Kingman coalescent. The timescale is determined by the relative rates of `large' and `small' events. However, if we have extinction-recolonisation events that affect regions with sidelength of order ${\mathcal O}(L)$, then, again depending on the relative rates of `large' and `small' events, we can obtain a more general (non-spatial) {\em $\Lambda$-coalescent} limit or a system of coalescing Brownian motions (where the coalescence is non-local).
The rest of the paper is laid out as follows. In Section \ref{model} we define the model. In Section \ref{section result}, we give a precise statement of the conditions under which we obtain convergence of the genealogy of a random sample from a (two-dimensional) torus of side $L$ as $L\rightarrow\infty$. The corresponding convergence results are Theorem~\ref{result alpha<1} and Theorem~\ref{result alpha=1}. In Section \ref{section existence} we establish existence of the process and prove uniqueness in law. In Section \ref{levy processes} we gather the necessary results on L\'evy processes in preparation for our proofs of Theorem~\ref{result alpha<1} and Theorem~\ref{result alpha=1} in Sections \ref{alpha<1} and \ref{alpha=1}. Finally, Appendices \ref{appendix 1} and \ref{appendix 2} contain the proofs of the technical lemmas stated in Sections \ref{levy processes} and \ref{alpha<1}.
\section{The model}
\label{model}
First we describe a prelimiting model. Individuals in our population are assumed to have a {\em type} taken from $[0,1]$ and a spatial position in a metric space $E$ that we shall usually take to be $\mathbb R^2$ (or the torus $\mathbb T(L)$ in $\mathbb R^2$). Even though it will be clear that existence and uniqueness of the process holds in much greater generality, the model is primarily motivated by considerations for populations evolving in two-dimensional continua. The dynamics are driven by a Poisson point process $\Pi$ on $\mathbb R_+\times \mathbb R^2 \times (0,\infty)$ with intensity $dt\otimes dx\otimes \mu(dr)$. If $(t,x,r)\in\Pi$, the first component represents the time of a reproduction event. The event will affect only individuals in $B(x,r)$, the closed ball of centre $x$ and radius $r$. We require two more ingredients. The first, $m$, is a fixed positive constant which we shall refer to as the {\em intensity} of the model. Second, associated to each fixed radius $r>0$ there is a probability measure $\nu_r$ on $[0,1]$. In the sequel, we assume that the mapping $r\mapsto \nu_r$ is measurable with respect to $\mu$.
For definiteness, suppose that the population is initially distributed according to a spatially homogeneous Poisson process. The dynamics of our prelimiting model are described as follows. Suppose that $(t,x,r)\in\Pi$. Consider the population in $B(x,r)$ at time $t-$. If the ball is empty, then nothing happens. Otherwise, independently for each event:
\begin{enumerate}
\item{Select a `parent' uniformly at random from those individuals in $B(x,r)$ at time $t-$ and sample $u\in [0,1]$ at random according to $\nu_r$.}
\item{Each individual in $B(x,r)$, independently, dies with probability $u$, otherwise it is unaffected by the reproduction event.}
\item{Throw down offspring in the ball, with the same type as the selected parent (who may now be dead), according to an independent Poisson point process with intensity $\left.u\, m\,\mathrm{Leb}\right|_{B(x,r)}$ where $\mathrm{Leb}$ denotes Lebesgue measure.}
\end{enumerate}
We shall refer to these events as {\em reproduction events}, even though they are also used to model large-scale extinction-recolonisation events. Notice that recolonisation is modelled as being instantaneous even after a large scale extinction.
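For concreteness, one reproduction event of this prelimiting model can be sketched as follows (a simplified illustration on $\mathbb{R}^2$, not a definitive implementation; \texttt{nu\_sample}, drawing $u$ according to $\nu_r$, is an assumed helper):
\begin{verbatim}
import numpy as np

def reproduction_event(pos, typ, x, r, nu_sample, m, seed=None):
    """One event at centre x, radius r, for the prelimiting model.
    pos : (n, 2) array of positions; typ : length-n array of types."""
    rng = np.random.default_rng(seed)
    inside = np.linalg.norm(pos - x, axis=1) <= r
    if not inside.any():
        return pos, typ                  # empty ball: nothing happens
    u = nu_sample()
    parent_type = typ[rng.choice(np.flatnonzero(inside))]  # uniform parent
    keep = ~inside | (rng.random(len(pos)) > u)  # each inside dies w.p. u
    n_off = rng.poisson(u * m * np.pi * r**2)    # Poisson(u m Leb|_B(x,r))
    theta = rng.uniform(0.0, 2.0 * np.pi, n_off)
    rad = r * np.sqrt(rng.uniform(0.0, 1.0, n_off))  # uniform in the ball
    off = x + np.column_stack((rad * np.cos(theta), rad * np.sin(theta)))
    return (np.vstack((pos[keep], off)),
            np.concatenate((typ[keep], np.full(n_off, parent_type))))
\end{verbatim}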
\begin{remark}
For simplicity we have described only a special version of the model in which, even when the reproduction event affects a large region, recolonisation is through a single founder. This guarantees that if we look at the genealogy of a sample from this population, although we may see more than two lineages coalescing in a single event, we do not see {\em simultaneous} mergers. More generally it would be natural to take a random number of colonists and then, on passing to the limit, the corresponding model would yield a spatial $\Xi$-coalescent.
\end{remark}
Any reproductive event has positive probability of leaving the corresponding region empty, but because the neighbourhoods determined by different reproduction events overlap, an empty region can subsequently become recolonised. Provided the measure $\mu(dr)$ decays sufficiently quickly as $r\rightarrow\infty$, Berestycki et al.~(2009) show that there is a critical value of $m$ above which the population, when started from a translation invariant initial condition, survives with probability one. The difficulty is that it is not easy to find an explicit expression for the distribution of the genealogical trees relating individuals in a sample from the population. Knowing that an ancestral lineage is in a given region of space gives us information about the rate at which that region was hit by reproduction events as we trace backwards in time. On the other hand, simulations reveal that this effect is rarely significant. Mathematically, we overcome this difficulty by considering a model in which the intensity $m$ is infinite, but we preserve some of the signature of a finite local population size by retaining the reproduction mechanism so that a non-trivial proportion of individuals in a neighbourhood are descended from a common ancestor. In particular, this will result in multiple coalescences of ancestral lineages.
Now let us describe the model that arises from letting $m\rightarrow \infty$. (That the prelimiting model really does converge to this limit will be proved elsewhere.) At each point $x\in\mathbb R^2$, the model specifies a probability measure on type space which we shall write $\rho(t,x,\cdot)$, or sometimes for brevity $\rho_x$. The interpretation is that if we sample an individual from $x$, then its type will be determined by sampling from $\rho_x$. The reproduction mechanism mirrors that for our discrete time model:
\begin{defn}[Spatial $\Lambda$-Fleming-Viot process] \label{def slfv}The {\em spatial $\Lambda$-Fleming-Viot process}, $\{\rho(t,x,\cdot), x\in\mathbb R^2, t\geq 0\}$ specifies a probability measure on the type space $[0,1]$ for every $t\geq 0$ and every $x\in\mathbb R^2$. With the notation above, the dynamics of the process are as follows. At every point $(t,x,r)$ of the Poisson point process $\Pi$, we choose $u\in [0,1]$ independently according to the measure $\nu_r(du)$. We also select a point $z$ at random from $B(x,r)$ and a type $k$ at random according to $\rho(t-,z,\cdot)$. For all $y\in B(x,r)$,
$$
\rho(t,y,\cdot)=(1-u)\rho(t-,y,\cdot)+u\delta_k.
$$
Sites outside $B(x,r)$ are not affected, that is $\rho(t,y,\cdot)=\rho(t-,y,\cdot)$ for every $y\notin B(x,r)$.
\end{defn}
\begin{remark}
There are many variants of this model, some of which are outlined in Etheridge~(2008). \nocite{etheridge:2008} The model presented here should be regarded as fitting into a general framework in which the key feature is that reproduction events are driven by a Poisson point process determining their times and spatial locations, rather than on individuals. Barton et al.~(2009) investigate a version of the model in which, instead of replacing a portion $u$ of the population in a disc at the time of a reproduction event, the proportion of individuals affected decays (in a Gaussian distribution) with the distance from the `centre' $x$ of the event. \nocite{barton/kelleher/etheridge:2009} Whereas in the disc based approach in the prelimiting (individual based) model we had to suppress reproduction events that affected empty regions, this is not necessary in the Gaussian model. Moreover, (in contrast to the disc model) in that setting the prelimiting model has the Poisson point process in $\mathbb R^2$ with constant intensity $m$ as a stationary distribution. Although the proofs would be rather involved, analogues of our results here should carry over to the Gaussian setting.
\end{remark}
Of course we must impose restrictions on the intensity measure if our process is to exist. To see what these should be, consider first the evolution of the probability measure $\rho(t,x,\cdot)$ defining the distribution of types at the point $x$. This measure experiences a jump of size $y\in A\subseteq (0,1]$ at rate
$$
\int_{(0,\infty)} \int_A \pi r^2 \nu_r(du)\mu(dr).
$$
By analogy with the $\Lambda$-Fleming-Viot process, we expect to require that
\begin{equation}
\label{condition 1} \Lambda(du)=\int_{(0,\infty)}u^2r^2 \nu_r(du)\mu(dr)
\end{equation}
defines a finite measure on $[0,1]$. In fact, in the spatial setting we require a bit more. To see why, suppose that $\psi$ is a bounded measurable function on $[0,1]$ and consider the form that the infinitesimal generator of the process must take on test functions of the form $\langle \rho(x,dk),\psi(k)\rangle$ (with angle brackets denoting integration). Denoting the generator, if it exists, by $G$ we shall have
\begin{eqnarray*}
G(\langle \rho,\psi\rangle)&=& \int_{\mathbb R^2}\int_{(0,\infty)}\int_{[0,1]}\int_{[0,1]} \frac{L_r(x,y)}{\pi r^2}\big(\langle (1-u)\rho(x,\cdot)+u\delta_k,\psi\rangle- \langle\rho(x,\cdot),\psi\rangle \big)\\
&&\phantom{AAAAAAAAAAAAAAAAAAAAAAAAAA}\rho(y,dk)\nu_r(du)\mu(dr)dy\\
&=& \int_{\mathbb R^2}\int_{(0,\infty)}\int_{[0,1]} \frac{L_r(x,y)}{\pi r^2}\,u\big(\langle \rho(y,\cdot),\psi\rangle- \langle\rho(x,\cdot),\psi\rangle\big) \nu_r(du)\mu(dr)dy,
\end{eqnarray*}
where $L_r(x,y)$ denotes the volume of the set $B(x,r)\cap B(y,r)$. Notice in particular that $L_r(x,y)\leq\pi r^2\mathbf{1}_{\{|x-y|\leq 2r\}}$. In the non-spatial case, this term vanishes (set $y=x$), but here if we want the generator to be well-defined on these test functions we make the stronger
\begin{assumption}
\begin{equation}
\label{condition for convergence} \tilde{\Lambda} (du)=\int_{(0,\infty)}ur^2 \nu_r(du)\mu(dr)
\end{equation}
defines a finite measure on $[0,1]$.
\end{assumption}
Condition~(\ref{condition for convergence}) controls the jumps of $\rho$ at a single point. Since we are going to follow Evans~(1997) in constructing our process via the dual process of coalescing lineages ancestral to a sample from the population, we should check that such a process is well-defined. First we define the coalescent process more carefully.
In order to make sense of the genealogy of a sample at any time, we extend the Poisson point process $\Pi$ of reproduction events to the whole time line $(-\infty,+\infty)$. We need some notation for (labelled) partitions.
\begin{notn}[Notation for partitions]
\label{notation for partitions}
\begin{enumerate}
\item{For each integer $n\geq 1$, let ${\mathcal P}_n$ denote the set of partitions of $\{1,\ldots,n\}$, and define a labelled partition of $\{1,\ldots,n\}$, with labels from a set $E$, to be a set of the form $\{(\pi_1,x_{\pi_1}),\ldots,(\pi_k,x_{\pi_k})\}$, where $\{\pi_1,\ldots,\pi_k\}\in {\mathcal P}_n$ and $(x_{\pi_1},\ldots,x_{\pi_k})\in E^k$. Let ${\mathcal P}_n^{\ell}$ be the set of all labelled partitions of $\{1,\ldots,n\}$. }
\item{For each $n\in \N$, let $\wp_n$ denote the partition of $\{1,\ldots,n\}$ into singletons. Moreover, if $E$ is the space of labels and $\mathbf{x}\equiv(x_1,\ldots,x_n)\in E^n$, let $\wp_n(\mathbf{x})$ denote the element $\{(\{1\},x_1),\ldots,(\{n\},x_n)\}$ of ${\mathcal P}_n^{\ell}$. }
\item{If $\pi \in {\mathcal P}_n^{\ell}$ for some $n\in \N$, then $\mathrm{bl}(\pi)$ will refer to the unlabelled partition of $\{1,\ldots,n\}$ induced by $\pi$ and if $a\in \mathrm{bl}(\pi)$, $x_a$ will be our notation for the label of $a$.}
\end{enumerate}
\end{notn}
Our genealogical process will be a labelled partition. As in classical representations of genealogical processes, a block of the partition at genealogical time $t\geq 0$ contains the indices of the initial lineages which share a common ancestor $t$ units of time in the past, and its label gives the current location of this ancestor in $E=\R^2$.
From the description of the forwards-in-time dynamics, the evolution of a sample of ancestral lineages represented by a labelled partition should be the following. We start with a finite collection of lineages at time $0$. At each point $(-t,x,r)\in \Pi$ (with $t\geq 0$ here, since genealogical time points towards the past), given that $u\in [0,1]$ is the result of the sampling according to $\nu_r$ each lineage present in the ball $B(x,r)$, independently, is affected (resp., is not affected) with probability $u$ (resp., $1-u$). A site $y$ is chosen uniformly in $B(x,r)$, and the blocks of all affected lineages merge into a single block labelled by $y$. The other blocks and their labels are not modified. We write $\{\A(t),\ t\geq 0\}$ for the Markov process of coalescing lineages described in this way. Its state space is $\bigcup_{n\geq 1}{\mathcal P}_n^{\ell}$. Note that $\A$ is constructed on the same probability space as that of the Poisson point process of reproduction events. Writing $\mathbb P$ for the probability measure on that space, we abuse notation slightly by writing $\prob_A$ to indicate that $\A(0)=A$, $\prob_A$-a.s. Now let us verify that our Condition~(\ref{condition for convergence}) is sufficient to ensure that the process $\{\A(t),t\geq 0\}$ is well-defined. Since two lineages currently at separation $y\in\mathbb R^2$ will coalesce if they are {\em both} involved in a replacement event, which happens at instantaneous rate
\begin{equation} \label{condition 3}
\int_{(|y|/2,\infty)}L_r(y,0)\left(\int_{[0,1]}u^2\nu_r(du)\right)\mu(dr),
\end{equation}
Condition~(\ref{condition for convergence}) is more than enough to bound the rate of coalescence of ancestral lineages. To guarantee that we can fit together the measures $\rho$ at different points in a consistent way, we also need to be able to control the spatial motion of ancestral lineages. Consider the (backwards in time) dynamics of a single ancestral lineage. It evolves in a series of jumps with intensity
\begin{equation}\label{jump intensity}
dt\otimes\int_{(|x|/2,\infty)}\int_{[0,1]}\frac{L_r(x,0)}{\pi r^2}\, u\,\nu_r(du)\mu(dr)dx
\end{equation}
on $\mathbb R_+\times\mathbb R^2$. If we want this to give a well-defined L\'evy process, then we require
\begin{equation}
\label{condition 2} \int_{\mathbb R^2}(1\wedge |x|^2)\left(\int_{(|x|/2,\infty)}\int_{[0,1]} \frac{L_r(x,0)}{\pi r^2}\,u\,\nu_r(du)\mu(dr)\right)dx<\infty.
\end{equation}
But Condition~(\ref{condition for convergence}) certainly guarantees this. In fact it ensures that the rate of jumps of each ancestral lineage is {\em finite}. In other words, ancestral lineages follow compound Poisson processes.
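Both of the rates above involve the geometric factor $L_r(x,y)$, which for $|x-y|=d<2r$ is the area of a circular lens, $2r^2\arccos\left(d/2r\right)-(d/2)\sqrt{4r^2-d^2}$; as a quick numerical check (ours, for illustration):
\begin{verbatim}
import numpy as np

def L_r(d, r):
    """Area of B(x, r) cap B(y, r) for two balls at distance d = |x - y|."""
    d = np.asarray(d, dtype=float)
    return np.where(
        d >= 2 * r, 0.0,
        2 * r**2 * np.arccos(np.minimum(d / (2 * r), 1.0))
        - (d / 2) * np.sqrt(np.maximum(4 * r**2 - d**2, 0.0)))

print(L_r(0.0, 1.0), np.pi)  # full overlap: L_r = pi r^2
\end{verbatim}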
\begin{remark}
At first sight it is disappointing that we have to take Condition~(\ref{condition for convergence}) and hence obtain a system of coalescing compound Poisson processes rather than more general symmetric L\'evy processes that (\ref{condition 1}) and (\ref{condition 2}) would allow. However, biologically there is not much loss. The `gap' between Condition~(\ref{condition for convergence}) and the weaker Condition~(\ref{condition 1}) is that the latter would allow one to include very large numbers of extremely small jumps (in which only a tiny proportion of the population is affected) as the radius of the area affected by a reproduction event tends to zero. But in our population model, for small $r$ we expect that a {\em large} proportion of the population in the neighbourhood be replaced.
\end{remark}
\begin{remark}
Notice that the locations of ancestral lineages are {\em not} independent of one another. Knowing that one lineage has jumped tells us that a reproduction event has taken place that could have affected other lineages ancestral to our sample. Wilkins \& Wakeley~(2002) consider a somewhat analogous model in which a linear population evolves in discrete generations (see Wilkins~2004 for a two-dimensional analogue). Each individual in the parental generation scatters an infinite pool of gametes in a Gaussian distribution about themselves, and the next generation is formed by sampling from the pool of gametes at each point. Individuals are assumed to have a finite linear width to avoid the pathologies that arise when common ancestry in a continuum model requires two ancestral lineages to have a physical separation of zero. They observe that ``conditional on not coalescing in the previous generation, two lineages are slightly more likely to be further apart than closer together''. In their setting a change of coordinates settles the problem: the distance apart and the average position of two lineages do evolve independently. For us the dependencies between lineages are more complex because the presence of a jump contains the information that a reproduction event has taken place, whereas the conditioning obviously tells us nothing about the timing of events in the discrete generation model. \nocite{wilkins/wakeley:2002} \nocite{wilkins:2004}
\end{remark}
\section{The genealogy of points sampled uniformly from a large torus}
\label{section result}
We now turn our attention to populations evolving on a two-dimensional torus of sidelength $L$. Our goal is to describe the genealogy of a finite number of individuals sampled uniformly at random from the torus and subject to events of very different scales, as $L\rightarrow\infty$.
To this end, we now consider a family of models indexed by $\N$. For each $L\in \N$, we consider a population evolving on the torus $\T(L)\subset \R^2$ of sidelength $L$. We identify $\T(L)$ with the subset $[-L/2,L/2]^2$ of $\R^2$ and use the Euclidean norm $|\cdot |$ induced on $\T(L)$ by this identification. Although $B_{\T(L)}(x,r)$ will be our notation for the ball in $\T(L)$ centred in $x$ and with radius $r$, we shall omit the subscript when there is no risk of confusion.
The population will be subject to two different classes of events that we call \emph{small} and \emph{large}. The region affected by each small event will be uniformly bounded (independently of the size of the torus). Large events will affect regions whose diameter is on the order of $\psi_L$ which will be taken to grow with $L$, but they will be less frequent. We shall assume that the rate at which a given ancestral lineage is affected by a large event is proportional to $1/\rho_L$ with $\rho_L$ also chosen to grow with $L$.
Now let us make the model more precise. Let $(\psi_L)_{L\geq 1}$ be an increasing sequence such that there exists $\alpha \in (0,1]$ satisfying
\begin{equation}\label{def alpha}
\lim_{L\rightarrow \infty}\frac{\log \psi_L}{\log L}=\alpha,
\end{equation}
and assume that $|\alpha\log L-\log \psi_L|=o((\log L)^{-1/2})$ as $L\rightarrow \infty$.
\begin{remark}
The latter assumption is not necessary since all our results would still hold with each occurrence of $(1-\alpha)\log L$ replaced by $\log (L\psi_L^{-1})$ (see the end of the proof of Proposition \ref{prop gathering}), but it is weak and considerably simplifies the presentation.
\end{remark}
Let $(\rho_L)_{L\geq 1}$ be an increasing sequence with values in $(0,+\infty]$, tending to infinity as $L\rightarrow \infty$. Finally, let $\mu^s(dr)$ and $\mu^B(dr)$ be two $\sigma$-finite Borel measures on $(0,\infty)$, independent of $L$, such that there exist some positive constants $R^s$ and $R^B$ satisfying
$$
\inf\big\{R:\mu^s\big((R,\infty)\big)=0\big\}=R^s <\infty \quad \mbox{ and }\quad \inf\big\{R:\mu^B\big((R,\infty)\big)=0\big\}=R^B <\infty.
$$
(For convenience, we ask that $R^B\leq 1/\sqrt{2}$ if $\alpha=1$.) To every $r\geq 0$, we associate two probability measures $\nu_r^s(du)$ and $\nu_r^B(du)$ on $[0,1]$, and we assume that for $\star \in \{B,s\}$ and for each $\e\in (0,R^\star)$,
\begin{equation}\label{coal at boundary}
\mu^\star\big(\big\{r\in [R^\star-\e,R^\star]:\nu_r^\star\neq \delta_0\big\}\big)>0.
\end{equation}
If Condition (\ref{coal at boundary}) does not hold, we simply decrease the corresponding radius $R^\star$, since otherwise the events of largest radii would never affect a lineage.
Let us suppose that for each $L\geq 1$, the reproduction events of the forwards-in-time model can be of two types:
\begin{itemize}
\item \textbf{Small events}, given by a Poisson point process $\Pi^s_L$ on $\R\times \T(L) \times (0,\infty)$ with intensity measure $dt\otimes dx \otimes \mu^s(dr)$. If $(t,x,r)$ is a point of $\Pi^s_L$, then the centre of the reproduction event is $x$, its radius is $r$ and the fraction of individuals replaced during the event is chosen according to $\nu_r^s$.
\item \textbf{Large events}, given by a Poisson point process $\Pi^B_L$ on $\R\times \T(L) \times (0,\infty)$, independent of $\Pi^s_L$ and with intensity measure $(\rho_L\psi_L^2)^{-1}dt\otimes dx \otimes \mu^B(dr)$. If $(t,x,r)$ is a point of $\Pi^B_L$, then the centre of the reproduction event is $x$, its radius is $\psi_Lr$ and the fraction of individuals replaced during the event is chosen according to $\nu_r^B$.
\end{itemize}
Notice that we allow $\rho_L$ to be infinite, in which case large events do not occur. Since $\Pi^s_L$ and $\Pi^B_L$ are independent, the reproduction events could be formulated in terms of a single Poisson point process so as to fit into Definition \ref{def slfv} of the spatial $\Lambda$-Fleming-Viot process. However, our aim here is to disentangle the effects of events of different scales, hence our decomposition into two point processes.
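To make this concrete (a reformulation rather than an extra assumption), if we record each event together with the fraction $u$ of individuals it replaces, the superposition of $\Pi^s_L$ and $\Pi^B_L$ is a single Poisson point process on $\R\times \T(L)\times (0,\infty)\times [0,1]$ with intensity measure
$$
dt\otimes dx \otimes \Big(\mu^s(dr)\,\nu^s_r(du)+\frac{1}{\rho_L\psi_L^2}\ \tilde{\mu}^B_L(dr)\,\nu^B_{r/\psi_L}(du)\Big),
$$
where $r$ now denotes the actual radius of the event and $\tilde{\mu}^B_L$ is the image of $\mu^B$ under the map $r\mapsto \psi_Lr$.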
\begin{remark}
Observe that, although the intensity of $\Pi_L^B$ is proportional to $(\rho_L\psi_L^2)^{-1}$, the rate at which a lineage is affected by (that is, jumps because of) a large event is of order $\mathcal{O}(\rho_L^{-1})$. Indeed, the volume of possible centres for such an event is proportional to $\psi_L^2$, so that the jump rate of a lineage due to the large events is given by
$$
\frac{1}{\rho_L\psi_L^2}\int_0^{R^B}\int_0^1 \pi (\psi_Lr)^2 u\ \nu_r^B(du)\mu^B(dr)= \frac{\pi}{\rho_L}\int_0^{R^B}\int_0^1 r^2 u\ \nu_r^B(du)\mu^B(dr).
$$
\end{remark}
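As a concrete illustration (the measures below are purely illustrative and not part of our assumptions), take $\mu^B=\delta_{r_0}$ for some $r_0>0$ (so that $R^B=r_0$) and $\nu^B_{r_0}=\delta_{u_0}$ with $u_0\in (0,1]$: every large event then has radius $\psi_Lr_0$ and replaces a fraction $u_0$ of the individuals in the ball it covers, and the computation above reduces to
$$
\frac{1}{\rho_L\psi_L^2}\ \pi(\psi_Lr_0)^2\,u_0=\frac{\pi r_0^2u_0}{\rho_L},
$$
which is indeed of order $\rho_L^{-1}$, uniformly in $\psi_L$.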
In order for the genealogical processes, which we now denote by $\A^L$ to emphasize their dependence on $L$, to be well-defined for every $L\in \N$, we assume that Condition~(\ref{condition for convergence}) is fulfilled. In this setting, the condition can be written
$$
\int_0^{R^s}\int_0^1r^2u\ \nu_r^s(du)\mu^s(dr) +\frac{1}{\rho_L}\int_0^{R^B}\int_0^1 r^2 u\ \nu_r^B(du)\mu^B(dr)<\infty.
$$
Let us introduce some more notation. We write
$$
\Gamma(L,1)\equiv \bigg\{x\in \T(L): |x|\geq \frac{L}{\log L}\bigg\},
$$
and for each integer $n\geq 2$,
\begin{eqnarray*}
\Gamma(L,n)& \equiv& \Big\{\{x_1,\ldots,x_n\} \in \T(L)^n: |x_i-x_j|\geq \frac{L}{\log L} \mathrm{\ \ for\ all\ }i\neq j\Big\},\\
\GA(L,n)&\equiv &\Big\{\big\{(a_1,x_{a_1}),\ldots,(a_k,x_{a_k})\big\}\in {\mathcal P}_n^{\ell}:\ \{x_{a_1},\ldots,x_{a_k}\}\in \Gamma(L,k)\Big\},
\end{eqnarray*}
where as before ${\mathcal P}_n^{\ell}$ denotes the labelled partitions of $\{1,\ldots ,n\}$. When we require an element $A$ of $\GA(L,n)$ to have exactly $n$ blocks, we shall write $A\in \GA(L,n)^*$.
In order to obtain a non-trivial limit, we rescale time for the process $\A^L$ by a factor that we denote $\vp_L$. Recall that if $A\in {\mathcal P}_n^{\ell}$ for some $n\in \N$, $\mathrm{bl}(A)$ stands for the unlabelled partition of $\{1,\ldots,n\}$ induced by $A$. For each $L\in\N$, let us define the (non-Markov) process $\A^{L,u}$ by
$$
\A^{L,u}(t)=\mathrm{bl}\big(\A^L(\vp_L t)\big),\qquad t\geq 0.
$$
Note that for each $L\in \N$, if we start $\A^L$ from $A_L$, a labelled partition of $\{1,\ldots,n\}$ with labels from $\mathbb T(L)$, then $\A^{L,u}$ takes its values in the Skorohod space $D_{{\mathcal P}_n}[0,\infty)$ of all c\`adl\`ag paths with values in ${\mathcal P}_n$ (the set of partitions of $\{1,\ldots,n\}$), $\prob_{A_L}$-a.s.
Recall the definition of $\alpha$ given in (\ref{def alpha}). In the absence of large events, our model is similar in many respects to the two-dimensional stepping stone model, and so it comes as no surprise that, just as for the stepping stone model, the genealogy of a random sample from the torus should converge (on a suitable timescale) to a Kingman coalescent as the size of the torus tends to infinity (see in particular Cox \& Griffeath~(1986, 1990), Cox \& Durrett~(2002) and Z\"ahle et al.~(2005) for precise statements of this result in different contexts). Our first result says that if $\alpha<1$, then we still obtain a Kingman coalescent, but the {\em timescale} will be influenced by the large events: the latter reduce the effective population size.
Before stating the result formally, let us try to understand why we should expect something like this to be true. To understand the appropriate timescale we just need to consider two lineages. The time they need to coalesce will be decomposed into two phases. If $\rho_L$ is not too big, the first phase will be the time until they first come within distance $2R^B\psi_L$ and the second will be the additional time required for them to coalesce. During the first phase they evolve according to independent compound Poisson processes. If $\rho_L$ is small enough, the coalescence event that will eventually occur during the second phase will, with probability close to one, be triggered by a large event. For larger values of $\rho_L$, large events will not be frequent enough to hit the two lineages when they are at a distance that would allow them to coalesce (i.e., less than $2R^B\psi_L$), and coalescence will instead be caused by a small-scale event. The first phase is then taken to be the time until the lineages first come within distance $2R^s$ of one another. The fact that with high probability they will not be hit by the same large-scale event means that once again they evolve (almost) independently of one another during this first phase. The second phase is now the time taken for them to coalesce due to a small event. The transition between these two regimes is when $\rho_L\propto\psi_L^2\log L$. Now suppose that we start from a sample in $\Gamma(L,n)$. The first phase is then long enough that, when it ends, the spatial location of lineages is no longer correlated with their starting points. Finally, why do large-scale events not lead to multiple mergers? The key point is that, when a pair of lineages ancestral to our sample first comes within $2R^B\psi_L$ of one another, all {\em other} pairs are still well-separated. So if $\rho_L$ is not too big, this pair will coalesce before a third lineage can come close enough to be affected by a common event. If we take larger $\rho_L$, the reason is exactly the same but now lineages have to come within distance $2R^s$ and coalescence is driven by small events.
Here then is the formal result which makes explicit the convergence in distribution of our spatial genealogies to a nonspatial coalescent process. In the following, $\sigma_s^2$ (resp., $\sigma_B^2 \psi_L^2\rho_L^{-1}$) is the variance of the displacement of a lineage during one unit of time due to small (resp., large) events, see~(\ref{variances}) below.
\begin{thm}
\label{result alpha<1} Let $\mathcal{K}$ denote Kingman's coalescent, and recall that for each $n\in \N$, $\wp_n$ denotes the partition of $\{1,\ldots,n\}$ into singletons. In the notation of (\ref{def alpha}), suppose $\alpha<1$ (and (\ref{coal at boundary}) holds). Then, for each integer $n\geq 2$ and any sequence $(A_L)_{L\in \N}$ such that $A_L\in \GA(L,n)^*$ for every $L$,
$$
\mathcal{L}_{\prob_{A_L}}(\A^{L,u})\Rightarrow \mathcal{L}_{\prob_{\wp_n}}(\mathcal{K}) \qquad \mathrm{as\ }L\rightarrow \infty,
$$
where
$$
\vp_L = \left\{ \begin{array}{ll} \frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma_B^2\psi_L^2} &\qquad\mathrm{if\ }\rho_L^{-1}\psi_L^2\rightarrow \infty, \vspace{4pt}\\
\frac{(1-\alpha)L^2\log L}{2\pi(\sigma_s^2+b\sigma_B^2)}& \qquad \mathrm{if\ }\rho_L^{-1}\psi_L^2\rightarrow b\in[0,\infty)\ \mathrm{and}\ \frac{\psi_L^2\log L}{\rho_L}\rightarrow \infty, \vspace{4pt}\\
\frac{L^2\log L}{2\pi \sigma_s^2}& \qquad \mathrm{if}\ (\rho_L^{-1}\psi_L^4)_{L\geq 1} \ \mathrm{is\ bounded\ or}\ \frac{L^2\log L}{\rho_L}\rightarrow 0.
\end{array}\right.
$$
Here $\mathcal{L}_{\mathrm{P}}(X)$ denotes the law under the probability measure $\mathrm{P}$ of the random variable $X$ and $\Rightarrow$ refers to weak convergence of probability measures.
\end{thm}
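To see which regime a given model falls into, consider the purely illustrative power-law choice $\psi_L=L^{\alpha}$ and $\rho_L=L^{\gamma}$, with $\alpha\in (0,1)$ and $\gamma>0$, so that $\rho_L^{-1}\psi_L^2=L^{2\alpha-\gamma}$. Then:
\begin{itemize}
\item if $\gamma<2\alpha$, the first case applies;
\item if $\gamma=2\alpha$, then $\rho_L^{-1}\psi_L^2\rightarrow 1$ and $\rho_L^{-1}\psi_L^2\log L\rightarrow \infty$, so the second case applies with $b=1$ (the second case with $b=0$ is reached by non-power choices such as $\rho_L=\psi_L^2\log\log L$);
\item if $\gamma\geq 4\alpha$ or $\gamma>2$, the third case applies.
\end{itemize}
For $2\alpha<\gamma<4\alpha$ with $\gamma\leq 2$, none of the three conditions holds; this gap is discussed after the statement of Theorem \ref{theo time coal}.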
For $\alpha=1$, things are more complicated. When $\psi_L$ is commensurate with $L$, large-scale events cover a non-negligible fraction of the torus. If they occur too frequently, then they will be able to capture multiple lineages while the locations of those lineages are still correlated with their starting points. For intermediate ranges of $\rho_L$, lineages will have homogenised their positions on $\mathbb T(L)$ through small events, but not coalesced, before the first large event occurs, and we can expect a $\Lambda$-coalescent limit. If the large events are too rare, then coalescence will be through small events and we shall recover the Kingman coalescent again.
To give a precise result we need to define the limiting objects that arise. In the case $\alpha =1$, for each $L\in \N$, we set
$$
\vp_L=\left\{\begin{array}{ll}\rho_L& \qquad \mathrm{if\ }\rho_L/(L^2\log L) \mathrm{\ has\ a\ finite\ limit}, \vspace{3pt}\\
\frac{L^2\log L}{2\pi \sigma_s^2}& \qquad \mathrm{if\ }\rho_L/(L^2\log L) \rightarrow +\infty, \end{array}\right.
$$
and define $\A^{L,u}$ as before. Since we shall need to keep track of the labels (spatial positions) of the ancestral lineages in some cases, it will also be convenient to introduce the following rescaling of $\A^L$, evolving on $\T(1)$ for all $L\in \N$:
$$
\bar{\A}^L(t)= \frac{1}{L}\ \A^L(\vp_L t),\qquad t\geq 0,
$$
where by this notation we mean that the labels are rescaled by a factor $L^{-1}$. Similarly, for $\mathbf{x}\in\mathbb T(1)^n$ we write $L\mathbf{x}$ for $(Lx_1,\ldots ,Lx_n)\in\mathbb T(L)^n$. Finally, let us introduce the processes which will appear as the limits of our rescaled genealogical processes.
\begin{defn}\label{def limit with space}
Let $b\in [0,\infty)$ and $c>0$. We call $\bar{\A}^{\infty,b,c}$ the Markov process with values in $\bigcup_{n\in \N}{\mathcal P}_n^{\ell}$ (with labels in $\T(1)$) such that \begin{enumerate}
\item The labels of the lineages perform independent Brownian motions on $\T(1)$ at speed $b\sigma_s^2$ (if $b=0$, the labels are constant), until the first large event occurs.
\item Large events are generated by a Poisson point process $\overline{\Pi}^B$ on $\R\times \T(1)\times (0,1/\sqrt{2}]$ with intensity measure $c^{-2}dt\otimes dx \otimes \mu^B(dr)$. At a point $(t,x,r)$ of $\ov{\Pi}^B$, a number $u\in [0,1]$ is sampled from the probability measure $\nu_r^B$, and each lineage whose label belongs to $B_{\T(1)}(x,cr)$ is affected (resp., is not affected) by the event with probability $u$ (resp., $1-u$), independently of each other. A label $z$ is chosen uniformly at random in $B_{\T(1)}(x,cr)$, and all the lineages affected merge into one block which adopts the label $z$. The other lineages (blocks and labels) remain unchanged.
\item The evolution of the labels starts again in the same manner.
\end{enumerate}
\end{defn}
\begin{remark}
Notice that this process looks like another spatial $\Lambda$-coalescent, except that now ancestral lineages perform independent spatial motions in between coalescence events. This process is dual (in the obvious way) to a spatial $\Lambda$-Fleming-Viot process in which, during their lifetimes, individuals move around in space according to independent Brownian motions.
\end{remark}
For each $r\in [0,1/\sqrt{2}]$, let $V_r$ denote the volume of the ball $B_{\T(1)}(0,r)$.
\begin{defn}\label{def lambda coal}Let $\beta\in [0,\infty)$ and $c>0$. We use $\Lambda^{(\beta,c)}$ to denote the $\Lambda$-coalescent, defined on $\bigcup_{n\in \N}{\mathcal P}_n$, for which if there are currently $m$ ancestral blocks, then each transition involving $k$ of them merging into one happens at rate
$$
\lambda^{(\beta,c)}_{m,k}= c^{-2}\int_0^{(\sqrt{2})^{-1}}\int_0^1(V_{cr}u)^k(1-V_{cr}u)^{m-k}\nu_r^B(du)\mu^B(dr)+ \beta\ \delta_{\{k=2\}}.
$$
\end{defn}
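As a purely illustrative instance, take again $\mu^B=\delta_{r_0}$ and $\nu^B_{r_0}=\delta_{u_0}$, with $cr_0\leq 1/\sqrt{2}$, and set $v_0\equiv V_{cr_0}u_0\in (0,1]$. The rates of Definition \ref{def lambda coal} become
$$
\lambda^{(\beta,c)}_{m,k}=c^{-2}\,v_0^k(1-v_0)^{m-k}+\beta\ \delta_{\{k=2\}},
$$
that is, apart from the additional Kingman part $\beta$, the rates of the $\Lambda$-coalescent with $\Lambda=c^{-2}v_0^2\,\delta_{v_0}$: at each large event, every lineage is marked independently with probability $v_0$ (the probability that its label lies in the ball affected by the event and that it belongs to the replaced fraction), and the marked lineages merge into one block.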
Recall the notation $\wp_n$ and $\wp_n({\mathbf x})$ introduced in Notation~\ref{notation for partitions}, and $\mathcal{L}_{\mathrm{P}}(X)$ and $\Rightarrow$ introduced in the statement of Theorem~\ref{result alpha<1}. We can now state the result for $\alpha=1$.
\begin{thm}
\label{result alpha=1} Suppose there exists $c>0$ such that $\psi_L = cL$ for every $L\in \mathbb N$. Let $n\in\N$, let $\mathbf{x}\in \T(1)^n$ be such that $x_i\neq x_j$ whenever $i\neq j$, and let $(A_L)_{L\in \N}$ be such that $A_L\in \GA(L,n)^*$ for every $L$. Then, as $L\rightarrow \infty$,
\noindent $(a)$ If $\rho_LL^{-2}\rightarrow b\in [0,\infty)$,
$$
\mathcal{L}_{\prob_{\wp_n(L\mathbf{x})}}\big(\bar{\A}^L\big)\Rightarrow \mathcal{L}_{\prob_{\wp_n(\mathbf{x})}}\big(\bar{\A}^{\infty,b,c}\big),
$$
\noindent $(b)$ If $\rho_LL^{-2}\rightarrow \infty$, $\frac{2\pi \sigma_s^2 \rho_L}{L^2 \log L}\rightarrow \beta \in [0,\infty)$ and if the total rate of occurrence of large events is finite (i.e., $\mu^B$ has finite total mass),
$$
\mathcal{L}_{\prob_{A_L}}\big(\A^{L,u}\big)\Rightarrow \mathcal{L}_{\prob_{\wp_n}}\big(\Lambda^{(\beta,c)}\big).
$$
\noindent $(c)$ If $\frac{\rho_L}{L^2 \log L}\rightarrow \infty$,
$$
\mathcal{L}_{\prob_{A_L}}\big(\A^{L,u}\big)\Rightarrow \mathcal{L}_{\prob_{\wp_n}}\big(\mathcal{K}\big).
$$
\end{thm}
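For orientation, here are three purely illustrative choices of $\rho_L$ realising the three cases: $\rho_L=bL^2$ with $b>0$ gives case $(a)$; $\rho_L=L^2\log L$ gives $\rho_LL^{-2}\rightarrow \infty$ and $\frac{2\pi\sigma_s^2\rho_L}{L^2\log L}= 2\pi\sigma_s^2$, hence case $(b)$ with $\beta=2\pi\sigma_s^2$ (provided $\mu^B$ has finite total mass); and $\rho_L=L^2(\log L)^2$ gives case $(c)$.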
Notice that case $(a)$ differs from all the other cases in that the influence of space does not disappear as $L\rightarrow \infty$, and the evolution of the limiting genealogy still depends on the precise locations of the lineages.
The intuition behind Theorem \ref{result alpha=1} is as follows. If $\psi_L \propto L$, large events cover a non-negligible fraction of the torus, and so a few large events suffice to bring two lineages within a distance at which they can coalesce. However, a local central limit theorem will give us that on a timescale of order at most $\mathcal{O}(L^2)$, a lineage subject to only small events behaves approximately like Brownian motion, whereas after a time $t_L\gg L^2$, its distribution is nearly uniform on $\T(L)$ (for $L$ large enough, see Lemma \ref{lemma local TCL}). Since the mean time before a large event affects a lineage is of order ${\mathcal O}(\rho_L)$, the limiting genealogical process (when we include both large and small reproduction events) will depend on how $\rho_L$ scales with $L^2$. If $\rho_L$ is of order at most $\mathcal{O}(L^2)$, then space matters and the process $\A^L$, rescaled to evolve on $\T(1)$ on the timescale $\rho_L$, converges to a system of coalescing Brownian motions; whereas if $\rho_L\gg L^2$, the locations of the lineages homogenise before the first large event affects them, and the limiting unlabelled genealogical process is an exchangeable coalescent with multiple mergers.
\begin{remark}\label{rk finite rate}
It is somewhat disappointing that we must impose a finite rate of large events to obtain the convergence of Theorem \ref{result alpha=1}(b). Indeed, it seems that case (a) should give us the right picture: in the limit, in between large events lineages perform Brownian motions on the torus of sidelength 1 due to small events, except that now the time required for at least one lineage to be affected by a large event is so long that lineages exhaust space and their locations become uniformly distributed over the torus before they are caught up in a coalescence event. However, when $\mu^B$ has infinite mass, lineages are infinitely often in the (geographical) range of a large reproduction event over any interval of time, and we need good control of their complete paths to be able to say anything about the epoch and outcome of the first potential coalescence event. Now, observe that Equation (\ref{eq homogen}) can only be generalized to the finite-dimensional distributions of these paths, and does not guarantee that a large event cannot capture some of the lineages at a time when they are not uniformly distributed over $\T(1)$.
\end{remark}
Theorem \ref{result alpha=1} deals with the case where $\psi_L$ is proportional to $L$. Let us now comment on the remaining cases, in which $\alpha=1$ but $\psi_L\ll L$. First, it is easy to see that the convergence in $(c)$ still holds, since it is based on the fact that large events are so rare that none of them occurs before small events reduce the genealogical process to a single lineage.
Second, since the total rate of large events on the timescale $\rho_L$ is $\mu^B(\mathbb R_+)L^2/\psi_L^2$, it cannot be bounded unless $\mu^B\equiv 0$ (a situation we excluded in (\ref{coal at boundary})). On the other hand, for the reason expounded in Remark \ref{rk finite rate} we are unable to derive a limiting behaviour for the genealogy when large events can accumulate, and so the result of Theorem \ref{result alpha=1}$(b)$ has no counterpart when $\psi_L\ll L$.
Third, as explained above, when $\rho_L\leq b L^2$ any limiting process will necessarily have a spatial component. Now, because we start with lineages at distance $\mathcal{O}(L)$ from each other, we need to rescale space by $L$ in order to obtain a non-trivial initial condition. The last parameter we need is the timescale $\varpi_L$ on which to consider the genealogical process. But a separation of timescales will not occur here, and so the computations carried out in Section \ref{levy processes} will show that the suitable choice of $\varpi_L$ depends on the precise behaviour of $\rho_L/L^2$ and $\rho_L/\psi_L^2$. Several limiting processes are thus possible, and since all the arguments needed to derive these limits are scattered through Sections \ref{levy processes} and \ref{alpha=1}, we have chosen not to detail them here.
\section{Existence and uniqueness of the forwards-in-time process}
\label{section existence}
Our spatial $\Lambda$-Fleming-Viot process associates a probability measure on type space to each point in $\mathbb R^2$. In other words, it takes its values in the set of functions from $\mathbb R^2$ to ${\mathcal M}_1([0,1])$. \nocite{evans:1997} Evans~(1997) uses duality with a system of coalescing Borel right processes on a Lusin space $E$ to construct a family of Markov processes with values in the set of functions from $E$ to $\mathcal{M}_1(\{0,1\}^{\N})$ (or equivalently, to $\mathcal{M}_1([0,1])$). He also obtains uniqueness in distribution of the process. In his setting, coalescing particles evolve independently until they meet, at which point they instantly coalesce. In our case, the particles in the candidate dual do not move independently, nor do two particles hit by the same reproduction event necessarily coalesce, but nonetheless the key ideas of his construction remain valid. Note that, although we present the result in two dimensions, the proof carries over to other dimensions.
First we give a formal description of the coalescing dual, and then we use Evans' construction to establish existence and uniqueness in law of a process $\rho$ which assigns a probability measure on $[0,1]$ to each point of $\mathbb R^2$. We then identify $\rho$ as the spatial $\Lambda$-Fleming-Viot process in which we are interested.
\subsection{State-space of the process and construction via duality}
We shall only present the main steps of the construction, and refer to Evans~(1997) for more details.
Let us define $\tilde{\Xi}$ as the space of all Lebesgue-measurable maps $\rho:\R^2 \rightarrow \mathcal{M}_1([0,1])$. Two elements $\rho_1$ and $\rho_2$ of $\tilde{\Xi}$ are said to be equivalent if $\mathrm{Leb}(\{x\in \R^2:\ \rho_1(x)\neq \rho_2(x)\})=0$. Let $\Xi$ be the quotient space of $\tilde{\Xi}$ by this equivalence relation. If $E$ is a compact space, let us write $C(E)$ for the Banach space of all continuous functions on $E$, equipped with the supremum norm $\| \cdot \|_{\infty}$. For each $n\in \N$, let $L^1(C([0,1]^n))$ be the Banach space of all Lebesgue-measurable maps $\Phi:(\R^2)^n\rightarrow C([0,1]^n)$ such that $\int_{(\R^2)^n} \|\Phi(x)\|_{\infty}\ dx <\infty$. A remark in Section 3 of Evans~(1997) tells us that the separability of $L^1(C([0,1]))$ and a functional duality argument guarantee that $\Xi$, equipped with the relative weak* topology, is a (compact) metrisable space. Finally, if $\lambda$ is a measure on a space $E'$, let us write $L^1(\lambda)$ for the set of all measurable functions $f:E'\rightarrow \R$ such that $\int_{E'}|f(e)|\lambda(de)<\infty$.
Let $n\in \N$. Given $\Phi\in L^1(C([0,1]^n))$, let us define a function $I_n(\cdot\ ;\Phi)\in C(\Xi)$ by
$$
I_n(\rho;\Phi)\equiv \int_{(\R^2)^n} \Big\langle \bigotimes_{1\leq i\leq n}\rho(x_i),\Phi(x_1,\ldots,x_n)\Big\rangle\ dx_1\ldots dx_n,
$$
where as before the notation $\langle \nu,f\rangle$ stands for the integral of the function $f$ against the measure $\nu$. We have the following lemma, whose proof is essentially that of Lemma~3.1 in Evans~(1997).
\begin{lemma}\label{lemm set of functions}The linear subspace spanned by the constant functions and the functions of the form $I_n(\cdot\ ;\Phi)$, with $\Phi= \psi\otimes \big(\prod_{i=1}^n\chi_i\big)$, $\psi\in L^1(dx^{\otimes n})\cap C((\R^2)^n)$ and $\chi_i\in C([0,1])$ for all $1\leq i\leq n$, is dense in $C(\Xi)$.
\end{lemma}
We need one last definition before stating the existence and uniqueness result. Let $n\in \N$. For any $\rho\in \Xi$, $\pi\in {\mathcal P}_n^{\ell}$ such that $\mathrm{bl}(\pi)=\{a_1,\ldots,a_k\}$, and any bounded measurable function $F:[0,1]^n\rightarrow \R$, we set
$$
\Upsilon_n(\rho;\pi;F)\equiv \int_{[0,1]^k}F(v_{a^{-1}(1)},\ldots,v_{a^{-1}(n)}) \rho(x_{a_1})(dv_{a_1})\ldots \rho(x_{a_k})(dv_{a_k}),
$$
where $a^{-1}(i)$ is the (unique) block $a_j$ which contains $i$ and $v_{a_j}$ is the variable used for the measure $\rho(x_{a_j})$. In words, we assign the same variable to all coordinates which belong to the same block in the partition $\pi$. (Recall that $x_a$ is our notation for the label of block $a$.) Recall also the notation $\wp_n(\mathbf{x})$ and $\mathcal{A}$ introduced in Notation \ref{notation for partitions} and the following paragraph.
\begin{thm}\label{theo existence}There exists a unique, Feller, Markov semigroup $\{Q_t,t\geq 0\}$ on $\Xi$ such that for all $n\in \N$ and $\Phi \in L^1(C([0,1]^n))$, we have
\begin{equation}\label{def semigroup}
\int Q_t(\rho,d\rho')I_n(\rho';\Phi)=\int_{(\R^2)^n} \E_{\wp_n(\mathbf{x})}\big[\Upsilon_n\big(\rho;\mathcal{A}(t);\Phi(x_1,\ldots,x_n)\big)\big] dx_1\ldots dx_n.
\end{equation}
Consequently, there exists a Hunt process $\{\rho(t),t\geq 0\}$ with state-space $\Xi$ and transition semigroup $\{Q_t,t\geq 0\}$.
\end{thm}
Before proving Theorem \ref{theo existence}, let us make two comments on this result. First, since the $\Xi$-valued process we obtain is a Hunt process, it is c\`adl\`ag and quasi-left continuous, that is, it is almost surely left-continuous at any previsible stopping time (see e.g. Rogers \& Williams~(1987) \nocite{rogers/williams:1987} for a definition of quasi-left continuous filtrations). However, more precise statements about its space-time regularity seem to be a delicate question, which will require a thorough investigation.
Second, as in Kimura's stepping stone model introduced in (\ref{stepstone model}), the duality relation (\ref{def semigroup}) can be interpreted in terms of the genealogies of a sample of individuals. Indeed, recall that the stepping stone model is dual to the system $(\{n_i(t);\ i\in I\})_{t\geq 0}$ of particles migrating from deme $i$ to deme $j$ at rate $m_{ji}$ and coalescing in pairs at rate $1/N_e$ when in the same deme: for any $t\geq 0$, we have
$$
\E\bigg[\prod_{i\in I}p_i(t)^{n_i(0)}\bigg]=\E\bigg[\prod_{i\in I}p_i(0)^{n_i(t)}\bigg].
$$
These equations show that a function (here the $n_i(0)$-th moments) of the frequencies at different sites of $\Z^2$ and at (forward) time $t$ can be expressed in terms of the genealogy of a sample made of $n_i(0)$ individuals in deme $i$ for every $i\in I$, and run for a (backward) time $t$: all lineages having coalesced by time $t$ necessarily carry the same type, whose law is given by the type distribution at the site where their ancestor lies at backward time $t$ (or forward time $0$). Equation~(\ref{def semigroup}) can be interpreted in exactly the same manner, but holds for a much wider collection of functions of $\rho$ and $\mathcal{A}$.
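To spell out the simplest instance, consider a sample of two individuals from deme $i_0$, so that the only non-zero initial value is $n_{i_0}(0)=2$. Writing $T$ for the coalescence time of the two ancestral lineages and $I_t$, $J_t$ for the demes they occupy at backward time $t$ (with $I_t=J_t$ the deme of their common ancestor when $T\leq t$), the duality relation reads
$$
\E\big[p_{i_0}(t)^2\big]=\E\big[p_{I_t}(0)\,p_{J_t}(0)\,\mathbf{1}_{\{T>t\}}\big]+\E\big[p_{I_t}(0)\,\mathbf{1}_{\{T\leq t\}}\big]:
$$
both sampled individuals carry the focal type if and only if their ancestors at backward time $t$ do, the types of distinct ancestors being drawn independently according to the initial frequencies at their locations.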
\medskip
\noindent\emph{Proof of Theorem \ref{theo existence}: } The observation that the construction of Evans~(1997) can also be justified in our setting follows from Remark~(a) at the end of his Section~4.
Existence and uniqueness of $\mathcal{A}$ follow easily from Assumptions (\ref{condition 3}) and (\ref{condition 2}). Next, we must verify consistency of $\mathcal{A}$ in the sense of his Lemma~2.1. In fact, this is the `sampling consistency' described in the introduction, and it was a primary consideration in writing down our model. It follows since the movement of the labels of a collection of blocks does not depend on the blocks themselves, and since a coalescence event of the form $\{(\{1\},x_1),(\{2\},x_2)\}\rightarrow \{(\{1,2\},x)\}$ for a pair of particles corresponds to a jump $\{(\{1\},x_1)\}\rightarrow \{(\{1\},x)\}$ onto the same site $x\in \R^2$ if we restrict our attention to the first particle.
The next property needed in the construction is that, provided it holds at $t=0$, for every $t>0$ the distribution of the labels in $\mathcal{A}(t)$ has a Radon-Nikodym derivative with respect to Lebesgue measure, and furthermore that an analogue of Evans' Equation (4.2) holds. In the setting of Evans~(1997), the first requirement stems from
the independence of the spatial motions followed by different labels and the corresponding result for a single label. Here, since the motion of all lineages is driven by the same Poisson process of events, their movements are correlated. However, the desired property is still satisfied. To see this, note that each jump experienced by a lineage in
the interval $[-t,0]$ takes it to a position that is uniformly distributed over the open ball affected by the corresponding reproduction event. Thus, if $\A(t)$ has $k$ blocks and $D\subset (\R^2)^k$ has zero Lebesgue measure, the probability that the labels of the blocks of $\A(t)$ belong to $D$ is equal to $0$. Equation (4.2) of Evans~(1997) then still holds, without Evans' additional assumption of the existence of a dual process for the motion of one lineage (which anyway is satisfied since our lineages perform symmetric L\'evy processes).
The last step is to check the strong continuity of the semigroup $\{Q_t,t\geq 0\}$, but this readily follows from the relation (\ref{def semigroup}) and the Feller property of $\A$ (which is itself evident since jumps do not accumulate in our dual process).
The desired conclusion now follows from Theorem~4.1 in Evans~(1997).$\hfill\square$
\subsection{Identification of the process}
We can use (\ref{def semigroup}) to derive an expression for the infinitesimal generator of $\{\rho(t),t\geq 0\}$ acting on the functions $I_n(\cdot\ ;\Phi)$ considered in Lemma~\ref{lemm set of functions}. By this lemma and the uniqueness result stated in Theorem~\ref{theo existence}, this will be sufficient to characterize the process $\rho$ and to show that it corresponds to the evolution we described in Section~\ref{model} in terms of a Poisson point process of reproduction events.
Let $n\in \N$ and $\Phi\in C(\Xi)$ be such that $\Phi= \psi\otimes \big(\prod_{i=1}^n\chi_i\big)$, where $\psi\in L^1(dx^{\otimes n})\cap C((\R^2)^n)$ and $\chi_i\in C([0,1])$ for all $1\leq i\leq n$. Writing $G$ for the generator of the process $\rho$ and $\G_n$ for the generator of the coalescing L\'evy processes $\mathcal{A}$ acting on functions of ${\mathcal P}_n^{\ell}$, we obtain from (\ref{def semigroup}) that \setlength\arraycolsep{1pt}
\begin{eqnarray}
GI_n(&\rho&; \Phi)= \lim_{t\rightarrow 0}\frac{\E_{\rho}[I_n(\rho(t),\Phi)]-I_n(\rho,\Phi)}{t} \nonumber\\
&=& \lim_{t\rightarrow 0}\frac{1}{t}\int_{(\R^2)^n} \psi(x_1,\ldots,x_n) \bigg\{ \E_{\wp_n(\mathbf{x})}\Big[\Upsilon_n\Big(\rho;\mathcal{A}(t);\prod_{i=1}^n \chi_i\Big)\Big] - \prod_{i=1}^n \langle \rho(x_i),\chi_i\rangle \bigg\}\ dx^{\otimes n} \nonumber\\
& =& \int_{(\R^2)^n} \psi(x_1,\ldots,x_n)\ \G_n\Big[\Upsilon_n\Big(\rho;\ \cdot\ ;\prod_{i=1}^n \chi_i\Big)\Big](\wp_n(\mathbf{x}))\ dx^{\otimes n}\label{def generator on Xi}.
\end{eqnarray}
Note that the quantity on the right-hand side of (\ref{def generator on Xi}) is well-defined (and the interchange of limit and integral is valid) since $\psi$ belongs to $L^1(dx^{\otimes n})$ and the rate at which at least one of $k\leq n$ blocks is affected by a reproduction event is bounded by $n$ times the integral in (\ref{condition for convergence}), so that $\A$ is a jump-hold process and its generator satisfies
$$
\Big\|\G_n\Big[\Upsilon_n\Big(\rho;\ \cdot\ ;\prod_{i=1}^n \chi_i\Big)\Big]\Big\|_{\infty}\leq 2Cn\ \Big\|\Upsilon_n\Big(\rho;\ \cdot\ ;\prod_{i=1}^n \chi_i\Big)\Big\|_{\infty}\leq 2Cn \prod_{i=1}^n\|\chi_i\|_{\infty} <\infty
$$
for a given constant $C<\infty$.
Using the description of the evolution of $\A$ in terms of events in $\Pi$, the right-hand side of (\ref{def generator on Xi}) is equal to \setlength\arraycolsep{1pt}
\begin{eqnarray}
\int_{(\R^2)^n}& &dx^{\otimes n} \psi(x_1,\ldots,x_n)\int_{\R^2}dy\int_0^{\infty}\mu(dr)\int_0^1\nu_r(du) \int_{B(y,r)}\frac{dz}{\pi r^2}\nonumber\\
& \times& \sum_{I\subset\{1,\ldots,n\}}\bigg[\prod_{i\in I}\mathbf{1}_{B(y,r)}(x_i)\prod_{i'\notin I}\mathbf{1}_{B(y,r)^c}(x_{i'})\bigg] \nonumber \\
& \times& \sum_{J\subset I}u^{|J|}(1-u)^{|I|-|J|}\bigg[\prod_{i\notin J}\big\langle \rho(x_i),\chi_i\big\rangle \bigg]\bigg[\Big\langle \rho(z),\prod_{j\in J}\chi_j\Big\rangle- \prod_{j\in J}\big\langle \rho(x_j),\chi_j\big\rangle \bigg],\phantom{AAA} \label{alt expression}
\end{eqnarray}
where $|\cdot|$ stands for cardinality. Indeed, given $x_1,\ldots,x_n$ in (\ref{alt expression}), only one term in the sum over $I \subset \{1,\ldots,n\}$ is non-zero. For this particular term, each of the $|I|$ blocks whose labels lie in $B(y,r)$ belongs to the set $J$ of the blocks affected by the event with probability $u$ (independently of the others), and the affected blocks adopt the label $z$. After some algebra and several uses of Fubini's theorem, we obtain that (\ref{alt expression}) is equal to \setlength\arraycolsep{1pt}
\begin{eqnarray}
\int_{\R^2}dy \int_0^{\infty} &\mu&(dr)\int_0^1\nu_r(du)\int_{B(y,r)}\frac{dz}{\pi r^2}\int_0^1\rho_z(dk)\int dx_1\ldots dx_n\ \psi(x_1,\ldots,x_n)\nonumber\\
&\times& \sum_{I\subset \{1,\ldots,n\}}\prod_{j\notin I}\big\{\mathbf{1}_{B(y,r)^c}(x_j)\langle \rho_{x_j},\chi_j\rangle\big\}\prod_{i\in I}\mathbf{1}_{B(y,r)}(x_i)\nonumber
\\
&&\qquad \qquad\times \bigg(\prod_{i\in I}\big\langle(1-u)\rho_{x_i}+u\delta_k,\chi_i\big\rangle- \prod_{i\in I}\big\langle \rho_{x_i},\chi_i\big\rangle\bigg),\label{mp for lfv}
\end{eqnarray}
which is precisely the generator of the forwards-in-time process of Section \ref{model}. Using Theorem~\ref{theo existence}, we arrive at the following result. \begin{propn}\label{prop mp for lfv} The martingale problem associated with the operator $G$ defined by (\ref{mp for lfv}) on functions of the form given in Lemma~\ref{lemm set of functions} is well-posed. Furthermore, the spatial $\Lambda$-Fleming-Viot process $\rho$ of Theorem~\ref{theo existence} solves it.
\end{propn}
\section{Some estimates for symmetric L\'evy processes}
\label{levy processes}
In this section, we gather some results on symmetric L\'evy processes that we shall need to call upon in our proofs of Theorem~\ref{result alpha<1} and Theorem~\ref{result alpha=1}. For the sake of clarity, the proofs of the three lemmas are given in Appendix \ref{appendix 1}.
First, we introduce some notation that we shall use repeatedly.
\begin{notn}
\label{entrance time}
\begin{enumerate}
\item{In the following, we shall suppose that all the random objects considered are constructed on the same probability space $(\Omega, \mathcal{F}, \prob)$, and if $X$ is a process defined on $\Omega$ with state-space $E$ and $x\in E$, we shall write $\prob_x$ for the probability measure on $\Omega$ under which $X(0)=x$ a.s. }
\item{For a stochastic process $\{X_t\}_{t\geq 0}$ evolving in $\mathbb T(L)$, we shall write $T(R,X)$ for the {\em first entrance time} of $X$ into $B_{\T(L)}(0,R)$. When there is no ambiguity, we write simply $T(R)$.}
\end{enumerate}
\end{notn}
Let $(\ell^L)_{L\geq 1}$ be a sequence of L\'evy processes such that for each $L\in \N$, $\ell^L$ evolves on the torus $\T(L)$ and $\ell^L(1)-\ell^L(0)$ has a covariance matrix of the form $\sigma_L^2\mathrm{Id}$. Assume that the following conditions hold.
\begin{assumption}
\label{assumptions for levy processes}
\begin{description}
\item [(i)] There exists $\sigma^2>0$ such that $\sigma_L^2\rightarrow \sigma^2$ as $L\rightarrow \infty$.
\item [(ii)] $\E_0\big[|\ell^L(1)|^4\big]$ is bounded uniformly in $L$.
\end{description}
\end{assumption}
Our first lemma describes the time $\ell^L$ needs to reach a ball of radius $d_L\ll L$ around $0$, when it starts at distance $\mathcal{O}(L)$ from the origin (recall the definition of $\Gamma(L,1)$ given in Section \ref{section result}).
\begin{lemma}\label{lemma entrance_levy}Let $(d_L)_{L\geq 1}$ be such that $\liminf_{L\rightarrow \infty}d_L >0$ and $\frac{\log^+(d_L)}{\log L}\rightarrow \gamma \in [0,1)$ as $L\rightarrow \infty$. Then,
\begin{equation}
\label{eq entrance_levy}\lim_{L\rightarrow \infty}\sup_{t\geq 0}\sup_{x_L\in \Gamma(L,1)} \left|\prob_{x_L}\left[T(d_L,\ell^L)> \frac{(1-\gamma)L^2\log L}{\pi \sigma^2}\ t\right]- e^{-t}\right|=0.
\end{equation}
\end{lemma}
The proof of Lemma \ref{lemma entrance_levy} follows that of Theorem~2 in Cox \& Durrett~(2002). \nocite{cox/durrett:2002} In particular, we shall use the following local central limit theorem (the counterpart in our setting of Lemma 3.1 in Cox \& Durrett~(2002)). Let $\lfloor z \rfloor$ denote the integer part of $z\in \R$, and write $p^L(x,t)$ for $\prob_x[\ell^L(t)\in B(0,d_L)]$.
\begin{lemma}\label{lemma local TCL}
\noindent$(a)$ Let $\e_L=(\log L)^{-1/2}$. There exists a constant $C_1<\infty$ such that for every $L\geq 2$,
\begin{equation}
\sup_{t\geq \integ{\e_LL^2}}\ \sup_{x\in \T(L)}\ \frac{\integ{\e_LL^2}}{d_L^2}\ p^L(x,t)\leq C_1.
\end{equation}
$(b)$ If $v_L\rightarrow \infty$ as $L\rightarrow \infty$, then
\begin{equation}
\lim_{L\rightarrow \infty}\ \sup_{t\geq \integ{v_LL^2}}\ \sup_{x\in \T(L)}\ \frac{L^2}{d_L^2}\left|\ p^L(x,t)-\frac{\pi d_L^2}{L^2}\right|=0.
\end{equation}
$(c)$ If $u_L\rightarrow \infty$ as $L\rightarrow \infty$ and $I(d_L,x)\equiv 1+(|x|^2\vee d_L^2)$, then
\begin{equation}
\lim_{L\rightarrow \infty}\ \sup_{x\in \T(L)}\ \sup_{u_LI(d_L,x)\leq t\leq \e_LL^2}\ \left|\frac{2\sigma_L^2t}{d_L^2}\ p^L(x,t)-1 \right|=0.
\end{equation}
$(d)$ There exists a constant $C_2<\infty$ such that for every $L\geq 1$,
\begin{equation}
\sup_{t\geq 0}\sup_{x\in \T(L)}\left(1+\frac{|x|^2}{d_L^2}\right)p^L(x,t)\leq C_2.
\end{equation}
\end{lemma}
In essence, Lemma \ref{lemma local TCL} says that on the timescale $d_L^2\ll t\ll L^2$, the L\'evy process $\ell^L$ behaves like two-dimensional Brownian motion, whereas at any given time $t\gg L^2$, its location is roughly uniformly distributed over $\mathbb T(L)$.
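Part $(c)$ is consistent with the following heuristic (a back-of-the-envelope computation, not part of the proof): if $\ell^L(t)$ were exactly Gaussian with covariance matrix $\sigma_L^2t\,\mathrm{Id}$, then for $I(d_L,x)\ll t\ll L^2$ we would have
$$
p^L(x,t)\approx \pi d_L^2\ \frac{1}{2\pi\sigma_L^2t}\ e^{-|x|^2/(2\sigma_L^2t)}\approx \frac{d_L^2}{2\sigma_L^2t},
$$
which is exactly the normalisation appearing in $(c)$; likewise, for $t\gg L^2$ the position of $\ell^L(t)$ is roughly uniform over $\T(L)$, so that $p^L(x,t)\approx \pi d_L^2/L^2$, which is the content of $(b)$.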
Another consequence of Lemma \ref{lemma local TCL} is the following result, which bounds the probability that $\ell^L$ hits a ball of bounded radius during a `short' interval of time in the regime $t\gg L^2$.
\begin{lemma}\label{lemm no entrance}Fix $R>0$. Let $(U_L)_{L\geq 1}$ and $(u_L)_{L\geq 1}$ be two sequences increasing to infinity such that $U_LL^{-2}\rightarrow \infty$ as $L\rightarrow \infty$ and $2u_L\leq L^2(\log L)^{-1/2}$ for every $L\geq 1$. Then, there exist $C>0$ and $L_0\in \N$ such that for every sequence $(U'_L)_{L\geq 1}$ satisfying $U_L'\geq U_L$ for each $L$, every $L\geq L_0$ and all $x\in \T(L)$,
$$
\prob_x\Big[T(R,\ell^L) \in \big[U_L'-u_L, U_L'\big]\Big]\leq \frac{C u_L}{L^2}.
$$
\end{lemma}
\section{Proof of Theorem~\ref{result alpha<1}}
\label{alpha<1}
Armed with the estimates of Section \ref{levy processes}, we can now turn to the proofs of our main results.
\begin{notn}
\label{motion of ancestral lineages} For each $L\geq 1$, let $\{\xi^L(t),t\geq 0\}$ be the L\'evy process on $\T(L)$ whose distribution is the same as that of the motion of a single lineage subject to the large and small reproduction events generated by $\Pi_L^s$ and $\Pi_L^B$.
\end{notn}
In the rest of this section, we assume that the assumptions of Theorem \ref{result alpha<1} are satisfied.
\subsection{Coalescence time for two lineages}\label{section coal}
We begin by studying the genealogical process of a pair of lineages starting at distance $\mathcal{O}(L)$ from each other. Since the motions $\xi_1^L$ and $\xi_2^L$ of the lineages are distributed like two independent copies of the process $\xi^L$ until the random time $T_L$ at which they first come within distance $2R^B\psi_L$ of each other, the difference $$
X^L(t)\equiv \xi_1^L(t)-\xi_2^L(t),\qquad 0\leq t\leq T_L
$$
has the same distribution as $\big\{\xi^L(2t),\ 0\leq t\leq \frac{1}{2}\ T(2R^B\psi_L,\xi^L)\big\}$. We shall use Lemma~\ref{lemma entrance_levy} to derive the limiting distribution of $T_L$, but first we need to introduce the relevant variances. Consider a single lineage. Because it jumps at a finite rate owing to small and large events, the following two quantities are well-defined and finite:
\begin{equation}
\label{variances} \sigma_s^2\equiv \int y^2\ \chi^s(dy,dz) \qquad \mathrm{and}\qquad \sigma_B^2 \equiv \int y^2\ \chi^B(dy,dz),
\end{equation}
where $\chi^s$ stands for the intensity measure of the small jumps experienced by the lineage and $\chi^B$ for that of the large jumps renormalised by $\psi_L^{-1}$ (the form of these two measures is given in (\ref{jump intensity})). We now have all the ingredients we need to describe the asymptotic `gathering time' of two lineages.
\begin{propn}\label{prop gathering} $(a)$ If $\rho_L^{-1}\psi_L^2\rightarrow \infty$ as $L\rightarrow \infty$, then
$$
\lim_{L\rightarrow \infty}\ \sup_{t\geq 0}\ \sup_{A_L\in \GA(L,2)^*}\ \bigg|\ \prob_{A_L}\left[T_L> \frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma_B^2\psi_L^2}\ t\right] -e^{-t}\bigg|=0.
$$
\noindent$(b)$ If $\rho_L^{-1}\psi_L^2\rightarrow b \in [0,\infty)$ as $L\rightarrow \infty$, then
$$
\lim_{L\rightarrow \infty}\ \sup_{t\geq 0}\sup_{A_L\in \GA(L,2)^*}\ \bigg|\ \prob_{A_L}\left[T_L> \frac{(1-\alpha)L^2\log L}{2\pi (\sigma_s^2 + b \sigma_B^2)}\ t\right] - e^{-t}\bigg|=0.
$$
\end{propn}
\noindent \emph{Proof of Proposition \ref{prop gathering}: } Let us first recall two results on Poisson point processes, which are consequences of the exponential formula given, for instance, in Section 0.5 of \nocite{bertoin:1996} Bertoin~(1996). Following Bertoin's notation, let $\{e(t),t\geq 0\}$ be a Poisson point process on $\R\times \R_+$ with intensity measure $\kappa(dy)\otimes dt$, where the Borel measure $\kappa$ satisfies
\begin{equation}
\int_{\R}|1-e^y|\kappa(dy)<\infty \qquad \mathrm{and}\qquad \int_{\R}y^m \kappa(dy)=0, \quad m\in \{1,3\}.
\end{equation}
Under these conditions, we have for each fixed $t>0$
\begin{eqnarray}
\E\bigg[\Big(\sum_{s\leq t}e(s)\Big)^2\bigg]&=& t \int_{\R} y^2 \kappa(dy), \label{PPP moment 2} \\
\E\bigg[\Big(\sum_{s\leq t}e(s)\Big)^4\bigg]&=& 3t^2\bigg(\int_{\R}y^2 \kappa(dy)\bigg)^2+t\int_{\R}y^4 \kappa(dy). \label{PPP moment 4}
\end{eqnarray}
These properties will be useful in computing the variances and fourth moments of the random variables considered below.
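Both formulas can be read off from the exponential formula (assuming, as will be the case below, that $\kappa$ is finite with bounded support, so that all the quantities involved are well-defined): expanding $\log\E[\exp(\lambda\sum_{s\leq t}e(s))]=t\int_{\R}(e^{\lambda y}-1)\kappa(dy)$ in powers of $\lambda$ shows that the $m$-th cumulant of $\sum_{s\leq t}e(s)$ is $c_m\equiv t\int_{\R}y^m\kappa(dy)$. Since $\int y\,\kappa(dy)=0$, the sum is centred, so that (\ref{PPP moment 2}) expresses its variance as the second cumulant, and (\ref{PPP moment 4}) is the classical expression $3c_2^2+c_4$ of a centred fourth moment in terms of cumulants.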
Let us start with the proof of $(a)$. Consider the process $\ell^L$ defined by: for every $t\geq 0$,
$$
\ell^L(t)=\frac{1}{\psi_L}\ \xi^L\big(2\rho_Lt\big).
$$
This process evolves on the torus of sidelength $\psi_L^{-1}L$, and makes jumps of size $\mathcal{O}(\psi_L^{-1})$ at a rate of order $\mathcal{O}(\rho_L)$, as well as jumps of size $\mathcal{O}(1)$ at a rate of order $\mathcal{O}(1)$.
Let us check that $\ell^L$ satisfies the assumptions of Lemma \ref{lemma entrance_levy}. To this end, we view $\ell^L(1)$ starting at $0$ as the sum of its jumps and adapt the problem to use the results on Poisson point processes given above. First, let us define $\hat{\ell}^L$ as the L\'evy process on $\R^2$ evolving like $\ell^L$ (but without periodic conditions). For $i\in \{1,2\}$ and each $L\geq 1,\ t\geq 0$, let $\hat{\ell}^{L,i}(t)$ denote the $i$-th coordinate of $\hat{\ell}^L(t)$. Note that the distance reached by $\ell^L$ up to a given time $t$ is at most the distance travelled by $\hat{\ell}^L$ up to time $t$, and so we can write
\begin{eqnarray*}
\E_0\big[|\ell^L(1)|^4\big]\leq \E_0\big[|\hat{\ell}^L(1)|^4\big]&=& \E_0\Big[\Big\{ \hat{\ell}^{L,1}(1)^2 + \hat{\ell}^{L,2}(1)^2\Big\}^2\Big] \\
&\leq& 2 \Big\{\E_0\big[\hat{\ell}^{L,1}(1)^4\big]+\E_0\big[\hat{\ell}^{L,2}(1)^4\big]\Big\}.
\end{eqnarray*}
By symmetry, we need only bound $\E_0\big[\hat{\ell}^{L,1}(1)^4\big]$. Let us denote by $a_1,a_2,\ldots\in [-2R^s/\psi_L,2R^s/\psi_L]$ (resp., $b_1,b_2,\ldots\in [-2R^B,2R^B]$) the sequence of jumps of $\hat{\ell}^{L,1}$ before time $1$ due to small (resp., large) events. Using the convexity of $y\mapsto y^4$, we have \begin{equation}\label{eq different jumps}
\E_0\big[\hat{\ell}^{L,1}(1)^4\big]=\E_0\bigg[\Big(\sum_i a_i + \sum_j b_j\Big)^4\bigg]\leq 8\ \E_0\bigg[\Big(\sum_i a_i\Big)^4 + \Big(\sum_j b_j\Big)^4\bigg].
\end{equation}
Applying (\ref{PPP moment 4}) to each term on the right-hand side of (\ref{eq different jumps}) yields
\begin{equation}\label{bound coord}
\E_0\big[(\hat{\ell}^{L,1}(1))^4\big]\leq 96\frac{\rho_L^2}{\psi_L^4}\ \sigma_s^4 + 16\frac{\rho_L}{\psi_L^4}\ \int y^4 \chi^s(dy,dz)+96 \sigma_B^4 +16\int y^4 \chi^B(dy,dz),
\end{equation}
which is bounded uniformly in $L$ since $\rho_L\psi_L^{-2}$ vanishes as $L$ grows to infinity, and each integral is finite. Coming back to the original problem, we obtain that Assumption~\ref{assumptions for levy processes} (ii) holds for the sequence of processes $(\ell^L)_{L\geq 1}$.
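To track the constants in (\ref{bound coord}) (a bookkeeping sketch): after the time change $t\mapsto 2\rho_Lt$ and the spatial scaling by $\psi_L^{-1}$, the small jumps of $\hat{\ell}^{L,1}$ have intensity with second moment $2\rho_L\sigma_s^2\psi_L^{-2}$ and fourth moment $2\rho_L\psi_L^{-4}\int y^4\chi^s(dy,dz)$, while for the large jumps the factor $\rho_L^{-1}$ in their rate cancels the time change, leaving second and fourth moments $2\sigma_B^2$ and $2\int y^4\chi^B(dy,dz)$. Applying (\ref{PPP moment 4}) at $t=1$ and multiplying by the prefactor $8$ of (\ref{eq different jumps}) then produces the constants $8\times 3\times 2^2=96$ and $8\times 2=16$.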
Concerning Assumption~\ref{assumptions for levy processes} (i), observe that $\sigma_L^2$ is simply the variance of $\ell^{L,1}(1)$. To obtain the asymptotic behaviour of $\sigma_L^2$, we show that up to time $1$, $\ell^L$ does not see that it is on a torus. Hence, with high probability $\ell^{L,1}(1)^2= \hat{\ell}^{L,1}(1)^2$ and so
$$
\E_0\big[\ell^{L,1}(1)^2\big] \approx \E_0\big[\hat{\ell}^{L,1}(1)^2\big]= 2\frac{\rho_L}{\psi_L^2}\int y^2 \chi^s(dy,dz)+2\int y^2\chi^B(dy,dz) = 2\sigma^2_B +o(1)
$$
as $L\rightarrow \infty$, where the second equality uses (\ref{PPP moment 2}). To make the first equality rigorous, we apply Doob's maximal inequality to the submartingale $|\hat{\ell}^L|^4$. This yields, with a constant $C>0$ which may change from line to line,
$$
\prob_0\bigg[\sup_{0\leq s\leq 1}|\hat{\ell}^L(s)|>\frac{L}{3\psi_L}\bigg] \leq \frac{C\psi_L^4}{L^4}\ \E_0\big[|\hat{\ell}^L(1)|^4\big].
$$
But the calculation leading to (\ref{bound coord}) shows that the latter expectation is bounded uniformly in $L$, and so
\begin{equation}\label{proba far}
\prob_0\bigg[\sup_{0\leq s\leq 1}|\hat{\ell}^L(s)|>\frac{L}{3\psi_L}\bigg] \leq C\frac{\psi_L^4}{L^4}.
\end{equation}
On the event $\mathcal{E}_L\equiv\big\{\sup_{0\leq s\leq 1}|\hat{\ell}^L(s)|\leq \frac{L}{3\psi_L}\big\}$, the paths of $\ell^L$ and $\hat{\ell}^L$ can be coupled so that $\ell^L(s)=\hat{\ell}^L(s)$ for every $s\in [0,1]$, and since these quantities are bounded for each $L$ we can write
\begin{eqnarray}
\E_0\big[(\ell^{L,1}(1))^2\big]&=&\E_0\big[(\hat{\ell}^{L,1}(1))^2\ \mathbf{1}_{\mathcal{E}_L}\big]+\E_0\big[(\ell^{L,1}(1))^2 \ \mathbf{1}_{\mathcal{E}_L^c}\big] \nonumber\\
&=& \E_0\big[(\hat{\ell}^{L,1}(1))^2\big]-\E_0\big[(\hat{\ell}^{L,1}(1))^2 \ \mathbf{1}_{\mathcal{E}_L^c}\big] +\E_0\big[(\ell^{L,1}(1))^2 \ \mathbf{1}_{\mathcal{E}_L^c}\big].
\label{var ell1}
\end{eqnarray}
By (\ref{proba far}) and the fact that $\ell^L$ evolves on the torus of size $L\psi_L^{-1}$, the last term on the right-hand side of (\ref{var ell1}) is bounded by
$$
C\ \frac{L^2}{\psi_L^2}\times \frac{\psi_L^4}{L^4}=C\ \frac{\psi_L^2}{L^2}\rightarrow 0\qquad \mathrm{as\ }L\rightarrow \infty.
$$
For the second term on the right-hand side of (\ref{var ell1}), let $\hat{s}_L(1)\equiv \sup_{0\leq s\leq 1}|\hat{\ell}^L(s)|$. Using Fubini's theorem on the second line, we have
\begin{eqnarray}
\E_0\big[(\hat{\ell}^{L,1}(1))^2 \ \mathbf{1}_{\mathcal{E}_L^c}\big]&\leq & \E_0\big[\hat{s}_L(1)^2 \ \mathbf{1}_{\mathcal{E}_L^c}\big] \nonumber\\
&=& \int_0^{\infty}\prob_0\Big[\hat{s}_L(1)> \frac{L}{3\psi_L}\vee \sqrt{y}\Big]\ dy \nonumber\\
&=& \frac{L^2}{9\psi_L^2}\ \prob_0\Big[\hat{s}_L(1)> \frac{L}{3\psi_L}\Big] + \int_{\frac{L^2}{9\psi_L^2}}^{\infty} \prob_0\big[\hat{s}_L(1)> \sqrt{y}\big]\ dy.\phantom{AAA}
\label{calcul variance}
\end{eqnarray}
Now, by the argument leading to (\ref{proba far}), $\prob_0[\hat{s}_L(1)>\sqrt{y}]$ is bounded by $Cy^{-2}$ for each $y>0$, where $C$ is a constant independent of $y$. Consequently, the right-hand side of (\ref{calcul variance}) is bounded by
$$
C'\ \frac{\psi_L^2}{L^2}+C\int_{L^2/(9\psi_L^2)}^{\infty}\frac{dy}{y^2}\ \rightarrow \ 0 \qquad \mathrm{as\ }L\rightarrow \infty.
$$
Coming back to (\ref{var ell1}), we can conclude that
$$
\sigma_L^2=2\sigma_B^2 +o(1)\qquad \mathrm{as\ }L\rightarrow \infty.
$$
If we now recall the equality in distribution described at the beginning of the section, we can use Lemma \ref{lemma entrance_levy} applied to $\ell^L$ on the torus of size $L\psi_L^{-1}$ and the entrance time into $B(0,2R^B)$ to write that
\begin{equation}\label{almost hitting}
\lim_{L\rightarrow \infty}\ \sup_{t\geq 0}\ \sup_{A_L\in \GA(L,2)^*}\ \left|\prob_{A_L}\left[T_L> \frac{\rho_L(L/\psi_L)^2\log (L/\psi_L)}{2\pi \sigma_B^2}\ t\right] - e^{-t}\right|=0.
\end{equation}
By the assumption on $|\alpha\log L-\log(\psi_L)|$ introduced just after (\ref{def alpha}) and Lemma \ref{lemm no entrance} applied to $\ell^L$ to bound the probability that $T_L$ lies between $\frac{\rho_LL^2\log (L/\psi_L)}{2\pi \sigma_B^2\psi_L^2}$ and $\frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma_B^2\psi_L^2}$, $(a)$ of Proposition \ref{prop gathering} follows from (\ref{almost hitting}).
Let us now turn to the proof of $(b)$. This time, we define $\ell^L$ for every $t\geq 0$ by
$$
\ell^L(t)=\frac{1}{\psi_L}\ \xi^L(2\psi_L^2t).
$$
Similar calculations show that, as $L\rightarrow \infty$,
$$
\E_0\left[|\ell^L(1)|^2\right]=2\sigma^2_s + 2 b \sigma_B^2 +o(1) \qquad \mathrm{if\ }\rho_L^{-1}\psi_L^2\rightarrow b\in [0,\infty),
$$
and that $\E_0\left[|\ell^L(1)|^4\right]$ is bounded uniformly in $L$. We can therefore apply Lemma \ref{lemma entrance_levy} to $\ell^L$ as above.$\hfill\square$
\medskip
Having established the time that it takes for two lineages starting at distance of order $L$ apart to come close enough together that they have a chance to coalesce, we now calculate the additional time required for them actually to do so. We shall have to distinguish between several regimes, depending on whether large or small events prevail in the evolution of the pair of lineages. Our goal in the rest of this section is to prove the following result.
\begin{thm}\label{theo time coal}
For each $L\in \N$, let $t_L$ denote the coalescence time of the pair of lineages under consideration. Then,
\noindent $(a)$ If $\frac{\psi_L^2}{\rho_L}\rightarrow \infty$ as $L\rightarrow \infty$,
$$
\lim_{L\rightarrow \infty}\ \sup_{t\geq 0}\ \sup_{A_L\in \GA(L,2)^*}\left|\ \prob_{A_L}\left[t_L> \frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma_B^2\psi_L^2}\ t\right]-e^{-t}\right|=0.
$$
\noindent $(b)$ If $\frac{\psi_L^2}{\rho_L}\rightarrow b\in [0,\infty)$ and $\frac{\psi_L^2\log L}{\rho_L}\rightarrow \infty$ as $L\rightarrow \infty$,
$$
\lim_{L\rightarrow \infty}\ \sup_{t\geq 0}\ \sup_{A_L\in \GA(L,2)^*}\ \left|\ \prob_{A_L}\left[t_L> \frac{(1-\alpha)L^2\log L}{2\pi (\sigma_s^2+b\sigma_B^2)}\ t\right]-e^{-t}\right|=0.
$$
\noindent $(c)$ If $(\frac{\psi_L^4}{\rho_L})_{L\geq 1}$ is bounded or $\frac{L^2\log L}{\rho_L}\rightarrow 0$ as $L\rightarrow \infty$ (and so $\frac{\psi_L^2\log L}{\rho_L}\rightarrow 0$), then
$$
\lim_{L\rightarrow \infty}\ \sup_{t\geq 0}\ \sup_{A_L\in \GA(L,2)^*}\ \left|\ \prob_{A_L}\left[t_L> \frac{L^2\log L}{2\pi \sigma_s^2}\ t\right]-e^{-t}\right|=0.
$$
\end{thm}
The cases $(a)$ and $(b)$ are separated only because the timescales of interest are not of the same order; the reasons why they hold are identical: in both cases, large jumps are frequent enough that, once the lineages have been gathered at distance $2R^B\psi_L$, they coalesce in a time negligible compared to $T_L$. In contrast, in $(c)$ we assume that the rate at which the lineages are affected by large events is so low that we have to wait for the lineages to be gathered at distance less than $2R^s$ before they have a chance to coalesce (and they then do so in a time negligible compared to $L^2\log L$). If none of the above conditions holds, then the proof of $(c)$ will show that, in this case also, the probability that a large event affects the lineages when they are at distance less than $2R^B\psi_L$ and before a time of order $\mathcal{O}(L^2\log L)$ vanishes as $L$ tends to infinity. However, we are then no longer able to describe precisely the limiting behaviour of $t_L$; see Remark \ref{rk last cases}.
Let us first make more precise the sense in which the additional time to coalescence is negligible once the lineages have been gathered at the right distance.
\begin{propn}\label{prop coal time} Let $(\Phi_L)_{L\geq 1}$ be a sequence tending to infinity as $L\rightarrow \infty$.
\noindent $(a)$ If $(\Phi_L)_{L\geq 1}$ is such that $\frac{\rho_L}{\psi_L^2\log \Phi_L}\rightarrow 0$ as $L\rightarrow \infty$, we have
\begin{equation}
\label{short coal 1}\lim_{L\rightarrow \infty}\sup_{A_L}\ \prob_{A_L}\big[t_L> \Phi_L\rho_L\big]=0,
\end{equation}
where the supremum is taken over all samples $A_L= \big\{(\{1\},x_1^L),(\{2\},x_2^L)\big\}$ such that $|x_1^L-x^L_2|\leq 2R^B\psi_L$.
\noindent $(b)$ Without any additional condition, we have
\begin{equation}
\label{short coal 2}\lim_{L\rightarrow \infty}\sup_{A'_L}\ \prob_{A'_L}\big[t_L> \Phi_L\big]=0,
\end{equation}
where the supremum is now taken over all samples $A'_L= \big\{(\{1\},x_1^L),(\{2\},x_2^L)\big\}$ such that $|x_1^L-x^L_2|\leq 2R^s$.
\end{propn}
Taking $\Phi_L=\frac{L^2}{\rho_L\log L}(1\wedge \rho_L \psi_L^{-2})$, the result in $(a)$ shows that when $\frac{\psi_L^2\log L}{\rho_L}\rightarrow \infty$, the coalescence time of two lineages at distance at most $2R^B\psi_L$ is indeed much smaller than $T_L$ (which is of order $L^2\log L\times \big(1\wedge \rho_L\psi_L^{-2}\big)$ by Proposition \ref{prop gathering}).
\medskip
\noindent \emph{Proof of Proposition \ref{prop coal time}: }Recall that for each $L\in \N$, we defined $X^L$ as the difference between the locations of the lineages $\xi^L_1$ and $\xi^L_2$ on the torus $\T(L)$. In the following, if both lineages are affected by the same event, we shall consider that $X^L$ hits $0$ but the number of lineages remains equal to $2$, which means that they can separate again later (if the measures $\nu^s_r$ and $\nu^B_r$ are not all the point mass at $1$). However, it is the first time at which such an event occurs which will be of interest, and we keep the notation $t_L$ to denote this time. As we already noticed, $X^L$ behaves like $\{\xi^L(2t),\ t\geq 0\}$ outside $B\big(0,2R^B\psi_L\big)$, whereas inside the ball it can hit $0$ owing to reproduction events affecting both lineages $\xi^L_1$ and $\xi^L_2$.
\noindent \textbf{Case $(a)$.} For each $L\in \N$, set $q^L_0=Q^L_0\equiv 0$ and for every $i\geq 1$,
$$
Q^L_i\equiv \inf \Big\{t> q^L_{i-1}:\ X^L(t)\notin B\Big(0,\frac{7}{4}R^B\psi_L\Big) \Big\}
$$
and
$$
q^L_i\equiv \inf \Big\{t>Q^L_i:\ X^L(t)\in B\Big(0,\frac{3}{2}R^B \psi_L\Big) \Big\},
$$
with the convention that $\inf \ \emptyset =+\infty$. We shall use the following lemmas, which will enable us to describe how $X^L$ wanders around in $\T(L)$, independently of whether it ever hits $0$ or not.
\begin{lemma}\label{lemm excursion}There exist a function $g:\R_+\rightarrow \R_+$ vanishing at infinity, $C_q>0$, $u_q>1$ and $L_q\in \N$ such that for every $L\geq L_q$ and
$u\geq u_q$,
$$
\sup_{x\in B(0,4R^B)\setminus B(0,(7/4)R^B)} \prob_{\psi_Lx}\big[q^L_1>\rho_Lu\big] \leq g(u) \qquad \mathrm{if\ }\rho_L=\mathcal{O}(\psi_L^2),
$$
$$
\sup_{x\in B(0,4R^B)\setminus B(0,(7/4)R^B)} \prob_{\psi_Lx}\big[q^L_1>\psi_L^2u\big] \leq \frac{C_q}{\log u} \qquad \mathrm{if\ }\rho_L^{-1}\psi_L^2\rightarrow 0.
$$
\end{lemma}
Lemma \ref{lemm excursion} will give us good control of the probability of a long excursion outside $B(0,(3/2)R^B\psi_L)$.
\begin{lemma}\label{lemm time in B} Suppose that
\begin{equation}\label{condition nu}
\mathrm{Leb}\big(\big\{r\in [0,R^B]:\ \nu^B_r \notin \{\delta_0,\delta_1\}\big\}\big)>0.
\end{equation}
Then, there exists a constant $C_Q<\infty$ such that for each $L\geq 1$,
$$
\sup_{x\in B(0,(3/2)R^B)}\frac{1}{\rho_L}\ \E_{\psi_Lx}\big[Q_1^L\big]< C_Q.
$$
\end{lemma}
Condition (\ref{condition nu}) guarantees that, whenever $X^L$ hits $0$, it has a chance not to remain stuck at this value for all times. Lemma \ref{lemm time in B} then tells us that $X^L$ starting within $B\big(0,(3/2)R^B\psi_L\big)$ needs an average time of order $\mathcal{O}(\rho_L)$ to reach distance $(7/4)R^B\psi_L$ from the origin.
\begin{lemma}\label{lemm proba coal}Suppose that $\rho_L\psi_L^{-2}$ remains bounded as $L\rightarrow \infty$. Then, there exists $\theta_1 \in (0,1)$ such that for every $L\geq 1$,
\begin{equation}\label{hitting prob 1}
\inf_{x\in B(0,(3/2)R^B)}\prob_{\psi_Lx}\big[X^L \mathrm{\ hits\ }0 \mathrm{\ before\ leaving\ }B\big(0,(7/4)R^B\psi_L\big)\big]\geq \theta_1.
\end{equation}
If $\liminf_{L\rightarrow \infty} \rho_L^{-1}\psi_L^2= 0$, there exist $\theta_2 \in (0,1)$ and $\theta_3>0$ such that \setlength\arraycolsep{1pt}
\begin{eqnarray}
\inf_{x\in B(0,(3/2)R^B)}\prob_{\psi_Lx}\big[ X^L \mathrm{\ hits\ }0 \mathrm{\ before\ leaving}& B&\big(0,(7/4)R^B\psi_L\big)\big]\nonumber\\
& \geq & \theta_2\bigg(1-\exp\Big\{-\theta_3\frac{\psi_L^2}{\rho_L}\Big\}\bigg).\label{hitting prob 2}
\end{eqnarray}
\end{lemma}
The proofs of these lemmas are given in Appendix \ref{appendix 2}.
The following technique is inspired by that used in Cox \& Durrett~(2002) and Z\"ahle et al.~(2005), \nocite{cox/durrett:2002} \nocite{zahle/cox/durrett:2005} although the motions of the lineages and the mechanism of coalescence here are more complex and require slightly more work. Our plan is first to find a good lower bound on the number of times the lineages meet at distance less than $(3/2)R^B\psi_L$ (and then separate again) before time $\Phi_L\rho_L$. In a second step, we use the estimates derived in Lemma \ref{lemm proba coal} on the probability that, during such a gathering, the lineages merge before separating again, and obtain that coalescence does occur before $\Phi_L\rho_L$ with probability tending to $1$. For the sake of clarity, we first show (\ref{short coal 1}) in the case where $\rho_L\psi_L^{-2}$ remains bounded, and then comment on how to adapt the arguments in the general case.
Assume first that Condition~(\ref{condition nu}) holds. Recall the definition of $Q_i^L$ and $q_i^L$ given above, and define $k_L$ by
$$
k_L\equiv \max\big\{n:\ Q_n^L \leq \Phi_L\rho_L\big\}.
$$
By Lemma \ref{lemm proba coal}, there exists a positive constant $\theta_1$ such that for every $L\geq 1$ and $x\in B(0,(3/2)R^B\psi_L)$,
$$
\prob_x\big[X^L \mathrm{\ hits\ }0 \mathrm{\ before\ leaving\ }B\big(0,(7/4)R^B\psi_L\big)\big]\geq \theta_1.
$$
Hence, for every $x\in B\big(0,2R^B\psi_L\big)$, we have
\begin{equation}\label{eq k_L}
\prob_x\big[t_L>\Phi_L\rho_L\big]\leq \prob_x\big[t_L>Q_{k_L}^L\big]\leq \E_x\big[\big(1-\theta_1\big)^{k_L}\big].
\end{equation}
Let us fix $x\in B\big(0,2R^B\psi_L\big)$ and show that $k_L \rightarrow \infty$ as $L\rightarrow \infty$, in $\prob_x$-probability. The fact that the bounds obtained below do not depend on $x\in B\big(0,2R^B\psi_L\big)$ will then give us the desired uniformity. Let $M\in \N$. We have
\setlength\arraycolsep{1pt}
\begin{eqnarray}\prob_x\big[k_L<M\big]&=&\prob_x\big[Q^L_M> \Phi_L\rho_L \big] \nonumber\\
& = &\prob_x\bigg[\sum_{i=1}^M(Q_i^L-q^L_{i-1})+\sum_{i=1}^{M-1}(q_i^L-Q_i^L)> \Phi_L\rho_L\bigg]\nonumber \\
&\leq &\sum_{i=1}^M\prob_x\bigg[Q_i^L-q_{i-1}^L>\frac{\Phi_L\rho_L}{2M}\bigg] + \sum_{i=1}^{M-1}\prob_x\bigg[q_i^L-Q_i^L> \frac{\Phi_L\rho_L}{2(M-1)}\bigg],\phantom{AAAA} \label{ineg k_L}
\end{eqnarray}
where the last inequality uses the following union bound: if each of the first $M$ terms were at most $\Phi_L\rho_L/(2M)$ and each of the last $M-1$ terms at most $\Phi_L\rho_L/(2(M-1))$, the total on the second line would not exceed $\Phi_L\rho_L$. Now, using the Markov inequality, the strong Markov property at time $q_{i-1}^L$ and then Lemma~\ref{lemm time in B}, we can write for each $i$
\begin{eqnarray*}\prob_x\bigg[Q_i^L-q_{i-1}^L>\frac{\Phi_L\rho_L}{2M}\bigg] &\leq & \frac{2M}{\Phi_L\rho_L}\ \E_x \big[Q_i^L-q_{i-1}^L\big] \\
& \leq & \frac{2M}{\Phi_L\rho_L}\sup_{y\in B(0,(3/2)R^B)}\E_{\psi_Ly}\big[Q_1^L\big]\\ &\leq & \frac{2MC_Q}{\Phi_L}.
\end{eqnarray*}
If we now apply the strong Markov property to $X^L$ at time $Q^L_i$ and use Lemma \ref{lemm excursion} together with the fact that $X^L(Q_i^L)\in B(0,4R^B\psi_L)$ with probability one, we obtain, for each $i$ and all $L$ large enough,
$$
\prob_x\bigg[q_i^L-Q^L_i>\frac{\Phi_L\rho_L}{2(M-1)}\bigg]\leq g\bigg(\frac{\Phi_L}{2(M-1)}\bigg).
$$
Coming back to (\ref{ineg k_L}), we arrive at
$$
\prob_x\big[k_L< M\big] \leq \frac{2M^2C_Q}{\Phi_L} + (M-1) g\bigg(\frac{\Phi_L}{2(M-1)}\bigg) \rightarrow 0, \qquad \mathrm{as \ }L\rightarrow \infty.
$$
To complete the proof of $(a)$ when Condition~(\ref{condition nu}) holds and $\rho_L\psi_L^{-2}$ remains bounded, let $\e>0$ and fix $M = M(\e)\in \N$ such that
$$
(1-\theta_1)^M <\e.
$$
Splitting the expectation in (\ref{eq k_L}) into the integral over $\{k_L\geq M\}$ and $\{k_L< M\}$ yields
$$
\limsup_{L\rightarrow \infty}\sup_{x\in B(0,2R^B\psi_L)}\prob_x\big[t_L>\Phi_L\rho_L\big]\leq \e + \limsup_{L\rightarrow \infty}\sup_{x\in B(0,2R^B\psi_L)}\prob_x\big[k_L<M\big] = \e,
$$
and since $\e$ was arbitrary, the desired result follows.
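Spelled out, the splitting of the expectation used in the last step is simply
$$
\E_x\big[\big(1-\theta_1\big)^{k_L}\big]\leq \big(1-\theta_1\big)^M+\prob_x\big[k_L<M\big]\leq \e+\prob_x\big[k_L<M\big],
$$
since $(1-\theta_1)^{k_L}\leq (1-\theta_1)^M$ on $\{k_L\geq M\}$ and $(1-\theta_1)^{k_L}\leq 1$ on $\{k_L<M\}$.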
When Condition~(\ref{condition nu}) is fulfilled but $\rho_L\psi_L^{-2}$ is unbounded as $L\rightarrow \infty$, we can apply the same technique to obtain (\ref{short coal 1}). This time, using the second result of Lemma \ref{lemm proba coal} we can write as in (\ref{eq k_L}) that, for every $x\in B\big(0,2R^B\psi_L\big)$,
$$
\prob_x\big[t_L>\Phi_L\rho_L\big]\leq \E_x\bigg[\bigg(1-\theta_2\Big(1-\exp\Big\{-\theta_3
\frac{\psi_L^2}{\rho_L}\Big\}\Big)\bigg)^{k_L}\bigg].
$$
The same arguments as above (using the second part of Lemma \ref{lemm excursion}) yield, for $L$ large enough,
\begin{eqnarray*}
\sup_{x\in B(0,2R^B\psi_L)}\prob_x\bigg[k_L<M \frac{\rho_L}{\psi_L^2}\bigg]& \leq &\frac{2C_QM^2\rho_L^2}{\psi_L^4\Phi_L} + \frac{C_qM\rho_L}{\psi_L^2\log(\Phi_L/2M)},
\end{eqnarray*}
which tends to $0$ as $L$ tends to infinity by our assumption on $(\Phi_L)_{L\geq 1}$. We conclude in the same manner, using the fact that when $\psi_L^2/\rho_L\rightarrow 0$,
$$
\bigg(1-\theta_2\Big(1-\exp\Big\{-\theta_3 \frac{\psi_L^2}{\rho_L}\Big\}\Big)\bigg)^{M\rho_L/\psi_L^2}\sim e^{-\theta_2\theta_3 M}.
$$
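To check this equivalence, set $x_L\equiv \psi_L^2/\rho_L$; as $x_L\rightarrow 0$,
$$
\log\bigg[\bigg(1-\theta_2\Big(1-e^{-\theta_3 x_L}\Big)\bigg)^{M/x_L}\bigg]=\frac{M}{x_L}\,\log\big(1-\theta_2\theta_3x_L+\mathcal{O}(x_L^2)\big)\ \longrightarrow\ -\theta_2\theta_3M.
$$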
Let us finish the proof of $(a)$ by removing the assumption (\ref{condition nu}). In the preceding proof, the main idea is that each time $X^L$ passes through $B\big(0,(3/2)R^B\psi_L\big)$, the two lineages have an opportunity to try to coalesce and their success probability is bounded from below by the quantity obtained in Lemma \ref{lemm proba coal}. However, if we do not assume that (\ref{condition nu}) holds, $X^L$ may become stuck at $0$ once it has hit it, and so the number $k_L$ of such sojourns in $B\big(0,(3/2)R^B\psi_L\big)$ may be finite. This makes our arguments break down. Nevertheless, $X^L$ can only hit $0$ through a coalescence event, and so this issue is merely an artefact of the technique of the proof. To overcome it, let us double the rate of the large reproduction events, but halve the probability that an individual lying in the area of such an event is affected by it. Overall, coalescence will take a longer time in this new setting, but the motions of the lineages before their coalescence time will remain identical in distribution.
More precisely, assume that (\ref{condition nu}) does not hold. Define $\hat{\Pi}^B_L$ as a Poisson point process on $\R\times \T(L)\times (0,\infty)$, independent of $\Pi^s_L$ and $\Pi^B_L$ and with intensity measure $2(\rho_L\psi_L^2)^{-1} dt\otimes dx\otimes \mu^B(dr)$, and for each $r>0$ such that $\nu_r^B=\delta_1$, set $\hat{\nu}^B_r \equiv \delta_{1/2}$. Let also $\hat{\Pi}^s_L$ be a Poisson point process with the same distribution as $\Pi^s_L$ and independent of all the other point processes. Call $\hat{X}^L$ the process defined in the same manner as $X^L$ but with $\Pi^B_L$ (resp., $\Pi^s_L$, $\nu^B_r$) replaced by $\hat{\Pi}^B_L$ (resp., $\hat{\Pi}^s_L$, $\hat{\nu}^B_r$). By computing the intensity of the jumps of a single lineage, one can observe that it is equal to
$$
dt\otimes \bigg(\frac{2}{\rho_L}\int_{|x|/2}^{R^B}\frac{L_r(x)}{2\pi r^2}\ \mathbf{1}_{\{\nu_r^B=\delta_1\}} \mu^B(dr)d(\psi_Lx) +\int_{|x|/2}^{R^s}\int_0^1\frac{L_r(x)}{\pi r^2}\ u\ \nu^s_r(du)\mu^s(dr)dx\bigg),
$$
which is precisely that of $\xi^L$. Here, $L_r(x)$ stands for the volume of $B(0,r)\cap B(x,r)$. If we now compute the coalescence rate of two lineages at distance $z\in [0,2R^B\psi_L]$, we obtain the same term due to small events for $X^L$ and $\hat{X}^L$, to which are added the respective contributions of large events:
$$
\frac{1}{\rho_L}\int_{z/2}^{R^B}L_r(z)\mathbf{1}_{\{\nu_r^B = \delta_1\}}\mu^B(dr) \qquad \mathrm{and}\qquad \frac{1}{2\rho_L}\int_{z/2}^{R^B}L_r(z)\mathbf{1}_{\{\nu_r^B= \delta_1\}}\mu^B(dr).
$$
Hence, the evolutions of both processes follow the same law outside $B(0,2R^B\psi_L)$, the contribution of large events whose area encompasses only one of the two lineages is identical even within $B(0,2R^B\psi_L)$, and coalescence occurs at a higher rate for $X^L$ than for $\hat{X}^L$. This gives us for every $L\geq 1$ and $x\in \T(L)$,
$$
\prob_x\big[t_L>\Phi_L\rho_L \big] \leq \prob_x\big[\hat{t}_L>\Phi_L\rho_L \big],
$$
where $\hat{t}_L$ is defined in an obvious manner. But Condition~(\ref{condition nu}) holds for $\hat{X}^L$, and so we can use the result obtained in the previous paragraph to complete the proof of $(a)$ when (\ref{condition nu}) does not hold.
\medskip
\noindent \textbf{Case $(b)$.} The arguments are essentially the same. First of all, since we assumed that $\rho_L$ grows to infinity as $L\rightarrow \infty$, and because
$$
\prob_x\big[t_L>\Phi_L\big]\leq \prob_x\big[t_L>\Phi'_L\big]
$$
whenever $\Phi_L\geq \Phi'_L$, we can restrict our attention to sequences $(\Phi_L)_{L\geq 1}$ such that $\rho_L^{-1}\Phi_L\rightarrow 0$ as $L\rightarrow \infty$. Let $\mathcal{E}_L$ denote the event that no large events affected any of the lineages before time $\Phi_L$. Let $\theta_{\mathrm{max}} \in (0,\infty)$ be such that the maximal rate at which at least one of the two lineages of the sample is affected by a large event is less than $\theta_{\mathrm{max}} \rho_L^{-1}$ (recall that the total rate at which at least one of two lineages is affected is smaller than twice the corresponding rate for a single lineage, which is finite and independent of the location of the lineage). For each $L\in \N$, define $e_L$ as an exponential random variable, with parameter $\theta_{\mathrm{max}} \rho_L^{-1}$. By our assumption on $\Phi_L$, we can write $$
\prob_x [\mathcal{E}_L^c]\leq \prob[e_L\leq \Phi_L] = 1-\exp\bigg\{-\frac{\theta_{\mathrm{max}} \Phi_L}{\rho_L}\bigg\}\rightarrow 0, \quad \mathrm{as\ }L\rightarrow \infty.
$$
The distribution of the process $X^L$ up to the first time at which it is affected by a large event is equal to that of $\tilde{X}^L$ (defined as the process experiencing only small events) up to the random time $e(\tilde{X}^L)$, so that if $\rho_L^{-1}\theta_{B,L}(x)$ is the rate at which at least one of two lineages at separation $x\in \T(L)$ is affected by a large event, then for each $t\geq 0$ and $y\in \T(L)$
$$
\prob_y\big[e(\tilde{X}^L)>t\big]=\E_y\bigg[\exp\bigg\{-\int_0^t \frac{\theta_{B,L}\big(\tilde{X}^L(s)\big)}{\rho_L}\ ds\bigg\}\bigg].
$$
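This identity is the usual conditional formula for the first point of a Cox process: since the large reproduction events are independent of the small events driving $\tilde{X}^L$, conditionally on the path of $\tilde{X}^L$ the large events affecting at least one of the two lineages arrive at instantaneous rate $\rho_L^{-1}\theta_{B,L}(\tilde{X}^L(s))$ at time $s$, so that
$$
\prob_y\big[e(\tilde{X}^L)>t\ \big|\ \tilde{X}^L\big]=\exp\bigg\{-\int_0^t \frac{\theta_{B,L}\big(\tilde{X}^L(s)\big)}{\rho_L}\ ds\bigg\},
$$
and the display above follows by taking expectations.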
By the definition of $\theta_{\mathrm{max}}$, we have $\theta_{B,L}\leq \theta_{\mathrm{max}}$ pointwise, and so for each $L\in \N$ the variable $e_L$ is stochastically dominated by $e(\tilde{X}^L)$. Consequently, if $\tilde{t}_L$ denotes the coalescence time associated to $\tilde{X}^L$ (or, more precisely, to the model where lineages are affected only by small events), we have for each $x\in B(0,2R^s)$
\begin{eqnarray*}
\prob_x\big[t_L\geq \Phi_L\big]&\leq & \prob_x\big[t_L\geq \Phi_L;\ \mathcal{E}_L\big] + \prob_x\big[\mathcal{E}_L^c\big]\\
&\leq & \prob_x\big[\tilde{t}_L\geq \Phi_L\big]+o(1) \quad \mathrm{as\ }L\rightarrow \infty,
\end{eqnarray*}
where the remaining terms converge to $0$ uniformly in $x\in \T(L)$. Then, an easy modification of the proof of $(a)$ with ``$\psi_L=\rho_L=1$'' yields the desired result and completes the proof of Proposition \ref{prop coal time}. $\hfill\square$
\medskip
We can now turn to the proof of Theorem \ref{theo time coal}.
\noindent \emph{Proof of Theorem \ref{theo time coal}: }
\noindent \textbf{Cases $(a)$ and $(b)$. }For $(a)$, let us define $\Phi_L$ for each $L\in \N$ by
$$
\Phi_L= \frac{\rho_LL^2}{\psi_L^2\log L}.
$$
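Note that $\Phi_L$ is negligible compared with the timescale appearing in the statement: for any fixed $t>0$,
$$
\Phi_L\ \bigg/\ \frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t\ =\ \frac{2\pi \sigma^2_B}{(1-\alpha)t\,(\log L)^2}\ \longrightarrow\ 0\qquad \mathrm{as\ }L\rightarrow \infty,
$$
so the window of length $\Phi_L$ subtracted in the decomposition below removes an asymptotically negligible portion of the time interval considered.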
Let $t>0$ and $(A_L)_{L\geq 1}$ be such that $A_L\in \GA(L,2)^*$ for each $L\in \N$. Introducing the time $T_L$ needed for the two lineages of the sample to come at distance less than $2R^B\psi_L$, we can write \setlength\arraycolsep{1pt}
\begin{eqnarray}
\prob_{A_L}&\bigg[&t_L > \frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t\bigg]\nonumber \\
& =& \prob_{A_L}\bigg[t_L>\frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t;\ T_L>\frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t- \Phi_L\bigg] \label{proof coal 1}
\\ & & + \prob_{A_L}\bigg[t_L>\frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t;\ T_L\leq \frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t- \Phi_L\bigg].\phantom{AAA}\label{proof coal 2}
\end{eqnarray}
Using the strong Markov property at time $T_L$ and the uniform convergence derived in Proposition \ref{prop coal time}$(a)$, we obtain that the expression in (\ref{proof coal 2}) tends to $0$ as $L\rightarrow \infty$ independently of the choice of $t>0$ and $(A_L)_{L\in\N}$. For (\ref{proof coal 1}), note that \setlength\arraycolsep{1pt}
\begin{eqnarray}
\bigg|\ \prob_{A_L}& \bigg[& t_L>\frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t;\ T_L>\frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t- \Phi_L\bigg] \nonumber \\
& & - \prob_{A_L}\bigg[T_L>\frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t\bigg]\bigg|\nonumber \\
&\leq & \prob_{A_L}\bigg[\frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t- \Phi_L\leq T_L \leq \frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t\bigg].\phantom{AA}\label{bound T_L}
\end{eqnarray}
Since $X^L$ (defined at the beginning of Section \ref{section coal}) has the same law as $\{\xi^L(2t),t\geq 0\}$ until the random time $T_L$, we can bound the quantity in (\ref{bound T_L}) by working directly with the latter process. In order to apply Lemma~\ref{lemm no entrance} to $\big\{\psi_L^{-1}\xi^L(2\rho_Lt),t\geq 0\big\}$, with
$$
U_L=\frac{(1-\alpha)L^2\log L}{4\pi \sigma^2_B\psi_L^2},\quad u_L=\frac{\Phi_L}{2\rho_L}=\frac{L^2}{2\psi_L^2\log L}\quad \mathrm{and} \ R=2R^B,
$$
we need to check that $U_L\psi_L^2L^{-2}\rightarrow \infty$ and $u_L\leq \frac{L^2}{\psi_L^2\sqrt{\log (L/\psi_L)}}$ (recall that this process evolves on the torus of size $\psi_L^{-1}L$). Both conditions are fulfilled here: indeed, $U_L\psi_L^2L^{-2}=\frac{(1-\alpha)\log L}{4\pi \sigma^2_B}\rightarrow \infty$, and since $\log(L/\psi_L)\leq \log L$, the second condition holds as soon as $\sqrt{\log L}\leq 2\log L$, i.e., for every $L\geq 2$. Hence, by Lemma~\ref{lemm no entrance}, the right-hand side of (\ref{bound T_L}) is bounded by
$$
\frac{C\Phi_L\psi_L^2}{\rho_LL^2}=\frac{C}{\log L}\rightarrow 0\qquad \mathrm{as\ }L\rightarrow \infty.
$$
Hence, coming back to (\ref{proof coal 1}), we can use the result of Proposition \ref{prop gathering} and the uniformity in $t>0$ and $(A_L)_{L\geq 1}$ of our estimates to obtain
$$
\lim_{L\rightarrow \infty}\sup_{t\geq 0}\sup_{A_L\in \GA(L,2)^*}\left|\ \prob_{A_L}\bigg[t_L>\frac{(1-\alpha)\rho_LL^2\log L}{2\pi \sigma^2_B\psi_L^2}\ t\bigg] - e^{-t}\right|=0.
$$
The proof of $(b)$ follows exactly the same lines, with $\Phi_L\equiv L^2(\log L)^{-1}$ and Lemma~\ref{lemm no entrance} applied to $\psi_L^{-1}\xi^L(2\psi_L^2\cdot)$.
\medskip
\noindent \textbf{Case $(c)$. }In contrast with the two previous cases, where coalescence in the limit is due to large events only, here the pair of lineages can coalesce only through a small event. To see this, let us define $T^*_L$ as the first time at which the two lineages (indexed by $L$) come at distance less than $2R^s$ from each other, and $\tau_L$ as the first time at which at least one of them is affected by a large event while they are at distance less than $2R^B\psi_L$ (i.e., while $X^L\in B(0,2R^B\psi_L)$). Note that for each $L$, $T^*_L$ and $\tau_L$ are stopping times with respect to the filtration $\{\mathcal{F}_t,\ t\geq 0\}$ associated to $\Pi_L^s\cup \Pi_L^B$ as we trace backwards in time. In addition, define $\tilde{T}_L^*$ as the entrance time of $\xi^L$ into $B(0,2R^s)$ and $\tilde{\tau}_L$ as the first time $\xi^L$ makes a jump of size $\mathcal{O}(\psi_L)$ while it is lying in $B(0,2R^B\psi_L)$. These two random times are stopping times with respect to the filtration $\{\tilde{\mathcal{F}}_t,\ t\geq 0\}$ associated to $\xi^L$. We claim that for each $L\in \N$,
\begin{equation}\label{claim identity}
\big\{X^L(t),\ t<\tau_L \wedge T^*_L\big\} \stackrel{(d)}{=} \big\{\xi^L(2t),\ 2t<\tilde{\tau}_L \wedge \tilde{T}^*_L\big\},
\end{equation}
where the notation $\stackrel{(d)}{=}$ refers to equality in distribution. Indeed, as long as $X^L$ has not entered $B(0,2R^s)$ and no large event has affected it while it lay in $B(0,2R^B\psi_L)$, coalescence events are impossible and the rates and distributions of the jumps of both processes are identical. We cannot include the terminal times in (\ref{claim identity}) since the values of the processes will differ if $\tau_L\wedge T^*_L= \tau_L$ and the corresponding event is a coalescence, but since $X^L$ and $\xi^L$ are jump processes with finite rates, we can easily see that the event $\{\tau_L\wedge T^*_L= \tau_L\}$ (resp., $\tilde{\tau}_L\wedge \tilde{T}^*_L= \tilde{\tau}_L$) is $\mathcal{F}_{(\tau_L\wedge T^*_L)-}$ (resp., $\tilde{\mathcal{F}}_{(\tilde{\tau}_L\wedge \tilde{T}^*_L)-}$) -measurable. Hence, for each $L\in \N$, $A = \wp_2(x_1,x_2)$ and $x\equiv x_1-x_2\in \T(L)$, we have
\begin{equation}\label{equality proba tau}
\prob_A\big[\tau_L<T_L^*\big]=\prob_x\big[\tilde{\tau}_L<\tilde{T}_L^*\big].
\end{equation}
Let us now bound the right-hand side of (\ref{equality proba tau}) under the assumption that $(\rho_L^{-1}\psi_L^4)_{L\in \mathbb N}$ is bounded. Analogous computations to those in the proof of Proposition \ref{prop gathering} show that $\{\xi^L(2t),t\geq 0\}$ itself satisfies Assumption~\ref{assumptions for levy processes} with $\sigma_L^2=2\sigma_s^2+o(1)$ as $L\rightarrow \infty$. Hence, Lemma \ref{lemma entrance_levy} applied with $d_L=2R^s$ gives us
\begin{equation}\label{coal small}
\lim_{L\rightarrow \infty}\ \sup_{t\geq 0}\ \sup_{x_L\in \Gamma(L,1)}\bigg|\ \prob_{x_L}\bigg[\tilde{T}_L^*>\frac{L^2\log L}{2\pi \sigma_s^2}\ t\bigg]-e^{-t}\bigg|=0.
\end{equation}
Let $\theta_{\mathrm{max}}<\infty$ be such that for every $L\in \N$, the rate at which $\xi^L$ makes a jump of size $\mathcal{O}(\psi_L)$ is bounded by $\theta_{\mathrm{max}}/\rho_L$. Fixing $\e>0$ and $K>0$ such that $e^{-2\pi \sigma_s^2 K}< \e$, we have for $L$ large enough and any sequence $(x_L)_{L\geq 1}$ such that $x_L\in \Gamma(L,1)$ for every $L$: \setlength\arraycolsep{1pt}
\begin{eqnarray}
\prob_{x_L}\big[&\tilde{\tau}_L&<\tilde{T}_L^*\big]\nonumber\\
& = & \prob_{x_L}\big[\tilde{\tau}_L<\tilde{T}_L^*\leq KL^2\log L \big]+\prob_{x_L}\big[\tilde{\tau}_L<\tilde{T}_L^*\ ;\ \tilde{T}_L^*>KL^2\log L\big]\nonumber \\
&\leq & \prob_{x_L}\big[\tilde{\tau}_L< KL^2\log L \big]+\prob_{x_L}\big[\tilde{T}_L^*>KL^2\log L\big]\nonumber \\
& \leq & \E_{x_L}\bigg[1-\exp\Big\{-\frac{\theta_{\mathrm{max}}}{\rho_L}\int_0^{KL^2\log L}\mathbf{1}_{B(0,2R^B\psi_L)}\big(\xi^L(2s)\big)ds\Big\}\bigg]+ \e. \label{eq large event}
\end{eqnarray}
Splitting the integral below into the sum $\int_0^{\psi_L^2\sqrt{\log L}}+ \int_{\psi_L^2\sqrt{\log L}}^{L^2/\sqrt{\log L}}+ \int_{L^2/\sqrt{\log L}}^{L^2\sqrt{\log L}}+ \int_{L^2\sqrt{\log L}}^{KL^2\log L}$ and using the four results of Lemma \ref{lemma local TCL}, we obtain that there exist $L_0\in \N$ and $a_1,a_2>0$, independent of $(x_L)_{L\geq 1}$ and of $K>0$, such that for every $L\geq L_0$,
$$
\E_{x_L}\bigg[\int_0^{KL^2\log L}\mathbf{1}_{B(0,2R^B\psi_L)} \big(\xi^L(2s)\big)ds\bigg]\leq (a_1+a_2K)\psi_L^2 \log L.
$$
Hence, the first term on the right-hand side of (\ref{eq large event}) is bounded by
$$
\E_{x_L}\bigg[\frac{\theta_{\mathrm{max}}}{\rho_L}\int_0^{KL^2\log L}\mathbf{1}_{B(0,2R^B\psi_L)}\big(\xi^L(2s)\big)ds\bigg] \leq \theta_{\mathrm{max}}(a_1+a_2K)\frac{\psi_L^2\log L}{\rho_L},
$$
which tends to $0$ as $L\rightarrow \infty$ (recall that $\psi_L^2\log L\ll \rho_L$ in the regime considered in $(c)$), independently of the sequence $(x_L)_{L\geq 1}$ considered. As $\e$ in (\ref{eq large event}) is arbitrary, we can conclude that $$
\lim_{L\rightarrow \infty}\sup_{x_L\in \Gamma(L,1)}\prob_{x_L}\big[\tilde{\tau}_L<\tilde{T}_L^*\big]=0,
$$
and by (\ref{equality proba tau}), the same result holds for $X^L$ and any sequence $(A_L)_{L\in \N}$ such that $A_L\in \GA(L,2)^*$ for every $L$. In words, we have obtained that with probability tending to $1$, any pair of lineages starting at distance $\mathcal{O}(L)$ from each other gather at distance $2R^s$ before having a chance to coalesce through a large reproduction event. By using the same method as in $(a)$ but this time with the result of Proposition \ref{prop coal time} $(b)$ and with Proposition \ref{prop gathering} replaced by (\ref{coal small}), we obtain the desired conclusion under the assumption that $(\rho_L^{-1}\psi_L^4)_{L\in \mathbb N}$ is bounded.
When $\rho_L\gg L^2\log L$, with probability increasing to $1$ no large events at all affect any of the lineages by the time they are gathered at distance $2R^s$ by small events. The result then follows from the same arguments, with $\xi^L$ replaced by the motion of a single lineage subject to only small reproduction events. $\hfill\square$ \medskip
\begin{remark}\label{rk last cases} Let us comment on the cases not covered by the theorem, that is, $\psi_L^4\gg \rho_L$, $\rho_L$ is of order at most $L^2\log L$, and $\rho_L^{-1}\psi_L^2\log L$ has a finite limit (possibly $0$). When the latter limit is positive, the results obtained so far indicate that coalescence events due to small and to large reproduction events occur on the same timescale and depend on the precise paths of the two lineages. Therefore, we do not expect $t_L$ to be exponentially distributed (with a deterministic parameter). When $\rho_L^{-1}\psi_L^2\log L$ tends to $0$, the same reasoning as in the proof of $(c)$ gives us that the probability that a large reproduction event causes the two lineages to coalesce before a time of order $L^2\log L$ vanishes as $L\rightarrow \infty$. However, $X^L$ does not satisfy the conditions of Section \ref{levy processes} (Assumption~\ref{assumptions for levy processes}) as it does when the assumptions of $(c)$ hold. Using instead $\ell^L\equiv \psi_L^{-1}X^L(\psi_L^2\cdot)$, the time needed for the lineages to come at distance less than $2R^s$ translates into $T(\ell^L,2R^s/\psi_L)$, which is not covered by Lemma \ref{lemma entrance_levy} and requires estimates of the entrance time of the jump process into a ball of shrinking radius, which we have been unable to obtain.
\end{remark}
\subsection{Convergence to Kingman's coalescent}
\label{subsection kingman} To complete the proof of Theorem~\ref{result alpha<1}, we now turn to the genealogy of a finite sample, starting at distance $\mathcal{O}(L)$ from each other on $\T(L)$.
We can already see from our analysis for a single pair of lineages that our spatial $\Lambda$-coalescent is similar in several respects to the coalescing random walks dual to the two-dimensional voter and stepping-stone models with short-range interactions (see e.g. \nocite{cox/griffeath:1986} \nocite{cox/griffeath:1990} Cox \& Griffeath~1986, 1990 for a study on $\Z^2$, and Cox~1989 or Z\"ahle et al.~2005 \nocite{cox:1989} \nocite{zahle/cox/durrett:2005} for examples on the tori $\T(L)\cap \Z^2$). It will therefore be no surprise that the analogy carries over to larger samples. In most of the papers cited above, the authors are interested in the sequence of processes giving the number of blocks in the ancestral partition. They show that, when the initial distance between the lineages grows to infinity, the finite-dimensional distributions of these counting processes converge to those of a pure death process corresponding to a time-change of the number of blocks of Kingman's coalescent. In Cox \& Griffeath~(1990), \nocite{cox/griffeath:1990} more elaborate arguments yield the convergence of the finite-dimensional distributions of the unlabelled genealogical processes to those of Kingman's coalescent. Instead of adding a new instance of such proofs to the literature, we shall simply explain why the same method applies to our case. This will also enable us to prove the tightness of the unlabelled genealogical processes.
\medskip
\noindent \emph{Proof of Theorem~\ref{result alpha<1}: } \textbf{(i) Convergence of the finite-dimensional distributions.}
We follow here the proofs in Cox \& Griffeath~(1986) \nocite{cox/griffeath:1986} (for the number of blocks of the ancestral partition) and Cox \& Griffeath~(1990) \nocite{cox/griffeath:1990} (for the unlabelled genealogical process of a system of coalescing simple random walks on $\Z^2$). Notice that, since we work on the tori $\T(L)$, our rescaling of time differs from Cox and Griffeath's. Another significant difference is the fact that, in their model, lineages move independently of each other until the first time two of them are on the same site, upon which they coalesce instantaneously. In our setting, the movements of lineages are defined from the same Poisson point processes, and two lineages having reached a distance that enables them to coalesce can separate again without coalescing.
Despite these differences, Lemma \ref{lemm lineages far} below shows that a key ingredient of their proof is still valid here: at the time when two lineages coalesce, the others are at distance $\mathcal{O}(L)$ from each other and from the coalescing pair. To state this result, we need some notation. Let $\tau_{ij}$ be the first time lineages $i$ and $j$ come within distance less than $2R^B\psi_L$ (resp., $2R^s$) if $\rho_L\ll \psi_L^2\log L$ (resp., $\rho_L\gg \psi_L^2\log L$) and $\tau$ be the minimum of the $\tau_{ij}$'s over all pairs considered. Let also $\tau^*_{ij}$ be the coalescence time of the ancestral lines of $i$ and $j$, and $\tau^*$ be the minimum of the $\tau^*_{ij}$ over all lineages considered. Finally, for each $i$ we shall denote the motion in $\T(L)$ of the block containing $i$ by $\xi_i^L$.
\begin{lemma}\label{lemm lineages far}Under the conditions of Theorem \ref{result alpha<1}, we have
\begin{eqnarray}
\lim_{L\rightarrow\infty}\sup_{A_L\in \GA(L,4)^*}\prob_{A_L}\bigg[\tau^*=\tau^*_{12}\ ;\ |\xi_1^L(\tau^*)-\xi_3^L(\tau^*)|\leq \frac{L}{\log L}\bigg] = 0, \label{lineage distribution 1} \\
\lim_{L\rightarrow\infty}\sup_{A_L\in \GA(L,4)^*}\prob_{A_L}\bigg[\tau^*=\tau^*_{12}\ ;\ |\xi_3^L(\tau^*)-\xi_4^L(\tau^*)|\leq \frac{L}{\log L}\bigg] = 0.
\label{lineage distribution 2}
\end{eqnarray}
\end{lemma}
The proof of Lemma \ref{lemm lineages far} is deferred to Appendix \ref{appendix 2}.
The other ingredients required to apply Cox and Griffeath's techniques are a control on the probability of ``collision'' for two lineages during a short interval of time, obtained here in Lemma \ref{lemm no entrance}, and the uniform convergence of the coalescence time of two lineages, which constitutes our Theorem \ref{theo time coal}. With these estimates, one can obtain the limiting rates of decrease of the number of blocks of $\A^{L,u}$ (namely those of the number of blocks in Kingman's coalescent), and the fact that mergers are only binary as in Cox \& Griffeath~(1986). In particular, the counterpart of their Proposition~2 here gives us that for each $n\in \N$,
\begin{equation}\label{conv holding time}
\lim_{L\rightarrow \infty}\sup_{t\geq 0}\sup_{A_L\in \GA(L,n)^*}\Big|\ \prob_{A_L}\big[|\A^{L,u}(t)|=n\big]-\exp\Big\{-\frac{n(n-1)}{2}\ t\Big\}\Big|=0,
\end{equation}
which we state here because we shall need it for the case $\alpha=1$ (observe that our $L$ corresponds to their $t$). Note that in Proposition 2 of Cox \& Griffeath~(1986), the right-hand side of their equation gives the probability that the number of blocks is less than $n$, instead of equal to $n$ as stated. Furthermore, in (\ref{conv holding time}) the supremum is over $t\geq 0$ instead of $t\in [0,T]$ for some $T>0$ (as in Cox \& Griffeath~1986). Our argument for this modification is that the two quantities being compared are monotone decreasing in $t$ and both tend to $0$ as $t\rightarrow \infty$, which upgrades the uniform convergence on compact time intervals to uniform convergence on $[0,\infty)$.
Then, the same arguments lead to the proof that any pair of lineages is equally likely to be the first one to coalesce, as in Lemma 1 of Cox \& Griffeath~(1990). The uniformity of the estimates obtained enables us to proceed by induction to show the uniform convergence (on a compact time-interval) of the one-dimensional distributions of $\A^{L,u}$ to those of $\mathcal{K}$, which translate into the uniform convergence of the finite-dimensional distributions, still on intervals of the form $[0,T]$. We refer to \nocite{cox/griffeath:1990} Cox \& Griffeath~(1990) for the complete proof of these results.
\medskip
\noindent \textbf{(ii) Tightness.}
This follows easily from the fact that the labelled partition $\A^L$ with initial value in $\GA(L,n)^*$ for some $n\in \N$ lies in $\GA(L,n)$ immediately after each coalescence event, with probability tending to $1$. Indeed, for each $L\in \N$, let $\gamma_1^L< \ldots < \gamma_{n-1}^L$ be the ranked epochs of jumps of $\A^{L,u}$ (if fewer than $n-1$ jumps occur, then the last times are equal to $+\infty$ by convention). Let also $n\in \N$, $A_L\in \GA(L,n)^*$ for every $L\geq 1$, and following Ethier \& Kurtz~(1986), \nocite{ethier/kurtz:1986} for every $\delta,T>0$ let $w'(\A^{L,u},T,\delta)$ denote the modulus of continuity of the process $\A^{L,u}$ on the time interval $[0,T]$ and with time-step $\delta$. Let $\e>0$. With the convention that $(+\infty)-(+\infty)=+\infty$, we have
\begin{equation}\label{eq tightness}
\prob_{A_L}\big[w'(\A^{L,u},T,\delta)>\e\big]\leq \sum_{k=2}^n \prob_{A_L}[\gamma_k^L-\gamma_{k-1}^L<\delta].
\end{equation}
An easy recursion using the fact that we consider only finitely many lineages and the uniform bounds obtained in Lemma \ref{lemm lineages far} enables us to write that for all $k\in \{1,\ldots,n-1\}$,
$$
\sup_{A_L'\in \GA(L,n)^*}\prob_{A_L'}[\gamma_k^L<\infty\ ;\ \A^L(\vp_L\gamma_k^L)\notin \GA(L,n)]\rightarrow 0,\qquad \mathrm{as\ }L\rightarrow \infty.
$$
This result and an application of the strong Markov property at time $\gamma_{k-1}^L$ yield
\begin{eqnarray}
\prob_{A_L}[\gamma_k^L-\gamma_{k-1}^L<\delta]&=&\E_{A_L}[\ind{\A^L(\vp_L\gamma_{k-1}^L)\in \GA(L,n)}\prob_{\A^L(\vp_L\gamma_{k-1}^L)}[\gamma_1^L<\delta]]+o(1)\nonumber\\
&\leq& \frac{(n-k+1)(n-k)}{2} \sup_{A'_L\in \GA(L,2)^*}\prob_{A'_L}[\gamma_1^L<\delta]+o(1)\label{bound w}
\end{eqnarray}
as $L\rightarrow \infty$, where the last line uses the consistency of the genealogy to bound the probability that a first coalescence event occurs to the sample of lineages before time $\delta$ by the sum over all pairs of lineages of this sample of the probability that they have coalesced by time $\delta$ (note that at most $n-k+1$ blocks, and hence at most $(n-k+1)(n-k)/2$ pairs, remain just after $\gamma_{k-1}^L$). But these probabilities converge uniformly to $1-e^{-\delta}$ by Theorem \ref{theo time coal}, and so for $\delta$ small enough, we can make the right-hand side of (\ref{bound w}) less than $\e/(n^3)$ for $L$ large enough ($n$ is fixed here). Coming back to (\ref{eq tightness}), this gives us
$$
\limsup_{L\rightarrow \infty} \prob_{A_L}[w'(\A^{L,u},T,\delta)>\e]\leq \e.
$$
Since ${\mathcal P}_n$ is a compact metrisable space, we can apply Corollary 3.7.4 in Ethier \& Kurtz~(1986) to complete the proof.$\hfill\square$
\section{Proof of Theorem~\ref{result alpha=1}}
\label{alpha=1}
We now turn to the case $\psi_L\propto L$. We still have small reproduction events of size $\mathcal{O}(1)$, but now large events have sizes $\mathcal{O}(L)$ (and rate $\mathcal{O}(\rho_L^{-1})$), so that they cover a non-negligible fraction of the torus $\T(L)$. By Lemma \ref{lemma local TCL}, if the lineages were only subject to small reproduction events, the location of a single lineage would be nearly uniformly distributed on $\T(L)$ after a time $t\gg L^2$. This suggests several limiting behaviours for the genealogical process $\A^L$, according to how $\rho_L$ scales with $L^2$:
\begin{itemize}
\item If $\rho_L$ is of order at most $L^2$, then large reproduction events occur at times when the locations of the lineages are still correlated with their starting points, and so we expect space (i.e., labels in the representation we adopted) to matter in the evolution of $\A^L$.
\item If $L^2 \ll \rho_L \ll L^2\log L$, then the lineages have the time to homogenise their locations over $\T(L)$ before the first large event occurs, but not to come at distance $2R^s$ from each other. Hence, large events should affect lineages independently of each other, and bring the genealogy down to the common ancestor of the sample before any pair of lineages experiences a coalescence due to small events.
\item If $\rho_L\approx L^2 \log L$, the fact that pairs of lineages now have the time to gather at distance $2R^s$ should add a Kingman part (i.e., almost surely binary mergers) to the genealogical process obtained in the previous point.
\item If $\rho_L\gg L^2\log L$, Kingman's coalescent due to small reproduction events should bring the ancestry of a sample of lineages down to a single lineage before any large event occurs, so that the limiting genealogy will not see these large events.
\end{itemize}
\emph{Proof of Theorem~\ref{result alpha=1}: }For $(a)$, let us write down the generator $\ov{\G}_L$ of $\bar{\A}^L$ applied to functions of the $\T(1)$-labelled partitions of $\{1,\ldots,n\}$. Recall the notation $x_a$ for the label of the block $a$ of a labelled partition $A\in {\mathcal P}_n^{\ell}$ (introduced in Notation \ref{notation for partitions}), and write $|A|$ for the number of blocks of $A$. For each $L\geq 1$, each $f$ of class $C^3$ with respect to the labels, and each $A\in {\mathcal P}_n^{\ell}$ such that any pair $(a_1,a_2)$ of blocks of $A$ satisfies $|x_{a_1}-x_{a_2}|\geq (2R^s)/L$, we have
\begin{eqnarray}
\ov{\G}_Lf(A)& = & \rho_L\sum_{i=1}^{|A|} \int_{\T(L)} dy \int_0^{R^s}\mu^s(dr)\frac{L_r(y)}{\pi r^2}\int_0^1\nu^s_r(du)u \nonumber \\
& & \qquad \quad \times \Big[f\Big(A\setminus \big\{(a_i,x_{a_i})\big\}\cup \Big\{\Big(a_i,x_{a_i}+\frac{y}{L}\Big)\Big\}\Big)-f(A)\Big] + \G^{(B)}(A),\phantom{AAA} \label{generator a}
\end{eqnarray}
where we wrote $A=\big\{(a_1,x_{a_1}),\ldots,(a_{|A|},x_{a_{|A|}})\big\}$ and \setlength\arraycolsep{1pt}
\begin{eqnarray*}
&\G^{(B)}&(A)\\
& =& \frac{1}{c^2}\int_{\T(1)} dz\int_0^{(\sqrt{2})^{-1}}\mu^B(dr)\int_{B(z,cr)}\frac{dy}{V_{cr}}\sum_{I\subset \{1,\dots,|A|\}}\prod_{i\in I}\ind{x_i\in B(z,cr)}\prod_{j\notin I}\ind{x_j\notin B(z,cr)}\\
& \ \ \times& \sum_{J\subset I}\int_0^1u^{|J|}(1-u)^{|I|-|J|}\nu^B_r(du)\bigg[f\bigg(A\setminus\Big(\bigcup_{i\in J}\{(a_i,x_{a_i})\}\Big)\cup\Big\{\Big(\bigcup_{i\in J}a_i,y\Big)\Big\}\bigg)-f(A)\bigg]
\end{eqnarray*}
is the generator of the coalescence events due to large reproduction events (recall $V_r$ is the volume of the ball $B_{\mathbb T(1)}(0,r)$). Note that $\G^{(B)}$ does not depend on $L$. Let us look at a particular term in the sum on the right-hand side of (\ref{generator a}). Since $f$ is of class $C^3$ with respect to the labels of the blocks, a Taylor expansion and the symmetry of the jumps due to small events give us
\begin{eqnarray*}
\rho_L\int & dy & \int_0^{R^s}\mu^s(dr)\frac{L_r(y)}{\pi r^2}\int_0^1\nu^s_r(du)u \Big[f\Big(A\setminus \big\{(a_i,x_{a_i})\big\}\cup \Big\{\Big(a_i,x_{a_i}+\frac{y}{L}\Big)\Big\}\Big)-f(A)\Big] \\
& = & \frac{\rho_L}{L^2}\ \frac{\sigma^2_s}{2}\ \Delta_i f(A) +\mathcal{O}\Big(\frac{\rho_L}{L^3}\Big),
\end{eqnarray*}
where $\Delta_i$ is the Laplacian operator on $\T(1)$ applied to the label of the block $a_i$ only. Since $\rho_LL^{-2}\rightarrow b\in [0,\infty)$ by assumption and because $f$ is continuous on a compact space, we obtain that $\ov{\G}_Lf$ defined on the compact set $E_L\equiv \big\{A\in {\mathcal P}_n^{\ell}: L|x_{a_i}-x_{a_j}|\geq 2R^s\ \forall i\neq j\}$ converges uniformly towards
$$
\ov{\G}f(A)\equiv \frac{b\sigma_s^2}{2} \sum_{i=1}^{|A|}\Delta_i f(A) + \G^{(B)}f(A).
$$
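To spell out the Taylor step above, write $m^s(dy)\equiv \big(\int_0^{R^s}\mu^s(dr)\frac{L_r(y)}{\pi r^2}\int_0^1u\,\nu^s_r(du)\big)dy$ for the intensity measure appearing in the corresponding integral; it has bounded support, and it is isotropic since $L_r(y)$ depends only on $|y|$. A second-order Taylor expansion in the label of $a_i$ gives
$$
f\Big(A\setminus \big\{(a_i,x_{a_i})\big\}\cup \Big\{\Big(a_i,x_{a_i}+\frac{y}{L}\Big)\Big\}\Big)-f(A)=\frac{y\cdot \nabla_i f(A)}{L}+\frac{y^{\mathsf{T}}\,\mathrm{Hess}_i f(A)\,y}{2L^2}+\mathcal{O}\Big(\frac{1}{L^3}\Big),
$$
where $\nabla_i$ and $\mathrm{Hess}_i$ act on $x_{a_i}$ only. Integrating against $\rho_L\,m^s(dy)$, the first-order term and the off-diagonal second-order terms vanish by isotropy, while the diagonal terms contribute $\frac{\rho_L}{2L^2}\,\Delta_i f(A)\int y_1^2\,m^s(dy)$; here we read off the normalisation $\sigma^2_s=\int y_1^2\,m^s(dy)$, which we take to be the one fixed in the earlier sections.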
Now, by the same technique as in Section \ref{levy processes}, one can prove that the gathering time at distance $2R^s$ of two lineages starting at distance $\mathcal{O}(L)$ on $\T(L)$ and subject only to small events converges uniformly on the time scale $\frac{L^2\log L}{\pi \sigma_s^2}$ to an $\mathrm{Exp}(1)$ random variable (in the sense of Lemma \ref{lemma entrance_levy}). In addition, since the new location of a lineage affected by a large event is chosen uniformly over a ball of $\T(L)$ whose radius is of order $L$, if a large event affects a pair of lineages but does not lead to their coalescence, then the probability that the lineages are at distance less than $L(\log L)^{-1}$ just after the event vanishes as $L\rightarrow \infty$. If we call $\check{T}^*_L$ the first time at which two lineages on $\T(L)$ are gathered at distance $2R^s$ and $t_L^*$ their coalescence time in the original timescale, we readily obtain that for any $u>0$ and any $(x'_1,x'_2)\in \T(1)^2$ with $x'_1\neq x'_2$,
$$
\lim_{L\rightarrow \infty}\prob_{\wp_2(Lx'_1,Lx'_2)}\big[t_L^*>\check{T}^*_L\ ;\ \check{T}^*_L\leq \rho_Lu\big]= 0.
$$
Indeed, as we already mentioned, if a large event does not make the lineages coalesce then with probability tending to one, the latter start at separation $\mathcal{O}(L)$ and do not have the time to meet at distance $2R^s$ before the next large event. Now, the number of large reproduction events that the pair of lineages experiences before time $\rho_Lu$ can be stochastically bounded by a Poisson random variable whose parameter is finite and independent of $L$. Hence, if none of them leads to a coalescence then with probability tending to $1$, $\check{T}^*_L> \rho_Lu$. It follows that, if $u>0$ is fixed, we can use the consistency of the genealogy and write
$$
\prob_{\wp_n(L\mathbf{x})}[\exists t\in [0,u]: \bar{\A}^L(t)\notin E_L]\leq \sum_{1\leq i< j\leq n}\prob_{\{(\{i\},Lx_i),(\{j\},Lx_j)\}}\big[t_L^*>\check{T}^*_L ; \check{T}^*_L\leq \rho_Lu\big] \rightarrow 0.
$$
Consequently, one can use Corollary 4.8.7 in Ethier \& Kurtz~(1986) (with $E_L$ as the subspace of interest in condition $(f)$) to conclude that the law under $\prob_{\wp_n(L\mathbf{x})}$ of $\bar{\A}^L$ converges to that of $\bar{\A}^{\infty,b,c}$ as processes in the Skorohod space of all c\`adl\`ag paths with values in the $\T(1)$-labelled partitions of $\{1,\ldots,n\}$.
Let us now prove $(b)$. Recall the assumption that the total rate at which large events occur is finite, that is $M\equiv c^{-2}\mu^B([0,(\sqrt{2})^{-1}])<\infty$. Let us first analyse what happens during the first event which may affect the unlabelled ancestral partition.
Define for each $L\geq 1$ the stopping time $e^L_1$ by the following property: $\rho_Le_1^L$ is the first time on the original timescale at which either a large event occurs, or $\A^L$ undergoes a coalescence event due to small reproduction events. Since large and small reproduction events are independent, $\rho_Le_1^L$ has the same distribution as the minimum of the following two independent random times:
\begin{itemize}
\item the first time of occurrence of a large event, that is an $\mathrm{Exp}\big(M/\rho_L\big)$-random variable.
\item the time $t^*_L$ at which a first coalescence event occurs between lineages of the genealogical process $\tilde{\A}^L$ evolving only owing to small reproduction events.
\end{itemize}
By (\ref{conv holding time}) applied to the case $\rho_L\equiv +\infty$ (i.e., no large events occur), $\frac{2\pi \sigma^2_s}{L^2\log L}\ t^*_L$ converges to an $\mathrm{Exp}\big(n(n-1)/2\big)$-random variable under $\prob_{A_L}$, uniformly in $(A_L)_{L\in \N}$ such that $A_L\in \GA(L,n)^*$ for every $L$. It is then straightforward to obtain
\begin{equation}\label{conv coal times}
\lim_{L\rightarrow \infty}\sup_{t\geq 0}\sup_{A_L\in \GA(L,n)^*}\bigg|\ \prob_{A_L}[e_1^L>t] - \exp\Big(-\Big\{M+\beta \frac{n(n-1)}{2}\Big\}t\Big)\bigg|=0,
\end{equation}
where the formulation is also valid for $\beta =0$. Also, by the independence of $\Pi_L^s$ and $\Pi_L^B$, for every $(A_L)_{L\in \N}$ as above we have (with an abuse of notation)
$$
\prob_{A_L}\big[\rho_Le^L_1 = t_L^*\big]= \E_{A_L}\Big[\exp\Big\{-\frac{M}{\rho_L}\ t_L^*\Big\}\Big].
$$
Using Fubini's theorem and a change of variable, we can write
\begin{eqnarray*}
\E_{A_L}\Big[\exp\Big\{-\frac{M}{\rho_L}\ t_L^*\Big\}\Big] &=& \int_0^1 \prob_{A_L}\Big[\exp\Big\{-\frac{M}{\rho_L}\ t_L^*\Big\}>s\Big]ds\\
&=& \int_0^1 \prob_{A_L}\bigg[\frac{2\pi \sigma_s^2}{L^2\log L}\ t_L^*< -\frac{2\pi\sigma^2_s\rho_L}{ML^2\log L}\ \log s\bigg]ds\\
&=& \frac{ML^2\log L}{2\pi \sigma_s^2\rho_L}\int_0^{\infty}e^{-\frac{ML^2\log L}{2\pi \sigma_s^2\rho_L} u}\ \prob_{A_L}\bigg[\frac{2\pi \sigma_s^2}{L^2\log L}\ t_L^*<u\bigg]du\\
&=& 1- \frac{ML^2\log L}{2\pi \sigma_s^2\rho_L}\int_0^{\infty}e^{-\frac{ML^2\log L}{2\pi \sigma_s^2\rho_L} u}\ \prob_{A_L}\bigg[\frac{2\pi \sigma_s^2}{L^2\log L}\ t_L^*\geq u\bigg]du.
\end{eqnarray*}
When $\beta>0$, we have $\frac{ML^2\log L}{2\pi \sigma_s^2\rho_L}\rightarrow \frac{M}{\beta}$ and so we can use the uniform convergence derived in (\ref{conv holding time}) and the fact that the distribution of $t_L^*$ does not charge points to conclude that
$$
\lim_{L\rightarrow \infty}\sup_{A_L\in \GA(L,n)^*}\bigg|\prob_{A_L}\big[\rho_Le_1^L = t_L^*\big]-\frac{\beta n(n-1)}{\beta n(n-1)+2M}\bigg|=0.
$$
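Indeed, using $\frac{ML^2\log L}{2\pi \sigma_s^2\rho_L}\rightarrow M/\beta$, (\ref{conv holding time}) and dominated convergence in the last line of the computation above, we find
$$
\E_{A_L}\Big[\exp\Big\{-\frac{M}{\rho_L}\ t_L^*\Big\}\Big]\ \longrightarrow\ 1-\frac{M}{\beta}\int_0^{\infty}e^{-\frac{M}{\beta}u}\,e^{-\frac{n(n-1)}{2}u}\,du\ =\ 1-\frac{2M}{2M+\beta n(n-1)}\ =\ \frac{\beta n(n-1)}{\beta n(n-1)+2M}.
$$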
The limit holds also for $\beta=0$ by a trivial argument. A byproduct of this result is the existence of a constant $C_0>0$ and $L_0\in \N$ such that, for all $L\geq L_0$ and $(A_L)_{L\in \N}$ as above, $\prob_{A_L}[\rho_Le_1^L < t_L^*]\geq C_0$. We shall need this fact in the next paragraph.
By Theorem \ref{result alpha<1} in the case $\rho_L\equiv\infty$, up to an error term tending uniformly to $0$, on the event $\big\{\rho_Le_1^L=t_L^*\big\}$ the transition occurring to $\A^{L,u}$ at time $\rho_Le_1^L$ is the coalescence of a pair of blocks, each pair having the same probability to be the one which coalesces. Let us show that, conditioned on $\big\{\rho_Le_1^L < t^*_L\big\}$, the locations of the lineages at time $(\rho_Le_1^L)-$ are approximately distributed as $n$ independent uniform random variables on $\T(L)$. We use again the notation $\tau_{ij},\tau_{ij}^*$ and $\tau,\tau^*$($=t_L^*$ here) introduced in the proof of Theorem \ref{result alpha<1} for the gathering time at distance $2R^s$ and the coalescence time of lineages $i$ and $j$, and their minima (once again on the original timescale). These quantities depend on $L$ but, for the sake of clarity, we do not reflect that in our notation. In order to use our results on L\'evy processes, we need to make sure that no pairs of lineages have come at distance less than $2R^s$ before time $\rho_Le_1^L$. We have for each $L\in \N$
\begin{equation}\label{eq big event first}
\prob_{A_L}\big[\tau <\rho_Le_1^L\big|\ \rho_Le_1^L< t_L^*\big]\leq \sum_{1\leq i<j\leq n} \prob_{A_L}\big[\tau_{ij}<\rho_Le_1^L\big|\ \rho_Le_1^L< t_L^*\big].
\end{equation}
Each term $(i,j)$ on the right-hand side of (\ref{eq big event first}) is bounded by \setlength\arraycolsep{1pt}
\begin{eqnarray}
\prob_{A_L}\big[&\tau_{ij}&<\rho_Le_1^L-\log L\big|\ \rho_Le_1^L< t_L^*\big] \nonumber\\ & &\qquad +\prob_{A_L}\big[\rho_Le_1^L- \log L \leq \tau_{ij}<\rho_Le_1^L\big|\ \rho_Le_1^L< t_L^*\big]\nonumber \\
&\leq & C_0^{-1}\Big\{\prob_{A_L}\big[\tilde{\tau}_{ij}^* >\tilde{\tau}_{ij}+ \log L\big] + \prob_{A_L}\big[\tilde{\tau}_{ij}\in [\varsigma_L-\log L,\varsigma_L)\big]\Big\}, \label{independence}
\end{eqnarray}
where for each $L\in \N$, $\varsigma_L$ is an $\mathrm{Exp}(M/\rho_L)$-random variable independent of all other variables, and $\tilde{\tau}_{ij}$ and $\tilde{\tau}_{ij}^*$ are defined as above, but for the process $\tilde{\A}^L$. By the strong Markov property applied at time $\tilde{\tau}_{ij}$ and the result of Proposition \ref{prop coal time} $(b)$, the first term on the right-hand side of (\ref{independence}) converges to $0$ uniformly in $A_L\in \GA(L,n)^*$. By a simple change of variable, the second term is equal to
$$
M\int_0^{\infty} e^{-Ms}\ \prob_{A_L} \big[\tilde{\tau}_{ij}\in [\rho_Ls-\log L,\rho_Ls)\big]ds \leq M\int_0^{\infty} e^{-Ms}\ C\ \frac{\log L}{L^2}\ ds\ \rightarrow\ 0,
$$
where the inequality comes from Lemma \ref{lemm no entrance}. Therefore, back to (\ref{eq big event first}) we obtain that
\begin{equation}\label{independence motions tilde A}
\lim_{L\rightarrow \infty}\sup_{A_L\in \GA(L,n)^*}\prob_{A_L}\big[\tau<\rho_Le_1^L\big|\ \rho_Le_1^L< t_L^*\big]= 0.
\end{equation}
Now, let $D_1,\ldots,D_n$ be $n$ measurable subsets of $\T(1)$, and for each $i\in \{1,\ldots,n\}$ and $L\geq 1$, let $LD_i\subset \T(L)$ be the dilation of $D_i$ by a factor $L$. Let us show that
\begin{eqnarray}
\lim_{L\rightarrow \infty}\sup_{A_L\in \GA(L,n)^*}\Big|\ \prob_{A_L}&\Big[&(\xi_1^L,\ldots,\xi_n^L)(\rho_Le_1^L-)\in (LD_1)\times \ldots \times (LD_n)\big|\rho_Le_1^L< t^*_L\Big]\nonumber \\& & \qquad \qquad \qquad \qquad\qquad \qquad \qquad - \prod_{i=1}^n \mathrm{Leb}(D_i)\Big|=0,\label{eq homogen}
\end{eqnarray}
where $\xi_i^L(t)$ denotes the location of the $i$-th lineage of $\A^L$ at time $t$. To do so, let us use the fact that on the event $\big\{\rho_Le_1^L< t^*_L\big\}$, the genealogical process $\A^L$ up to time $\rho_Le_1^L$ has the same distribution as $\tilde{\A}^L$ up to time $\varsigma_L$ and on the event $\{\tilde{\tau}^*>\varsigma_L\}$. We have \setlength\arraycolsep{1pt}
\begin{eqnarray}
&\prob_{A_L}&\Big[(\xi_1^L,\ldots,\xi_n^L)(\rho_Le_1^L-)\in \prod_{i=1}^n(LD_i)\Big|\rho_Le_1^L< t^*_L\Big]\nonumber\\
&=&\frac{1}{\prob_{A_L}[\rho_Le_1^L< t_L^*]}\ \prob_{A_L}\Big[(\xi_1^L,\ldots,\xi_n^L)(\rho_Le_1^L-)\in \prod_{i=1}^n(LD_i);\ \rho_Le_1^L< t^*_L\Big]\nonumber\\
&=& \frac{1}{\prob_{A_L}[\rho_Le_1^L< t_L^*]}\ \prob_{A_L}\Big[(\tilde{\xi}_1^L,\ldots,\tilde{\xi}_n^L)(\varsigma_L-)\in \prod_{i=1}^n(LD_i);\ \varsigma_L< \tilde{\tau}^*\Big]\nonumber \\
&=& \frac{1}{\prob_{A_L}[\rho_Le_1^L< t_L^*]}\ \prob_{A_L}\Big[(\tilde{\xi}_1^L,\ldots,\tilde{\xi}_n^L)(\varsigma_L-)\in \prod_{i=1}^n(LD_i);\ \varsigma_L< \tilde{\tau}\Big]+\eta_L(A_L)\nonumber \\
&=&\frac{M}{\prob_{A_L}[\rho_Le_1^L< t_L^*]}\ \int_0^\infty ds\ e^{-Ms} \prob_{A_L}\Big[(\tilde{\xi}_1^L,\ldots,\tilde{\xi}_n^L)(\rho_Ls-)\in \prod_{i=1}^n(LD_i);\nonumber \\
& & \qquad \qquad \qquad \qquad\qquad \qquad \qquad \qquad\qquad \tilde{\tau}>\rho_Ls\Big]+\eta_L(A_L), \label{au1}
\end{eqnarray}
where $\eta_L(A_L)$ tends to $0$ uniformly in $(A_L)_{L\in \N}$ by (\ref{independence motions tilde A}) and the fact that $\prob_{A_L}\big[\rho_Le_1^L< t_L^*\big]$ does not vanish.
Let us fix $s>0$ for a moment, and consider the corresponding probability within the integral. Up to time $\tilde{\tau}$, the movements of the lineages are distributed as $n$ independent copies $\hat{\xi}^L_1,\ldots,\hat{\xi}^L_n$ of the motion of a single lineage, for which an easy modification of Lemma \ref{lemma local TCL} $(b)$ tells us that, if $(\e_L)_{L\in \N}$ is such that $\e_L\rightarrow 0$ but $\e_L\rho_L\gg L^2$ as $L\rightarrow \infty$,
\begin{equation}\label{asymptotic uniformity}
\lim_{L\rightarrow \infty}\ \sup_{v\geq \e_L}\ \sup_{x\in \T(L)}\ \left|\prob_{x}\big[\hat{\xi}^L(v\rho_L)\in (LD)\big]-\mathrm{Leb}(D)\right|=0.
\end{equation}
However, it is not entirely clear that this convergence will still hold for $n$ independent lineages on the event $\{\hat{\tau}>\rho_Ls\}$ (where $\hat{\tau}$ is the first time at which at least two of them come at distance less than $2R^s$). Keeping the notation $A_L$ for the initial value of the set of lineages and denoting the set of $n$ (non-coalescing) motions by $\hat{\A}^L$, we have
\begin{eqnarray*}
&\prob_{A_L}&\big[(\hat{\xi}_1^L,\ldots,\hat{\xi}_n^L)(\rho_L s-)\in (LD_1)\times \ldots \times (LD_n);\ \hat{\tau}\leq \rho_Ls\big]\\
&=& \E_{A_L}\bigg[\ind{\hat{\tau}\leq \rho_Ls}\ \prob_{\hat{\A}^L(\hat{\tau})}\Big[(\hat{\xi}_1^L,\ldots,\hat{\xi}_n^L)\big((\rho_L s-\hat{\tau})-\big)\in (LD_1)\times \ldots \times (LD_n)\Big]\bigg].
\end{eqnarray*}
Splitting the preceding integral into $\big\{\rho_L(s-\e_L)\leq \hat{\tau}\leq \rho_Ls\big\}$ and $\big\{\hat{\tau}<\rho_L(s- \e_L)\big\}$, we can use (\ref{asymptotic uniformity}) in the latter case to write \setlength\arraycolsep{1pt}
\begin{eqnarray}
\E_{A_L}&\bigg[&\ind{\hat{\tau}\leq \rho_Ls}\ \prob_{\hat{\A}^L(\hat{\tau})}\Big[(\hat{\xi}_1^L,\ldots,\hat{\xi}_n^L)\big((\rho_L s-\hat{\tau})-\big)\in \prod_{i=1}^n(LD_i)\Big]\bigg]\nonumber\\
& & =\E_{A_L}\bigg[\ind{\rho_L(s-\e_L)\leq \hat{\tau}\leq \rho_Ls}\ \prob_{\hat{\A}^L(\hat{\tau})}\Big[(\hat{\xi}_1^L,\ldots,\hat{\xi}_n^L)\big((\rho_L s-\hat{\tau})-\big)\in \prod_{i=1}^n(LD_i)\Big]\bigg]\nonumber\\
& & \ \ + \Big(\prod_{i=1}^n \mathrm{Leb}(D_i)\Big)\prob_{A_L}[\hat{\tau}<\rho_L(s- \e_L)]+ \delta_L(A_L), \label{au2}
\end{eqnarray}
where $(\delta_L(A_L))_{L\in \N}$ tends to zero uniformly in $(A_L)_{L\in \N}$ as $L$ tends to infinity (we still impose that $A_L\in \GA(L,n)^*$ for every $L$). By the convergence of the distribution function of $\frac{\tilde{\tau}}{L^2\log L}$ to that of an exponential random variable, uniformly in the time variable and in $(A_L)_{L\in \N}$, we obtain that $\prob_{A_L}[\rho_L(s-\e_L)\leq \hat{\tau}\leq \rho_Ls]$ converges to $0$ uniformly in $(A_L)_{L\in \N}$ (which is also true if $\beta=0$, i.e., $\rho_L\ll L^2\log L$). Hence, we can find a sequence $(\delta'_L(A_L))_{L\in\N}$ decreasing to $0$ uniformly in $(A_L)_{L\in \N}$, such that the whole sum on the right-hand side of (\ref{au2}) is equal to
$$
\Big(\prod_{i=1}^n \mathrm{Leb}(D_i)\Big)\prob_{A_L}[\hat{\tau}\leq \rho_Ls ]+ \delta'_L(A_L).
$$
Likewise, we can find another sequence $(\delta''_L(A_L))_{L\in \N}$ decreasing to zero uniformly in $(A_L)_{L\in \N}$ such that
$$
\prob_{A_L}\big[(\hat{\xi}_1^L,\ldots,\hat{\xi}_n^L)(\rho_Ls-)\in (LD_1)\times \ldots \times (LD_n)\big]= \prod_{i=1}^n \mathrm{Leb}(D_i) + \delta''_L(A_L).
$$
Subtracting the two last equalities, we obtain
$$
\prob_{A_L}\Big[(\hat{\xi}_1^L,\ldots,\hat{\xi}_n^L)(\rho_L s-)\in \prod_{i=1}^n(LD_i); \hat{\tau}> \rho_Ls\Big]= \bigg\{\prod_{i=1}^n \mathrm{Leb}(D_i)\bigg\}\prob_{A_L}[\hat{\tau}> \rho_Ls ]+ o(1),
$$
where the remainder decreases to $0$ uniformly in $s>0$ and $(A_L)_{L\geq 1}$ such that $A_L\in \GA(L,n)^*$ for each $L$. Coming back to (\ref{au1}), we obtain that it is equal to \setlength\arraycolsep{1pt}
\begin{eqnarray*}
\frac{M}{\prob_{A_L}[\rho_Le_1^L< t_L^*]}\int_0^\infty& ds& e^{-Ms} \bigg\{\Big(\prod_{i=1}^n \mathrm{Leb}(D_i)\Big)\prob_{A_L}[\tilde{\tau}> \rho_Ls ] +o(1)\bigg\}\\
&= & \frac{\prob_{A_L} [\tilde{\tau}> \varsigma_L]}{\prob_{A_L}[\tilde{\tau}^*>\varsigma_L]}\ \prod_{i=1}^n \mathrm{Leb}(D_i) + o(1) \\
&=& \frac{\prob_{A_L} [\tilde{\tau}^* > \varsigma_L]+o(1)}{\prob_{A_L}[\tilde{\tau}^*>\varsigma_L]}\ \prod_{i=1}^n \mathrm{Leb}(D_i)+o(1),
\end{eqnarray*}
where the last line uses (\ref{independence motions tilde A}). We can thus conclude that (\ref{eq homogen}) holds.
Now condition on the first event being a large reproduction event. By the description of such an event, the resulting transition of the genealogical process is the merger of at most one group of blocks into a single bigger block. Furthermore, the transitions depend only on the number of blocks and their labels, so for convenience we derive the transition probabilities for $A_L$ of the form $\wp_n(\mathbf{x})$ only, although we shall use the result later for more general labelled partitions. Let $\pi$ be a partition of $\{1,\ldots,n\}$ such that $\pi$ has exactly one block of size greater than $1$, which we call $J$. Then, if the large event has centre $x$ and radius $cr$ in $\T(1)$, the probability that the transition undergone by $\A^{L,u}$ is $\wp_n\rightarrow \pi$ is the probability that, at this time, all the lineages in $J$ have labels in $B(x,cr)$ and are actually affected by the event, while all the other lineages present in $B(x,cr)$ are left unaffected. Summing over all possible choices $I\subset \{1,\ldots,n\}\setminus J$ for these ``other lineages'' ($I$ can be empty) and using (\ref{eq homogen}), the probability of the transition $\wp_n\rightarrow \pi$ up to a vanishing error is given by \begin{eqnarray}
\sum_{I}&V_{cr}^{|J|+|I|}&(1-V_{cr})^{n-|J|-|I|}\int_0^1u^{|J|}(1-u)^{|I|}\nu_r^B(du)\nonumber\\
&=& \int_0^1 (uV_{cr})^{|J|}\sum_{i=0}^{n-|J|}\binom{n-|J|}{i}V_{cr}^i(1-V_{cr})^{n-|J|-i}(1-u)^i \nu_r^B(du) \nonumber \\
&=& \int_0^1 (uV_{cr})^{|J|}((1-u)V_{cr}+1-V_{cr})^{n-|J|}\nu_r^B(du)\nonumber \\
&=& \int_0^1 (uV_{cr})^{|J|}(1-uV_{cr})^{n-|J|}\nu_r^B(du).\label{transition lambda}
\end{eqnarray}
We now have the results we need to show $(b)$. For every $L\in \N$, let us consider again the time $e_1^L$ introduced earlier, and define for each integer $i\geq 2$, \begin{eqnarray*}
e_i^L= \inf\big\{t> e_{i-1}^L\ &:&\ \rho_Lt\in \Pi_L^B\mathrm{\ or\ } \rho_Lt \mathrm{\ is\ the\ epoch\ of\ a\ coalescence} \\
& & \mathrm{due\ to\ small\ events}\big\}.
\end{eqnarray*}
Let us also define similar times corresponding to $\Lambda^{(\beta,c)}$. From the expression of its rates given in Definition \ref{def lambda coal}, $\Lambda^{(\beta,c)}$ is composed of a Kingman part (i.e., only binary mergers) run at rate $\beta$, and of a set of multiple mergers due to the part $\Lambda^{(0)}$ of its $\Lambda$-measure with the atom at $0$ removed. Furthermore, the finite measure $\Lambda^{(0)}$ on $[0,1]$ is given by
\begin{eqnarray*}
\Lambda^{(0)}(dv)&= &c^{-2}v^2\int_0^{(\sqrt{2})^{-1}}\nu^B_r\big(\big\{u:\ uV_{cr}\in dv\big\}\big)\mu^B(dr) \\
& = & c^{-2}v^2\int_0^{(\sqrt{2})^{-1}}\ind{V_{cr}\geq v}\nu_r^B\Big(d\frac{v}{V_{cr}}\Big)\mu^B(dr) .
\end{eqnarray*}
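Let us record how (\ref{transition lambda}) ties in with this measure. Since $v^{-2}\Lambda^{(0)}(dv)$ is the image of $c^{-2}\,\nu^B_r(du)\,\mu^B(dr)$ under the map $(u,r)\mapsto uV_{cr}$, multiplying (\ref{transition lambda}) by the rate $c^{-2}\mu^B(dr)$ of large events of radius $cr$ and integrating over $r$ gives, as limiting rate at which a given group $J$ of the $n$ blocks merges through a large event,
$$
c^{-2}\int_0^{(\sqrt{2})^{-1}}\mu^B(dr)\int_0^1 (uV_{cr})^{|J|}(1-uV_{cr})^{n-|J|}\,\nu^B_r(du)=\int_0^1v^{|J|}(1-v)^{n-|J|}\,\frac{\Lambda^{(0)}(dv)}{v^2},
$$
which is the standard merger rate of a $\Lambda$-coalescent with measure $\Lambda^{(0)}$ (Pitman~1999).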
Following Pitman's Poissonian construction of a coalescent with multiple mergers (whose $\Lambda$-measure has no atom at $0$, see Pitman~1999), \nocite{pitman:1999} let us define $\Pi$ as a Poisson point process on $\R_+\times [0,1]$ with intensity $dt\otimes v^{-2}\Lambda^{(0)}(dv)$. Note that because of our assumption on $M$, $v^{-2}\Lambda^{(0)}(dv)$ is also a finite measure, with total mass $M$ (indeed, $\int_0^1 v^{-2}\Lambda^{(0)}(dv)=c^{-2}\int_0^{(\sqrt{2})^{-1}}\nu^B_r([0,1])\,\mu^B(dr)=M$, each $\nu^B_r$ being a probability measure). The atoms of $\Pi$ constitute the times at which $\Lambda^{(\beta,c)}$ acting on the partitions of $\N$ experiences a multiple collision, and the probabilities that any given lineage is affected by the event. The Kingman part of $\Lambda^{(\beta,c)}$ is superimposed on this construction by assigning to all pairs of blocks of the current partition independent exponential clocks with parameter $\beta$, giving the time at which the corresponding pair merges into one block.
From now on, we consider only the restriction of $\Lambda^{(\beta,c)}$ to ${\mathcal P}_n$, although we do not make it appear in the notation. Let $e_1$ be the minimum of the first time a pair of blocks of $\Lambda^{(\beta,c)}$ merges due to the Kingman part and of the time corresponding to the first point of $\Pi$. Define $e_i$ in a similar manner for all $i\geq 2$, so that $(e_i)_{i\in \N}$ is an increasing sequence of random times at which $\Lambda^{(\beta,c)}$ may undergo a transition. Our goal is to show that the finite-dimensional distributions of $\big\{(e_i^L,\A^{L,u}(e_i^L)),\ i\in \N\big\}$ under $\prob_{A_L}$ converge to those of $\big\{(e_i,\Lambda^{(\beta,c)}(e_i)),\ i\in \N\big\}$ under $\prob_{\wp_n}$, as $L\rightarrow \infty$. Since $\A^{L,u}$ (resp., $\Lambda^{(\beta,c)}$) can jump only at the times $e_i^L$ (resp., $e_i$), the fact that only finitely many jumps occur to $\Lambda^{(\beta,c)}$ in any compact time interval, together with Proposition 3.6.5 in Ethier \& Kurtz~(1986), enables us to conclude that this convergence yields $(b)$. We proceed by induction, showing that for each $i\in \N$:
\medskip
\noindent \textit{$H(i)$ : if $a_L\in \GA(L,n)$ for each $L$ and there exists $\pi_0\in {\mathcal P}_n$ such that for all $L\in \N$, $\mathrm{bl}(a_L)=\pi_0$, then as $L\rightarrow \infty$}
$$
\mathcal{L}_{\prob_{a_L}}\big(\big\{(e_1^L,\A^{L,u}(e_1^L)),\ldots,(e_i^L,\A^{L,u}(e_i^L))\big\}\big) \Rightarrow \mathcal{L}_{\prob_{\pi_0}}\big(\big\{(e_1,\Lambda^{(\beta,c)}(e_1)),\ldots,(e_i,\Lambda^{(\beta,c)}(e_i))\big\}\big).
$$
(Note that $a_L$ can have fewer than $n$ blocks.)
Let us start with $H(1)$. Let $t\geq 0$, $\pi\in {\mathcal P}_n$ and write $n_0$ for the number of blocks of $\pi_0$. We have, in the notation used in the previous paragraph (and with $\tilde{\A}^{L,u}$ defined as the unlabelled partition induced by $\tilde{\A}^L$ on the timescale $\rho_L$), \setlength\arraycolsep{1pt}
\begin{eqnarray}
&\prob_{a_L}&\big[ e_1^L\leq t;\ \A^{L,u}(e_1^L)=\pi \big]\nonumber\\
&= & \prob_{a_L}\big[e_1^L\leq t;\ \A^{L,u}(e_1^L)=\pi;\ \rho_Le_1^L=t_L^* \big]+ \prob_{a_L}\big[e_1^L\leq t;\ \A^{L,u}(e_1^L)=\pi;\ \rho_Le_1^L< t_L^* \big]\nonumber\\
&=& \prob_{a_L}\big[t_L^*\leq \rho_Lt;\ \tilde{\A}^{L,u}(t_L^*/\rho_L)=\pi;\ t_L^* <\varsigma_L \big]\label{H(1) term one}\\
& & + \prob_{a_L}\big[e_1^L\leq t;\ \A^{L,u}(e_1^L)=\pi\big|\ \rho_Le_1^L<t_L^* \big]\prob_{a_L}\big[\rho_Le_1^L <t_L^*\big].\label{H(1) term two}
\end{eqnarray}
By Theorem \ref{result alpha<1} applied with $\rho_L\equiv +\infty$, $\tilde{\A}^{L,u}$ with initial value $a_L$ converges as $L\rightarrow \infty$ to Kingman's coalescent $\mathcal{K}^{(\beta)}$ started at $\pi_0$ and run at rate $\beta$, as a process in $D_{{\mathcal P}_n}[0,\infty)$ (if $\beta=0$, then $\tilde{\A}^{L,u}$ converges to the constant process equal to $\pi_0$). Hence, by the independence of $\tilde{\A}^L$ and $\varsigma_L$ for every $L$ and a simple time-change, the quantity in (\ref{H(1) term one}) tends to that corresponding to $\mathcal{K}^{(\beta)}$, that is
\begin{equation}\label{first event kingman}
\prob_{\pi_0}\big[\mathcal{K}^{(\beta)}(e_1^{\mathcal{K}})=\pi\big] \prob_{\pi_0}\big[e_1^{\mathcal{K}}< t\wedge \zeta \big],
\end{equation}
where $e_1^{\mathcal{K}}$ is distributed like an $\mathrm{Exp}\big(\beta\frac{n_0(n_0-1)}{2}\big)$-random variable and stands for the epoch of the first event occurring to $\mathcal{K}^{(\beta)}$, and $\zeta$ is an independent $\mathrm{Exp}(M)$-random variable. By the construction of $\Lambda^{(\beta,c)}$ given in the last paragraph, (\ref{first event kingman}) is the probability that the first event occurring to $\Lambda^{(\beta,c)}$ happens before time $t$, is due to the Kingman part of the coalescent and leads to the transition $\pi_0\rightarrow \pi$. For (\ref{H(1) term two}), note first that because $\Pi_L^B$ and $\Pi_L^s$ are independent, if we condition on $\rho_Le_1^L$ being the time of the first point $(t_1^L,x_1^L,r_1^L)$ of $\Pi_L^B$, then $e_1^L$ and the pair $(x_1^L,r_1^L)$ are independent. Hence, we have for each $L\geq 1$
\begin{eqnarray*}
\prob_{a_L}\big[e_1^L\leq t&;& \A^{L,u}(e_1^L)=\pi\big|\ \rho_Le_1^L<t_L^* \big]\\
& =& \prob_{a_L}\big[e_1^L\leq t\big|\ \rho_Le_1^L<t_L^* \big]\prob_{a_L}\big[\A^{L,u}(e_1^L)=\pi\big|\ \rho_Le_1^L<t_L^* \big].
\end{eqnarray*}
Using (\ref{conv coal times}) and the same reasoning as for (\ref{H(1) term one}), we can write \setlength\arraycolsep{1pt}
\begin{eqnarray*}
\prob_{a_L} \big[e_1^L\leq t\big|\ \rho_Le_1^L<t_L^* \big]&\prob_{a_L}&\big[\rho_Le_1^L<t_L^*\big] \\
&=& \prob_{a_L}\big[e_1^L\leq t;\ \rho_Le_1^L<t_L^*\big]\\
&=& \prob_{a_L}\big[e_1^L\leq t\big]-\prob_{a_L}\big[e_1^L\leq t;\ \rho_L e_1^L=t_L^*\big]\\
&\rightarrow & \exp\Big\{-\Big(M+\beta \frac{n_0(n_0-1)}{2}\Big)t \Big\}-\prob_{\pi_0}\big[e_1^{\mathcal{K}} \leq t \wedge \zeta \big]\\
&=& \prob_{\pi_0}\big[\zeta < t \wedge e_1^{\mathcal{K}}\big],
\end{eqnarray*}
where the last equality comes from the fact that an $\mathrm{Exp}\big(\beta\frac{n_0(n_0-1)}{2}+M\big)$-random variable has the same distribution as the minimum of two independent random variables with respective distributions $\mathrm{Exp}\big(\beta\frac{n_0(n_0-1)}{2}\big)$ and $\mathrm{Exp}(M)$. In addition, by the calculation done in (\ref{transition lambda}),
$$
\prob_{a_L}\big[\A^{L,u}(e_1^L)=\pi\big|\ \rho_L e_1^L<t_L^*\big]\rightarrow \prob_{\pi_0}\big[\Lambda^{(0)}(e_1^{\Lambda})=\pi\big],\qquad \mathrm{as}\ L\rightarrow \infty,
$$
where $e_1^{\Lambda}$ is the time of the first event of $\Pi$. Combining the above, and recognizing the transition probability of $\Lambda^{(\beta,c)}$ through the decomposition obtained, we can write
$$
\lim_{L\rightarrow \infty}\prob_{a_L}\big[e_1^L\leq t;\ \A^{L,u}(e_1^L)=\pi \big]=\prob_{\pi_0}\big[e_1\leq t;\ \Lambda^{(\beta,c)}(e_1)=\pi \big].
$$
Since this result holds for each $t\geq 0$ and $\pi\in {\mathcal P}_n$, using a monotone class argument we can conclude that the distribution of $\big(e_1^L,\A^{L,u}(e_1^L)\big)$ under $\prob_{a_L}$ converges to the distribution of $(e_1,\Lambda^{(\beta,c)}(e_1))$ under $\prob_{\pi_0}$ as $L\rightarrow \infty$. This proves $H(1)$.
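The elementary distributional fact invoked above, namely that the minimum of two independent exponential random variables with rates $a$ and $b$ is $\mathrm{Exp}(a+b)$-distributed, is easy to confirm by simulation. The following sketch (ours, with arbitrary illustrative rates) compares empirical survival functions.
\begin{verbatim}
import random

# Monte Carlo illustration (not part of the proof): the minimum of
# independent Exp(a) and Exp(b) variables is Exp(a+b)-distributed.
# The rates a, b are arbitrary choices for the demonstration.
a, b, n = 1.5, 0.7, 100_000
mins = [min(random.expovariate(a), random.expovariate(b)) for _ in range(n)]
direct = [random.expovariate(a + b) for _ in range(n)]

# Compare empirical survival functions P[X > t] on a small grid.
for t in (0.2, 0.5, 1.0):
    p_min = sum(x > t for x in mins) / n
    p_dir = sum(x > t for x in direct) / n
    print(f"t={t}: min-of-two {p_min:.4f} vs Exp(a+b) {p_dir:.4f}")
\end{verbatim}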
Suppose that $H(i-1)$ holds for some $i\geq 2$. Let $D\subset (\R_+)^{i-1}$, $t\geq 0$ and $\pi_1,\ldots,\pi_i\in {\mathcal P}_n$. Let also $L\in \N$. By the strong Markov property applied to $\A^L$ at time $\rho_Le_{i-1}^L$, we have \setlength\arraycolsep{1pt}
\begin{eqnarray*}
\prob_{a_L}\big[ \big(e_1^L,\ldots,e_{i-1}^L\big)&\in& D;\ e_i^L-e_{i-1}^L\leq t;\ \A^{L,u}(e_1^L)=\pi_1,\ldots, \A^{L,u}(e_i^L)=\pi_i\big]\\
&=& \E_{a_L}\Big[\ \mathbf{1}_{\{(e_1^L,\ldots,e_{i-1}^L)\in D\}}\ \mathbf{1}_{\{\A^{L,u}(e_1^L)=\pi_1,\ldots, \A^{L,u}(e_{i-1}^L)=\pi_{i-1}\}} \\
& &\qquad\qquad\qquad \times \prob_{\A^L(\rho_Le_{i-1}^L)}\big[e_1^L\leq t;\ \A^{L,u}(e_1^L)=\pi_i\big]\Big].
\end{eqnarray*}
First, using arguments analogous to those leading to Lemma \ref{lemm lineages far}, up to an error term vanishing uniformly in $(a_L)_{L\in \N}$ such that $a_L\in \Gamma(L,n)$ for every $L\in \N$, we may assume that $\A^L(\rho_Le_{i-1}^L)\in \GA(L,n)$. Since $\mathrm{bl}\big(\A^L(\rho_Le_{i-1}^L)\big)=\pi_{i-1}$ for each $L$, we can use $H(1)$ to write that
$$
\lim_{L\rightarrow \infty}\prob_{\A^L(\rho_Le_{i-1}^L)}\big[e_1^L\leq t;\ \A^{L,u}(e_1^L)=\pi_i\big]= \prob_{\pi_{i-1}}\big[e_1\leq t;\ \Lambda^{(\beta,c)}(e_1)=\pi_i\big],
$$
and so dominated convergence and $H(i-1)$ give us \setlength\arraycolsep{1pt}
\begin{eqnarray*}
& &\lim_{L\rightarrow \infty}\prob_{a_L}\big[\big(e_1^L,\ldots,e_{i-1}^L\big)\in D;\ e_i^L-e_{i-1}^L\leq t;\ \A^{L,u}(e_1^L)=\pi_1,\ldots, \A^{L,u}(e_i^L)=\pi_i\big]\\
& &= \E_{\pi_0}\Big[\ \mathbf{1}_{\{(e_1,\ldots,e_{i-1})\in D\}}\ \mathbf{1}_{\{\Lambda^{(\beta,c)}(e_1)=\pi_1,\ldots, \Lambda^{(\beta,c)}(e_{i-1})=\pi_{i-1}\}} \prob_{\pi_{i-1}}\big[e_1\leq t;\ \Lambda^{(\beta,c)}(e_1)=\pi_i\big]\Big]\\
& & = \prob_{\pi_0}\big[\big(e_1,\ldots,e_{i-1}\big)\in D;\ e_i-e_{i-1}\leq t;\ \Lambda^{(\beta,c)}(e_1)=\pi_1,\ldots, \Lambda^{(\beta,c)}(e_i)=\pi_i\big],
\end{eqnarray*}
which again yields $H(i)$ by standard arguments. The induction is now complete, and so we can conclude that the finite-dimensional distributions of the embedded Markov chain and the holding times of $\A^{L,u}$ under $\prob_{a_L}$ converge as $L\rightarrow \infty$ towards those of $\Lambda^{(\beta,c)}$ under $\prob_{\pi_0}$. The proof of $(b)$ is then complete.
To finish, suppose that $\rho_L\gg L^2\log L$. Then, we can find a sequence $\Phi_L$ increasing to $+\infty$ such that
$$
\sup_{A\in \GA(L,n)}\prob_A[\ \mathrm{a\ large\ event\ affects\ at\ least\ one\ lineage\ before\ time\ }\Phi_LL^2\log L]\rightarrow 0
$$
as $L\rightarrow \infty$. Hence, we can couple $\A^L$ with the process $\tilde{\A}^L$ which experiences only small events, in such a way that, at step $L$, the first time at which they differ exceeds $\Phi_L$ with probability tending to one, uniformly in the sequence $(A_L)_{L\geq 1}$ chosen as above. By the results obtained in Section \ref{alpha<1} with $\rho_L\equiv +\infty$, we know that $\tilde{\A}^{L,u}$ converges in distribution towards $\mathcal{K}$, as a process in $D_{{\mathcal P}_n}[0,\infty)$. Since the sample size $n$ is finite and, under Kingman's coalescent, a sample of $n$ lineages reaches a common ancestor in finite time almost surely, $(c)$ follows.$\hfill\square$
|
2,869,038,155,065 | arxiv | \section{Introduction}
Let $\mu$ be a Borel probability measure on $\R^d$. We call $\mu$ a {\it spectral measure} if there exists a countable subset $\Lambda \sse \R^d$ such that the family of exponential functions
$$\set{ e_\lambda(x) = e^{-2\pi i \lambda \cdot x}: \lambda \in \Lambda}
$$
forms an orthonormal basis in $L^2(\mu)$. The set $\Lambda$ is called a \textit{spectrum} of $\mu$, and we say that $(\mu,\Lambda)$ is a \textit{spectral pair}.
The existence of a spectrum of $\mu$ is a basic question in harmonic analysis, and it may date back to Fuglede's seminal paper \cite{Fuglede-1974}. In this paper, Fuglede conjectured that if $\Gamma \sse \R^d$ is a measurable subset with positive finite Lebesgue measure, then the normalized Lebesgue measure on $\Gamma$ is a spectral measure if and only if $\Gamma$ tiles $\R^d$ by translations. Tao \cite{Tao-2004} and others \cite{Farkas-Matolcsi-Mora-2006,Farkas-Revesz-2006,Kolountzakis-Matolcsi-2006a,Kolountzakis-Matolcsi-2006b,Matolcsi-2005}
have disproved Fuglede's conjecture in both directions for $d\ge 3$. However, the connection between spectrality and tiling still attracts considerable attention, and some interesting positive results have been proved for special cases \cite{Iosevich-Katz-Tao-2003,Laba-2001}.
In 1998, Jorgensen and Pedersen \cite{Jorgensen-Pedersen-1998} showed that $\mu_{4,\set{0,2}}$ is a spectral measure with a spectrum $\Lambda$,
where $\mu_{4,\set{0,2}}$ is the self-similar measure with equal weights generated by the iterated function system (IFS) $$\set{ f_1(x) =\frac{x}{4}, f_2(x) = \frac{x+2}{4} },$$ and the set $\Lambda$ is given by
\begin{equation}\label{lambda}
\Lambda = \bigcup_{n=1}^\f \set{\ell_1 + 4\ell_2 + \cdots + 4^{n-1} \ell_n: \ell_1,\ell_2,\cdots,\ell_n \in \set{0,1}},
\end{equation}
but the standard middle-third Cantor measure is not a spectral measure.
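As a numerical sanity check (ours, not taken from \cite{Jorgensen-Pedersen-1998}), one can verify the orthogonality underlying this statement directly: the exponentials $e_\lambda$ and $e_{\lambda'}$ are orthogonal in $L^2(\mu_{4,\set{0,2}})$ exactly when $\wh{\mu}_{4,\set{0,2}}(\lambda-\lambda')=0$, where $\wh{\mu}_{4,\set{0,2}}(\xi)=\prod_{n\ge1}\big(1+e^{-4\pi i \xi/4^n}\big)/2$. The sketch below truncates the infinite product at an arbitrary depth and tests the first eight spectrum points (three digits).
\begin{verbatim}
import cmath
from itertools import product

DEPTH = 40  # truncation depth for the infinite product (arbitrary choice)

def mu_hat(xi):
    # Truncated Fourier transform of mu_{4,{0,2}}:
    # prod_n (1 + exp(-2*pi*i * 2*xi / 4**n)) / 2.
    val = 1.0 + 0j
    for n in range(1, DEPTH + 1):
        val *= (1 + cmath.exp(-2j * cmath.pi * 2 * xi / 4 ** n)) / 2
    return val

# Spectrum points lambda = l1 + 4*l2 + 16*l3 with digits l_i in {0, 1}.
Lam = [l1 + 4 * l2 + 16 * l3 for l1, l2, l3 in product((0, 1), repeat=3)]
worst = max(abs(mu_hat(a - b)) for a in Lam for b in Lam if a != b)
print("max |mu_hat(lambda - lambda')| over distinct pairs:", worst)
# numerically zero, confirming pairwise orthogonality
\end{verbatim}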
Note that the fractal measure $\mu_{4,\set{0,2}}$ and the Lebesgue measure are mutually singular.
We refer the readers to \cite{Falco03} for the details of fractal sets and measures. This aroused research interest in the spectrality of fractal measures, and since then an abundant literature has grown around this topic \cite{An-Fu-Lai-2019,An-He-He-2019,An-He-2014,An-He-Lau-2015,An-He-Li-2015,An-Wang-2021,Dai-He-Lau-2014, Deng-Chen-2021,Dutkay-Han-Sun-2014,Dutkay-Haussermann-Lai-2019,Dutkay-Jorgensen-2007,Dutkay-Jorgensen-2012, Dutkay-Lai-2014,Dutkay-Lai-2017,Fu-Wen-2017,He-Tang-Wu-2019,Jorgensen-Pedersen-1998,Laba-Wang-2002, Strichartz-2000,Strichartz-2006,Wang-Dong-Liu-2018}.
There are many surprising phenomena for singular continuous spectral measures. In \cite{Dutkay-Jorgensen-2012}, Dutkay and Jorgensen showed that besides the set $\Lambda$ defined in (\ref{lambda}), the sets $5\Lambda$, $7\Lambda$, $11\Lambda$, $13\Lambda$, $17\Lambda$, $\cdots$ are all spectra of $\mu_{4,\set{0,2}}$.
Moreover, the convergence behaviour of the associated mock Fourier series differs from one spectrum to another.
In \cite{Strichartz-2006}, Strichartz proved that the Fourier series of continuous functions converges uniformly with respect to the spectral measure $\mu_{4,\set{0,2}}$ with the spectrum $\Lambda$, but Dutkay, Han and Sun \cite{Dutkay-Han-Sun-2014} showed that there exists a continuous function whose Fourier series diverges at $0$ with respect to the spectrum $17\Lambda$.
To study fractal spectral measures, the Hadamard triple is a fundamental tool. We write $\#$ for the cardinality of a set. Let $R\in M_d(\Z)$ be a $d\times d$ expanding matrix (i.e. all eigenvalues have modulus strictly greater than $1$) with integral entries. Let $B,L\in \Z^d$ be two finite subsets of integral vectors with $N=\#B=\#L \ge 2$. If the matrix
$$\left[ \frac{1}{\sqrt{N}} e^{-2\pi i (R^{-1}b) \cdot \ell} \right]_{b\in B,\ell \in L}$$
is unitary, we call $(R, B, L)$ a {\it Hadamard triple} in $\R^d$. We write $\delta_a$ for the Dirac measure at a point $a$, and for a finite subset $A\sse \R^d$, we write
$$\delta_{A} = \frac{1}{\# A} \sum_{a \in A} \delta_a.$$
It is clear that $(R, B, L)$ is a Hadamard triple if and only if the set $L$ is a spectrum of the discrete measure $\delta_{R^{-1}B}$.
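Checking the unitarity condition is a finite computation. The following sketch (ours, for illustration) verifies it for the one-dimensional triple $(4,\set{0,2},\set{0,1})$ behind the Jorgensen--Pedersen example.
\begin{verbatim}
import numpy as np

# Unitarity check of the Hadamard-triple condition in dimension one.
R, B, L = 4, [0, 2], [0, 1]
N = len(B)
H = np.array([[np.exp(-2j * np.pi * (b / R) * ell) / np.sqrt(N)
               for ell in L] for b in B])
print(np.allclose(H @ H.conj().T, np.eye(N)))  # True: (R, B, L) is Hadamard
\end{verbatim}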
To construct more flexible examples of fractal spectral measures, different Hadamard triples are applied to the discrete measures in an infinite convolution. The following question has been widely investigated.
\begin{question}
Given a sequence of Hadamard triples $\set{(R_j,B_j,L_j): j \ge 1}$ in $\R^d$, under what conditions is the infinite convolution $$\mu=\delta_{R_1^{-1} B_1} * \delta_{(R_2 R_1)^{-1} B_2} * \cdots * \delta_{(R_n \cdots R_2 R_1)^{-1} B_n} * \cdots$$ a spectral measure?
\end{question}
Many affirmative results have been obtained in \cite{An-Fu-Lai-2019,An-He-He-2019,An-He-2014,An-He-Lau-2015,An-He-Li-2015,Dutkay-Haussermann-Lai-2019,Dutkay-Lai-2017,Fu-Wen-2017,Laba-Wang-2002,Wang-Dong-Liu-2018}.
When all Hadamard triples are the same one, the infinite convolution reduces to a self-affine measure (which is called a self-similar measure for $d=1$).
{\L}aba and Wang \cite{Laba-Wang-2002} proved that the self-similar measure with equal weights generated by a Hadamard triple in $\R$ is a spectral measure, and Dutkay, Haussermann and Lai \cite{Dutkay-Haussermann-Lai-2019} generalized it to self-affine measures in higher dimension.
In this paper, we explore the spectrality of the random convolutions generated by finitely many Hadamard triples in $\R$.
Let $\set{(N_j, B_j, L_j): 1 \le j \le m}$ be finitely many Hadamard triples in $\R$.
Let $\Omega$ be the symbolic space over the alphabet $\set{1,2,\cdots, m}$.
Given a sequence of positive integers $\{n_k\}_{k=1}^\f$ and $\omega=(\omega_k)_{k=1}^\f \in \Omega$, let $\mu_{\omega,\set{n_k}}$ be the random convolution given by
\begin{equation}\label{mu-subsequence}
\mu_{\omega,\set{n_k}} = \delta_{N_{\omega_1}^{-n_1} B_{\omega_1}} * \delta_{N_{\omega_1}^{-n_1} N_{\omega_2}^{-n_2} B_{\omega_2}} * \cdots * \delta_{N_{\omega_1}^{-n_1} N_{\omega_2}^{-n_2} \cdots N_{\omega_k}^{-n_k} B_{\omega_k} }* \cdots ,
\end{equation}
where $\omega_k$ determines the Hadamard triple chosen in the $k$-th convolution. When $n_k =1$ for all $k \ge 1$, we write
\begin{equation}\label{mu-omega}
\mu_\omega = \delta_{N_{\omega_1}^{-1} B_{\omega_1}} * \delta_{(N_{\omega_1} N_{\omega_2})^{-1} B_{\omega_2}} * \cdots * \delta_{(N_{\omega_1} N_{\omega_2}\cdots N_{\omega_k})^{-1} B_{\omega_k} }* \cdots.
\end{equation}
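For concreteness, each finite stage $\mu_{\omega,k}$ of such a convolution is a purely atomic measure whose atoms are easy to enumerate. The following sketch (ours, with illustrative digit sets that are not fixed by the paper) lists these atoms for a short prefix of $\omega$.
\begin{verbatim}
from fractions import Fraction
from itertools import product

# Illustrative data: two scales with digit sets (our own example choices).
N = {1: 4, 2: 6}
B = {1: [0, 2], 2: [0, 3]}

def atoms(prefix):
    """Atoms of mu_{omega,k} = delta_{N^{-1}B} * ... along a finite prefix
    omega_1 ... omega_k of omega (equal weights on each digit set)."""
    pts = []
    for digits in product(*(B[w] for w in prefix)):
        x, scale = Fraction(0), Fraction(1)
        for w, b in zip(prefix, digits):
            scale /= N[w]
            x += scale * b
        pts.append(x)
    return sorted(pts)

print([float(x) for x in atoms([1, 2])])  # [0.0, 0.125, 0.5, 0.625]
\end{verbatim}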
The random convolution generated by finitely many Hadamard triples in $\R^d$ was first studied by Strichartz in \cite{Strichartz-2000}, where he showed that the random convolution is spectral under a certain separation condition. In general, however, this separation condition is not easy to check.
Dutkay and Lai also considered this question in \cite{Dutkay-Lai-2017}; they generalized the Strichartz criterion under a no-overlap condition and showed that, for some special cases, there exists a common spectrum for almost all random convolutions.
In this paper, we give a simple condition on the sets $\set{B_j: 1\le j \le m}$ in $\R$, and we prove that all random convolutions are spectral measures.
\begin{theorem}\label{main-result}
Let $\set{(N_j, B_j, L_j): 1 \le j \le m}$ be finitely many Hadamard triples in $\R$.
Suppose that $\gcd(B_j - B_j) =1$ for $1\le j \le m$.
Given a sequence $\set{n_k}_{k=1}^\f$ of positive integers, let $\mu_{\omega,\set{n_k}}$ be given by \eqref{mu-subsequence}. Then $\mu_{\omega,\set{n_k}}$ is a spectral measure for every $\omega \in \Omega$.
\end{theorem}
The following corollary is an immediate consequence of the above theorem.
\begin{corollary}\label{main-cor}
Let $\set{(N_j, B_j, L_j): 1 \le j \le m}$ be finitely many Hadamard triples in $\R$.
Suppose that $\gcd(B_j - B_j) =1$ for $1\le j \le m$.
Let $\mu_{\omega}$ be given by \eqref{mu-omega}. Then $\mu_{\omega}$ is a spectral measure for every $\omega \in \Omega$.
\end{corollary}
To prove Theorem \ref{main-result}, we need to consider the infinite convolution generated by a sequence of Hadamard triples $\set{(N_n,B_n,L_n): n \ge 1}$ in $\R$.
We write
\begin{equation}\label{mu-n}
\mu_n= \delta_{N_{1}^{-1} B_1} * \delta_{(N_1 N_2)^{-1} B_2} * \cdots * \delta_{(N_1 N_2 \cdots N_n)^{-1} B_n }.
\end{equation}
\emph{We always assume that $\mu_n$ converges weakly to a Borel probability measure $\mu$.}
It is known that if
\begin{equation}\label{condition-weak-converge}
\sum_{n=1}^{\f} \frac{\max\set{|b|: b \in B_n}}{|N_1 N_2 \cdots N_n|} < \f,
\end{equation}
then $\mu_n$ converges weakly to a Borel probability measure $\mu$, and moreover, the measure $\mu$ has a compact support.
Noting that there are only finitely many Hadamard triples in Theorem~\ref{main-result}, it is obvious that the condition (\ref{condition-weak-converge}) is satisfied.
The weak limit measure $\mu$ may be written as an infinite convolution
\begin{equation}\label{mu-infinite-convolution}
\begin{split}
\mu & = \delta_{N_{1}^{-1} B_1} * \delta_{(N_1 N_2)^{-1} B_2} * \cdots * \delta_{(N_1 N_2 \cdots N_n)^{-1} B_n } * \cdots \\
& = \mu_n * \mu_{>n}.
\end{split}
\end{equation}
We scale the measure $\mu_{>n}$ and define
\begin{equation}\label{nu-large-than-n}
\nu_{>n}(\;\cdot\;) = \mu_{>n}\left( \frac{1}{N_1 N_2 \cdots N_n} \; \cdot\; \right).
\end{equation}
Obviously, the spectrality of $\mu$ is affected by the properties of the sequence $\{\nu_{>n}\}$. This is why the following equi-positivity condition plays an important role in the study of spectrality.
\begin{definition}\label{def-equipositive}
We call $\Phi \sse \mcal{P}(\R^d)$ an equi-positive family if there exist $\ep>0$ and $\delta>0$ such that for $x\in [0,1)^d$ and $\mu\in \Phi$, there exists an integral vector $k_{x,\mu} \in \Z^d$ such that
$$ |\wh{\mu}(x+y+k_{x,\mu})| \ge \ep,$$
for all $ |y| <\delta,$ where $k_{x,\mu} =0$ for $x=0$.
\end{definition}
The equi-positivity condition was first used in \cite{Dutkay-Haussermann-Lai-2019} for self-affine measures with compact support. Our definition is more general than the one in~\cite{An-Fu-Lai-2019} since we do not assume that $\Phi \sse \mcal{P}(K)$ for some compact subset $K\sse \R^d$. The two definitions are equivalent for tight families of probability measures; see Section~\ref{sec_pre} for the definition of tightness.
The following theorem is a generalization of Theorem 1.2 in~\cite{Dutkay-Lai-2017} and Theorem 3.2 in~\cite{An-Fu-Lai-2019} for $d= 1$, and it is the key to proving our main result, Theorem \ref{main-result}.
\begin{theorem}\label{general-result}
Let $\set{(N_n, B_n , L_n): n \ge 1}$ be a sequence of Hadamard triples in $\R$. Let the probability measure $\mu$ be the weak limit of $\mu_n$ given by \eqref{mu-n}.
If there exists a subsequence $\{ n_j \}$ of positive integers such that the family $\{ \nu_{>n_j} \}$ is equi-positive, then $\mu$ is a spectral measure.
\end{theorem}
Finally, we give an example showing that the assumption $\gcd(B_j - B_j) =1$ for $1\le j \le m$ in Theorem \ref{main-result} is necessary.
\begin{example}
Let $N_1 = N_2 = 2$, $B_1 = \set{0,1}$, $B_2 = \set{0,3}$, $L_1 = L_2=\set{0,1}$.
We have that $(N_i, B_i, L_i)$ is a Hadamard triple for $i=1,2$, but $\gcd(B_2 - B_2) =3 \ne 1$.
Let $\eta = 1 2^\f$, and the random convolution
$$\mu_\eta = \frac{1}{3} \mathscr{L}|_{[0,1/2]} + \frac{2}{3} \mathscr{L}|_{[1/2,3/2]} + \frac{1}{3} \mathscr{L}|_{[3/2,2]}, $$
where $\mathscr{L}$ denotes the Lebesgue measure in $\R$.
It has been shown that an absolutely continuous spectral measure must be uniform on its support \cite{Dutkay-Lai-2014}.
It is clear that $\mu_\eta$ is not uniformly distributed on its support $[0,2]$.
Thus, $\mu_\eta$ cannot be a spectral measure.
On the other hand, by Theorem 1.5 in \cite{Dutkay-Lai-2017}, there exists a subset $\Lambda \sse \Z$ such that $\Lambda$ is a spectrum of $\mu_\omega$ for $\PP$-a.e. $\omega \in \Omega = \set{1,2}^\f$, where $\PP$ is the product probability with equal weights on $\Omega$.
This implies that $\eta$ lies in the exceptional set, which has $\PP$-measure zero.
\end{example}
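The closed form of $\mu_\eta$ in the example can also be confirmed by simulation: for $\eta = 1 2^\f$, a sample from $\mu_\eta$ is $b_1/2+\sum_{k\ge2}b_k/2^k$ with $b_1$ uniform on $\set{0,1}$ and the $b_k$ uniform on $\set{0,3}$, all independent. The following sketch (ours, with an arbitrary truncation depth) recovers the predicted densities.
\begin{verbatim}
import random

DEPTH, n = 30, 100_000  # tail truncation and sample size (arbitrary)

def sample():
    x = random.choice((0, 1)) / 2
    for k in range(2, DEPTH + 1):
        x += random.choice((0, 3)) / 2 ** k
    return x

xs = [sample() for _ in range(n)]
for a, b, dens in ((0, 0.5, 1 / 3), (0.5, 1.5, 2 / 3), (1.5, 2, 1 / 3)):
    emp = sum(a <= x < b for x in xs) / n / (b - a)
    print(f"density on [{a},{b}): empirical {emp:.3f}, predicted {dens:.3f}")
\end{verbatim}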
We organize our paper as follows: in Section~\ref{sec_pre}, we recall some definitions and review some already-known results which are essential in our proofs;
in Section~\ref{sec_ea}, we study properties of the equi-positivity and admissibility in $\R^d$, and we prove Theorem \ref{general-result};
we give the proof of Theorem \ref{main-result} in Section~\ref{sec_pf}.
\section{Preliminaries}\label{sec_pre}
First, we give some simple facts about symbolic spaces which are frequently used in our context.
Let $\Omega$ be the symbolic space over the alphabet $\set{1,2,\cdots,m}$, i.e.
$$
\Omega=\set{1,2,\cdots,m}^{\N}.
$$
We topologize the symbolic space $\Omega$ by the metric
$$ d(\omega,\eta) = 2^{-\min\set{k \ge 1: \omega_k \ne \eta_k} }
$$
for distinct $\omega=\omega_1\omega_2\cdots$, $\eta=\eta_1\eta_2\cdots \in \Omega$ to make $\Omega$ into a compact metric space.
It is well-known that a sequence $\set{\omega(j)}_{j=1}^\f \sse \Omega$ converges to $\omega$ if and only if for each $k \ge 1$, there exists a $j_0 \ge 1$ such that for every $j \ge j_0$, $$\omega_1(j) \omega_2(j) \cdots \omega_k(j) = \omega_1 \omega_2 \cdots \omega_k.$$
The left shift on $\Omega$ is denoted by $\sigma$, that is, for $\omega=\omega_1\omega_2\cdots \in \Omega$,
$$\sigma(\omega) = \omega_2 \omega_3 \cdots.$$
Let $\mcal{P}(\R^d)$ denote the set of all Borel probability measures on $\R^d$.
For $\mu \in \mcal{P}(\R^d)$, the support of $\mu$ is defined by $$\mathrm{spt}(\mu) = \R^d \sm \bigcup \set{ U\sse \R^d: \text{ $U$ is open, and $\mu(U)=0$}},$$
i.e., the smallest closed subset with full measure.
For a compact subset $K \sse \R^d$, let $$\mcal{P}(K) = \set{ \mu\in \mcal{P}(\R^d): \mathrm{spt}(\mu) \sse K }. $$
For $\mu \in \mcal{P}(\R^d)$, the Fourier transform of $\mu$ is defined by
$$ \wh{\mu}(\xi) = \int_{\R^d} e^{-2\pi i \xi \cdot x} \D \mu(x). $$
It is easy to verify that $\wh{\mu}$ is uniformly continuous and $\wh{\mu}(0) =1$.
Let $\mu,\mu_1,\mu_2,\cdots \in \mcal{P}(\R^d)$. Recall that $\mu_n$ \textit{converges weakly} to $\mu$ if $$\lim_{n \to \f} \int_{\R^d} f(x) \D \mu_n(x) = \int_{\R^d} f(x) \D \mu(x),$$
for all $ f \in C_b(\R^d),$ where $C_b(\R^d)$ is the set of all bounded continuous functions on $\R^d$.
Let $\Phi \sse \mcal{P}(\R^d)$. We say that $\Phi$ is {\it tight} (sometimes called uniformly tight) if for each $\ep>0$ there exists a compact subset $K \sse \R^d$ such that $$\inf_{\mu \in \Phi} \mu(K) > 1 - \ep.$$
We refer the readers to \cite{Bil03} for more details about tightness. Given $\Phi \sse \mcal{P}(K)$ for some compact subset $K \sse \R^d$, it is clear that $\Phi$ is tight.
Next we cite two well-known theorems which are frequently applied to weak limit of probability measures in our context.
\begin{theorem}\label{equivalent-condition-weak-converge}
Let $\mu,\mu_1,\mu_2,\cdots \in \mcal{P}(\R^d)$. Then $\mu_n$ converges weakly to $\mu$ if and only if $\displaystyle \lim_{n \to \f} \wh{\mu}_n(\xi)=\wh{\mu}(\xi)$ for every $\xi \in \R^d$.
\end{theorem}
\begin{theorem}\label{weak-compactness}
Let $\Phi \sse \mcal{P}(\R^d)$. Then $\Phi$ is tight if and only if, for every sequence $\{ \mu_n \} \sse \Phi$, there exists a subsequence $\{ \mu_{n_j} \}$ and a Borel probability measure $\mu \in \mcal{P}(\R^d)$ such that $\mu_{n_j}$ converges weakly to $\mu$ as $j \to \f$.
\end{theorem}
A family of continuous functions $\mcal{F} \sse C(\R^d)$ is called \textit{equicontinuous} if for each $\ep>0$ there exists $\delta >0$ such that $|f(x)-f(y)| < \ep$ for all $x,y\in \R^d$ satisfying $|x-y| <\delta$ and all $f\in \mcal{F}$.
The following lemma shows that the Fourier transforms of a tight family of probability measures form an equicontinuous family.
\begin{lemma}\label{equicontinuous}
Let $\Phi \sse \mcal{P}(\R^d)$. If $\Phi$ is tight, then the family $\set{\wh{\mu}: \mu\in \Phi}$ is equicontinuous.
\end{lemma}
\begin{proof}
For each $\ep>0$, since $\Phi$ is tight, there exists a compact subset $K \sse \R^d$ such that $$\inf_{\mu \in \Phi} \mu(K) > 1 - \frac{\ep}{3}.$$
Then we may find $\delta >0$ such that for all $|y| < \delta$ and all $x\in K$, $$\left| 1- e^{2\pi i y\cdot x} \right| < \frac{\ep}{3}.$$
Thus, for all $\mu \in \Phi$ and all $\xi_1,\xi_2 \in \R^d$ with $|\xi_1 - \xi_2| <\delta$, we have
\begin{align*}
\left| \wh{\mu}(\xi_1) - \wh{\mu}(\xi_2) \right| & = \left| \int_{\R^d} e^{-2\pi i \xi_1 \cdot x} \big(1- e^{ 2 \pi i (\xi_1 - \xi_2) \cdot x} \big) \D \mu(x) \right| \\
& \le \int_K \left| 1- e^{ 2\pi i (\xi_1 - \xi_2) \cdot x} \right| \D \mu(x) + \int_{\R^d \sm K} \left| 1- e^{2\pi i (\xi_1 - \xi_2) \cdot x} \right| \D \mu(x) \\
& \le \frac{\ep}{3} \mu(K) + 2 \mu(\R^d \sm K) < \ep.
\end{align*}
Therefore, the family $\set{\wh{\mu}: \mu\in \Phi}$ is equicontinuous.
\end{proof}
For $\mu,\nu \in \mcal{P}(\R^d)$, the convolution $\mu *\nu$ is given by $$ \mu*\nu(B) = \int_{\R^d} \nu(B-x) \D \mu(x)= \int_{\R^d} \mu(B-y) \D \nu(y), $$
for every Borel subset $B\sse \R^d$.
Equivalently, the convolution $\mu *\nu$ is the unique Borel probability measure satisfying $$\int_{\R^d} f(x) \D \mu*\nu(x) = \int_{\R^d \times \R^d} f(x+y) \D \mu \times \nu(x,y),$$
for all $ f \in C_b(\R^d).$
It is easy to check that $$\wh{\mu *\nu}(\xi) = \wh{\mu}(\xi) \wh{\nu}(\xi).$$
Using Theorem \ref{equivalent-condition-weak-converge}, it is straightforward to obtain the following lemma.
\begin{lemma}\label{convolution-weak-convergence}
Let $\{\mu_n\}, \{\nu_n\} \sse \mcal{P}(\R^d)$.
If $\mu_n$ and $\nu_n$ converge weakly to $\mu$ and $\nu$ respectively, then we have $\mu_n * \nu_n$ converges weakly to $\mu*\nu$.
\end{lemma}
To prove the spectrality of measures, we have to rely on the properties of Hadamard triples. We list some useful facts in the following lemma, see \cite{Laba-Wang-2002,Dutkay-Haussermann-Lai-2019} for details.
\begin{lemma}\label{lemma-HT}
Let $(R,B,L)$ be a Hadamard triple in $\R^d$. Then we have
\noindent$(\mathrm{i})$ $(R,B+b_0, L+\ell_0)$ is also a Hadamard triple for all $b_0,\ell_0 \in \Z^d$;
\noindent$(\mathrm{ii})$ the elements in $B$ are in distinct residue classes modulo $R \Z^d$; the elements in $L$ are in distinct residue classes modulo $R^{\mathrm{T}} \Z^d$, where the superscript ${^\mathrm{T}}$ denotes the transpose of a matrix;
\noindent$(\mathrm{iii})$ if $\wt{L} \equiv L \pmod{ R^\mathrm{T}\Z^d}$, then $(R,B,\wt{L})$ is also a Hadamard triple;
\noindent$(\mathrm{iv})$ if $\set{(R_j, B_j, L_j): 1\le j \le n}$ are finitely many Hadamard triples in $\R^d$, let $$\mathbf{R}= R_n R_{n-1} \cdots R_1,\quad \mathbf{B} = (R_n R_{n-1} \cdots R_2) B_1 + \cdots + R_n B_{n-1} + B_n, $$ and $$\mathbf{L} = L_1 + R_1^{\mathrm{T}} L_2 + \cdots + (R_1^{\mathrm{T}} R_2^{\mathrm{T}} \cdots R_{n-1}^{\mathrm{T}}) L_n,$$
then $(\mathbf{R},\mathbf{B},\mathbf{L})$ is a Hadamard triple.
\end{lemma}
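Part (iv) can be verified numerically for concrete one-dimensional triples; in the sketch below (ours), the two input triples are illustrative choices, and the composition follows the formulas of part (iv) specialized to $d=1$ and $n=2$.
\begin{verbatim}
import numpy as np

def is_hadamard(R, B, L):
    n = len(B)
    H = np.array([[np.exp(-2j * np.pi * b * ell / R) / np.sqrt(n)
                   for ell in L] for b in B])
    return np.allclose(H @ H.conj().T, np.eye(n))

R1, B1, L1 = 4, [0, 2], [0, 1]   # two Hadamard triples (our choices)
R2, B2, L2 = 2, [0, 1], [0, 1]

R = R2 * R1                                          # bold R = R2 R1
B = sorted(R2 * b1 + b2 for b1 in B1 for b2 in B2)   # bold B = R2 B1 + B2
L = sorted(l1 + R1 * l2 for l1 in L1 for l2 in L2)   # bold L = L1 + R1 L2
print(is_hadamard(R1, B1, L1), is_hadamard(R2, B2, L2),
      is_hadamard(R, B, L))    # True True True
\end{verbatim}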
The next theorem is often used to check whether a measure is spectral.
\begin{theorem}\label{criterion}
Let $\mu \in \mcal{P}(\R^d)$, and let $\Lambda \sse \R^d$ be a countable subset. We define $$Q(\xi) = \sum_{\lambda\in \Lambda} |\wh{\mu}(\lambda + \xi)|^2.$$
\noindent$(1)$ The family of exponential functions $\set{e_\lambda(x): \lambda \in \Lambda}$ forms an orthonormal set in $L^2(\mu)$ if and only if $Q(\xi) \le 1$ for all $ \xi \in \R^d$.
\noindent$(2)$ The family of exponential functions $\set{e_\lambda(x): \lambda \in \Lambda}$ forms an orthonormal basis in $L^2(\mu)$ if and only if $Q(\xi) = 1$ for all $\xi \in \R^d$.
\end{theorem}
Although Jorgensen and Pedersen in~\cite{Jorgensen-Pedersen-1998} proved this theorem only for probability measures with compact support,
it actually holds for all probability measures on $\R^d$. The argument is essentially identical.
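The criterion is also convenient numerically. For the finite level-$m$ convolution $\mu_m=\delta_{4^{-1}\set{0,2}}*\cdots*\delta_{4^{-m}\set{0,2}}$ with spectrum $\Lambda_m=\set{\ell_1+4\ell_2+\cdots+4^{m-1}\ell_m:\ell_i\in\set{0,1}}$, the function $Q$ equals $1$ identically; the following sketch (ours, with an arbitrary level and arbitrary test points) confirms this.
\begin{verbatim}
import cmath
from itertools import product

m = 6  # level of the finite convolution (arbitrary choice)

def mu_m_hat(xi):
    # Fourier transform of mu_m: a product of m discrete-measure factors.
    val = 1.0 + 0j
    for n in range(1, m + 1):
        val *= (1 + cmath.exp(-2j * cmath.pi * 2 * xi / 4 ** n)) / 2
    return val

Lam = [sum(l * 4 ** i for i, l in enumerate(digits))
       for digits in product((0, 1), repeat=m)]
for xi in (0.0, 0.3, 0.77, 5.1):
    Q = sum(abs(mu_m_hat(xi + lam)) ** 2 for lam in Lam)
    print(f"Q({xi}) = {Q:.12f}")  # equals 1 up to rounding
\end{verbatim}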
\section{Equi-positivity and admissibility}\label{sec_ea}
\subsection{Equi-positivity}
First, we give the proof of Theorem \ref{general-result}; the argument is inspired by~\cite{An-Fu-Lai-2019, Strichartz-2000}.
\begin{proof}[The proof of Theorem \ref{general-result}]
By Lemma \ref{lemma-HT} (i), we may assume that $0\in L_n$ for $n\ge 1$.
Since the family $\{\nu_{>n_j}\}$ is equi-positive, there exist $\ep>0$ and $\delta>0$ such that for $x\in [0,1)$ and $j \ge 1$, there exists an integer $k_{x,j} \in \Z$ such that
$$ |\wh{\nu}_{>n_j}(x+y+k_{x,j})| \ge \ep,$$
for all $|y| <\delta$,
and $k_{x,j}=0$ for $x=0$.
For integers $q > p \ge 0$, we define $\mbf{N}_{p,q} = N_{p+1} N_{p+2} \cdots N_q $, $$ \mbf{B}_{p,q}= N_{p+1} N_{p+2} \cdots N_q \left( \frac{B_{p+1}}{ N_{p+1}} + \frac{B_{p+2}}{N_{p+1} N_{p+2}} + \cdots + \frac{B_q}{ N_{p+1} N_{p+2} \cdots N_q} \right) $$
and $$\mbf{L}_{p,q} = L_{p+1} + N_{p+1} L_{p+2} + \cdots + ( N_{p+1} N_{p+2}\cdots N_{q-1}) L_q.$$
By Lemma \ref{lemma-HT} (iv), $(\mbf{N}_{p,q}, \mbf{B}_{p,q}, \mbf{L}_{p,q})$ is a Hadamard triple.
We construct a sequence of finite subsets $\Lambda_i \sse \Z$ for $i \ge 1$ by induction.
Let $m_1 = n_1$ and $\Lambda_1 = \mbf{L}_{0, m_1}$. Note that $0\in \Lambda_1$ and $\Lambda_1$ is a spectrum of $\mu_{m_1}$.
For $i \ge 2$, suppose that $\Lambda_{i-1}$ has been defined, and
we choose a sufficiently large element $m_i$ in the sequence $\set{n_j}$ such that $m_i > m_{i-1}$ and for all $ \lambda \in \Lambda_{i-1}$,
\begin{equation}\label{ineqla}
\left| \frac{\lambda}{N_1 N_2 \cdots N_{m_i}} \right| < \frac{\delta}{2}.
\end{equation}
Now we define
\begin{equation} \label{defLam}
\Lambda_i = \Lambda_{i-1} + \mbf{N}_{0,m_{i-1}} \set{ \lambda + k_{\lambda,i} \cdot \mbf{N}_{m_{i-1}, m_i} : \lambda \in \mbf{L}_{m_{i-1}, m_i} },
\end{equation}
where, by the equi-positivity of $\{ \nu_{>n_j} \}$, the integers $k_{\lambda,i}\in \Z$ are chosen to satisfy
\begin{equation}\label{lowerbound}
\left| \wh{\nu}_{>m_i}\left( \frac{\lambda}{N_{m_{i-1}+1} \cdots N_{m_i}} + y + k_{\lambda,i} \right) \right|\ge \ep,
\end{equation}
for all $ |y|<\delta$, and $k_{\lambda,i} =0$ for $\lambda=0$.
Note that $\mu_{m_{i-1}} = \delta_{\mbf{N}_{0,m_{i-1}}^{-1} \mbf{B}_{0,m_{i-1}}}$ is a spectral measure with a spectrum $\Lambda_{i-1}$, and
$$\mu_{m_i} = \delta_{\mbf{N}_{0,m_{i-1}}^{-1} \mbf{B}_{0,m_{i-1}}} * \delta_{(\mbf{N}_{0,m_{i-1}} \mbf{N}_{m_{i-1}, m_i})^{-1} \mbf{B}_{m_{i-1},m_i}}. $$
By Lemma \ref{lemma-HT} (iii) and (iv), the set $\Lambda_i$ is a spectrum of $\mu_{m_i}$. Since $0\in \mbf{L}_{m_{i-1}, m_i}$ and $0\in \Lambda_{i-1}$, it is clear that $0\in \Lambda_i$ and $\Lambda_{i-1} \sse \Lambda_i$.
We write $$\Lambda = \bigcup_{i=1}^\f \Lambda_i,$$ and we prove that $\Lambda $ is a spectrum of $\mu$. By Theorem~\ref{criterion}, it is equivalent to show that
for each $ \xi \in \R$,
$$Q(\xi) = \sum_{\lambda \in \Lambda} |\wh{\mu}(\lambda + \xi)|^2=1.$$
For each $\xi \in \R$, since $\Lambda_i$ is a spectrum of $\mu_{m_i}$, by Theorem~\ref{criterion}, we have
\begin{equation}\label{n_i=1}
\sum_{\lambda \in \Lambda_i} |\wh{\mu}_{m_i}(\lambda + \xi)|^2 =1.
\end{equation}
It follows that
\begin{eqnarray*}
\sum_{\lambda \in \Lambda_i} \big |\wh{\mu}(\lambda + \xi)\big|^2 &=& \sum_{\lambda \in \Lambda_i} \big|\wh{\mu}_{m_i}(\lambda + \xi)\big|^2 \big|\wh{\mu}_{>m_i}(\lambda + \xi)\big|^2 \\
&\le& \sum_{\lambda \in \Lambda_i} \big|\wh{\mu}_{m_i}(\lambda + \xi)\big|^2 \\
&\le&1.
\end{eqnarray*}
Letting $i \to \f$, we obtain that
\begin{equation}\label{ineqQ}
Q(\xi) \le 1,
\end{equation}
for all $\xi \in \R$.
Fix $\xi\in \R$. For each $\lambda \in \Lambda$, we define
$$f(\lambda) = |\wh{\mu}(\lambda+ \xi)|^2,$$
and for $i \ge 1$,
$$ f_i(\lambda) =
\begin{cases}
|\wh{\mu}_{m_i}(\xi+\lambda)|^2, & \mbox{if } \lambda \in \Lambda_i; \\
0, & \mbox{if } \lambda \in \Lambda \sm \Lambda_i.
\end{cases}
$$
For each $\lambda \in \Lambda$, there exists $i_0 \ge 1$ such that $\lambda \in \Lambda_i$ for $i \ge i_0$, and it follows that $$ \lim_{i \to \f} f_{i}(\lambda) = \lim_{i \to \f} |\wh{\mu}_{m_i}(\xi+\lambda)|^2 =f(\lambda). $$
Choose an integer $i_0 \ge 1$ sufficiently large such that
\begin{equation}\label{ineqxi}
\left| \frac{\xi}{N_1 N_2 \cdots N_{m_{i_0}}} \right| < \frac{\delta}{2}.
\end{equation}
For each $\lambda \in \Lambda_i$ where $i > i_0$, by~\eqref{defLam}, we have that
$$\lambda= \lambda_1 + (N_1 N_2 \cdots N_{m_{i-1}})\lambda_2 + (N_1 N_2 \cdots N_{m_i}) k_{\lambda_2,i},$$
where $\lambda_1 \in \Lambda_{i-1}$ and $\lambda_2\in \mbf{L}_{m_{i-1}, m_i}$.
By ~\eqref{ineqla} and \eqref{ineqxi}, we have that
$$\Big|\frac{\lambda_1 + \xi}{N_1 N_2 \cdots N_{m_i}}\Big| < \delta.$$
It follows from \eqref{lowerbound} that
\begin{align*}
f(\lambda) & = |\wh{\mu}(\lambda + \xi)|^2 = \big|\wh{\mu}_{m_i}(\lambda + \xi)\big|^2 \big|\wh{\mu}_{>m_i}(\lambda + \xi)\big|^2 \\
&=\big|\wh{\mu}_{m_i}(\lambda + \xi)\big|^2 \left| \wh{\nu}_{>m_i}\left( \frac{\lambda + \xi}{N_1 N_2 \cdots N_{m_i}} \right) \right|^2 \\
& = \big|\wh{\mu}_{m_i}(\lambda + \xi)\big|^2 \left| \wh{\nu}_{>m_i}\left( \frac{\lambda_2}{N_{m_{i-1}+1} \cdots N_{m_i}} + \frac{\lambda_1 + \xi}{N_1 N_2 \cdots N_{m_i}} + k_{\lambda_2,i} \right) \right|^2 \\
& \ge \ep^2 f_i(\lambda).
\end{align*}
Therefore, for $i > i_0$, we have $$f_i(\lambda) \le \ep^{-2} f(\lambda),$$
for all $\lambda \in \Lambda.$ Let $\rho$ be the counting measure on the set $\Lambda$. We have that
$$ \int_\Lambda f(\lambda) \D \rho(\lambda) = \sum_{\lambda \in \Lambda} |\wh{\mu}(\lambda + \xi)|^2 = Q(\xi) .
$$
By ~\eqref{ineqQ}, $f(\lambda)$ is integrable with respect to the counting measure $\rho$.
Applying the dominated convergence theorem and \eqref{n_i=1}, we obtain that
\begin{eqnarray*}
Q(\xi) &=& \lim_{i \to \f} \int_\Lambda f_i(\lambda) \D \rho(\lambda) \\
&=& \lim_{i \to \f} \sum_{\lambda \in \Lambda_i} |\wh{\mu}_{m_i}(\lambda + \xi)|^2 \\
&=&1.
\end{eqnarray*}
Hence, by Theorem~\ref{criterion}, the family $\set{e_\lambda(x): \lambda \in \Lambda}$ is an orthonormal basis in $L^2(\mu)$, and $\mu$ is a spectral measure.
\end{proof}
\subsection{Admissibility}
The equi-positivity condition is rather technical, and it is not easy to check. Therefore, the admissible family is introduced to guarantee the existence of an equi-positive family. For $\mu \in \mcal{P}(\R^d)$, we write
\begin{equation}\label{integral-periodic-zero}
\mcal{Z}(\mu) = \set{\xi \in \R^d: \wh{\mu}(\xi+k) = 0 \text{ for all } k \in \Z^d},
\end{equation}
for the {\it integral periodic zero set} of $\mu$.
For $\Phi \sse \mcal{P}(\R^d)$, we write
$$
\mathrm{cl}(\Phi) = \set{ \mu \in \mcal{P}(\R^d): \text{ there exists $\set{\mu_n} \sse \Phi$ such that $\mu_n$ converges weakly to $\mu$} },$$
for the closure of $\Phi$ with respect to the weak topology on $\mcal{P}(\R^d)$.
\begin{definition}
Let $\Phi \sse \mcal{P}(\R^d)$. We call $\Phi$ an admissible family if $\mcal{Z}(\mu) = \emptyset$ for every $\mu\in \mathrm{cl}(\Phi)$.
\end{definition}
The following theorem shows that admissibility implies equi-positivity under the tightness condition. The proof is similar to the one in \cite{An-Fu-Lai-2019}.
\begin{theorem}\label{admissible-to-equipositive}
Suppose that $\Phi \sse \mcal{P}(\R^d)$ is tight.
If $\Phi$ is an admissible family, then $\Phi$ is equi-positive.
\end{theorem}
\begin{proof}
We first claim that for each $x\in [0,1]^d$, there exists $\ep_x>0$ such that for all $\mu\in \Phi$ we have $$ \sup\set{ |\wh{\mu}(x+k)|: \; k \in \Z^d } > \ep_x. $$
Suppose that we may find $x_0 \in [0,1]^d$ such that for each $n \ge 1$ there exists $\mu_n \in \Phi$ satisfying $$ \sup\set{ |\wh{\mu}_n(x_0+k)|: \; k \in \Z^d } \le \frac{1}{n}. $$
Since $\{ \mu_n \} \sse \Phi$ and $\Phi$ is tight, by Theorem \ref{weak-compactness}, there exists a subsequence $\{ \mu_{n_j}\}$ and a Borel probability measure $\mu \in \mcal{P}(\R^d)$ such that $\mu_{n_j}$ converges weakly to $\mu$.
It follows from Theorem \ref{equivalent-condition-weak-converge} that for every $k \in \Z^d$ we have $$ \wh{\mu}(x_0+k) = \lim_{j \to \f} \wh{\mu}_{n_j}(x_0+k) = 0. $$
This implies $x_0\in \mcal{Z}(\mu)$.
This contradicts $\mcal{Z}(\mu)=\emptyset$, which holds since $\Phi$ is admissible and $\mu \in \mathrm{cl}(\Phi)$.
Therefore, for $x\in [0,1]^d$ and $\mu \in \Phi$, there exists an integral vector $k_{x,\mu}\in \Z^d$ such that $$ |\wh{\mu}(x+k_{x,\mu})| > \ep_x . $$
By Lemma \ref{equicontinuous}, the family $\set{\wh{\mu}: \mu \in \Phi}$ is equicontinuous.
Thus, for each $\ep_x>0$, there exists $\delta_x>0$ such that
$$ |\wh{\mu}(y_1) -\wh{\mu}(y_2)| <\frac{\ep_x}{2}, $$ for all $ \mu\in \Phi$ and all $|y_1 - y_2|<\delta_x$.
It follows that for $\mu \in \Phi$ and $|y| < \delta_x$,
\begin{equation}\label{ineqmue}
|\wh{\mu}(x+y+k_{x,\mu})| \ge |\wh{\mu}(x+k_{x,\mu})| - |\wh{\mu}(x+y+k_{x,\mu}) - \wh{\mu}(x+k_{x,\mu})| > \frac{\ep_x}{2}.
\end{equation}
Since $[0,1]^d$ is compact and
$$ [0,1]^d \sse \bigcup_{x\in [0,1]^d} B(x,\delta_x/2), $$
there exist finitely many $x_1, x_2, \cdots, x_p \in [0,1]^d$ such that $$ [0,1]^d \sse \bigcup_{j=1}^p B(x_j,\delta_{x_j}/2). $$
Let $ \ep = \min \set{ \ep_{x_j}/2: j=1,2,\cdots, p }$ and $\delta = \min\{ \delta_{x_j}/2: j=1,2,\cdots,p \}$.
For each $x\in [0,1)^d \sm \set{0}$, we may find some $1\le j \le p$ such that $x\in B(x_j,\delta_{x_j}/2) $.
For $\mu \in \Phi$ and $|y| < \delta$, we have $| x- x_j + y | < \delta_{x_j}$, and, by \eqref{ineqmue}, it follows that
\begin{eqnarray*}
|\wh{\mu}(x+y+k_{x_j,\mu})| &=& \left|\wh{\mu}\big( x_j + (x-x_j+y)+k_{x_j,\mu}\big) \right| \\
&\ge& \frac{\ep_{x_j}}{2} \geq \ep.
\end{eqnarray*}
Thus, we set $k_{x,\mu}=k_{x_j,\mu}$.
For $x=0$, noting that $|y| < \delta$ implies $|y| < \delta_{x_1}$, we have that
$$ |\wh{\mu}(y)| \ge |\wh{\mu}(0)| - |\wh{\mu}(0) -\wh{\mu}(y)| \ge 1- \frac{\ep_{x_1}}{2} \ge \ep ,$$
for $\mu \in \Phi$ and $|y| < \delta$. Therefore, we set $k_{x,\mu}=0$ for $x=0$, and the conclusion holds.
\end{proof}
The next theorem is a direct consequence of Theorem~\ref{general-result} and Theorem \ref{admissible-to-equipositive}.
\begin{theorem}\label{thm-adm}
Let $\set{(N_n, B_n , L_n): n \ge 1}$ be a sequence of Hadamard triples in $\R$. Let the probability measure $\mu$ be the weak limit of $\mu_n$ given by \eqref{mu-n}.
If there exists a subsequence of positive integers $\{ n_j \}$ such that the family $\{ \nu_{>n_j} \}$ is tight and admissible, then $\mu$ is a spectral measure.
\end{theorem}
\section{The proof of Theorem \ref{main-result}}\label{sec_pf}
For a finite subset $B \sse \Z$, we set
$$
M_{B}(\xi) = \frac{1}{\# B} \sum_{b \in B} e^{-2\pi i b \xi}.
$$
In fact, $M_B$ is the Fourier transform of the discrete measure $\delta_B$.
If $(N,B,L)$ is a Hadamard triple in $\R$, then the set $L$ is a spectrum of the discrete measure $\delta_{N^{-1}B}$. Since $\wh{\delta}_{N^{-1} B}(\xi) = M_B(\xi/N)$, by Theorem \ref{criterion}, we have that for all $\xi \in \R$,
$$
\sum_{\ell\in L} \left| M_{B}\left( \frac{\xi+\ell}{N}\right) \right|^2 = 1.
$$
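This identity is another finite computation that is easy to check numerically; the sketch below (ours) does so for the triple $(4,\set{0,2},\set{0,1})$ at a few arbitrary points $\xi$.
\begin{verbatim}
import cmath

N, B, L = 4, [0, 2], [0, 1]  # a Hadamard triple (illustrative choice)

def M(Bset, xi):
    # Mask function M_B: the Fourier transform of delta_B.
    return sum(cmath.exp(-2j * cmath.pi * b * xi) for b in Bset) / len(Bset)

for xi in (0.0, 0.25, 1.9, -3.3):
    total = sum(abs(M(B, (xi + ell) / N)) ** 2 for ell in L)
    print(f"xi = {xi}: sum over L = {total:.12f}")  # equals 1 for every xi
\end{verbatim}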
Recall that $\set{(N_j, B_j, L_j): 1 \le j \le m}$ is a set of finitely many Hadamard triples in $\R$, and for $ \omega\in \Omega$,
$$ \mu_\omega = \delta_{N_{\omega_1}^{-1} B_{\omega_1}} * \delta_{(N_{\omega_1} N_{\omega_2})^{-1} B_{\omega_2}} * \cdots * \delta_{(N_{\omega_1} N_{\omega_2}\cdots N_{\omega_k})^{-1} B_{\omega_k} }* \cdots. $$
The Fourier transform of $ \mu_\omega$ is given by
$$
\wh{\mu}_{\omega}(\xi) = \wh{\mu}_{\sigma^n(\omega)}\big( (N_{\omega_1} \cdots N_{\omega_n})^{-1}\xi \big) \prod_{k=1}^n M_{B_{\omega_k}}\big( (N_{\omega_1} \cdots N_{\omega_k})^{-1}\xi\big).$$
For a function $f:\R \to \C$, we denote the zero set of $f$ by
$$\mcal{O}(f)=\set{x\in \R: f(x) =0} .
$$
The next lemma shows that the family $\{\wh{\mu}_\omega\}$ has only finitely many zeros in every bounded interval.
\begin{lemma}\label{lemma-finite-set}
For every $h >0$, the set
\begin{equation}\label{set-finite-1}
[-h,h] \cap \left( \bigcup_{\omega \in \Omega} \mcal{O}(\wh{\mu}_\omega) \right)
\end{equation}
is finite.
\end{lemma}
\begin{proof}
Since there are only finitely many Hadamard triples, we may find a compact subset $K \sse \R$ such that $\mu_\omega \in \mcal{P}(K)$ for all $\omega \in \Omega$. Therefore, the family $\{\mu_\omega: \omega \in \Omega\}$ is tight. By Lemma \ref{equicontinuous}, we have that $\{\wh{\mu}_\omega: \omega \in \Omega\}$ is equicontinuous.
Since $\wh{\mu}_\omega(0)=1$ for every $\omega \in \Omega$, there exists $\delta>0 $ such that for all $ \omega \in \Omega$ and all $|y|<\delta$, $$|\wh{\mu}_\omega(y)| \ge 1/2.$$
Suppose that $\wh{\mu}_{\omega}(\xi)=0$. We choose a sufficiently large integer $n$ such that $$|(N_{\omega_1} \cdots N_{\omega_n})^{-1}\xi| < \delta.$$
Note that
$$ \wh{\mu}_{\omega}(\xi) = \wh{\mu}_{\sigma^n(\omega)}\big( (N_{\omega_1} \cdots N_{\omega_n})^{-1}\xi \big) \prod_{k=1}^n M_{B_{\omega_k}}\big( (N_{\omega_1} \cdots N_{\omega_k})^{-1}\xi\big).$$
Thus, we have $M_{B_{\omega_k}}\big( (N_{\omega_1} \cdots N_{\omega_k})^{-1}\xi\big)=0$ for some $k \ge 1$.
Therefore, $\wh{\mu}_{\omega}(\xi)=0$ if and only if there exists $k \ge 1$ such that
$$
M_{B_{\omega_k}}\big( (N_{\omega_1} \cdots N_{\omega_k})^{-1}\xi\big) =0.
$$
It follows that
\begin{align*}
\mcal{O}(\wh{\mu}_\omega)& = \bigcup_{k=1}^\f N_{\omega_1} N_{\omega_2} \cdots N_{\omega_k} \mcal{O}(M_{B_{\omega_k}}) \\
& \sse \bigcup_{j=1}^m \bigcup_{k=1}^\f N_{\omega_1} N_{\omega_2} \cdots N_{\omega_k} \mcal{O}(M_{B_j}) \\
& \sse \bigcup_{j=1}^m \bigcup_{k_1 =0}^\f \bigcup_{k_2=0}^\f \cdots \bigcup_{k_m =0}^\f N_1^{k_1} N_2^{k_2} \cdots N_m^{k_m} \mcal{O}(M_{B_j}).
\end{align*}
Therefore, it suffices to show that for every $1\le j \le m$, the set
\begin{equation}\label{set-finite-2}
[-h, h] \cap \left( \bigcup_{k_1 =0}^\f \bigcup_{k_2=0}^\f \cdots \bigcup_{k_m =0}^\f N_1^{k_1} N_2^{k_2} \cdots N_m^{k_m} \mcal{O}(M_{B_j}) \right)
\end{equation}
is finite.
Since $M_{B_j}$ extends to an entire function on the complex plane, the set $\mcal{O}(M_{B_j})$ is a discrete subset of $\R$.
Noting that $0 \not \in \mcal{O}(M_{B_j})$, we may find $\delta_j >0$ such that
$$[-\delta_j,\delta_j] \cap \mcal{O}(M_{B_j}) = \emptyset.$$
If $k_1 + k_2 +\cdots+ k_m > \log(h/\delta_j) / \log 2$, then we have
$$
|N_1^{k_1} N_2^{k_2} \cdots N_m^{k_m}| \delta_j \ge 2^{k_1 +k_2 + \cdots+ k_m} \delta_j > h.
$$
It follows that $[-h,h] \cap \big( N_1^{k_1} N_2^{k_2} \cdots N_m^{k_m} \mcal{O}(M_{B_j}) \big) =\emptyset $.
Therefore, there are only finitely many $m$-tuples $(k_1, k_2, \cdots, k_m)$ such that
$$[-h,h] \cap \big( N_1^{k_1} N_2^{k_2} \cdots N_m^{k_m} \mcal{O}(M_{B_j}) \big) \ne \emptyset. $$
Since $\mcal{O}(M_{B_j})$ is a discrete set in $\R$,
we conclude that the set in (\ref{set-finite-2}) is finite, and this completes the proof.
\end{proof}
\begin{proposition}\label{periodic-zero-set}
Suppose that $\gcd(B_j - B_j) =1$ for $1\le j \le m$.
Then we have $\mcal{Z}(\mu_\omega) = \emptyset$ for every $\omega \in \Omega$.
\end{proposition}
\begin{proof}
For $a\in \R$, recall that $\wh{\mu*\delta_a}(\xi) = e^{-2\pi i a \xi} \wh{\mu}(\xi)$, and we have that $\mcal{Z}(\mu) = \mcal{Z}(\mu*\delta_a)$.
This implies that translation does not change the integral periodic zero set of a measure.
Let $\wt{B}_j$ be the translation of $B_j$ such that $0\in \wt{B}_j $ for $1 \le j \le m$.
Let $\wt{L}_j$ be the subset of $\set{0,1,\cdots, |N_j|-1}$ such that $\wt{L}_j \equiv L_j \pmod{ N_j\Z}$ for $1\le j \le m$. By Lemma~\ref{lemma-HT} (i) and (iii), $(N_j,\wt{B}_j ,\wt{L}_j)$ is still a Hadamard triple for $1 \le j \le m$. Let $\mu_\omega$ and $\wt{\mu}_\omega$ be the random convolutions generated by $\{(N_j, B_j , L_j), 1\leq j \leq m\}$ and $\{(N_j,\wt{B}_j ,\wt{L}_j), 1\leq j \leq m\}$, respectively.
Clearly we have that $\mcal{Z}(\mu_\omega) = \mcal{Z}(\wt{\mu}_\omega) $.
Therefore, for simplicity, we assume that $0\in B_j$ and $L_j \sse \set{0,1,\cdots, |N_j|-1}$ for $1 \le j \le m$.
We prove this proposition by contradiction.
Suppose that there exists $\omega \in \Omega$ such that $\mcal{Z}(\mu_\omega) \ne \emptyset$. For $\ell \in L_j$, we write
$$
\tau_{\ell,j}(x) = N_j^{-1}(x+\ell).
$$
Arbitrarily choose $\xi_0 \in \mcal{Z}(\mu_\omega)$ and set $Y_0 = \set{\xi_0}$. For $n \ge 1$, we define
$$ Y_n = \set{ \tau_{\ell,\omega_n}(\xi) : \;\xi \in Y_{n-1}, \; \ell \in L_{\omega_n}, \; M_{B_{\omega_n}}\big( \tau_{\ell,\omega_n}(\xi) \big) \ne 0 }.
$$
First, we show that for each $n \ge 1$,
$$ \# Y_{n-1} \le \# Y_n. $$
Since for each $ \xi \in Y_{n-1}$,
$$ \sum_{\ell \in L_{\omega_n} } \left| M_{B_{\omega_n}}\big( \tau_{\ell,\omega_n}(\xi) \big) \right|^2 =1,$$
there exists at least one element $\ell \in L_{\omega_n}$ such that $M_{B_{\omega_n}}\big( \tau_{\ell,\omega_n}(\xi) \big) \ne 0$.
On the other hand, for distinct $\ell_1 \ell_2 \cdots \ell_n \ne \ell_1' \ell_2' \cdots \ell_n'$ where $\ell_j, \ell_j' \in L_{\omega_j}$, we have
$$\tau_{\ell_n,\omega_n} \circ \cdots \circ \tau_{\ell_2,\omega_2} \circ \tau_{\ell_1,\omega_1} (\xi_0) \ne \tau_{\ell_n',\omega_n} \circ \cdots \circ \tau_{\ell_2',\omega_2} \circ \tau_{\ell_1',\omega_1}(\xi_0).$$
Otherwise, $$ \frac{ \xi_0 + \ell_1 + N_{\omega_1} \ell_2 + \cdots + N_{\omega_1} \cdots N_{\omega_{n-1}} \ell_n }{ N_{\omega_1} N_{\omega_2} \cdots N_{\omega_n} } = \frac{ \xi_0 + \ell_1' + N_{\omega_1} \ell_2' + \cdots + N_{\omega_1} \cdots N_{\omega_{n-1}} \ell_n' }{N_{\omega_1} N_{\omega_2} \cdots N_{\omega_n}}, $$
that is, $$ \ell_1 + N_{\omega_1} \ell_2 + \cdots + N_{\omega_1} N_{\omega_2} \cdots N_{\omega_{n-1}} \ell_n = \ell_1' + N_{\omega_1} \ell_2' + \cdots + N_{\omega_1} N_{\omega_2} \cdots N_{\omega_{n-1}} \ell_n'.$$
Let $j_0 = \min\set{1 \le j \le n: \ell_j \ne \ell_j'}$.
Then we have $N_{\omega_{j_0}} \mid \ell_{j_0} - \ell_{j_0}'$.
But by Lemma \ref{lemma-HT} (ii), the elements in $L_{\omega_{j}}$ are in distinct residue classes modulo $N_{\omega_j} \Z$. This leads to a contradiction.
Therefore, we conclude that $\# Y_{n-1} \le \# Y_n$ for $n \ge 1$.
Next, we prove that for each $n \ge 0$,
$$ Y_n \sse \mcal{Z}(\mu_{\sigma^n(\omega)}) .$$
For $n=0$, it is clear that $Y_0 \sse \mcal{Z}(\mu_\omega)$. For $n \ge 1$, we assume that $Y_{n-1} \sse \mcal{Z}(\mu_{\sigma^{n-1}(\omega)})$.
For each $\tau_{\ell,\omega_n}(\xi) \in Y_{n}$ where $\xi \in Y_{n-1}$, $\ell \in L_{\omega_n}$, and $M_{B_{\omega_n}}\big( \tau_{\ell,\omega_n}(\xi) \big) \ne 0$, we have that for every $k \in \Z$,
\begin{align*}
0 & = \wh{\mu}_{\sigma^{n-1}(\omega)}(\xi + \ell + N_{\omega_n} k) \\
&= M_{B_{\omega_{n}}}\left( \frac{\xi + \ell}{ N_{\omega_n} } +k \right) \wh{\mu}_{\sigma^{n}(\omega)} \left( \frac{\xi + \ell}{N_{\omega_n}} +k \right)\\
& = M_{B_{\omega_{n}}}\big( \tau_{\ell,\omega_n}(\xi) \big) \wh{\mu}_{\sigma^{n}(\omega)} \big( \tau_{\ell,\omega_n}(\xi) +k \big),
\end{align*}
where the last equality follows from the integral periodicity of $M_{B_{\omega_n}}$.
Since $M_{B_{\omega_{n}}}\big( \tau_{\ell,\omega_n}(\xi) \big) \ne 0$, we have that
$$ \wh{\mu}_{\sigma^{n}(\omega)} \big( \tau_{\ell,\omega_n}(\xi) +k \big) =0 ,
$$ for all $k \in \Z$.
This implies that $\tau_{\ell,\omega_n}(\xi) \in \mcal{Z}(\mu_{\sigma^{n}(\omega)})$. Thus, we have $Y_{n} \sse \mcal{Z}(\mu_{\sigma^{n}(\omega)})$.
By induction, we obtain that $Y_n \sse \mcal{Z}(\mu_{\sigma^n(\omega)})$ for all $n \ge 0$.
Finally, we use the increasing cardinality of $Y_n$ and Lemma \ref{lemma-finite-set} to deduce a contradiction.
For every $\xi \in Y_n$, by the definition of $Y_n$, we write $\xi$ as
\begin{eqnarray*}
\xi &=& \tau_{\ell_n,\omega_n} \circ \cdots \circ \tau_{\ell_2,\omega_2} \circ \tau_{\ell_1,\omega_1} (\xi_0) \\
&=& \frac{ \xi_0 + \ell_1 + N_{\omega_1} \ell_2 + \cdots + N_{\omega_1} \cdots N_{\omega_{n-1}} \ell_n }{ N_{\omega_1} N_{\omega_2} \cdots N_{\omega_n} } .
\end{eqnarray*}
Since $|N_{\omega_j}|\geq 2$ and $0\le \ell_j < |N_{\omega_j}|$, we have that
\begin{eqnarray*}
|\xi| &\le& \frac{|\xi_0|}{2^n} + \frac{1}{2^{n-1}} + \frac{1}{2^{n-2}} + \cdots + 1 \\
&\le& |\xi_0| + 2.
\end{eqnarray*}
Let $h= |\xi_0| + 2$. Then $Y_n \sse [-h , h]$ for every $n \ge 0$.
It follows that
\begin{eqnarray*}
Y_n &\sse& [-h,h] \cap \mcal{Z}(\mu_{\sigma^n(\omega)}) \\
&\sse& [-h,h] \cap \left( \bigcup_{\eta \in \Omega} \mcal{O}(\wh{\mu}_\eta) \right).
\end{eqnarray*}
By Lemma \ref{lemma-finite-set}, the set $[-h,h] \cap \left( \bigcup_{\eta \in \Omega} \mcal{O}(\wh{\mu}_\eta) \right)$ is finite. Since the sequence of $\# Y_n$ is increasing, there exists $n_0 \ge 1$ such that $\# Y_n = \# Y_{n+1}$ for all $n>n_0$.
Given $n>n_0$, for every $\xi \in Y_{n}$, there exists a unique $\ell_0 \in L_{\omega_{n+1}}$ such that $M_{B_{\omega_{n+1}}}\big( \tau_{\ell_0,\omega_{n+1}}(\xi) \big) \ne 0$.
Recall that
$$\sum_{\ell \in L_{\omega_{n+1}}} \left| M_{B_{\omega_{n+1}}}\big( \tau_{\ell,\omega_{n+1}}(\xi) \big) \right|^2 =1.
$$
This implies that $$ \left| M_{B_{\omega_{n+1}}}\big( \tau_{\ell_0,\omega_{n+1}}(\xi) \big)\right| = \left| \frac{1}{\# B_{\omega_{n+1}}} \sum_{b\in B_{\omega_{n+1}}} e^{-2\pi i b \tau_{\ell_0,\omega_{n+1}}(\xi) } \right| =1. $$
Since the average of the unimodular numbers $e^{-2\pi i b \tau_{\ell_0,\omega_{n+1}}(\xi)}$, $b\in B_{\omega_{n+1}}$, has modulus one, these numbers must all coincide; as $0 \in B_{\omega_{n+1}}$, they all equal $1$, and hence $b \tau_{\ell_0,\omega_{n+1}}(\xi) \in \Z$ for every $b\in B_{\omega_{n+1}}$.
Since $\gcd(B_{\omega_{n+1}} - B_{\omega_{n+1}}) =1$ and $0\in B_{\omega_{n+1}}$, it is clear that $\gcd(B_{\omega_{n+1}})=1$, which is equivalent to that there exist integers $m_b \in \Z$ for $b \in B_{\omega_{n+1}}$ such that $\sum_{b\in B_{\omega_{n+1}}} m_b b =1$.
This implies that $$ \tau_{\ell_0,\omega_{n+1}}(\xi) = \sum_{b\in B_{\omega_{n+1}}} m_b b\tau_{\ell_0,\omega_{n+1}}(\xi) \in \Z. $$
Since $\wh{\mu}_{\sigma^{n+1}(\omega)}(0) =1$, we have that $\mcal{Z}(\mu_{\sigma^{n+1}(\omega)}) \cap \Z = \emptyset$,
which contradicts the fact
$$\tau_{\ell_0,\omega_{n+1}}(\xi) \in Y_{n+1} \sse \mcal{Z}(\mu_{\sigma^{n+1}(\omega)}).
$$
Therefore, the conclusion holds.
\end{proof}
\begin{proposition}\label{prop_closure}
Let $\Phi=\set{ \mu_\omega: \omega \in \Omega }$. Then we have $\mathrm{cl}(\Phi)=\Phi$.
\end{proposition}
\begin{proof}
For $\omega \in \Omega$ and $n \ge 1$, we write $$ \mu_{\omega,n} = \delta_{N_{\omega_1}^{-1} B_{\omega_1}} * \delta_{(N_{\omega_1} N_{\omega_2})^{-1} B_{\omega_2}} * \cdots * \delta_{(N_{\omega_1} N_{\omega_2}\cdots N_{\omega_n})^{-1} B_{\omega_n} },$$
and $\mu_\omega = \mu_{\omega,n} * \mu_{\omega,>n}$.
Let $ h = 1 + \max\set{ |b|: b \in B_j, 1\le j \le m } $.
It is clear that for each $\omega \in \Omega$ and each $n \ge 1$,
$$ \mathrm{spt}(\mu_{\omega,n}) \sse [-(h-1), h-1], $$
and
$$\mathrm{spt}(\mu_{\omega,>n}) \sse [-2^{-n} h, 2^{-n} h]. $$
Arbitrarily choose a sequence $\{ \mu_{\omega(j)} \}_{j=1}^\infty $ of probability measures in $\Phi$ which converges weakly to a probability measure $\mu \in \mcal{P}(\R)$, where $\{\omega(j)\}_{j=1}^\infty$ is a sequence in $\Omega$ and we write $\omega(j)=\omega_1 (j)\omega_2 (j)\cdots $.
Since $\Omega$ is compact, $\{\omega(j)\}_{j=1}^\infty$ has a convergent subsequence. For simplicity, we assume that $\{\omega(j)\}_{j=1}^\infty$ converges to $\eta=(\eta_k)_{k=1}^\f \in \Omega$.
Next, we prove that $\mu_{\omega(j)}$ converges weakly to $\mu_{\eta}$.
Fix $f \in C_b(\R)$. For each $\ep>0$, since $f$ is uniformly continuous on the interval $[-h,h]$, there exists a real number $\delta$ satisfying $0<\delta < 1$ such that for all $x,y \in [-h,h]$ with $|x-y| <\delta$, we have $$ |f(x) - f(y)| < \frac{\ep}{2}. $$
Then we choose a sufficiently large integer $n$ such that $2^{-n} h < \delta$.
For every $\omega \in \Omega$, we have that
\begin{align*}
\int_{\R} f(x) \D \mu_{\omega}(x) & = \int_{\R} f(x) \D \mu_{\omega,n}* \mu_{\omega,>n} (x) \\
&= \int_{\R^2} f(x+y) \D \mu_{\omega,n}\times \mu_{\omega,>n} (x,y) \\
& = \int_{\R} \int_{\R} f(x+y) \D\mu_{\omega,>n}(y) \D \mu_{\omega,n}(x).
\end{align*}
Hence, by the uniform continuity of $f$ on $[-h,h]$, we get that
\begin{eqnarray*}
&& \left| \int_{\R} f(x) \D \mu_{\omega}(x) - \int_{\R} f(x) \D \mu_{\omega,n}(x) \right| \\
&=& \left| \int_{\R} \int_{\R} \big( f(x+y)- f(x) \big) \D\mu_{\omega,>n}(y) \D \mu_{\omega,n}(x) \right| \\
&\le& \int_{-(h-1)}^{h-1} \int_{-2^{-n}h}^{2^{-n}h} \left| f(x+y)- f(x) \right| \D\mu_{\omega,>n}(y) \D \mu_{\omega,n}(x) \\
&\le& \frac{\ep}{2}.
\end{eqnarray*}
Since the sequence $\set{\omega(j)}$ converges to $\eta$, there exists an integer $j_0\ge 1$ such that for all $j \ge j_0$,
$$ \omega_1(j) \omega_2(j) \cdots \omega_n(j) = \eta_1 \eta_2 \cdots \eta_n.
$$
Hence, for $j\ge j_0$, we have that
$$ \left| \int_{\R} f \D \mu_{\omega(j)} - \int_{\R} f \D \mu_{\eta} \right| \le
\left| \int_{\R} f \D \mu_{\omega(j)} - \int_{\R} f \D \mu_{\omega(j),n} \right| + \left| \int_{\R} f \D \mu_{\eta} - \int_{\R} f \D \mu_{\eta,n} \right| \le \ep. $$
It follows that $$ \lim_{j \to \f} \int_{\R} f \D \mu_{\omega(j)} = \int_{\R} f \D \mu_{\eta},$$
for all $ f \in C_b(\R)$. Therefore, $\mu_{\omega(j)}$ converges weakly to $\mu_{\eta}$.
By the uniqueness of weak limit, we have that $\mu = \mu_\eta \in \Phi$, and the conclusion holds.
\end{proof}
\begin{corollary}\label{corollary}
Suppose that $\gcd(B_j - B_j) =1$ for $1\le j \le m$.
Then the family $\Phi=\set{ \mu_\omega: \omega \in \Omega }$ is admissible, and thus, is equi-positive.
\end{corollary}
\begin{proof}
By Proposition~\ref{periodic-zero-set} and Proposition~\ref{prop_closure}, we have that $\Phi=\set{ \mu_\omega: \omega \in \Omega }$ is admissible. Since there are only finitely many Hadamard triples, we may find a compact subset $K \sse \R$ such that $\mu_\omega \in \mcal{P}(K)$ for all $\omega \in \Omega$. This implies that $\Phi=\set{ \mu_\omega: \omega \in \Omega }$ is tight.
By Theorem~\ref{admissible-to-equipositive}, we have that $\Phi=\set{ \mu_\omega: \omega \in \Omega }$ is equi-positive.
\end{proof}
Finally, we are ready to prove the main theorem.
\begin{proof}[The proof of Theorem \ref{main-result}]
Fix a sequence of positive integers $\set{n_k}$ and $\omega \in \Omega$,
and write $$\mu = \mu_{\omega,\set{n_k}} = \delta_{N_{\omega_1}^{-n_1} B_{\omega_1}} * \delta_{N_{\omega_1}^{-n_1} N_{\omega_2}^{-n_2} B_{\omega_2}} * \cdots * \delta_{N_{\omega_1}^{-n_1} N_{\omega_2}^{-n_2} \cdots N_{\omega_k}^{-n_k} B_{\omega_k} }* \cdots . $$
Note that $\mu$ is the corresponding infinite convolution generated by the sequence of Hadamard triples
$\set{\big( (N_{\omega_k})^{n_k}, B_{\omega_k}, (N_{\omega_k})^{n_k-1} L_{\omega_k} \big): k \ge 1}$.
Recall the notations in (\ref{mu-infinite-convolution}) and (\ref{nu-large-than-n}), and we have $$ \nu_{>k} = \delta_{ N_{\omega_{k+1}}^{-n_{k+1}} B_{\omega_{k+1}} } * \delta_{ N_{\omega_{k+1}}^{-n_{k+1}} N_{\omega_{k+2}}^{-n_{k+2}} B_{\omega_{k+2}} } * \cdots.$$
Let $\eta = \omega_1^{n_1} \omega_2^{n_2} \omega_3^{n_3}\cdots$, that is, $\eta_{j} = \omega_k$ for $n_1 + \cdots + n_{k-1} < j \le n_1 +\cdots + n_k$.
By Lemma \ref{convolution-weak-convergence}, we factor the measure $\mu_\eta$ into $\mu_\eta = \mu * \rho$ for some $\rho\in \mcal{P}(\R)$. (The measure $\rho$ is also an infinite convolution, but we do not need the precise formula of $\rho$ in our proof.)
It is also easy to check that $\mu_{\sigma^{n_1 + \cdots + n_k}(\eta)} = \nu_{>k} * \rho_k$ for some $\rho_k\in \mcal{P}(\R)$.
Thus, for each $\xi \in \R$, we have
\begin{equation}\label{eq-4-3}
|\wh{\mu}_{\sigma^{n_1 + \cdots + n_k}(\eta)}(\xi)| = |\wh{\nu}_{>k}(\xi) \wh{\rho}_k(\xi)| \le |\wh{\nu}_{>k}(\xi)|.
\end{equation}
By Corollary \ref{corollary}, the family $\{ \mu_{\sigma^{n_1 + \cdots + n_k}(\eta)} \}_{k=1}^\f$ is equi-positive.
It follows from Definition \ref{def-equipositive} and (\ref{eq-4-3}) that the family $\{ \nu_{>k} \}_{k=1}^\f$ is also equi-positive.
Therefore, by Theorem \ref{general-result}, $\mu = \mu_{\omega,\set{n_k}}$ is a spectral measure.
\end{proof}
\section*{Acknowledgements}
The authors thank Prof.\ Xingang He and Lixiang An for their helpful comments.
Wenxia Li is supported by NSFC No. 12071148, 11971079 and Science and Technology Commission of Shanghai Municipality (STCSM) No.~18dz2271000.
Jun Jie Miao is partially supported by Shanghai Key Laboratory of PMMP (18dz2271000).
|
2,869,038,155,066 | arxiv | \section{Algorithms}
\label{sec:algo}
\subsection{A Reduction to the Infinite Window Case}
\label{sec:reduction}
In this section we design sliding window algorithms by a generic reduction to standard infinite window streaming algorithms. Our reduction can work with any standard streaming algorithms that satisfy some natural properties. Throughout this section, we assume that the constraint in the submodular maximization problem is hereditary (see the preliminaries for definition).
\begin{definition}
\label{def:good-algo}
Let $\mathcal{A}$ be an algorithm operating on a stream of elements, and $\mathcal{A}(S)$ be the value of the solution found by $\mathcal{A}$ on the stream $S$. We say $\mathcal{A}$ is $c$-compliant if it satisfies the following conditions:
\begin{itemize}
\item {\em (Monotonicity)} If $S_1$ is a prefix of $S_2$ then $\mathcal{A}(S_1) \le \mathcal{A}(S_2)$.
\item {\em ($c$-Approximation)} For any stream $S$, let $\mathtt{OPT}$ be the optimal solution. Then $\mathcal{A}(S) \ge c\cdot f(\mathtt{OPT})$.
\end{itemize}
\end{definition}
The following lemma is similar to the smooth histogram technique introduced in \cite{BO07}. However, the smooth histogram technique only works for problems admitting good approximations: more precisely, it assumes a $(1-\epsilon)$-approximation algorithm with $\epsilon < 1/4$, which is not available for many submodular problems. Our algorithm (Algorithm \ref{alg:reduction}) also works for streaming algorithms \emph{without} the monotonicity property, though monotonicity allows us to get a better approximation. Any streaming algorithm can be modified to satisfy the monotonicity property at the cost of increased update time: after every update, the algorithm computes a solution and keeps track of the best solution seen over time, as in the sketch below.
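A minimal sketch of this modification follows; the \texttt{update}/\texttt{value}/\texttt{solution} interface is an assumption of ours, not one imposed by any particular streaming algorithm.
\begin{verbatim}
class MonotoneWrapper:
    """Force the monotonicity property of Definition 2.1-style compliance
    on any streaming algorithm exposing update(e), value() and solution():
    remember the best solution seen over all prefixes, at the cost of one
    value() evaluation per item.  (Interface names are hypothetical.)"""

    def __init__(self, algo):
        self.algo = algo
        self.best_value = float("-inf")
        self.best_solution = None

    def update(self, e):
        self.algo.update(e)
        v = self.algo.value()
        if v > self.best_value:
            self.best_value = v
            self.best_solution = self.algo.solution()

    def value(self):
        return self.best_value
\end{verbatim}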
\begin{lemma}
\label{lem:smooth}
Let stream $S_1\circ S_2 \circ S_3$ be the concatenation of three (sub)streams $S_1, S_2, S_3$. Given a $c$-compliant algorithm $\mathcal{A}$, if $\mathcal{A}(S_1\circ S_2) \le (1+\epsilon) \mathcal{A}(S_2)$, then $\mathcal{A}(S_2\circ S_3) \ge \frac{c}{2+\epsilon}\cdot f(\mathtt{OPT}_{123})$ where $\mathtt{OPT}_{123}$ is the optimal solution for the stream $S_1\circ S_2 \circ S_3$.
\end{lemma}
\begin{proof}
Let $\mathtt{OPT}_{123}, \mathtt{OPT}_{23}, \mathtt{OPT}_{12}$ be the optimal solution for the streams $S_1\circ S_2\circ S_3, S_2\circ S_3, S_1\circ S_2$, respectively. We have
\begin{equation}
\label{eq:b-1}
\textstyle \frac{1}{c} \cdot \mathcal{A}(S_2\circ S_3) \ge f(\mathtt{OPT}_{23}).
\end{equation}
We also have
\begin{equation}
\label{eq:b-2}
\textstyle f(\mathtt{OPT}_{12}) \le \frac{1}{c} \cdot \mathcal{A}(S_1\circ S_2) \le \frac{1+\epsilon}{c} \cdot \mathcal{A}(S_2) \le \frac{1+\epsilon}{c} \cdot \mathcal{A}(S_2\circ S_3).
\end{equation}
Combining (\ref{eq:b-1}) and (\ref{eq:b-2}), we obtain
\begin{eqnarray*}
\textstyle \frac{2+\epsilon}{c} \cdot \mathcal{A}(S_2 \circ S_3) &\ge& f(\mathtt{OPT}_{12})+f(\mathtt{OPT}_{23}) \\
&\ge& f(\mathtt{OPT}_{123}\cap S_1) + f(\mathtt{OPT}_{123}\cap (S_2\circ S_3)) \\
&\ge& f(\mathtt{OPT}_{123}).
\end{eqnarray*}
The second inequality holds because, the constraint being hereditary, $\mathtt{OPT}_{123}\cap S_1$ and $\mathtt{OPT}_{123}\cap (S_2\circ S_3)$ are feasible solutions for the streams $S_1\circ S_2$ and $S_2\circ S_3$, respectively; the last inequality is the subadditivity of the nonnegative submodular function $f$.
\end{proof}
We can also show a similar lemma for algorithms satisfying the $c$-approximation property but not monotonicity.
\begin{lemma}
\label{lem:smooth2}
Let stream $S_1\circ S_2 \circ S_3$ be the concatenation of three (sub)streams $S_1, S_2, S_3$. Given an algorithm $\mathcal{A}$ with $c$-approximation property, if $\mathcal{A}(S_1\circ S_2) \le (1+\epsilon) \mathcal{A}(S_2)$, then $\mathcal{A}(S_2\circ S_3) \ge \frac{c^2}{c+1+\epsilon}\cdot f(\mathtt{OPT}_{123})$ where $\mathtt{OPT}_{123}$ is the optimal solution for the stream $S_1\circ S_2 \circ S_3$.
\end{lemma}
The proof is similar to that of Lemma \ref{lem:smooth}. The major modification is to use the following inequality, which does not require the monotonicity property, as a replacement for (\ref{eq:b-2}):
\begin{eqnarray*}
\label{eq:b-4}
f(\mathtt{OPT}_{12}) &\le& \frac{1}{c} \cdot \mathcal{A}(S_1\circ S_2) \le \frac{1+\epsilon}{c} \cdot \mathcal{A}(S_2) \\
&\le& \frac{(1+\epsilon)}{c}f(\mathtt{OPT}_{23})
\le \frac{1+\epsilon}{c^2} \mathcal{A}(S_2\circ S_3).
\end{eqnarray*}
We have the following theorem.
\begin{theorem}
\label{thm:reduction}
There is an algorithm for constrained submodular maximization over sliding windows that achieves a $c/(2+\epsilon)$-approximation using $O(s/\epsilon \cdot \log M)$ space and $O(t/\epsilon \cdot \log M)$ update time per item, provided that there is a corresponding $c$-compliant streaming algorithm using $s$ space and $t$ update time per item.
\end{theorem}
\begin{algorithm2e}[t]
\DontPrintSemicolon
\KwIn{$M$: an upper bound of $f(\mathtt{OPT})$ over sliding windows. $W$: the size of the window. $\mathcal{A}$ is an infinite window streaming algorithm which we use as a blackbox.}
\ForEach{new incoming element $e_i$}{
start a new instance $\mathcal{A}^{(i)}$\;
drop all maintained instances $\mathcal{A}^{(y)}$ where $y \le i - W$\;
update all the remaining instances, denoted by $\mathcal{A}^{(t_1)}, \ldots, \mathcal{A}^{(t_u)}$, with $e_i$\;
$j \gets 1$\;
\While{$j < u$}{
$x \gets u$\; \label{ln:a-1}
\While{ $(1+\epsilon) \mathcal{A}^{(t_x)}(e_{t_x}, \ldots, e_i) < \mathcal{A}^{(t_j)}(e_{t_j}, \ldots, e_i)$}{
$x \gets x - 1$
}
Prune all $\mathcal{A}^{(t_v)}$ with $j < v < x$\; \label{ln:a-2}
\If{$x \leq j$}{
$x \gets j + 1$
}
$j \gets x$
}
}
\Return (at query) $\mathcal{A}^{(t_b)}(e_{t_b}, \ldots, e_{\mathtt{now}})$ where $t_b = \min \{t_j \in [\mathtt{now} - W + 1, \mathtt{now}]\ |\ \mathcal{A}^{(t_j)} \text{ is maintained}\}$
\caption{{\small\tt SW-RD}$(\mathcal{A}, W, M)$: A Reduction to Infinite Window Streaming Algorithms}
\label{alg:reduction}
\end{algorithm2e}
The pseudocode of the algorithm is described in Algorithm~\ref{alg:reduction}. We now explain it in words. The algorithm maintains a collection of instances of $\mathcal{A}$ starting at different times $t_1 < t_2 < \ldots < t_u$, which we will denote by $\mathcal{A}^{(t_1)}, \mathcal{A}^{(t_2)}, \ldots, \mathcal{A}^{(t_u)}$. Upon receiving a new element $e_i$, we perform the following operations. We first create a new instance $\mathcal{A}^{(i)}$. Next, we drop those instances of $\mathcal{A}$ that are expired, and update all the remaining instances with the new element $e_i$. Finally we perform a pruning procedure: We start with $t_1$. Let $t_x$ be the maximum time step among all the maintained instances of $\mathcal{A}$ such that $(1+\epsilon) \mathcal{A}^{(t_x)}\ge \mathcal{A}^{(t_1)}$. We prune all the instances $\mathcal{A}^{(t_v)}$ where $1 < v < x$ (Line~\ref{ln:a-1}-\ref{ln:a-2}). We repeat this pruning procedure with (the next unpruned time step) $t_x$ and continue until we reach $t_u$. Note that the purpose of the pruning step is to make the remaining data stream satisfy $\mathcal{A}(S_1\circ S_2) \le (1+\epsilon) \mathcal{A}(S_2)$, so that Lemma \ref{lem:smooth} or Lemma \ref{lem:smooth2} applies.
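To make the bookkeeping concrete, here is a minimal Python sketch of one update step of Algorithm~\ref{alg:reduction}; the instance interface (\texttt{update}, \texttt{value}) and the factory \texttt{new\_instance} are illustrative assumptions, not the paper's implementation.
\begin{verbatim}
# One update step of SW-RD. `instances` maps a start time t to a running
# blackbox instance A^{(t)}; each instance is assumed to expose update(e)
# and value() (hypothetical interfaces).
def sw_rd_update(instances, e, i, W, eps, new_instance):
    instances[i] = new_instance()                 # start A^{(i)}
    for t in [t for t in instances if t <= i - W]:
        del instances[t]                          # drop expired instances
    for algo in instances.values():
        algo.update(e)                            # feed e_i to every instance
    times = sorted(instances)
    j = 0
    while j < len(times) - 1:
        x = len(times) - 1
        # largest x with (1 + eps) * A^{(t_x)} >= A^{(t_j)}
        while (1 + eps) * instances[times[x]].value() \
                < instances[times[j]].value():
            x -= 1
        x = max(x, j + 1)
        for t in times[j + 1:x]:                  # prune strictly between j and x
            del instances[t]
        times = times[:j + 1] + times[x:]         # reindex surviving start times
        j += 1                                    # continue from (old) t_x
    return instances
\end{verbatim}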
The space usage of the algorithm is again easy to bound. Note that after the pruning (we rename the remaining instances as $\mathcal{A}^{(t_1)}, \mathcal{A}^{(t_2)}, \ldots, \mathcal{A}^{(t_u)}$), for each $j = 1, \ldots, u-2$, we have $\mathcal{A}^{(t_j)} > (1+\epsilon) \mathcal{A}^{(t_{j+2})}$. Thus the number of instances of $\mathcal{A}$ is bounded by $O(1/\epsilon \cdot \log M)$ at all time steps. When doing the pruning, we only need to calculate the value of each instance once, so the processing time per item is $O(t/\epsilon \cdot \log M)$.
We next give the proof of correctness. Consider a window $[i - W + 1, i]$, and let $t_b = \min \{t_j \in [i - W + 1, i]\ |\ j = 1, \ldots, u \}$. For this window we report whatever $\mathcal{A}^{(t_b)}$ reports. If $t_b = i - W + 1$, the algorithm is clearly correct since $\mathcal{A}^{(t_b)}$ is a streaming algorithm starting from time $t_b$. We next consider the case when $t_b > i - W + 1$. Let $t_c$ be the time step at which the last $\mathcal{A}^{(t)}$ in $\{\mathcal{A}^{(i - W + 1)}, \ldots, \mathcal{A}^{(t_b-1)}\}$ was pruned, and let $t_a < i - W +1$ be the largest time step before $i-W+1$ such that $\mathcal{A}^{(t_a)}$ is active after the pruning step at time $t_c$. Note that $t_a$ must exist because pruning always happens between two active instances (at Line~\ref{ln:a-2} of the algorithm, we prune strictly between $j$ and $x$). It is clear that $t_a < t_b \le t_c$. Let $S_1 = (e_{t_a}, \ldots, e_{t_b-1})$, $S_2 = (e_{t_b}, \ldots, e_{t_c})$, and $S_3 = (e_{t_c+1}, \ldots, e_i)$. By the pruning rule, we have $(1+\epsilon) \mathcal{A}(S_2) \ge \mathcal{A}(S_1\circ S_2)$.
Plugging in Lemma~\ref{lem:smooth}, we have
\begin{equation}
\label{eq:use-lemma}
\textstyle \mathcal{A}(S_2\circ S_3) \ge \frac{c}{2+\epsilon} \cdot f(\mathtt{OPT}_{123}),
\end{equation}
where $\mathtt{OPT}_{123}$ is the optimal solution for the stream $S_1\circ S_2 \circ S_3$, which includes the window $[i-W+1, i]$.
\smallskip
In \cite{BMKK14} the authors gave a $(1/2-\epsilon)$-compliant algorithm in the standard streaming model for monotone submodular maximization subject to cardinality constraint $k$, using $O(k \log k / \epsilon)$ space and $O(\log k/\epsilon)$ update time per item. We thus have the following corollary.
\begin{corollary}
\label{cor:reduction-kdd}
There is an algorithm for monotone submodular maximization with a cardinality constraint over sliding windows that achieves a $(1/4 -\epsilon)$-approximation using $O(k \log k /\epsilon^2 \cdot \log M)$ words of space and $O(\log k /\epsilon^2 \cdot \log M)$ update time per item, where $k$ is the cardinality constraint.
\end{corollary}
If we drop the requirement of monotonicity, we have the following result. The proof is the same as that for Theorem \ref{thm:reduction}, but uses Lemma \ref{lem:smooth2} instead of Lemma \ref{lem:smooth} in (\ref{eq:use-lemma}).
\begin{theorem}
\label{thm:reduction2}
There is an algorithm for constrained submodular maximization over sliding windows that achieves a $c^2/(c + 1 +\epsilon)$-approximation using $O(s/\epsilon \cdot \log M)$ space and $O(t/\epsilon \cdot \log M)$ update time per item, provided that there is a corresponding $c$-approximation streaming algorithm that uses $s$ space and $t$ update time per item.
\end{theorem}
In~\cite{CK15}, the authors gave a $1/(4p)$-approximation algorithm in the standard streaming model for monotone submodular maximization subject to $p$-matroid constraints using $O(k)$ space, where $k$ is the maximum rank of the $p$-matroids. We thus have:
\begin{corollary}
There is an algorithm for monotone submodular maximization subject to $p$-matroid constraints over sliding windows that achieves a $1/(4p+(1+\epsilon)16p^2)$-approximation using $O(k /\epsilon \cdot \log M)$ words of space, where $k$ is the maximum rank of the $p$-matroids.
\end{corollary}
A similar result can be obtained by plugging in the deterministic approximation algorithm of \cite{CGQ15}.
\section{Applications}
The class of submodular functions contains a broad range of useful functions. Here we discuss two examples that have been used extensively in operations research, machine learning, and data mining. The performance of our algorithms in these settings is discussed in the experiments section.
\subsection{Maximum Coverage}
Let $\S = \{S_1, S_2, \ldots, S_n\}$ be a collection of subsets of $[M] = \{1, 2, \ldots, M\}$. In the Maximum Coverage problem, we want to find \emph{at most} $k$ sets from $\S$ such that the cardinality of their union is maximized. More precisely, we define the utility function as $f(\S') = |\cup_{S\in \S'}S|$, where $\S'$ is a subset of $\S$. It is straightforward to verify that this utility function is monotone submodular. Maximum Coverage is a classical NP-hard optimization problem. In our notation, it can be formulated as
$\argmax_{\S' \subseteq \S,~|\S'| \leq k} f(\S').$
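As a quick illustration, the utility function above can be written directly in Python (our sketch):
\begin{verbatim}
# The maximum-coverage utility f(S') = |union of the chosen sets|.
# E.g. coverage([{1, 2}, {4}]) == 3; monotone and submodular.
def coverage(selected_sets):
    covered = set()
    for s in selected_sets:
        covered |= s          # accumulate the union
    return len(covered)
\end{verbatim}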
\subsection{Active Set Selection in Kernel Machines}
Kernel machines \cite{SS02} are powerful non-parametric learning techniques. They use kernels to reduce non-linear problems to linear tasks that have been well studied. The data set $V = \{x_1, x_2, \ldots, x_n\}$ is represented in a transformed space via the $n \times n$ kernel matrix $K_V$ whose $(i,j)$-th cell is $\mathcal{K}(x_i, x_j)$
where $\mathcal{K}: V\times V \rightarrow \mathbb{R}$ is the kernel function which is symmetric and positive definite.
For large-scale problems, even representing the matrix $K_V$, which requires $O(n^2)$ space, is prohibitive. The common practice is to select a small representative subset $S \subseteq V$ and only work with $K_S$. One popular way to measure the quality of the selected set $S$ is the \emph{Informative Vector Machine} (IVM) criterion introduced by Lawrence et al.\ \cite{LSH03}. Formally, we define $f: 2^V \rightarrow \mathbb{R}$ with $f(S) = \frac{1}{2} \log\det\left( \mathbf{I} + \sigma^{-2} K_S \right)$, where $\mathbf{I}$ is the identity matrix and $\sigma > 0$ is a parameter. IVM has a close connection to the entropy of a multivariate Gaussian distribution \cite{B14}, and it has been shown that $f$ is a monotone submodular function (see, e.g., \cite{B14}). We can then select the set $S\subseteq V$ by solving $\argmax_{S:|S|\leq k} f(S)$.
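For illustration, the IVM objective can be computed as in the following sketch using \texttt{numpy}; the kernel function is supplied by the caller (an assumption of this sketch, e.g.\ the squared exponential kernel used in our experiments).
\begin{verbatim}
import numpy as np

# f(S) = 1/2 * log det(I + sigma^{-2} K_S); `kernel` is any symmetric
# positive-definite kernel function, an assumption of this sketch.
def ivm_utility(points, kernel, sigma=1.0):
    n = len(points)
    K = np.array([[kernel(x, y) for y in points] for x in points])
    _, logdet = np.linalg.slogdet(np.eye(n) + K / sigma**2)
    return 0.5 * logdet
\end{verbatim}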
\section{Experiments}
\label{sec:exp}
In this section, we compare the following algorithms experimentally. We use the objective functions introduced in the previous section, and the dataset is fed as a data stream. We try to continuously maximize the objective functions on the most recent $W$ data points.
\begin{itemize}
\item {\small\tt Greedy}: the standard greedy algorithm (c.f.\ \cite{NWF78}); does not apply to sliding windows.
\item {\small\tt SieveStream}: the Sieve Streaming algorithm in \cite{BMKK14}; does not apply to sliding windows.
\item {\small\tt SieveNaive}: Algorithm \ref{alg:snaive} in this paper.
\item {\small\tt SieveGreedy}: Algorithm \ref{alg:sgreedy} in this paper.
\item {\small\tt Random}: random sampling over sliding windows \cite{BDM02} (i.e.\ maintain $k$ uniform random samples of the elements in the sliding window at any time).
\item {\small\tt SW-RD}: Algorithm~\ref{alg:reduction} in this paper, using {\small\tt SieveStream}\ as the $c$-compliant algorithm.
\end{itemize}
Note that neither {\small\tt Greedy}\ nor {\small\tt SieveStream}\ can be used for submodular maximization over sliding windows. We thus have to run them in each selected window individually. If we want to continuously (i.e. for all sliding windows) report the solutions, then we need to initialize one instance of {\small\tt SieveStream}\ or {\small\tt Greedy}\ for each window, which is space and time prohibitive.
We run {\small\tt Greedy}\ as it provides a benchmark for the quality of the solutions. We run {\small\tt SieveStream}\ on selected windows since {\small\tt SW-RD}\ uses it as a subroutine and we want to see how good the solutions of {\small\tt SW-RD}\ are compared with the original {\small\tt SieveStream}\ in practice.
We have implemented all algorithms in C++ with the support of the C++ linear algebra library {\tt Armadillo} \cite{arma10}.
All experiments are conducted on a laptop equipped with an Intel Core i5 1.7GHz x 2 processor and 4GB RAM. The operating system is Linux Mint 17.2.
\paragraph{Datasets} We use three time-series datasets.
\begin{itemize}
\item {\small\tt Eyes} \cite{eye13}: this dataset is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement is 117 seconds. The dataset contains $14,980$ instances, each of which can be considered as a vector of dimension $15$.
\item {\small\tt GasSensor} \cite{FSH+15}: this dataset contains the acquired time series from 16 chemical sensors exposed to gas mixtures at varying concentration levels. Together with $3$ other features, each record can be considered as a vector of dimension $19$. There are $4,178,504$ records in total. We normalize the dataset first by column, and then by row.
\item {\small\tt WorldCup} \cite{wc98}: this dataset contains all the requests made to the 1998 World Cup Web site on June 7, 1998. There are $5,734,309$ requests made on that day and we consider the requested resource URLs in each second as a set. This results in $24 \times 3600 = 86,400$ sets.
\end{itemize}
\begin{figure}[!ht]
\centering$
\arraycolsep=0.2pt\def\arraystretch{1.0}
\begin{array}{cc}
\includegraphics[width=0.35\textwidth]{eye-5-2000-02}&
\includegraphics[width=0.35\textwidth]{eye-20-2000-02}
\end{array}$
\caption{{\small\tt Eyes}\ dataset for active set selection; $k=5$ in the left figure, $k=20$ in the right;
$W=2000$; $c = 20$ in {\small\tt SieveGreedy}; $x$-axis specifies the windows; $y$-axis is the utility}
\label{fig:eye-acc}
\end{figure}
\begin{figure}[!ht]
\centering$
\arraycolsep=0.2pt\def\arraystretch{1.0}
\begin{array}{cc}
\includegraphics[width=0.35\textwidth]{ethy-5-2000-02}&
\includegraphics[width=0.35\textwidth]{ethy-20-2000-02}
\end{array}$
\caption{{\small\tt GasSensor}\ dataset for active set selection; $k=5$ in the left figure, $k=20$ in the right;
$W=10000$; $c = 20$ in {\small\tt SieveGreedy}}
\label{fig:gas-acc}
\end{figure}
\begin{figure*}[!ht]
\centering
\subfloat[][$k=5$]{\includegraphics[width=0.35\textwidth]{wc-5-2000-02}}
\subfloat[][$k=20$]{\includegraphics[width=0.35\textwidth]{wc-20-2000-02}}
\subfloat[][{\small\tt SieveGreedy}, $k=5$]{\includegraphics[width=0.35\textwidth]{wc-sgreedy-c}\label{fig:wc-sgreedy-c}}
\caption{\small {\small\tt WorldCup}\ dataset for maximum coverage;
$W=2000$; $c = 20$ in {\small\tt SieveGreedy}\ except (\ref{fig:wc-sgreedy-c})
}
\label{fig:wc-acc}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfloat[][{\small\tt SW-RD}]{\includegraphics[width=0.35\textwidth]{eye-calls-rd-02}\label{fig:call-rd}}
\subfloat[][{\small\tt SieveNaive}]{\includegraphics[width=0.35\textwidth]{eye-calls-snaive-02}\label{fig:call-snaive}}\\
\subfloat[][{\small\tt SieveGreedy}, $c=20$]{\includegraphics[width=0.35\textwidth]{eye-calls-sgreedy-02}\label{fig:call-sgreedy}}
\subfloat[][{\small\tt SieveGreedy}]{\includegraphics[width=0.35\textwidth]{eye-calls-sgreedy-02-c}\label{fig:call-sgreedy-c}}
\caption{{\small\tt Eyes}\ dataset for active set selection; \# function calls normalized by {\small\tt SieveStream}}\label{fig:eye-time}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfloat[][{\small\tt SieveStream}]{\includegraphics[width=0.35\textwidth]{eye-space-sieve}\label{fig:space-sieve}}
\subfloat[][{\small\tt SW-RD}]{\includegraphics[width=0.35\textwidth]{eye-space-rd-nm}\label{fig:space-rd-nm}}\\
\subfloat[][{\small\tt SieveNaive}]{\includegraphics[width=0.35\textwidth]{eye-space-snaive-nm}\label{fig:space-snaive-nm}}
\subfloat[][{\small\tt SieveGreedy}, $c=20$]{\includegraphics[width=0.35\textwidth]{eye-space-sgreedy-nm}\label{fig:space-sgreedy-nm}}
\caption{{\small\tt Eyes}\ dataset for active set selection; space usages measured by the peak number of items kept in the buffer;
(b), (c), and (d) are normalized by the space usages of {\small\tt SieveStream}}\label{fig:eye-space}
\end{figure*}
\paragraph{Discussion on the Results}
For the application of active set selection, we run experiments on both {\small\tt Eyes}\ and {\small\tt GasSensor}\ datasets. We choose the squared exponential kernel as the kernel function: $\mathcal{K}(x_i, x_j) = \exp(-\|x_i - x_j\|_2^2 / h^2)$; we set $\sigma = 1$ and $h=0.75$. For the application of maximum coverage problem, we run experiments on the {\small\tt WorldCup}\ dataset. For all algorithms, we set $\epsilon = 0.2$.
It can be observed from Figures \ref{fig:eye-acc}, \ref{fig:gas-acc} and \ref{fig:wc-acc} that the maximum utility given by the standard greedy algorithm changes as we slide the window over the data stream.
In Figure \ref{fig:eye-acc}, {\small\tt SieveStream}, {\small\tt SW-RD}, {\small\tt SieveGreedy}\ and {\small\tt SieveNaive}\ generate results of almost the same quality as the one given by {\small\tt Greedy}, and {\small\tt Random}\ gives the worst results in all selected windows. In both Figure \ref{fig:gas-acc} and Figure \ref{fig:wc-acc}, results generated by {\small\tt SW-RD}, {\small\tt SieveNaive}, {\small\tt SieveGreedy}\ and {\small\tt SieveStream}\ are slightly worse than the one given by {\small\tt Greedy}. In most windows, {\small\tt SieveGreedy}\ is as good as {\small\tt SieveStream}. {\small\tt SieveNaive}\ also performs well in most windows, but it is worse than {\small\tt Random}\ in some windows. In theory, {\small\tt SW-RD}\ can be worse than {\small\tt SieveStream}\ by a factor of $2$, but our experiments show that solutions returned by the two algorithms have similar utilities.
Figure \ref{fig:wc-sgreedy-c} shows that in {\small\tt SieveGreedy}, increasing $c$ slightly increases the utility.
For the comparisons of space/time costs, we only include the results on the {\small\tt Eyes}\ dataset due to space constraints. Similar results can be observed on the other datasets as well. Figure \ref{fig:eye-time} compares the time costs on the {\small\tt Eyes}\ dataset. We measure the time costs by the number of function calls (of the submodular function). All results are normalized by the corresponding costs of {\small\tt SieveStream}.
By Theorem \ref{thm:reduction}, the time cost of {\small\tt SW-RD}\ is independent of $k$ and $W$ once it is normalized by the corresponding cost of {\small\tt SieveStream}. This is confirmed by Figure \ref{fig:call-rd}. Figure \ref{fig:call-snaive} shows that {\small\tt SieveNaive}\ is as fast as {\small\tt SieveStream}. Figure \ref{fig:call-sgreedy} shows that increasing $k$ increases the cost of {\small\tt SieveGreedy}, while increasing $W$ decreases the cost. This is because items in the solution buffers are less likely to expire for small $k$ and large $W$.
Figure \ref{fig:call-sgreedy-c} shows how the time costs of {\small\tt SieveGreedy}\ are affected by the values of $c$.
Figure \ref{fig:eye-space} compares the space costs on {\small\tt Eyes}\ dataset. To be consistent with the theorems, we measure the space usages by the maximum numbers of items kept in memory. To compare with the costs of {\small\tt SieveStream}, we also normalize the costs of {\small\tt SW-RD}, {\small\tt SieveNaive}\ and {\small\tt SieveGreedy}\ by the corresponding costs of {\small\tt SieveStream}.
Figure \ref{fig:space-snaive-nm} and Figure \ref{fig:space-sgreedy-nm} show that the space usages of {\small\tt SieveNaive}\ and {\small\tt SieveGreedy}\ are almost the same as {\small\tt SieveStream}.
\paragraph{Summary}
We conclude from our experiments that (1) the distribution of the data stream changes over sliding windows in our tested datasets; (2) in terms of solution quality, {\small\tt SW-RD}, {\small\tt SieveNaive}\ and {\small\tt SieveGreedy}\ generate results comparable to {\small\tt SieveStream}, and {\small\tt Random}\ is clearly the worst; {\small\tt SieveNaive}\ can sometimes perform very badly, while {\small\tt SW-RD}\ (the only algorithm with theoretical guarantees) and {\small\tt SieveGreedy}\ are relatively stable; and (3) {\small\tt SieveNaive}\ is the most time and space efficient algorithm among {\small\tt SW-RD}, {\small\tt SieveNaive}\ and {\small\tt SieveGreedy}; for large window sizes and small $k$, {\small\tt SieveGreedy}\ comes close: it runs very fast and the only extra space it uses compared with {\small\tt SieveStream}\ is the buffer of samples (i.e.\ $B$). Depending on the value of $\epsilon^{-1}\log M$, {\small\tt SW-RD}\ typically uses $10$-$20$x processing time and $10$-$20$x space compared to {\small\tt SieveStream}.
\subsection{Heuristic Algorithms}
\label{sec:heuristic}
In this section, we introduce two heuristic algorithms based on the {\small\tt SieveStream}\ algorithm proposed in \cite{BMKK14}. {\small\tt SieveStream}\ achieves a $(1/2 - \epsilon)$-approximation for cardinality-constrained monotone submodular maximization in the standard streaming model.
We briefly review how {\small\tt SieveStream}\ works. To simplify the description, we assume that an upper bound on $f(\mathtt{OPT})$ (denoted by $M$) is given; \cite{BMKK14} also shows how to remove this assumption by estimating $f(\mathtt{OPT})$ on the fly. The algorithm works as follows: we guess in parallel the thresholds $T = (1 + \epsilon)^0, (1 + \epsilon)^1, \ldots, (1 + \epsilon)^L$ where $L = \log_{1+\epsilon}M = O(\frac{\log M}{\epsilon})$. For each fixed $T$ we maintain a buffer $S$ as the solution over the data stream. Upon receiving a new item $e_i$, we add it to the buffer if the cardinality constraint has not yet been violated (i.e.\ $|S| < k$) and the marginal gain satisfies $f(e_i | S) > (T / 2 - f(S)) / (k - |S|)$. \cite{BMKK14} shows that as long as $(1-\epsilon) f(\mathtt{OPT}) \le T \le f(\mathtt{OPT})$, the corresponding $S$ satisfies $f(S) \geq (1-\epsilon) f(\mathtt{OPT}) / 2$; we thus simply return the best among all buffers.
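The core update rule, for a single guessed threshold $T$, can be sketched in Python as follows (our illustration; \texttt{f} is the submodular function supplied by the caller):
\begin{verbatim}
# One SieveStream update for a fixed threshold T: add e to the buffer S
# if the constraint is not violated and the marginal gain clears the sieve.
def sieve_update(S, e, f, T, k):
    if len(S) < k:
        gain = f(S | {e}) - f(S)                  # marginal gain f(e | S)
        if gain > (T / 2 - f(S)) / (k - len(S)):
            S = S | {e}
    return S
\end{verbatim}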
The first heuristic, {\small\tt SieveNaive}, is very simple: for each threshold $T$ and its associated buffer $S$ in {\small\tt SieveStream}, upon receiving a new item $e_i$, we first drop the expired item (if any); all other steps are exactly the same as in {\small\tt SieveStream}.
The second heuristic, {\small\tt SieveGreedy}, is a hybrid of {\small\tt SieveStream}\ and the standard greedy algorithm {\small\tt Greedy}\ \cite{NWF78}. Let $c > 0$ be a parameter and $W$ be the window size. We maintain $B$ as a buffer of samples over the sliding window. Upon receiving a new item $e_i$, we add $e_i$ to $B$ with probability $c/W$, and drop the expired item (if any) from $B$. In parallel, we maintain an instance of {\small\tt SieveStream}\ with the following modification: whenever an item $e$ in a buffer $S$ (associated with a certain $T$) expires, we update $S$ by using {\small\tt Greedy}\ to choose a solution of size $(|S| - 1)$ from $B\cup S\backslash\{e\}$.
The pseudocodes of the two heuristics are presented in Algorithm \ref{alg:snaive} and Algorithm \ref{alg:sgreedy} respectively.
\begin{algorithm2e}[t]
\DontPrintSemicolon
\KwIn{$k$: the cardinality constraint; $W$: the size of the window; $M$: an upper bound of $f(\mathtt{OPT})$}
$L \gets\log_{1+\epsilon}M$\;
\ForEach{$T = (1+\epsilon)^0, (1 + \epsilon)^1, \ldots, (1 + \epsilon)^L$}{
$S_T \gets \emptyset$
}
\ForEach{new incoming element $e_i$}{
\ForEach{$T = (1+\epsilon)^0, (1 + \epsilon)^1, \ldots, (1 + \epsilon)^L$}{
Drop expired item (if any) from $S_T$\;
\If{$|S_T| < k$ and $f(e_i | S_T) > (T / 2 - f(S_T)) / (k - |S_T|)$}{
$S_T \gets S_T \cup \{e_i\}$
}
}
}
\Return{(at query) $\argmax_{S_T}f(S_T)$}
\caption{{\small\tt SieveNaive}($k, W, M$)}
\label{alg:snaive}
\end{algorithm2e}
\begin{algorithm2e}[t]
\DontPrintSemicolon
\KwIn{$k$: the cardinality constraint; $W$: the size of the window; $M$: an upper bound of $f(\mathtt{OPT})$; $c$: a parameter controlling the sampling probability}
$L \gets\log_{1+\epsilon}M$\;
\ForEach{$T = (1+\epsilon)^0, (1 + \epsilon)^1, \ldots, (1 + \epsilon)^L$}{
$S_T \gets \emptyset$
}
$B\gets \emptyset$\;
\ForEach{new incoming element $e_i$}{
Add $e_i$ to $B$ with probability $\frac{c}{W}$\;
Drop expired item (if any) from $B$\;
\ForEach{$T = (1+\epsilon)^0, (1 + \epsilon)^1, \ldots, (1 + \epsilon)^L$}{
\If{there exists an expired item $e$ in $S_T$}{
$S_T \gets$ output of running {\small\tt Greedy}\ on $B\cup S_T\backslash\{e\}$ with cardinality constraint $(|S_T| - 1)$\;
}
\If{$|S_T| < k$ and $f(e_i | S_T) > (T / 2 - f(S_T)) / (k - |S_T|)$}{
$S_T \gets S_T \cup \{e_i\}$
}
}
}
\Return{(at query) $\argmax_{S_T}f(S_T)$}
\caption{{\small\tt SieveGreedy}($k, W, c, M$)}
\label{alg:sgreedy}
\end{algorithm2e}
\section{Introduction}
\label{sec:intro}
The last few decades have witnessed an explosion in the amount of data involved in machine learning tasks. In many cases, the data volume exceeds our storage capacity and demands new techniques that can learn effectively while operating within stringent storage and computation limits. {\em Streaming algorithms}~\cite{Muthukrishnan05} have emerged as a powerful approach for coping with massive data volumes. In this model, the learning algorithm processes the dataset one element at a time in a linear scan and keeps only a small summary of the dataset in memory; it is then able to compute the objective function on the processed elements using the summary. Various popular techniques in machine learning operate in this model, such as stochastic gradient descent and the perceptron. This model is particularly useful when the dataset is too large to fit in memory or when the data is generated in real time (e.g., in online learning).
A common issue in online learning and streaming data mining is that the underlying distribution generating the data may change over time. It is therefore often preferable to consider only the most recent data in the stream.
Streaming algorithms over sliding windows have already been designed for several problems, including $k$-median clustering~\cite{BDMO03}, kernel least square regression \cite{VVS06}, $k$-means and coreset construction~\cite{BLLM15}, etc.
The window size can again be very large and may not fit in main memory. The problem becomes more severe when kernel methods are applied to deal with nonlinear systems -- the resulting kernel matrix may need $O(W^2)$ memory, where $W$ is the window size. A natural idea for resolving this issue is to select representative data items from the window for computing the objective function.
Submodular functions, an abstract and broad class of functions, have recently become an attractive candidate for modeling a variety of scenarios in machine learning, from exemplar-based clustering~\cite{KG10} and summarization~\cite{SSSJ12} to determinantal point processes~\cite{GKT12}. In recent years there has been a fair amount of work on designing streaming algorithms for optimizing submodular functions~\cite{KG10,KMVV15,BMKK14,CK15,CGQ15}. However, we are not aware of any previous work dealing with streaming data over sliding windows in the context of submodular optimization.
In this work, we present a general reduction from the sliding window model to the standard streaming model. As a consequence, we immediately obtain algorithms in the sliding window model by combining with previous works in the standard streaming model for cardinality constraints~\cite{KG10,KMVV15,BMKK14}, matroid and matching constraints~\cite{CK15}, and non-monotone functions~\cite{CGQ15}. We also propose a few heuristics and compare their performance on real-world datasets.
\section{Preliminaries}
\label{sec:pre}
Let $V$ be the finite ground set (possibly multiset). Let $f:2^V \rightarrow \mathbb{R}$ be a function mapping subsets of $V$ to numbers. We say $f$ is \emph{submodular} if $f(A \cup \{v\}) - f(A) \geq f(B \cup \{v\}) - f(B)$ for any $A \subseteq B \subset V$, and $v \in V\backslash B$. We define $f(v|A) = f(A\cup \{v\}) - f(A)$ as the \emph{marginal gain} of $v$ given $A$. If further we have $f(A) \leq f(B)$ for any $A \subseteq B \subseteq V$, we say $f$ is \emph{monotone}.
In the general form of \emph{constrained submodular maximization} problem, we consider solving
\begin{equation}
\label{eq:general}
\argmax_{S\in \mathcal{I}} f(S),
\end{equation}
where $\mathcal{I}$ is a collection of subsets of $V$ which we call the \emph{constraint}. In particular, when $\mathcal{I} = \{ S \subseteq V ~|~ |S| \leq k\}$, Expression (\ref{eq:general}) is known as the \emph{cardinality constrained submodular maximization} problem. Definitions for other constraints can be found in, e.g., \cite{B14}. We say a constraint $\mathcal{I}$ is \emph{hereditary} if $A \in \mathcal{I}$ implies that any subset of $A$ is also in $\mathcal{I}$.
For a constrained submodular maximization problem, we use $\mathtt{OPT}$ to denote an optimal solution of (\ref{eq:general}). W.l.o.g.\ we assume $1 \le f(\mathtt{OPT}) \le M$ for a parameter $M$.
In the streaming model, we consider the ground set $V$ as an ordered sequence of {\em items} $e_1, e_2, \ldots, e_n$, each consuming one word of space. Each index is called a \emph{time step}. In the sliding window model, we specify the window size $W$. At any time step, we are only interested in the most recent $W$ items, which define the ground set $V$ at that moment. Note that when $W \rightarrow \infty$, $V$ becomes the set of all received items; we thus also call the standard streaming model the infinite window model. For two streams/sequences $S_1$ and $S_2$, let $S_1\circ S_2$ be their concatenation, which is $S_1$ followed by $S_2$.
Let $\mathcal{A}$ be an algorithm solving the constrained submodular maximization problem. We use $\mathcal{A}(V)$ to represent the function value of the solution returned by $\mathcal{A}$ operating on the ground set $V$.
\subsection{The DP-based Greedy Algorithm}
\label{sec:greedy}
In this section we prove the following theorem for maximizing a \emph{monotone} submodular function subject to a cardinality constraint $k$.
\begin{theorem}
\label{thm:greedy}
There is an algorithm for cardinality constrained monotone submodular maximization over sliding windows that achieves a $(1/2-\epsilon)$-approximation using $O(k^2/\epsilon \cdot \log M)$ words of space and $O(k/\epsilon \cdot \log M)$ update time (in terms of number of function calls) per item.
\end{theorem}
\begin{algorithm2e}[t]
\DontPrintSemicolon
\KwIn{$k$: the cardinality constraint; $W$: the size of the window; $T$: the threshold}
\ForEach{$j = 0, 1, \ldots, k$}{
$(\ell_j, S_j) \gets (-1, \emptyset)$
}
\ForEach{new incoming element $e_i$}{
set $(\ell_0, S_0) \gets (i, \emptyset)$\;
\ForEach{$j = 0, 1, \ldots, k$}{
\If{$\ell_j \le i - W$}{
set $\ell_j \gets -1$
}
}
\ForEach{$j = 0, 1, \ldots, k$ s.t. $\ell_j \neq -1$}{ \label{line:loop}
\If{$f(e_i | S_j) \ge T$ and $\ell_j > \ell_{j+1}$}{ \label{line:conditions}
set $\ell_{j+1} \gets \ell_j$, and $S_{j+1} \gets S_j \cup \{e_i\}$
}
}
}
\Return{(at query) $S_{j_{\max}}$ where $j_{\max} = \max\{j ~|~ 0 \leq j \leq k, \ell_j \neq -1\}$}
\caption{{\small\tt ThreshGreedy}($k, W, T$)}
\label{alg:greedy-thresh}
\end{algorithm2e}
\begin{algorithm2e}[t]
\DontPrintSemicolon
\KwIn{$k$ the cardinality constraint; $W$: the size of the window; $M$: an upper bound of $f(\mathtt{OPT})$ over sliding windows.}
$L \gets 1 + \log_{1+\epsilon}M$\;
\ForEach{$T = \frac{(1 + \epsilon)^0}{2k}, \frac{(1 + \epsilon)^1}{2k}, \ldots, \frac{(1 + \epsilon)^L}{2k}$}{
create an instance of {\small\tt ThreshGreedy}$(k, W, T)$
}
run all created instances in parallel\;
\Return{(at query) the best solution among those returned by all instances}
\caption{{\small\tt SW-DP}$(k, W, M)$}
\label{alg:greedy}
\end{algorithm2e}
The pseudocode of the algorithm is described in Algorithm~\ref{alg:greedy-thresh} and Algorithm~\ref{alg:greedy}. We run instances of {\small\tt ThreshGreedy}\ for different thresholds. For a fixed threshold $T$, we maintain $\ell_j$ and $S_j$ for each $0 \leq j \leq k$, where $\ell_j$ is the latest starting time step such that we can still collect $j$ elements (denoted by $S_j$) passing the threshold $T$. Upon receiving a new element $e_i$, we set $(\ell_0, S_0) \gets (i, \emptyset)$ and drop all the expired $\ell_j$'s by setting their values to $-1$. Finally, for all the active $\ell_j$'s, we use the conditions in Line \ref{line:conditions} of Algorithm \ref{alg:greedy-thresh} to decide whether the value of $(\ell_j, S_j)$ needs to be updated.
We create $L = O(\epsilon^{-1}\log M)$ instances of {\small\tt ThreshGreedy}, and each instance tracks $\ell_j$ and $S_j$ for all $0 \leq j \leq k$. Together with the fact that each $S_j$ uses $O(k)$ words, the space usage is bounded by $O(\epsilon^{-1}k^2\log M)$. The update time bound is straightforward once we note that in each instance of {\small\tt ThreshGreedy}, $j$ loops from $0$ to $k$ for every item in Line \ref{line:loop} of Algorithm \ref{alg:greedy-thresh}.
We next prove the correctness of our algorithm. Consider the choice of $T$ satisfying
\begin{equation}
\label{eq:T-ineq}
f(\mathtt{OPT})/(2k) \le T \le (1+\epsilon) f(\mathtt{OPT}) / (2k).
\end{equation}
Let $j_{\max} = \max\{j ~|~ 0 \leq j \leq k, \ell_{j} \neq -1\}$. If $j_{\max} = k$, then we must have $f(S_{j_{\max}}) \ge T \cdot k \ge f(\mathtt{OPT})/2$. We now consider the case when $j_{\max} < k$. It is easy to see that every $e \in \mathtt{OPT} \setminus S_{j_{\max}}$, when it was considered for addition to a predecessor $S_{j_e}$ of $S_{j_{\max}}$ (thus $S_{j_e} \subseteq S_{j_{\max}}$), was rejected. Note that it cannot have been rejected by the check $\ell_{j_e} > \ell_{j_e+1}$, since otherwise, by definition, $\ell_{j_e}$ could never be part of the best solution. We thus must have $f(e|S_{j_e}) < T$. By submodularity, we have
\begin{eqnarray}
f((\mathtt{OPT}\setminus S_{j_{\max}}) \cup S_{j_{\max}}) - f(S_{j_{\max}}) &\le& \sum_{e \in \mathtt{OPT}\setminus S_{j_{\max}}} f(e | S_{j_{\max}}) \nonumber \\
&\le& \sum_{e \in \mathtt{OPT}\setminus S_{j_{\max}}} f(e | S_{j_e}) \leq kT \nonumber \\
&\le& (1+\epsilon)f(\mathtt{OPT})/2. \label{eq:a-1}
\end{eqnarray}
On the other hand,
\begin{equation}
\label{eq:a-2}
f((\mathtt{OPT}\setminus S_{j_{\max}})\cup S_{j_{\max}}) = f(\mathtt{OPT} \cup S_{j_{\max}}) \ge f(\mathtt{OPT}).
\end{equation}
By (\ref{eq:a-1}) and (\ref{eq:a-2}) we have $f(S_{j_{\max}}) \ge (1-\epsilon)f(\mathtt{OPT})/2$.
Therefore, for any sliding window query, the solution returned by {\small\tt SW-DP}\ is no worse than $(1-\epsilon)f(\mathtt{OPT})/2$. This is because there is always an instance of {\small\tt ThreshGreedy}\ that uses a threshold $T$ satisfying (\ref{eq:T-ineq}).
\section{Introduction}
Non-Human Traffic or traffic generated by robots (or bots) is estimated to constitute close to half of all web traffic~\cite{badbotreport}.
Some bots have a legitimate purpose (e.g. web crawlers) while others try to intrude the systems with malicious intent.
It is estimated that half of all bot traffic has a malicious intent \cite{badbotreport}.
Good bots identify themselves but malicious bots have an incentive to spoof their user agents and behave like humans.
Malicious bots may be designed to generate fake reviews, scrape price or content, crack credentials, infiltrate payment systems, defraud advertisers, or spam online forums. Recommendation and personalization systems are particularly vulnerable to bot activity \cite{zhang2006analysis}.
The major challenge in building machine learning (ML) models to detect bad bots is getting labeled data.
In this context, ML methods that aim to learn from positive and unlabeled data (PU learning) hold promise \cite{elkan2008learning}.
PU learning learns from data where only a subset of one class is labeled.
We explore an application of PU learning to malicious non-human traffic detection on the web.
Considering humans as the positive class, we can identify positive instances by assuming that only humans purchase on e-commerce web sites, clear CAPTCHAs, or visit from validated IP addresses.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{intro-figure.pdf}
\caption{Problem set-up: circles denote humans and crosses denote bots. Only the blue circles are known to be humans, the true label for all grey instances is unknown. The fact that larger circles are more likely to be blue denotes that the observation's attributes determine their likelihood of being selected (labeled). Under SCAR (Selected Completely at Random) violation, the classification goal is to identify the dashed dividing line between the two classes.}
\label{problem}
\end{figure}
Current PU learning frameworks assume that the labeled subset of the positive class is \textit{Selected Completely at Random (SCAR)} from the positive class, where the labeling mechanism does not depend on the attributes of the example~\cite{elkan2008learning}. That is, the labeled subset of humans is not influenced by the features of the observations. Unfortunately, such an assumption is hard to justify in practice. For example, it is reasonable to expect that not all human visitors to an e-commerce web site are equally likely to make a purchase. This requires us to revisit the PU framework to handle problems where the random sampling assumption is violated. Figure \ref{problem} describes the problem we are addressing.
In this work, we address the question of classifying a web session as originating from a human surfer or a robot, using PU learning. Our contributions include two novel models to handle biased sampling within the positive class, one of which is a scalable version of the proposals in \cite{bekker2019beyond}. In our experiments, positive-unlabeled scenarios are artificially created in a publicly available intrusion detection dataset, and we find that the proposed approaches perform better than existing PU learning models~\cite{elkan2008learning, liu2003building}. On a proprietary e-commerce dataset, our methods work well in distinguishing humans from bots. We call our framework ``Botcha''. Given the limited need for labeled data, it is readily applied in the wild. Filtering out all bot traffic allows recommendation and personalization systems to learn from unbiased data generated by real human activity.
\section{Related Work}
Malicious non-human activity on the web has been observed in the context of fake reviews, information theft, spread of misinformation, spam on social networks, and click fraud in advertising~\cite{gao2017research, displayAdFraud}. Given the diverse, dynamic and often adversarial nature of web fraud, it is imperative to find new strategies to detect bots and data driven strategies hold promise for this.
While there has been some work to build recommendation systems that are robust to adversarial attacks \cite{zhang2006analysis, he2018adversarial}, in this work we aim to filter out all bot traffic to provide unbiased data to the recommendation and personalization systems to learn from.
To classify a visitor as a bot or a human, the standard machine learning strategy requires representative examples from both classes, from which a supervised learning model that differentiates between them can be built. Due to the limited labeled data for bot detection, alternative data-efficient strategies have also been investigated. Semi-supervised learning has been applied to the bot detection problem~\cite{worldThatCounts}.
Unfortunately, while it is reasonable to expect that we have a reliably labeled subset of humans, bots on the web are adversarial, ever-evolving and hard to sample from; this renders semi-supervised learning limited in scope.
PU Learning requires only a subset of one of two classes to be labeled. Hence, PU learning is appealing in the bot detection problem, where we can assume that a subset of humans are labeled. Early work in~\cite{puWithWeightedLR} and~\cite{elkan2008learning} has shown how PU learning can achieve the effectiveness of standard supervised learning. We believe that the PU learning framework is natural for use in a variety of fraud detection applications on the web.
Empirical success in a variety of scenarios has led to recent attention in the class of PU learning algorithms~\cite{bekker2020learning}.
Unfortunately, most prior work in this area makes the assumption that the labeled points are randomly sampled from the positive class.
This assumption is referred to as Selected Completely at Random~\cite{elkan2008learning, bekker2019beyond}. That is to say, the positively labeled examples in the dataset are a random, unbiased sample of the universe of positive examples, and the labeling is not a function of the attributes of the data point. To allow the building of PU learning models in scenarios where this assumption is unrealistic, we build on prior work by \cite{bekker2019beyond}, which presents two challenges. First, the model strategy in~\cite{bekker2019beyond} requires the analyst to decide on a set of features to compute the propensity score. Second, the proposal requires optimisation with an Expectation-Maximization (EM) algorithm. Unfortunately, the EM algorithm is known to be slow to converge~\cite{jollois2007speed}; since we would like to apply it to a scenario with tens of millions of data points and hundreds of features, this makes direct application to our work challenging. To test our proposed algorithms, we first conduct a series of simulation experiments on standard supervised learning datasets representing different fraud-like setups -- we artificially hide the true labels, which we then hope to recover via the learning algorithm, thereby showing the viability of our methods.
\section{Models}
\label{sec:model}
We first describe the notation and then briefly review the PU learning work of~\cite{elkan2008learning} (Section \ref{sec:vanilla}). In Sections \ref{sec:mam} and \ref{sec:ram}, we describe the proposed approaches, which are the main contributions of the paper.
\subsection{Notation \& Prerequisites}
\label{sec:notations}
To distinguish humans from bots we need to learn a classifier that generates the probability $p(y|\bm{x})$, where $y{\in}\{0, 1\}$ denotes whether the observation was generated by a human ($y{=}1$) or a bot, and $\bm{x}$ is the feature vector.
The dataset for PU learning consists of examples $(\bm{x}, y, s)$ from a space $\mathcal{X} \times \mathcal{Y} \times \mathcal{S}$, where $\mathcal{X}$ and $\mathcal{Y}$ denote the feature space and label space. The binary variable $s$ indicates whether the example is labeled.
Since only positive examples (humans) are labeled, $p(y{=}1|s{=}1)=1$. Marginalizing $p(s{=}1|\bm{x})$ over $y$, we get:
\begin{equation*}
\label{marginalizeY}
p(s{=}1|\bm{x}) = p(s{=}1|y{=}1,\bm{x}) \times p(y{=}1|\bm{x}) + p(s{=}1|y=0,\bm{x}) \times p(y{=}0|\bm{x})
\end{equation*}
Now, $p(s{=}1|y{=}0,\bm{x})=0$ since only the positive examples are labeled. This leads to
\begin{equation}
\label{eq:ratioForPU}
p(y{=}1|\bm{x}) = \frac{p(s{=}1|\bm{x})}{p(s{=}1|y{=}1,\bm{x})}
\end{equation}
Equation (\ref{eq:ratioForPU}) forms the basis of all models which we describe next.
\subsection{Vanilla Model (EAM)}
\label{sec:vanilla}
The work by Elkan and Noto is based on the SCAR assumption, which assumes that the labeled positive examples were chosen uniformly at random from the universe of positive examples \cite{elkan2008learning}.
Formally this means $p(s{=}1|y{=}1,\bm{x})$ $=p(s{=}1|y{=}1)$, i.e., the sampling process is independent of $\bm{x}$. We can rewrite equation (\ref{eq:ratioForPU}) as
\begin{equation}
\label{eq:eam}
p(y{=}1|\bm{x}) = \frac{p(s{=}1|\bm{x})}{c} \text{, with } c=\frac{1}{n} \smashoperator{\sum_{\bm{x} : s{=}1}}p(s{=}1|\bm{x}).
\end{equation}
The constant $c$ represents the fraction of labeled positive points and $n$ is the size of the labeled set. Note that the numerator can be obtained by training a classifier that separates the labeled ($s{=}1$) points from the unlabeled ($s{=}0$) ones. The constant $c$ can be estimated using this trained classifier and a validation set: averaging the predicted scores of the known positives in the validation set gives an estimate of $c$.
We refer to this model as \textit{Elkan's Assumption Model} (EAM) in our experiments and it forms the baseline for our methods. For detailed derivation and discussion we refer the readers to \cite{elkan2008learning}.
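A minimal sketch of this recipe using scikit-learn style calls (variable names are illustrative, not from the paper's implementation):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Train a classifier to separate labeled (s=1) from unlabeled (s=0)
# points, estimate c on known positives from a validation set, and
# rescale the scores as in the EAM formula above.
def eam_fit_predict(X_train, s_train, X_val_pos, X_test):
    clf = RandomForestClassifier().fit(X_train, s_train)  # models p(s=1|x)
    c = clf.predict_proba(X_val_pos)[:, 1].mean()         # estimate of c
    return np.minimum(clf.predict_proba(X_test)[:, 1] / c, 1.0)
\end{verbatim}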
\subsection{Modified Assumption Model (MAM)}
\label{sec:mam}
The SCAR assumption above enables the building PU Learning models for a range of scenarios. However, we believe that this is an unrealistic assumption and argue that explicitly accounting for selection bias for the known positives allows us to build models that are more aligned to the data.
We propose the \textit{Modified Assumption Model} (MAM), geared towards practical cases where labeling is performed via a stratified procedure. Instead of using the SCAR assumption, we make the more lenient assumption that known positives come from two sub-groups: for one the sampling depends on $\bm{x}$, while for the other it is independent of $\bm{x}$.
We introduce a new binary variable $b{\in}\{0, 1\}$ that indicates which of the two sub-groups a given labeled example (i.e. $s{=}1$) comes from.
So, $b{=}0$ indicates that the value of $s$ is independent of $\bm{x}$, whereas $b{=}1$ implies that the value of $s$ depends on $\bm{x}$. Marginalizing over $b$, we get:
\begin{equation}
\begin{split}
p(s{=}1|y{=}1,\bm{x}) = p(s{=}1|{y=}1,b{=}1,\bm{x}) \times p(b{=}1|y{=}1,\bm{x}) \\ + \; p(s{=}1|y{=}1,b{=}0,\bm{x}) \times p(b{=}0|y{=}1,\bm{x}) \nonumber
\end{split}
\end{equation}
Since $s$ is independent of $\bm{x}$ when $b{=}0$, we have $p(s{=}1|y{=}1,b{=}0,\bm{x}) = c$, and given that $p(b{=}0|y{=}1,\bm{x}) = 1 - p(b{=}1|y{=}1,\bm{x})$, we can rewrite the above equation as
\begin{equation}
\label{eq:mam_equation}
p(y{=}1|\bm{x}) = \frac{p(s{=}1|\bm{x})}{c + p(b{=}1|y{=}1,\bm{x}) \times (1{-}c)} \text{, with } c=\frac{1}{n} \smashoperator{\sum_{\bm{x} : s{=}1,\, b{=}0}}p(s{=}1|\bm{x})
\end{equation}
Similar to EAM, the numerator can be obtained by training a classifier that separates the labeled ($s{=}1$) points from the unlabeled ($s{=}0$) ones. The denominator model can be trained using the $b{=}1$ and $b{=}0$ sets (note that points in these sets are labeled and positive, i.e., $s{=}1$ and $y{=}1$); here $n$ is the number of labeled examples with $b{=}0$.
The constant $c$ can be estimated by averaging the scores predicted by the numerator model for examples with $b{=}0$ in the validation set. If $p(b{=}1|y{=}1,\bm{x})=0$ for all data points, i.e., sampling is independent of $\bm{x}$, we recover EAM from MAM. Our MAM proposal closely relates to the proposals made in \cite{bekker2019beyond}; however, the algorithm in \cite{bekker2019beyond} does not scale to large bot detection datasets.
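Under the assumptions above, MAM scoring is a small change over EAM; a sketch with illustrative names (not the paper's code):
\begin{verbatim}
import numpy as np

# clf_s separates labeled from unlabeled points as in EAM; clf_b is
# trained on the labeled positives only, separating the b=1 sub-group
# from the b=0 sub-group; c is the mean clf_s score on validation
# points with b=0.
def mam_predict(clf_s, clf_b, c, X):
    p_s = clf_s.predict_proba(X)[:, 1]        # p(s=1|x)
    p_b = clf_b.predict_proba(X)[:, 1]        # p(b=1|y=1,x)
    return np.minimum(p_s / (c + p_b * (1 - c)), 1.0)
\end{verbatim}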
\subsection{Relaxed Assumption Model (RAM)}
\label{sec:ram}
The most general model, referred as \textit{Relaxed Assumption Model} (RAM), does not make any assumption about $s$ being independent of $\bm{x}$.
Instead, we attempt to model this process explicitly, i.e., we build a model for $p(s{=}1|y{=}1,\bm{x})$, the denominator in equation (\ref{eq:ratioForPU}). We first acquire a set of positive unlabeled examples ($y{=}1$ with $s{=}0$) and then utilize standard binary classification methods to distinguish $s{=}0$ from $s{=}1$ amongst the positive examples.
We propose a nearest-neighbour based method that finds points in the dataset that are \textit{close} to the known positives but are not in the sampled set ($s{=}1$). Since any point outside the sampled set has $s{=}0$, a nearest neighbour of a $(y{=}1, s{=}1)$ point that is not in this set is implicitly taken to be $(y{=}1, s{=}0)$. Note that this assumption may not always hold; as with the other models, our aim is to find techniques that are robust even when the modeling assumption is wrong. It is important to note that we do not alter the numerator in equation (\ref{eq:ratioForPU}), and hence training the classifier for the numerator remains identical to EAM.
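A sketch of the nearest-neighbour step (the choice of one neighbour and of a Random Forest are illustrative assumptions):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

# For each labeled positive, its nearest unlabeled neighbour is treated
# as an implicit (y=1, s=0) example; a classifier on the two groups then
# models the denominator p(s=1|y=1,x).
def ram_denominator(X_labeled_pos, X_unlabeled):
    nn = NearestNeighbors(n_neighbors=1).fit(X_unlabeled)
    _, idx = nn.kneighbors(X_labeled_pos)
    X_pseudo = X_unlabeled[np.unique(idx.ravel())]
    X = np.vstack([X_labeled_pos, X_pseudo])
    s = np.r_[np.ones(len(X_labeled_pos)), np.zeros(len(X_pseudo))]
    return RandomForestClassifier().fit(X, s)   # models p(s=1|y=1,x)
\end{verbatim}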
\section{Experiments and Results} \label{Simulations}
\subsection{Simulated Experiments on a Public Dataset}
\label{ssec2}
In our first set of experiments we artificially create PU learning datasets by hiding the ground-truth labels of a labeled dataset during training.
We then evaluate the trained model on a labeled test set.
The simulations primarily involve controlling the subset of positive data points that are labeled for training; all other instances are unlabeled.
The simulated datasets have varying degrees of ``randomness'': one extreme is a completely random subset of positive samples (satisfying SCAR perfectly), while the other is a carefully crafted subset of positive samples for which the SCAR assumption is violated.
{
\begin{table*}[ht]
\small
\scalebox{0.85}{\begin{tabular}{|l|l|c|c|c|c|c|c|c|c|}
\multicolumn{2}{l}{} & \multicolumn{8}{c}{\includegraphics[ height=7mm]{arrow.png}}\\
\hline
& Method & \multicolumn{2}{c|}{Mixing $m{=}0$} & \multicolumn{2}{c|}{Mixing $m{=}30$} & \multicolumn{2}{c|}{Mixing $m{=}70$} & \multicolumn{2}{c|}{Mixing $m{=}100$}\\
& & AUC & Pr@Recall99 & AUC & Pr@Recall99 & AUC & Pr@Recall99 & AUC & Pr@Recall99\\\hline
\multirow{4}{*}[-1.5ex]{\rotatebox[origin=c]{0}{\textbf{topper $t{=}0.90$}}}
& Biased SVM \cite{liu2003building}~ & 0.705 & 0.524 & 0.705 & 0.576 & 0.688 & 0.535 & 0.689 & 0.560\\
& EAM \cite{elkan2008learning}& 0.757 & 0.719 & 0.760 & 0.751 & \cellcolor{blue!25}\textbf{0.776} & \cellcolor{blue!25}\textbf{0.751} & \cellcolor{blue!25}\textbf{0.792} & \cellcolor{blue!25}\textbf{0.697}\\
& MAM (proposed) & \cellcolor{blue!10}0.811 & \cellcolor{blue!10}0.724 & \cellcolor{blue!10}0.761 &\cellcolor{blue!10} 0.736 & 0.778 & 0.737 & 0.701 & 0.636\\
& RAM (proposed) & \cellcolor{blue!25}\textbf{0.897} & \cellcolor{blue!25}\textbf{0.724} & \cellcolor{blue!25}\textbf{0.837} & \cellcolor{blue!25}\textbf{0.756} & \cellcolor{blue!10}0.770 & \cellcolor{blue!10}0.743 & \cellcolor{blue!10}0.765 & \cellcolor{blue!10}0.669\\\hline
\multirow{4}{*}[-1.5ex]{\rotatebox[origin=c]{0}{\textbf{topper $t{=}0.925$}}}
& Biased SVM \cite{liu2003building}~ & 0.624 & 0.517 & 0.691 & 0.512 & 0.666 & 0.519 & 0.669 & 0.513\\
& EAM \cite{elkan2008learning}& 0.761 & 0.730 & 0.761 & 0.751 & \cellcolor{blue!25}\textbf{0.774} & \cellcolor{blue!25}\textbf{0.747} & \cellcolor{blue!25}\textbf{0.791} & \cellcolor{blue!25}\textbf{0.701}\\
& MAM (proposed) & \cellcolor{blue!10}0.831 & \cellcolor{blue!10}0.737 & \cellcolor{blue!10}0.792 &\cellcolor{blue!10} 0.752 & 0.743 & 0.717 & 0.721 & 0.682\\
& RAM (proposed) & \cellcolor{blue!25}\textbf{0.906} & \cellcolor{blue!25}\textbf{0.773} & \cellcolor{blue!25}\textbf{0.812} & \cellcolor{blue!25}\textbf{0.767} & \cellcolor{blue!10}0.764 & \cellcolor{blue!10}0.745 & \cellcolor{blue!10}0.748 & \cellcolor{blue!10}0.700\\\hline
\end{tabular}}
\caption {Test-set performance on public dataset. The {\color{blue!75}best algorithm} in each column is colored {\color{blue!75}blue} and {\color{blue!20}second best is light blue}. RAM and MAM perform significantly better when SCAR assumption is violated (low randomness). EAM only provides a marginal improvement over RAM when the known positives are a random subset from positive class.}
\label{tab:kdd_result}
\end{table*}
\textbf{Public Dataset:}
We use the KDDCUP'99 dataset (NSL-KDD Dataset), a widely adopted labeled dataset for network intrusion detection.
The train and test datasets have a total of $148,517$ records with $43$ features each.
To get around known problems with the dataset~\cite{kddcupCisda}, we merge the given train and test records, which we then re-split into train, validation and test sets in a 80:10:10 proportion.
Overall, the dataset contains $71,463$ intrusive sessions (all intrusions are bot-generated), while the rest are legitimate sessions.
\textbf{Data Simulations:}
The process of creating artificial datasets involves hiding the labels for all negative points and a proportion of the positive points.
We sample a labeled subset of positive data points to create the known subset of positives.
We first build a supervised classifier to score each data point.
The classification task here is to distinguish intrusive from legitimate sessions, and the score is the predicted class probability.
We use this score to introduce sampling bias when creating the known subset of positives.
Using a Random Forest classifier, we achieve an AUC (area under ROC curve) value of $0.9921$ on the training data and $0.9911$ on the test data.
We then curate different PU learning datasets by performing sampling over the scored data points by controlling two parameters as described below.
\textit{1. Topper}: This parameter introduces sampling bias by selecting only those positive points whose prediction score (under the supervised model) is higher than the $t^\text{th}$ quantile of all positively labeled points.
Selecting this top fraction of positives introduces a sampling bias, since we only select points with a high score.
The idea is to capture the spread within the positive class, and one meaningful scale is the estimated probability that a point is positive given its features.
Note that the sampling is only done for the positive class; the labels of all negative points are hidden.
\textit{2. Mixing}: This parameter controls the `randomness' of the known subset of positives.
After creating a sample of known positives based on the topper parameter, at value $m$ we swap $m\%$ of the selected points with points drawn from the positive set; the swapping is done with replacement.
As we move from $m{=}0$ to $m{=}100$ we decrease the sampling bias in the set and correspondingly increase the randomness. A mixing of 100\% means SCAR is completely satisfied.
The subset obtained at particular values of $t$ and $m$ is the known labeled subset of positives, and the remaining points (all negatives and the unsampled positives) are treated as unlabeled.
With distinct values of $t$ and $m$ we obtain different simulated datasets.
At a particular value of the topper parameter ($t$), with $m{=}100$ we get a completely random sample of the positive class (satisfying SCAR); at the other end, with $m{=}0$ we get an extremely biased sample containing only high-scoring points.
When $m{<}100$ the sampling is not completely random and depends on score of the supervised model that uses all the features $\bm{x}$.
Consequently, the sampling variable $s$ is not independent of $\bm{x}$ and the dataset does not align with the assumption of Elkan and Noto.
We show that in cases of biased sampling, the proposed methods outperform the baseline approaches that rely on the SCAR assumption.
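The sampling procedure can be summarized by the following Python sketch (our illustration of the topper/mixing parameters; \texttt{idx\_pos} and \texttt{scores\_pos} are aligned arrays over the positive points):
\begin{verbatim}
import numpy as np

# Keep positives scoring above the t-th quantile (biased sample), then
# swap in m% uniformly random positives, with replacement; m=100
# recovers a SCAR-compliant sample.
# usage: rng = np.random.default_rng(0)
def sample_known_positives(idx_pos, scores_pos, t, m, rng):
    cutoff = np.quantile(scores_pos, t)
    selected = idx_pos[scores_pos >= cutoff]       # top-scoring positives
    n_swap = int(len(selected) * m / 100)
    keep = rng.choice(len(selected), size=len(selected) - n_swap,
                      replace=False)
    swap_in = rng.choice(idx_pos, size=n_swap, replace=True)
    return np.concatenate([selected[keep], swap_in])
\end{verbatim}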
\textbf{Results on simulated datasets:}
We train MAM and RAM and compare them against the baselines -- EAM \cite{elkan2008learning} and Biased SVM \cite{liu2003building} -- on simulated datasets with varying degrees of randomness.
For uniformity, we use Random Forest as the base classifier for all three methods EAM, MAM and RAM; Biased SVM uses an SVM formulation \cite{liu2003building}.
The performance metrics are AUC (area under the ROC curve) and Precision@Recall99, the precision when 99\% of the known positives in the validation set are classified correctly.
Instead of the standard `0.5' threshold for classification, we set the classification threshold such that 99\% of the legitimate sessions (positives) are correctly classified as legitimate.
This is particularly important since in real systems we do not wish to interrupt legitimate users with any scrutiny.
Precision@Recall99 is therefore an important metric to consider.
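Concretely, the metric can be computed as in the following sketch (our illustration; labels are 0/1 with 1 denoting the positive class):
\begin{verbatim}
import numpy as np

# Pick the score threshold that 99% of known positives in the validation
# set exceed, then measure precision on the test set at that threshold.
def precision_at_recall99(val_scores_pos, test_scores, test_labels):
    threshold = np.quantile(val_scores_pos, 0.01)
    predicted_pos = test_scores >= threshold
    return test_labels[predicted_pos].mean()
\end{verbatim}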
The results for the simulated experiments are shown in Table \ref{tab:kdd_result}.
When the sampling is extreme (\textit{towards the left, with a smaller mixing parameter}), RAM and MAM perform significantly better than EAM along both evaluation metrics.
With more randomness (larger mixing), EAM beats the other methods, but our proposed RAM still has competitive performance; Biased SVM performs poorly throughout.
This shows that in extremely biased situations the proposed models MAM and RAM provide significant improvements by explicitly accounting for the sampling bias.
On the other hand, EAM provides a slight improvement at high mixing (random sampling) since it is tailored specifically for such scenarios, where the SCAR assumption holds true.
\subsection{Application to Real E-Commerce Data}
\label{ecomm_expt}
This section describes the application of RAM to a proprietary dataset from the traffic logs of an e-commerce website.
\textbf{Data Description:}
The data contains a record for every page request, here referred to as a `hit'.
We consider a one week period, and collapse these records into `sessions' for each user.
A session combines a series of hits made by a user; a session ends after 30 minutes of inactivity.
Overall we identify $3.6$ million unique visitors from $6$ million sessions, and more than 100 million hits.
The task is to label a session as arising from a human or a bot.
The sessions from legitimate bots are filtered out using user-agent strings.
The feature representation of each session uses a standard set of technology features (e.g.\ browser and device types), behavioral features (e.g.\ time between hits) and session-related features (e.g.\ timezone and time-of-day).
Since this is an e-commerce website, we also have information as to whether a particular session resulted in a purchase. This information is leveraged to build our partial set of positives, the details are presented next.
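The sessionization step can be sketched as follows (our illustration; the timestamps of a single user are assumed to be sorted and given in seconds):
\begin{verbatim}
# Group one user's hit timestamps into sessions; a session ends after
# 30 minutes of inactivity.
def sessionize(timestamps, gap=30 * 60):
    sessions, current = [], [timestamps[0]]
    for prev, ts in zip(timestamps, timestamps[1:]):
        if ts - prev > gap:        # inactivity gap closes the session
            sessions.append(current)
            current = []
        current.append(ts)
    sessions.append(current)
    return sessions
\end{verbatim}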
\textbf{Known subset of positives:}
Out of the $6$ million sessions, $36k$ $(0.6\%)$ sessions are \textit{purchase} sessions and $360k$ $(6\%)$ sessions belong to an identified purchaser.
We label this $6\%$ of sessions as positive (Human class).
The dataset is then split into train, test, and validation sets in an 80:10:10 ratio for modeling purposes.
\textbf{Partially labeled test dataset:}
To validate our approach, we split the test data into $3$ groups of points and observe the distribution of prediction scores across these classes. This split is based on heuristics which we describe next. \textit{Positive data points}: the subset of sessions whose user corresponds to a purchase session in the training dataset. \textit{Negative data points}: the subset of sessions originating from AWS/Azure servers, the assumption being that browsing sessions originating from these cloud environments are unlikely to be initiated by humans. The sets of Azure/AWS IPs are publicly available \cite{18,19}. \textit{Unlabeled data points}: the set of sessions which are tagged neither as positive nor as negative.
It is important to note that all points during model training had the label `Positive' or `Unlabeled'.
The `Negative' label is only used for validation.
Note that the set of known positives is neither complete (not all humans purchase), nor is it an unbiased sample (different users have varying propensities for purchases).
\begin{table}[ht]
\centering
\scalebox{0.85}{
\begin{tabular}{LLLL}
\hline
Class of sessions & No. of sessions & No. predicted human & \% predicted human\\\hline
Positive & 74k & 73k & $\sim 99\%$\\
Negative & 24k & 608 & $\sim 2.5\%$\\
Unlabeled & 1.08M & 890k & $\sim 82\%$\\
\hline\\[-1em]
Total & 1.18M & 965k & $\sim 82\%$\\
\hline
\end{tabular}
}
\caption{Test-set observations for E-commerce dataset.}
\label{table:obs}
\end{table}
Using the validation set, we identify a threshold that captures $99\%$ of positive labels, and the output score of the RAM model is converted into a boolean \textit{is-human} label using this threshold. Table \ref{table:obs} shows the breakdown of the traffic in the dataset and how RAM classifies points from each of these classes. As seen in the table, we misclassify only a few negatively labeled sessions ($<3\%$) and, in total, we have close to 82\% human traffic as reported by this model. We expect a high share of human traffic since the website has strict login requirements for accessing its content. Additionally, we observe a stark separation in the prediction scores of the positive and negative classes: most positive samples have a score close to 1, while negatives are scored close to 0.
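The threshold selection admits a one-line realization; the sketch below is our own illustration of the procedure, assuming the validation scores of the known positives are available as an array.
\begin{lstlisting}[language=Python]
import numpy as np

def recall99_threshold(positive_val_scores):
    """Classifying every score above the 1st percentile of the
    known positives' validation scores as human preserves ~99%
    recall on the positive class."""
    return np.percentile(positive_val_scores, 1.0)

# Usage sketch (variable names are illustrative):
# thr = recall99_threshold(val_scores[val_labels == 1])
# is_human = test_scores >= thr
\end{lstlisting}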
\section{Conclusions}
In this paper, we have addressed the problem of detecting and filtering non-human traffic using positive and unlabeled data. Providing recommendation and personalization systems with unbiased data to learn from leads to a better experience for the end customer.
We specifically accounted for the \textit{selected completely at random} assumption of standard PU learning methods and conducted simulation studies for validation.
We also evaluated our most general model, RAM, on a large real-world e-commerce dataset.
We showed that the model clearly separates the known positives from negatives chosen via a heuristic.
Given the scale of fraud due to bots, such bot detection systems have clear utility, and the methods described in this paper show promising results in addressing the endemic bot problem.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{sec:intro}
Strings play a key role in any programming language due to the many and different ways in which they are used, for instance to dynamically access object properties, to hide the program code by using string-to-code statements and reflection, or to manipulate data-interchange formats, such as JSON, just to name a few. Despite the great effort spent in reasoning about strings, static analysis has often failed to manage programs that heavily manipulate strings, mainly due to the inaccuracy of the results and the prohibitive amount of resources (time, space) required to retrieve useful information on strings. On the one hand, finite-height string abstractions~\cite{costantini2015} are computable in a reasonable time, but precision is suddenly lost when advanced string manipulations are used. On the other hand, more sophisticated abstractions (e.g., the ones reported in~\cite{arceri2019-fa,mstring}) compute precise results, but they require a huge, and sometimes unrealistic, computational cost, making such code intractable for these abstractions. A good representative of such abstractions is the finite state automata domain~\cite{arceri2019-fa}. Over-approximating strings with finite state automata has been shown to increase string analysis accuracy in many scenarios, but it does not scale up to real-world programs dealing with statically unknown inputs and long text manipulations.
The problem of statically analyzing strings has already been tackled in different contexts in the literature~\cite{mstring,arceri2019-fa,park2016,sas2003,madsen2014,abdulla2014,costantini2015}. The original finite state automata abstract domain has been defined in~\cite{arceri2019-fa} in the context of dynamic languages, providing an automata-based abstract semantics for common ECMAScript string operations. The same abstract domain has also been integrated into a sound-by-construction analysis for string-to-code statements~\cite{arceri2020}. The authors of~\cite{otherautomata} provided an automata abstraction merged with interval abstractions for analyzing JavaScript arrays and objects. In~\cite{sas2003}, the authors propose a static analysis of Java strings based on the abstraction of the control-flow graph as a context-free grammar. Regular strings~\cite{regstrings} is an abstraction of the finite state automata domain and approximates strings as a strict subset of regular expressions. Although it does not tackle the problem of analyzing strings, \cite{midtgaard2016} proposes a lattice-based generalization of regular expressions, showing a regular-expression-based domain parametric on a lattice of reference. Finally, automata have also been employed in model checking in order to tackle the well-known problem of state-space explosion~\cite{bouajjani2004,bouajjani2006}.
\vskip10pt
In this paper we introduce $\tarsis$, a new abstract domain for string values based on finite state automata (FSA). Standard FSAs have been shown to provide precise abstractions of string values when all the components of such strings are known, but at a high computational cost. Instead of considering standard finite automata built over an alphabet of single characters, $\tarsis$ considers automata that are built over an alphabet of strings. The alphabet comprises a special value to represent statically unknown strings. This avoids the creation of self-loops over every possible character, which would otherwise significantly degrade performance. We define the abstract semantics of mainstream string operations, namely \code{substring}, \code{length}, \code{indexOf}, \code{replace}, \code{concat} and \code{contains}, either directly on the automaton or on its equivalent regular expression. Soundness proofs are provided for a subset of the operations.
$\tarsis$ has been implemented into a prototypical static analyzer supporting a subset of Java. By comparing $\tarsis$ with other cutting-edge domains for string analysis, results show that (i) when applied to simple code that causes a precision loss in simpler domains, $\tarsis$ correctly approximates string values within a comparable execution time, (ii) on code that makes the standard automata domain unusable due to the complexity of the analysis, $\tarsis$ completes the analysis in a limited amount of time, making it a viable domain for complex and real codebases, and (iii) $\tarsis$ is able to precisely abstract complex string operations that have not been addressed by state-of-the-art domains.
The rest of the paper is structured as follows. Sect.~\ref{sec:motivating} introduces a motivating example. Sect.~\ref{sec:bg} defines the mathematical notation used throughout the paper. Sect.~\ref{sect:domain} formalizes $\tarsis$ and its abstract semantics. Sect.~\ref{sec:experiments} reports experimental results and comparison with other domains, while Sect.~\ref{sec:conclusion} concludes. Selected proofs can be found in Appendix~\ref{sect:proofs}.
\section{Motivating example}
\label{sec:motivating}
Consider the code of Fig.~\ref{code:countmatches}, which counts the occurrences of string {\tt sub} in string {\tt str}. This code is (a simplification of) the \textit{Apache commons-lang} library method \code{StringUtils.countMatches}\footnote{{\tt \url{https://commons.apache.org/proper/commons-lang/}}}, one of the most popular Java libraries providing extra functionalities over the core classes of the Java lang library (which contains class {\tt String} as well). Proving properties about the value of \code{count} after the loop is particularly challenging, since it requires correctly modeling a set of string operations (namely, {\tt length}, {\tt contains}, {\tt indexOf}, and {\tt substring}) and their interaction. State-of-the-art string analyses fail to model most of these operations precisely, since their abstraction of string values is not rigorous enough to deal with such situations. This loss of precision usually makes it impossible to prove string-based properties (also on non-string values) in real-world software, such as the numerical bounds of the value returned by method {\tt countMatches} when applied to some string values.
The goal of this paper is to provide an abstract interpretation-based static analysis that can deal with complex and nested string manipulations similar to the one reported in Fig.~\ref{code:countmatches}. As we will discuss in Sect.~\ref{sec:experiments}, $\tarsis$ models (among others) all the string operations used in \code{countMatches},
and it is precise enough to infer, given the abstractions of \code{str} and \code{sub}, the precise range of values that \code{count} might have at the end of the method.
\begin{figure}[t]
\begin{CenteredBox}
\begin{lstlisting}
int countMatches(String str, String sub) {
int count = 0;
int len = sub.length();
while (str.contains(sub)) {
int idx = str.indexOf(sub);
count = count + 1;
int start = idx + len;
int end = str.length();
str = str.substring(start, end);
}
return count;
}
\end{lstlisting}
\end{CenteredBox}
\caption{A program that counts the occurrences of a string into another one}
\label{code:countmatches}
\vskip-15pt
\end{figure}
\section{Preliminaries}\label{sec:bg}
\noindent\textbf{Mathematical notation.} $\;$ Given a set $S$, $S^*$ is the set of all finite sequences of elements of $S$. If $s = s_0\dots s_{n}\in S^*$, $s_i$ is the $i$-th element of $s$, $|s| = n + 1$ is its length, and $s[x / y]$ is the sequence obtained replacing all occurrences of $x$ in $s$ with $y$. When $s'$ is a subsequence of $s$, we write $s' \sub s$. We denote by $s^n, n \ge 0$ the $n$-times repetition of the string $s$.
Given two sets $S$ and $T$, $\wp(S)$ is the powerset of $S$, $S \smallsetminus T$ is the set difference, $S \subset T$ is the strict inclusion relation between $S$ and $T$, $S \subseteq T$ is the inclusion relation between $S$ and $T$, and $S \times T$ is the Cartesian product between $S$ and $T$.
\noindent\textbf{Ordered structures.} $\;$
A set $L$ with a partial ordering relation $\leq \subseteq L \times L$ is a poset, denoted by $\tuple{L,\leq}$. A poset $\tuple{L,\leq, \vee , \wedge}$, where $\vee$ and $\wedge$ are respectively the least upper bound (lub) and greatest lower bound (glb) operators of $L$, is a lattice if $\forall x,y \in L \st x \vee y$ and $x \wedge y$ belong to $L$. It is also complete if $\forall X \subseteq L$ we have that $\bigvee X,\bigwedge X\in L$.
A complete lattice $L$, with ordering $\leq$, lub $\vee$, glb $\wedge$, top element $\top$, and bottom element $\bot$ is denoted by $\tuple{L,\leq,\vee,\wedge,\top,\bot}$.
\noindent\textbf{Abstract interpretation.} $\;$
Abstract interpretation~\cite{cc77,cc79} is a theoretical framework for sound reasoning about semantic properties of a program, establishing a correspondence between the concrete semantics of a program and an approximation of it, called abstract semantics. Let $C$ and $A$ be complete lattices, a pair of monotone functions $\alpha: C \rightarrow A$ and $\gamma: A \rightarrow C$ forms a \emph{Galois Connection} (GC) between $C$ and $A$ if $\forall x \in C, \forall y \in A : \alpha(x) \leq_A y \Leftrightarrow x \leq_C \gamma(y)$. We denote a GC as $C \galois{\alpha}{\gamma} A$.
Given $C\galois{\alpha}{\gamma}A$, a concrete function $f: C \rightarrow C$ is, in general, not computable. Hence, a function $f^\sharp : A \rightarrow A$ that must \textit{correctly} approximate the function $f$ is needed. If so, we say that the function $f^\sharp$ is \textit{sound}. Given $C \galois{\alpha}{\gamma} A$ and a concrete function $f : C \rightarrow C$, an abstract function $f^\sharp : A \rightarrow A$ is sound w.r.t. $f$ if $\forall c \in C.\:\alpha(f(c)) \leq_A f^\sharp(\alpha(c))$.
Completeness~\cite{giacobazzi2000} can be obtained by enforcing the equality of the soundness condition and it is called \textit{backward completeness}. Given $C \galois{\alpha}{\gamma} A$, a concrete function $f : C \rightarrow C$ and an abstract function $f^\sharp : A \rightarrow A$, $f^\sharp$ is backward complete w.r.t. $f$ if $\forall c \in C.\:\alpha(f(c)) = f^\sharp(\alpha(c))$.
\noindent\textbf{Finite state automata and regular expression notation.} $\;$ We follow the notation reported in~\cite{arceri2019-fa} for introducing finite state automata. A finite state automaton (FA) is a tuple $\aut = \tuple{Q, \Sigma, \delta, q_0, F}$, where $Q$ is a finite set of states, $q_0 \in Q$ is the initial state, $\Sigma$ is a finite alphabet of symbols, $\delta \subseteq Q \times \Sigma \times Q$ is the transition relation and $F \subseteq Q$ is the set of final states. If $\delta : Q \times \Sigma \rightarrow Q$ is a function then $\aut$ is called a deterministic finite state automaton. The set of all FAs is $\fa$. If $\lang \subseteq \Sigma^*$ is recognized by an FA, we say that $\lang$ is a regular language.
Given $\aut\in\fa$, $\lang(\aut)$ is the language accepted by $\aut$. By the Myhill-Nerode theorem, for each regular language there uniquely exists a minimum FA (w.r.t. the number of states) recognizing the language. Given an FA $\aut$, $\minimize(\aut)$ is the minimum FA recognizing $\lang(\aut)$. Abusing notation, given a language $\lang$, $\minimize(\lang)$ is the minimal FA recognizing $\lang$.
We denote by $\paths{\aut} \in \wp(\delta^*)$ the set of sequences of transitions corresponding to all the possible paths from the initial state $q_0$ to a final state $q_n \in F$. Given $\pi \in \paths{\aut}$, $|\pi|$ is its length, meaning the sum of the lengths of the symbols that appear on the transitions composing the path. Furthermore, $\minpath{\aut} \in \paths{\aut}$ and $\maxpath{\aut} \in \paths{\aut}$ are the paths of minimum and maximum length, respectively. Given $\pi = t_0\dots t_n \in \mathsf{paths}(\aut)$, $\sigma_{\pi_{i}}$ is the symbol read by the transition $t_i$, $i \in [0,n]$, and $\sigma_\pi =\sigma_{\pi_0}\dots\sigma_{\pi_n}$ is the string recognized by such a path. The predicate $\hascycle{\aut}$ holds if and only if the given automaton contains a loop. Throughout the paper, it will sometimes be more convenient to refer to a finite state automaton through its equivalent regular expression (regex for short). Given two regexes $\re_1$ and $\re_2$, $\re_1 \ || \ \re_2$ is the disjunction of $\re_1$ and $\re_2$, $\re_1\re_2$ is the concatenation of $\re_1$ with $\re_2$, and $(\re_1)^*$ is the Kleene closure of $\re_1$.
\noindent\textbf{The finite state automata abstract domain.} $\;$ Here, we report the necessary notions about the finite state automata abstract domain presented in~\cite{arceri2019-fa}, which over-approximates string properties by the minimum deterministic finite state automaton recognizing them.
Given an alphabet $\Sigma$, the finite state automata domain is defined as $\latticefa$, where $\fa$ is the quotient set of $\DFA$ \wrt the equivalence relation induced by language equality, $\leqfa$ is the partial order induced by language inclusion, $\lubfa$ and $\glbfa$ are the lub and the glb, respectively. The minimum is $\minimize(\varnothing)$, that is, the automaton recognizing the empty language and the maximum is $\minimize(\Sigma^*)$, that is, the automaton recognizing any possible string over $\Sigma$. We abuse notation by representing equivalence classes in $\fa$ by one of its automaton (usually the minimum), \ie when we write $\aut\in\fa$ we mean $[\aut]_{\equiv}$. Since $\fa$ does not satisfy the Ascending Chain Condition (ACC), \ie it contains infinite ascending chains, it is equipped with the parametric widening $\widfa$. The latter is defined in terms of a state equivalence relation merging states that recognize the same language, up to a fixed length $n \in \nats$, a parameter used for tuning the widening precision~\cite{bartzis2004,silva2006}. For instance, let us consider the automata $\aut, \aut^\prime \in \fa$ recognizing the languages $\lang = \{\epsilon, a\}$ and $\lang^\prime = \{\epsilon, a, aa\}$, respectively. The result of the application of the widening $\widfa$, with $n = 1$, is $\aut \mathbin{\widfa} \aut^\prime = \aut^\second$ s.t. $\lang(\aut^\second) = \sset{a^n}{n \in \nats}$.
\noindent\textbf{Core language and semantics.} $\;$
\begin{figure}[t]
\begin{framed}
\vbox{%
\setlength{\grammarparsep}{3pt plus 1pt minus 1pt}
\setlength{\grammarindent}{4em}
\renewcommand{\syntleft}{} \renewcommand{\syntright}{}
\begin{grammar}
<$\aexp \in$ \aexps> $::=$ $x\in\ids$ ~|~ $n\in\ints$
~|~ $\aexp$ \tt{+} $\aexp$
~|~ $\aexp$ \tt{-} $\aexp$
~|~ $\aexp$ \tt{*} $\aexp$
~|~ $\aexp$ \tt{/} $\aexp$
\alt \length{$\sexp$}
~|~ \indexof{$\sexp$}{$\sexp$}
<$\bexp \in$ \bexps> $::=$ $x \in\ids$ ~|~ \true ~|~ \false
~|~ $\bexp$ \tt{\&\&} $\bexp$
~|~ $\bexp$ \tt{||} $\bexp$
~|~ \tt{!} $\bexp$
\alt $\exp$ \tt{\textless} $\exp$
~|~ $\exp$ \tt{==} $\exp$
~|~ \contains{$\sexp_1$}{$\sexp_2$}
<$\sexp \in$ \sexps> $::=$ $x\in\ids$ ~|~ $\str{\sigma}$
~|~ \subs{$\sexp$}{$\aexp$}{$\aexp$}
\alt \concat{$\sexp$}{$\sexp$}
~|~ \replace{$\sexp$}{$\sexp$}{$\sexp$} $\qquad (\sigma \in \Sigma^*)$
<$\exp \in$ \exps> $::=$ $\aexp$
~|~ $\bexp$
~|~ $\sexp$
<$\stmt \in$ \stmts> $::=$ $\stmt$ {\tt ;} $\stmt$ ~|~ $\ski$ ~|~ $x$ {\tt =} $\exp$
~|~ \ifc{$\bexp$}{$\stmt$}{$\stmt$}
\alt \while{$\bexp$}{$\stmt$}
<$\prog \in \imp$> $::=$ $\stmt$ {\tt ;}
\end{grammar}
}%
\vspace{-14pt}
\end{framed}
\caption{$\imp$ syntax}
\label{fig:imp-syntax}
\vskip-15pt
\end{figure}
We introduce a minimal core language $\imp$, whose syntax is reported in Fig.~\ref{fig:imp-syntax}. This language supports the main operators over strings. In particular, $\imp$ supports arithmetic expressions ($\aexps$), Boolean expressions ($\bexps$) and string expressions ($\sexps$). Primitive values are $\val = \ints \cup \Sigma^* \cup \{\true, \false\}$, namely integers, strings and booleans. Program states $\Mem : \ids \rightarrow \val$ map identifiers to primitive values, and are ranged over by the meta-variable $\mem$. The concrete semantics of $\imp$ statements is captured by the function $\csem{\stmt} : \Mem \rightarrow \Mem$. The semantics is defined in a standard way, and it is reported in Appendix~\ref{sect:impstsem}. Such semantics relies on the one of expressions, which we capture, abusing notation, as $\csem{\exp} : \Mem \rightarrow \val$. While the semantics of arithmetic and Boolean expressions is straightforward (and not of interest for this paper), we define the part concerning strings in Fig.~\ref{imp:expressions}.
\begin{figure}[t]
\begin{align*}
\csem{\subs{\sexp}{\aexp}{\aexp'}}\mem &=
\sigma_{i}\dots \sigma_{j} \qquad \mbox{if } i \leq j < |\sigma| \\
\csem{\length{\sexp}}\mem &= |\sigma|\\
\csem{\indexof{\sexp}{\sexp'}}\mem &= \begin{cases}
\min\sset{i}{\sigma_i\dots\sigma_j = \sigma'} & \mbox{if }\exists i,j\in\nats\st \sigma_i\dots\sigma_j = \sigma' \\
-1 & \mbox{otherwise}
\end{cases}\\
\csem{\replace{\sexp}{\sexp'}{\sexp''}}\mem &= \begin{cases}
\sigma[\sigma' / \sigma'']& \mbox{if } \sigma' \sub \sigma\\
\sigma & \mbox{otherwise}
\end{cases}\\
\csem{\concat{\sexp}{\sexp'}}\mem &= \sigma\cdot\sigma'\\
\csem{\contains{\sexp}{\sexp'}}\mem &= \begin{cases}
\true & \mbox{if } \exists i,j\in\nats\st\sigma_i\dots\sigma_j = \sigma'\\
\false & \mbox{otherwise}
\end{cases}
\end{align*}
\caption{Concrete semantics of $\imp$ string expressions}
\label{imp:expressions}
\end{figure}
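For concreteness, these string cases admit a direct executable transcription; the following Python sketch is our own rendering of the definitions above (with the inclusive indices of {\tt substring} as written, and {\tt None} standing in for the undefined case), not part of the original formalization.
\begin{lstlisting}[language=Python]
# Direct transcription (ours) of the concrete string semantics:
# indexOf returns the minimal occurrence (or -1) and replace
# rewrites all occurrences of s1, only when s1 occurs in s.
def substring(s, i, j): return s[i:j + 1] if i <= j < len(s) else None
def length(s): return len(s)
def index_of(s, s1): return s.find(s1)
def replace(s, s1, s2): return s.replace(s1, s2) if s1 in s else s
def concat(s, s1): return s + s1
def contains(s, s1): return s1 in s
\end{lstlisting}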
\section{The $\tarsis$ abstract domain}\label{sect:domain}
In this section, we recast the original finite state abstract domain working over an alphabet of character $\Sigma$, reported in Sect.~\ref{sec:bg}, to an augmented abstract domain based on finite state automata over an alphabet of strings.
\subsection{Abstract domain and widening}\label{sect:domandwid}
The key idea of $\tarsis$ is to adopt the same abstract domain, changing the alphabet on which finite state automata are defined to a set of strings, namely $\Sigma^*$. Clearly, the main concern here is that $\Sigma^*$ is infinite, and this would not permit us to adopt the finite state automata model, which requires the alphabet to be finite. Thus, in order to solve this problem, we make the abstract domain \textit{parametric} in the program we aim to analyze, and in particular in its strings. Given an $\imp$ program $\prog$, we denote by $\Sigma^*_\prog$ the set of all substrings of the strings appearing in $\prog$\footnote{The set $\Sigma^*_\prog$ can be easily computed by collecting the constant strings in $\prog$ through a visit of its abstract syntax tree and then computing their substrings.}. The alphabet $\Sigma^*_\prog$ contains any possible string that can be computed by the program $\prog$, \textit{delimiting} the space of string properties we aim to check on
$\prog$.
At this point, we can instantiate the automata-based framework proposed in~\cite{arceri2019-fa} with the new alphabet as
$$
\latticehfa
$$
The alphabet on which finite state automata are defined is $\alphabet{\prog} \defn \Sigma^*_\prog \cup \{\ctop\}$, where $\ctop$ is a special symbol that we read as \textit{"any possible string"}. Let $\HFA$ be the set of all deterministic finite state automata over the alphabet $\alphabet{\prog}$. Thus, $\hfa$ is the quotient set of $\HFA$ \wrt the equivalence relation induced by language equality. $\leqhfa$ is the partial order induced by language inclusion, while $\lubhfa$ and $\glbhfa$ are the lub and the glb, corresponding to the union and the intersection automata operations, respectively. The bottom element is $\minimize(\varnothing)$, the automaton recognizing the empty language, and the top element is $\minimize(\alphabet{\prog}^*)$, namely the automaton recognizing any string over~$\alphabet{\prog}$.
Like the standard finite state automata domain $\fa$, $\hfa$ is not a complete lattice and, consequently, it does not form a Galois Connection with the string concrete domain $\wp(\Sigma^*)$. This comes from the non-existence, in general, of the best abstraction of a set of strings in $\hfa$ (e.g., a context-free language has no best abstract element in $\hfa$ approximating it). Nevertheless, this is not a concern, since weaker forms of abstract interpretation are still possible~\cite{cc92} while still guaranteeing soundness relations between concrete and abstract elements (e.g., polyhedra~\cite{ch78}). In particular, even without the best abstraction, we can still ensure soundness by comparing the concretizations of our abstract elements (cf. Sect. 8 of~\cite{cc92}). Hence, we define the concretization function $\gammahfa : \hfa \rightarrow \wp(\Sigma^*)$ as
$
\gammahfa(\aut) \defn \bigcup_{\sigma \in \lang(\aut)} \fla(\sigma)
$,
where $\fla$ converts a string over $\alphabet{\prog}$ into a set of strings over $\Sigma^*$. For instance $\fla(a \; \ctop\ctop \; bb \; c) = \sset{a\sigma bbc}{\sigma \in\Sigma^*}$.
\paragraph*{Widening.} Similarly to the standard automata domain $\fa$, $\hfa$ does not satisfy ACC, meaning that fix-point computations over $\hfa$ may not converge in a finite time. Hence, we need to equip $\hfa$ with a widening operator to ensure the convergence of the analysis. We define the widening operator $\widhfa{n} : \hfa \times \hfa \rightarrow \hfa$, parametric in $n \in \nats$, taking two automata as input and returning an over-approximation of their least upper bound, as required by the definition of widening. We rely on the standard automata widening reported in Sect.~\ref{sec:bg}, which, informally speaking, can be seen as a \textit{subset construction} algorithm~\cite{davis1994} up to languages of strings of length $n$.
In order to explain the widening $\widhfa{n}$,
consider the following function manipulating strings.\footnote{For the sake of readability, in the program examples presented in this paper \lstinline{+} operation between strings corresponds to the string concatenation.}
\begin{CenteredBox}
\begin{lstlisting}[escapeinside={(*}{*)}]
function f(v) {
res = "";
while ((*?*))
res = res + "id = " + v;
return res;
}
\end{lstlisting}
\end{CenteredBox}
The function {\tt f} takes an input parameter {\tt v} and returns variable {\tt res}. Let us suppose that {\tt v} is a statically unknown string, corresponding to the automaton recognizing $\ctop$ (i.e., $\minimize(\{\ctop\})$). The result of the function {\tt f} is a string of the form $\mathtt{id = } \ctop$, repeated zero or more times.
Since the {\tt while} guard is unknown, the number of iterations is statically unknown and, in turn, so is the number of concatenations performed inside the loop body.
The goal here is to over-approximate the value returned by the function {\tt f}, i.e., the value of {\tt res} at the end of the function.
\begin{figure}[t]
\begin{subfigure}[b]{0.41\textwidth}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.9cm, semithick]
\node[initial,state,scale=\nodesize, initial text =, accepting] (0) {$q_0$};
\node[state,scale=\nodesize] (1) [right of=0] {$q_1$};
\node[state,scale=\nodesize, accepting] (2) [right of=1] {$q_2$};
\path[->] (0) edge node {{\tt id = }} (1);
\path[->] (1) edge node {$\ctop$} (2);
\end{tikzpicture}
\caption{Value of {\tt res} ($\aut$) at the beginning of the 2nd iteration of the loop}
\label{fig:1-it}
\end{subfigure}
\begin{subfigure}[b]{0.59\textwidth}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.9cm, semithick]
\node[initial,state,scale=\nodesize, initial text =] (0) {$q_0$};
\node[state,scale=\nodesize] (1) [right of=0] {$q_1$};
\node[state,scale=\nodesize] (2) [right of=1] {$q_2$};
\node[state,scale=\nodesize] (3) [right of=2] {$q_3$};
\node[state,scale=\nodesize, accepting] (4) [right of=3] {$q_4$};
\path[->] (0) edge node {{\tt id = }} (1);
\path[->] (1) edge node {$\ctop$} (2);
\path[->] (2) edge node {{\tt id = }} (3);
\path[->] (3) edge node {$\ctop$} (4);
\end{tikzpicture}
\caption{Value of {\tt res} ($\aut'$) at the end of the 2nd iteration of the loop}
\label{fig:2-it}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.9cm, semithick]
\node[initial,state,scale=\nodesize, initial text =, accepting] (0) {$q_0,q_4$};
\node[state,scale=\nodesize] (1) [right of=0] {$q_1$};
\node[state,scale=\nodesize, accepting] (2) [right of=1] {$q_2$};
\node[state,scale=\nodesize] (3) [right of=2] {$q_3$};
\path[->] (0) edge node {{\tt id = }} (1);
\path[->] (1) edge node {$\ctop$} (2);
\path[->] (2) edge node {{\tt id = }} (3);
\path[->] (3) edge[bend right=40] node[swap] {$\ctop$} (0);
\end{tikzpicture}
\caption{The result of $\aut \widhfa{2} \aut'$}
\label{fig:3-it}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.5cm, semithick]
\node[initial,accepting, state,scale=\nodesize, initial text =] (0) {$q_0$};
\node[state,scale=\nodesize] (1) [right of=0] {$q_1$};
\path[->] (0) edge node {{\tt id = }} (1);
\path[->] (1) edge[bend right=50] node[swap] {$\ctop$} (0);
\end{tikzpicture}
\caption{Minimized version of $\aut \widhfa{2} \aut'$}
\label{fig:4-it}
\end{subfigure}
\caption{Example of widening application}
\label{fig:widening}
\vskip-20pt
\end{figure}
Let $\aut$, reported in Fig.~\ref{fig:1-it}, be the automaton abstracting the value of {\tt res} before starting the second iteration of the loop, and let $\aut'$, reported in Fig.~\ref{fig:2-it}, be the automaton abstracting the value of {\tt res} at the end of the second iteration. At this point, we want to apply the widening operator $\widhfa{n}$ to $\aut$ and $\aut'$, which works as follows. We first compute $\aut \lubhfa \aut'$ (corresponding to the automaton reported in Fig.~\ref{fig:2-it}, except that $q_0$ and $q_2$ are also final states). On this automaton, we merge the states that recognize the same strings of length up to $n$, with $n \in \nats$. In our example, let $n$ be $2$. The resulting automaton is reported in Fig.~\ref{fig:3-it}, where $q_0$ and $q_4$ are merged together, while the other states are left as singletons since they cannot be merged with any other state. Fig.~\ref{fig:4-it} depicts the minimized version of Fig.~\ref{fig:3-it}.
The widening $\widhfa{n}$ has been proved to meet the widening requirements (i.e., over-approximation of least upper bounds and convergence on infinite ascending chains) in~\cite{silva2006}. The parameter $n$, tuning the widening precision, is arbitrary and can be chosen by the user. As highlighted in~\cite{arceri2019-fa}, the higher $n$ is, the more precise the corresponding widening operator is in over-approximating lubs of infinite ascending chains (i.e., in fix-point computations).
A classical improvement of widening-based fix-point computations is to integrate a threshold~\cite{cortesi2011}: widening is applied to over-approximate lubs only when a certain threshold (usually over some property of abstract values) is exceeded. In fix-point computations, we apply the previously defined widening $\widhfa{n}$ only when the number of states of the lubbed automata exceeds the threshold $\tau \in \nats$. This permits us to postpone the widening application, obtaining more precise abstractions as long as the automata sizes do not exceed the threshold. At the moment, the threshold $\tau$ is not automatically inferred, as this requires further investigation.
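A sketch of this threshold-gated widening step is shown below; the helper functions stand for an assumed automata library and are not part of our implementation's actual API.
\begin{lstlisting}[language=Python]
def lub_or_widen(old, new, n, tau, lib):
    """Apply the parametric widening only when the lub of the two
    automata grows beyond tau states; below the threshold the
    plain lub is kept, postponing the precision loss."""
    joined = lib.lub(old, new)
    if lib.num_states(joined) > tau:
        return lib.widen(old, new, n)   # parametric widening
    return joined
\end{lstlisting}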
\subsection{String abstract semantics of $\imp$}\label{sect:impabssem}
In this section, we define the abstract semantics of the string operators defined in Sect.~\ref{sec:bg} over the new string domain $\hfa$. Since $\imp$ supports string, integer and boolean values, we need a way to merge the corresponding abstract domains. In particular, we abstract integers with the well-known interval abstract domain~\cite{cc77}, defined as
$
\intervals \defn \sset{[a,b]}{a,b \in \ints \cup \{-\infty,+\infty\}, a \leq b} \cup \{\botintervals\}
$
\noindent
and Booleans with $\bools \defn \wp(\{\true, \false\})$. As usual, we denote by $\lubintervals$ and $\lubbools$ the lubs between intervals and Booleans, respectively. In particular, we merge such abstract domains in $\aval$ by the coalesced sum abstract domain~\cite{arceri2017} as
$$
\aval \defn \hfa \oplus \intervals \oplus \bools
$$
Informally, the coalesced sum abstract domain introduces a new bottom and top element, and it \textit{coalesces} the bottom elements of the involved domains.
The program state is represented through abstract program memories $\aMem : \ids \rightarrow \aval$, from identifiers to abstract values. The abstract semantics is captured by the function $\asem{\stmt} : \aMem \rightarrow \aMem$, relying on the abstract semantics of expressions defined, abusing notation, as $\asem{\exp} : \aMem \rightarrow \aval$. We focus on the abstract semantics of string operations\footnote{Since the abstract semantics of {\tt concat} does not add any further important technical detail to the paper, it is reported in Appendix~\ref{sect:otherops}.}, while the semantics of the other expressions is standard and does not involve strings.
\noindent\textbf{Length} $\;$ Given $\aut \in \hfa$, the abstract semantics of {\tt length} returns an interval $\left[c_1, c_2\right]$ such that $\forall \sigma \in \lang(\aut) \st c_1 \le |\sigma| \le c_2$. We recast the original idea of the abstract semantics of {\tt length} over standard finite state automata. Let $\sexp \in \sexps$, supposing that $\asem{\sexp}\amem = \aut \in \hfa$. The {\tt length} abstract~semantics~is:
$$
\asem{\length{\sexp}}\amem \defn
\begin{cases}
[|\minpath{\aut}|, +\infty] & \mbox{if } \hascycle{\aut} \lor \readstop{\aut}\\
[|\minpath{\aut}|, |\maxpath{\aut}|] & \mbox{otherwise}
\end{cases}
$$
where $\readstop{\aut}\Leftrightarrow\exists q,q'\in Q\st(q, \ctop, q') \in \delta$. Note that, when evaluating the length of the minimum path, $\ctop$ is considered to have a length of $0$. For instance, consider the automaton $\aut$ reported in Fig.~\ref{fig:length1}. The minimum path of $\aut$ is $(q_0, aa, q_1), (q_1, \ctop, q_2), (q_2, bb, q_4)$ and its length is 4. Since a transition labeled with $\ctop$ occurs in $\aut$ (and its length cannot be statically determined), the abstract {\tt length} of $\aut$ is $[4, +\infty]$. Consider now the automaton $\aut'$ reported in Fig.~\ref{fig:length2}. In this case, $\aut'$ has no cycles and no transitions labeled with $\ctop$, so the length of any string recognized by $\aut'$ can be determined. The length of the minimum path of $\aut'$ is 3 (lower path of $\aut'$), the length of the maximum path of $\aut'$ is 7 (upper path of $\aut'$) and, consequently, the abstract {\tt length} of $\aut'$ is $[3,7]$.
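A minimal executable sketch of this semantics is given below, under the simplifying assumptions (ours) that the automaton is acyclic, represented as adjacency lists, and that cycle detection is handled separately by the caller.
\begin{lstlisting}[language=Python]
import math
TOP = object()  # the special symbol for "any possible string"

def abstract_length(delta, q0, finals):
    """delta maps each state to a list of (symbol, next_state)
    pairs, where symbol is a str or TOP; returns the interval of
    lengths, with upper bound +inf when a TOP transition occurs."""
    reads_top = any(s is TOP for ts in delta.values() for s, _ in ts)
    memo = {}
    def bounds(q):  # (min, max) path length from q to a final state
        if q in memo:
            return memo[q]
        lo, hi = (0, 0) if q in finals else (math.inf, -math.inf)
        for symbol, nxt in delta.get(q, []):
            n = 0 if symbol is TOP else len(symbol)  # TOP counts 0
            slo, shi = bounds(nxt)
            lo, hi = min(lo, n + slo), max(hi, n + shi)
        memo[q] = (lo, hi)
        return (lo, hi)
    lo, hi = bounds(q0)
    return (lo, math.inf if reads_top else hi)
\end{lstlisting}
On the automaton of Fig.~\ref{fig:length1} this sketch yields $[4, +\infty]$, and on that of Fig.~\ref{fig:length2} it yields $[3, 7]$, matching the definition above.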
\begin{figure}[t]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.7cm, semithick]
\node[initial,state,scale=\nodesize, initial text =] (0) {$q_0$};
\node[state,scale=\nodesize] (1) [above right of=0] {$q_1$};
\node[state,scale=\nodesize] (2) [right of=1] {$q_2$};
\node[state,scale=\nodesize] (3) [right of=0] {$q_3$};
\node[state,scale=\nodesize, accepting] (4) [right of=3] {$q_4$};
\path[->] (0) edge node {$aa$} (1);
\path[->] (1) edge node {$\ctop$} (2);
\path[->] (2) edge node {$bb$} (4);
\path[->] (0) edge node[swap] {$bbb$} (3);
\path[->] (3) edge node[swap] {$bbb$} (4);
\end{tikzpicture}
\caption{}
\label{fig:length1}
\end{subfigure}
~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick]
\node[initial,state,scale=\nodesize, initial text =] (0) {$q_0$};
\node[state,scale=\nodesize] (1) [right of=0] {$q_1$};
\node[state,scale=\nodesize] (2) [right of=1] {$q_2$};
\node[state,scale=\nodesize] (4) [below of=1] {$q_4$};
\node[state,scale=\nodesize] (3) [right of=4] {$q_3$};
\node[state,scale=\nodesize, accepting] (5) [right of=2] {$q_5$};
\path[->] (0) edge node {$aa$} (1);
\path[->] (1) edge node {$bbb$} (2);
\path[->] (2) edge node {$cc$} (5);
\path[->] (0) edge node[swap] {$a$} (4);
\path[->] (4) edge node {$b$} (3);
\path[->] (3) edge node[swap] {$c$} (5);
\end{tikzpicture}
\caption{}
\label{fig:length2}
\end{subfigure}
\caption{(a) $\aut$ s.t. $\lang(\aut) = \{bbb\;bbb, aa \;\ctop\;bb\}$, (b) $\aut'$ s.t. $\lang(\aut') = \{a\;b\;c, aa\;bbb\;cc\}$}
\label{fig:length}
\end{figure}
\noindent\textbf{Contains} $\;$ Given $\aut, \aut' \in \hfa$, the abstract semantics of {\tt contains} should return $\true$
if every string of $\aut'$ is surely contained in every string of $\aut$, $\false$ if no string of $\aut'$ is contained in any string of $\aut$, and $\{\true, \false\}$ in the other cases. For instance, consider the automaton $\aut$ depicted in Fig.~\ref{fig:replace1} and suppose we check whether it contains the automaton $\aut'$ recognizing the language $\{aa,a\}$. The automaton $\aut'$ is a \textit{single-path automaton}~\cite{arceri2019}, meaning that any string of $\aut'$ is a prefix of its longest string. In this case, containment of the longest string (on each automaton path) implies containment of the others, as in our example: it is enough to check that the longest string of $\aut'$ is contained in $\aut$. Note that a single-path automaton cannot read the symbol $\ctop$. We rely on the predicate $\mathsf{singlePath}(\aut)$, which holds when $\aut$ is a non-cyclic single-path automaton, and we denote by $\sigma_{\mathsf{sp}}$ its longest string.
Let $\sexp, \sexp' \in \sexps$, supposing that $\asem{\sexp}\amem = \aut \in \hfa$, $\asem{\sexp'}\amem = \aut' \in \hfa$. The {\tt contains} abstract semantics~is:
$$
\asem{\contains{\sexp}{\sexp'}}\amem \defn
\begin{cases}
\false & \mbox{if } \aut' \glbhfa \mathsf{FA}(\aut) = \minimize(\varnothing) \\
\true & \mbox{if } \neg\hascycle{\aut} \land \mathsf{singlePath}(\aut')\\
& \land \forall \pi \in \paths{\aut}\st \sigma_{\mathsf{sp}} \sub \sigma_{\pi} \\
\{\true, \false\} & \mbox{otherwise}
\end{cases}
$$
In the first case, we denote by $\mathsf{FA}(\aut)$ the factor automaton of $\aut$, i.e., the automaton recognizing any substring of the strings of $\aut$. In particular, if $\aut$ does not share any substring with $\aut'$, the abstract semantics safely returns $\false$ (checking the emptiness of the greatest lower bound between $\mathsf{FA}(\aut)$ and $\aut'$). Then, if $\aut'$ is a single-path automaton and $\aut$ is not cyclic, the abstract semantics returns $\true$ if every path of $\aut$ reads the longest string of $\aut'$. Otherwise, $\{\true, \false\}$ is returned.
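The three-valued check can be sketched as follows; here \code{lib} stands for an assumed automata library providing the operations named in the docstring, and is not part of the formalization above.
\begin{lstlisting}[language=Python]
def abstract_contains(A, A1, lib):
    """Sketch of the contains semantics; lib offers factor(A),
    intersect(A,B), is_empty(A), has_cycle(A), single_path(A),
    longest(A), and paths(A) yielding lists of symbols (str or
    lib.TOP)."""
    if lib.is_empty(lib.intersect(A1, lib.factor(A))):
        return {False}            # no string of A1 occurs in A
    if not lib.has_cycle(A) and lib.single_path(A1):
        sp = lib.longest(A1)
        # render TOP as a sentinel that cannot occur inside sp,
        # so a substring test cannot match across unknown symbols
        if all(sp in "".join(s if s is not lib.TOP else "\x00"
                             for s in path)
               for path in lib.paths(A)):
            return {True}         # sp is read on every path of A
    return {True, False}          # cannot decide
\end{lstlisting}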
\noindent\textbf{IndexOf} $\;$ Given $\aut, \aut' \in \hfa$, the {\tt indexOf} abstract semantics returns an interval of the first positions of the strings of $\lang(\aut')$ inside the strings of $\lang(\aut)$, recalling that when there exists a string of $\lang(\aut')$ that is not a substring of at least one string of $\lang(\aut)$, the resulting interval must take $-1$ into account as well.
Let $\sexp, \sexp' \in \sexps$ and suppose $\asem{\sexp}\amem = \aut$ and $\asem{\sexp'}\amem = \aut'$. The abstract semantics of {\tt indexOf} is defined as:
$$
\asem{\indexof{\sexp}{\sexp'}}\amem \defn
\begin{cases}
[-1, +\infty] & \mbox{if } \hascycle{\aut} \lor \hascycle{\aut'} \lor \readstop{\aut'}\\
\left[-1, -1\right] & \mbox{if } \forall \sigma' \in \lang(\aut') \; \nexists \sigma \in \lang(\aut) \st \sigma' \sub \sigma\\
\bigsqcup^{\intervals}\limits_{\sigma \in \lang(\aut')} \mathsf{IO}(\aut, \sigma) & \mbox{otherwise}\\
\end{cases}
$$
If one of the automata has cycles, or the automaton abstracting the strings we aim to search for ($\aut'$) has a $\ctop$-transition, we return $[-1, +\infty]$.
Moreover, if none of the strings recognized by $\aut'$ is contained in a string recognized by $\aut$, we can safely return the precise interval $\left[-1, -1\right]$, since no string recognized by $\aut'$ is ever a substring of a string recognized by $\aut$.\footnote{Note that this is a decidable check since $\aut$ and $\aut'$ are cycle-free, otherwise the interval $[-1, +\infty]$ would be returned in the first case.} If none of the aforementioned conditions is met, we rely on the auxiliary function $\mathsf{IO} : \hfa \times \Sigma^* \rightarrow \intervals$ that, given an automaton $\aut$ and a string $\sigma$, returns an interval corresponding to the possible first positions of $\sigma$ in the strings recognized by $\aut$. Since $\aut'$ surely recognizes a finite language (i.e., it has no cycles), the idea is to apply $\mathsf{IO}(\aut, \sigma)$ to each $\sigma \in \lang(\aut')$ and to return the least upper bound of the resulting intervals.
In particular, the function $\mathsf{IO}(\aut, \sigma)$ returns an interval $[i,j] \in \intervals$, where $i$ and $j$ are computed as follows.
$$
i = \begin{cases}
-1 & \mbox{if } \exists \pi \in \mathsf{paths}(\aut)\st \sigma \not\sub \sigma_\pi\\
\min\limits_{\pi \in \mathsf{paths}(\aut)}\ssset{i}{\sigma \sub \sigma_\pi \land \\ \sigma_{\pi_{i}}\dots\sigma_{\pi_{i+n}} = \sigma_0\dots\sigma_{n}} & \mbox{otherwise}\\
\end{cases}
$$
$$
j = \begin{cases}
-1 & \mbox{if } \forall \pi \in \mathsf{paths}(\aut) \st \sigma \not\sub \sigma_\pi\\
+\infty & \mbox{if } \exists \pi \in \mathsf{paths}(\aut) \st \sigma \sub \sigma_\pi\\
& \land \exists j \in \nats \st \sigma_{\pi_{j}} = \ctop\\
\max\limits_{\pi \in \mathsf{paths}(\aut)} \ssset{i}{\sigma \sub \sigma_\pi \land \\ \sigma_{\pi_{i}}\dots\sigma_{\pi_{i+n}} = \sigma_0\dots\sigma_{n}} & \mbox{otherwise}\\
\end{cases}
$$
We recall that given a path $\pi$, $\sigma_{\pi_{i}}$ denotes the symbol read by the transition at the $i$-position of $\pi$ and $\sigma_\pi$ the string recognized by $\pi$.
Given $\mathsf{IO}(\aut, \sigma) = [i, j] \in\intervals$, $i$ corresponds to the minimal position where the string $\sigma$ can be found in $\aut$ for the first time, while $j$ is the maximal one. Let us first focus on the computation of the minimal position. If there exists a path $\pi$ of $\aut$ s.t. $\sigma$ does not occur in $\sigma_\pi$, then the minimal position where $\sigma$ can be found in $\aut$ does not exist and $-1$ is returned. Otherwise, the minimal position where $\sigma$ begins across the paths is returned. Let us now consider the computation of the maximal position. If no path of the automaton contains $\sigma$, then $-1$ is returned. If there exists a path where $\sigma$ occurs but the symbol $\ctop$ appears in the path, then $+\infty$ is returned. Otherwise, the maximal index where $\sigma$ begins across the paths of $\aut$ is returned.
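As a small worked example (ours), suppose $\aut$ is acyclic, $\ctop$-free, with single-character transitions, and $\lang(\aut) = \{ab, aaab\}$, while $\lang(\aut') = \{b\}$. Both paths of $\aut$ contain $b$, first at positions $1$ and $3$ respectively, hence $\mathsf{IO}(\aut, b) = [1, 3]$, which is also the result of the abstract semantics. If instead $\lang(\aut) = \{ab, aaa\}$, the path recognizing $aaa$ does not contain $b$, so $i = -1$ while $j = 1$, and the result is $[-1, 1]$.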
\noindent\textbf{Replace} $\;$ In order to give an intuition of how the abstract semantics of {\tt replace} works,
consider three automata $\aut,\aut_s,\aut_r \in \hfa$. Roughly speaking, the abstract semantics of {\tt replace} substitutes strings of $\aut_s$ with strings of $\aut_r$ inside strings of $\aut$. Let us refer to $\aut_s$ as the \textit{search automaton} and to $\aut_r$ as the \textit{replace automaton}. We distinguish two types of replacement by means of the following example. Consider the automaton $\aut \in \hfa$ depicted in Fig.~\ref{fig:replace1} and suppose that the search automaton $\aut_s$ is the one recognizing the string $bbb$ and the replace automaton $\aut_r$ is an arbitrary automaton. In this case, the {\tt replace} abstract semantics performs a \textit{must-replace} over $\aut$, namely substituting the sub-automaton composed of $q_1$ and $q_2$ with the replace automaton $\aut_r$. Instead, let us suppose that the search automaton $\aut_s$ is the one recognizing $bbb$ or $cc$. Since it is unknown which string \textit{must} be replaced (between $bbb$ and $cc$), the {\tt replace} abstract semantics needs to perform a \textit{may-replace}: when a string recognized by the search automaton is met inside a path of $\aut$, it is left unaltered in the automaton and, in the same position where the string is met, the abstract {\tt replace} only extends $\aut$ with the replace automaton. An example of may-replacement is reported in Fig.~\ref{fig:replace}, where $\aut$ is the one reported in Fig.~\ref{fig:replace1}, the search automaton $\aut_s$ is the one recognizing the language $\{bbb,cc\}$ and the replace automaton $\aut_r$ is the one recognizing the string $rr$.
\begin{figure}[t]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.6cm, semithick]
\node[initial,state,scale=\nodesize, initial text =] (0) {$q_0$};
\node[state,scale=\nodesize] (1) [right of=0] {$q_1$};
\node[state,scale=\nodesize] (2) [right of=1] {$q_2$};
\node[state,scale=\nodesize] (4) [below of=1] {$q_4$};
\node[state,scale=\nodesize] (3) [right of=4] {$q_3$};
\node[state,scale=\nodesize, accepting] (5) [right of=2] {$q_5$};
\path[->] (0) edge node {$aaa$} (1);
\path[->] (1) edge node {$bbb$} (2);
\path[->] (2) edge node {$cc$} (5);
\path[->] (0) edge node[swap] {$aa$} (4);
\path[->] (4) edge node {$b$} (3);
\path[->] (3) edge node[swap] {$c$} (5);
\end{tikzpicture}
\caption{}
\label{fig:replace1}
\end{subfigure}
~
\begin{subfigure}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick]
\node[initial,state,scale=\nodesize, initial text =] (0) {$q_0$};
\node[state,scale=\nodesize] (1) [above right of=0] {$q_1$};
\node[state,scale=\nodesize] (2) [right of=1] {$q_2$};
\node[state,scale=\nodesize] (4) [below right of=0] {$q_4$};
\node[state,scale=\nodesize] (3) [right of=4] {$q_3$};
\node[state,scale=\nodesize, accepting] (5) [above right of=3] {$q_5$};
\path[->] (0) edge node {$aaa$} (1);
\path[->] (1) edge node {$bbb$} (2);
\path[->] (2) edge node {$cc$} (5);
\path[->] (0) edge node {$aa$} (4);
\path[->] (4) edge node {$b$} (3);
\path[->] (3) edge node {$c$} (5);
\path[->] (1) edge[bend right=69] node {$rr$} (2);
\path[->] (2) edge[bend left=71] node {$rr$} (5);
\end{tikzpicture}
\caption{}
\label{fig:replace2}
\end{subfigure}
\caption{Example of may-replacement}
\label{fig:replace}
\end{figure}
Before introducing the abstract semantics of {\tt replace}, we define how to replace a string inside an automaton. In particular, we define the algorithm $\makeReplace$ in Alg.~\ref{alg:mkreplace} which, given $\aut \in \hfa$, a replace automaton $\aut^r$ and $\sigma \in \Sigma^* \cup \{\ctop\}$, returns a new automaton that is identical to $\aut$ except that $\sigma$ is replaced with $\aut^r$.
\begin{figure}[t]
\scalebox{0.85}
{%
\begin{algorithm}[H]
\KwData{$\aut^o = \tuple{Q^o, \alphabet{}, \delta^o, q^o_0, F^o}, \aut^r = \tuple{Q^r, \alphabet{}, \delta^r, q^r_0, F^r} \in \hfa, \sigma \in \Sigma^* \cup \{\ctop\}$}
\KwResult{$\aut \in \hfa$}
$Q^{result} \leftarrow Q^o \cup Q^r$;
$\delta^{result} \leftarrow \delta^o \cup \delta^r$\;
\ForEach{$\pi \in \paths{\aut^o}$}{
\ForEach{$(q_i, \sigma_0, q_{i+1}),\dots,(q_{i+n-1}, \sigma_n, q_{i+n}) \in \pi$}{
$\delta^{result} \leftarrow \delta^{result} \cup (q_i,\epsilon,q^r_0)$\;
$Q^{result} \leftarrow Q^{result} \cup \sset{(q_f,\epsilon,q_{i+n})}{q_f \in F^r}$\;
\ForEach{$k \in [i+n-1, i+1]$}{
\uIf{$\nexists (q_k,\sigma', q) \in \delta^o : q \neq q_{k+1}$}{
$Q^{result} \leftarrow Q^{result} \setminus \{q_k\}$\;
$\delta^{result} \leftarrow \delta^{result} \setminus \{(q_k,\sigma', q_{k+1})\}$\;
}
\textbf{else break}\;
}
}
}
\textbf{return} $\tuple{Q^{result}, \alphabet{}, \delta^{result}, q^o_0, F^o}$\;
\caption{$\makeReplace$ algorithm}
\label{alg:mkreplace}
\end{algorithm}
}%
\vskip-20pt
\end{figure}
Alg.~\ref{alg:mkreplace} searches the given string $\sigma$ across all paths of $\aut$,
collecting the sequences of transitions that recognize the search string $\sigma$ and extracting them from the paths of $\aut$ (lines 2-3): an $\epsilon$-transition is introduced going from the first state of the sequence to the initial state of $\aut^r$, and one such transition is also introduced for each final state of $\aut^r$, connecting that state with the ending state of the sequence (lines 4-5).
Then, the list of states composing the sequence of transitions is iterated backwards (lines 6-7), stopping at the first state that has a transition going outside of the list. All the states traversed in this way (excluding the one where the iteration stopped) are removed from the resulting automaton, together with the transitions connecting them (lines 8-9), since they were needed only to recognize the string that has been replaced. Note that $\makeReplace$ corresponds to a must-replace. At this point, we are ready to define the {\tt replace} abstract semantics. In particular, if either $\aut$ or $\aut_s$ has cycles, or $\aut_s$ has a $\ctop$-transition, we return $\minimize(\{\ctop\})$, namely the automaton recognizing $\ctop$. Otherwise, the {\tt replace} abstract~semantics is:
$$
\asem{\replace{\sexp}{\sexp_s}{\sexp_r}}\amem \defn
\begin{cases}
\aut & \mbox{if } \forall \sigma_s \in \lang(\aut_s) \\
& \nexists \sigma \in \lang(\aut)\st \\
& \sigma_s \sub \sigma\\
\makeReplace(\aut, \sigma_s, \aut_r) & \mbox{if } \lang(\aut_s) = \{\sigma_s\}\\
\bigsqcup\limits_{\sigma \in \lang(\aut_s)} \makeReplace(\aut, \sigma, \aut_r \lubhfa \minimize(\{\sigma\})) & \mbox{otherwise}\\
\end{cases}
$$
In the first case, if none of the strings recognized by the search automaton $\aut_s$ is contained in the strings recognized by $\aut$, we can safely return the original automaton $\aut$ without any replacement.
In the special case where $\lang(\aut_s) = \{\sigma_s\}$, we return the automaton obtained by performing a replacement through the call $\makeReplace(\aut, \sigma_s, \aut_r)$.
In the last case, for each string $\sigma \in \lang(\aut_s)$, we perform a may-replace of $\sigma$ with $\aut_r$: note that this exactly corresponds to calling $\makeReplace$ where the replace automaton is $\aut_r \lubhfa \minimize(\{\sigma\})$, namely $\sigma$ is not removed.
The automata obtained in this way are finally lubbed together.
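The dispatch between the three cases can be sketched as follows; as before, \code{lib} stands for an assumed automata library (language enumeration, lub, singleton automata, and the $\makeReplace$ of Alg.~\ref{alg:mkreplace}), and the cyclic/$\ctop$ cases are assumed to be handled beforehand.
\begin{lstlisting}[language=Python]
def abstract_replace(A, A_search, A_replace, lib):
    """Sketch of the replace dispatch; A_search is acyclic, so
    its language is a finite set of strings."""
    search = lib.language(A_search)
    if all(not lib.occurs(s, A) for s in search):
        return A                       # nothing can be replaced
    if len(search) == 1:               # must-replace
        (s,) = tuple(search)
        return lib.make_replace(A, s, A_replace)
    result = None                      # may-replace: keep s too
    for s in search:
        r = lib.make_replace(A, s,
                             lib.lub(A_replace, lib.singleton(s)))
        result = r if result is None else lib.lub(result, r)
    return result
\end{lstlisting}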
\begin{figure}[t]
\scalebox{0.85}
{%
\begin{algorithm}[H]
\KwData{$\re$ regex over $\alphabet{}$, $i,j \in \nats$}
\KwResult{$\sset{(\sigma, n_1, n_2)}{\sigma \in \Sigma^*, n_1, n_2 \in \nats}$}
\uIf{$j = 0 \lor \re = \varnothing$}{
\textbf{return} $\varnothing$\;
}
\uElseIf{$\re = \sigma \in \Sigma^*$}{
\lIf{$i > |\sigma|$}{
\textbf{return} $\{ (\epsilon, i - |\sigma|, j )\}$
}
\lElseIf{$i + j > |\sigma|$}{
\textbf{return} $\{ (\sigma_i\dots\sigma_{|\sigma|-1}, 0, j - |\sigma| + i)\}$
}\lElse{
\textbf{return} $\{ (\sigma_{i}\dots\sigma_{i+j}, 0, 0) \}$
}
}
\uElseIf{$\re = \ctop$}{
$\mathsf{result} \leftarrow \{(\epsilon, i - k, j) : 0 \le k \le i, k \in \nats\}$\;
$\mathsf{result} \leftarrow \mathsf{result} \cup \sset{(\bullet^k, 0, j - k)}{0 \le k \le j, k \in \nats}$\;
\textbf{return} $\mathsf{result}$\;
}
\uElseIf{$\re = \re_1\re_2$}{
$\mathsf{result} \leftarrow \varnothing$\;
$\mathsf{subs}_1 \leftarrow \rsubs(\re_1, i, j)$\;
\ForEach{$(\sigma_1, i_1, j_1) \in \mathsf{subs}_1$}{
\uIf{$j_1 = 0$}{
$\mathsf{result} \leftarrow \mathsf{result} \cup \{(\sigma_1, i_1, j_1)\}$\;
}
\Else{
$\mathsf{result} \leftarrow \mathsf{result} \cup \sset{(\sigma_1 \cdot \sigma_2, i_2, j_2)}{(\sigma_2, i_2, j_2) \in \rsubs(\re_2, i_1, j_1)}$\;
}
}
\textbf{return} $\mathsf{result}$\;
}
\uElseIf{$\re = \re_1 || \re_2$}{
\textbf{return} $\rsubs(\re_1, i, j) \cup \rsubs(\re_2, i, j)$\;
}
\uElseIf{$\re = (\re_1)^*$}{
$\mathsf{result} \leftarrow \{(\epsilon,i,j)\}$;
$\mathsf{partial} \leftarrow \varnothing$\;
\Repeat{$\mathsf{partial} \neq \varnothing$}{
$\mathsf{result} \leftarrow \mathsf{result} \cup \mathsf{partial}$;
$\mathsf{partial} \leftarrow \varnothing$\;
\ForEach{$(\sigma_n, i_n, j_n) \in \mathsf{result}$}{
\ForEach{$(\mathsf{suff}, i_s, j_s) \in \rsubs(\re_1, i_n, i_n + j_n)$}{
\uIf{$\nexists (\sigma', k, w) \in \mathsf{result} \st \sigma' = \sigma_n \cdot \mathsf{suff} \land k = i_s \land w = j_s$}{
$\mathsf{partial} \leftarrow \mathsf{partial} \cup \{(\sigma_n \cdot \mathsf{suff}, i_s, j_s)\}$\;
}
}
}
}
\textbf{return} $\mathsf{result}$\;
}
\caption{$\rsubs$ algorithm}
\label{alg:rsubs}
\end{algorithm}
}%
\vskip-20pt
\end{figure}
\noindent\textbf{Substring} $\;$ Given $\aut \in \hfa$ and two intervals $\mathsf{i}, \mathsf{j} \in \intervals$, the abstract semantics of {\tt substring} returns a new automaton $\aut'$ soundly approximating any substring from $i$ to $j$ of strings recognized by $\aut$, for any $i \in \mathsf{i}, j \in \mathsf{j}$ s.t. $i \leq j$.
Given $\aut\in \hfa$, in the definition of the {\tt substring} semantics we rely on the corresponding regex $\re$, since the two representations are equivalent
and regexes allow us to give a more intuitive formalization of the {\tt substring} semantics. Let us suppose that $\asem{\sexp}\amem = \aut \in \hfa$ and let us denote by $\re$ the regex corresponding to the language recognized by $\aut$. For the moment, let us consider exact intervals representing a single integer value, namely $\asem{\aexp_1}\amem = [i,i]$ and $\asem{\aexp_2}\amem = [j,j]$, with $i, j \in \ints$. In this case, the abstract semantics is defined~as:
$$
\asem{\subs{\sexp}{\aexp_1}{\aexp_2}}\amem \defn \bigsqcup \minimize(\sset{\sigma} {(\sigma, 0, 0) \in \rsubs(\re, i, j - i)})
$$
where $\rsubs$ takes as input a regex $\re$ and two indexes $i,j \in \nats$, and computes the set of substrings from $i$ to $j$ of all the strings recognized by $\re$. In particular, $\rsubs$ is defined by Alg.~\ref{alg:rsubs} and, given a regex $\re$ and $i, j \in \nats$, it returns a set of triples of the form $(\sigma, n_1, n_2)$, such that $\sigma$ is the \textit{partial substring} that Alg.~\ref{alg:rsubs} has computed so far, $n_1 \in \nats$ tracks how many characters still have to be skipped before the substring can be computed,
and $n_2 \in \nats$ is the number of characters Alg.~\ref{alg:rsubs} still needs to read to successfully complete a substring.
Hence, $\rsubs(\re, i,j)$ returns a set of such triples; note that, for an element $(\sigma, n_1, n_2)$ of the resulting set, $n_2 = 0$ means that no more characters are needed and $\sigma$ corresponds to a proper substring of $\re$ from $i$ to $j$. Thus, from the resulting set, we can filter out the partial substrings and retrieve only the proper substrings of $\re$ from $i$ to $j$ by only considering the value of $n_2$. A full explanation of how Alg.~\ref{alg:rsubs} works can be found in Appendix~\ref{sect:otherops}.
Above, we have defined the abstract semantics of {\tt substring} when intervals are constant. When $\asem{\aexp_1}\amem = [i,j]$ and $\asem{\aexp_2}\amem = [l,k]$, with $i, j, l, k \in \ints$, the abstract semantics of {\tt substring} is
$$
\asem{\subs{\sexp}{\aexp_1}{\aexp_2}}\amem \defn \bigsqcup_{a \in [i,j], b \in [l,k], a \leq b}\bigsqcup \minimize(\sset{\sigma}{(\sigma, 0, 0) \in \rsubs(\re, a, b - a)})
$$
We do not precisely handle the cases where the intervals are unbounded (e.g., $[1, +\infty]$). These cases have already been considered in~\cite{arceri2019-fa} and treated in an ad-hoc manner, and one may recast the same idea in our context.
Nevertheless, when these cases are met, our analysis returns the automaton recognizing any possible substring of the input automaton, still guaranteeing~soundness.
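As an illustration, the plain-string case of Alg.~\ref{alg:rsubs} admits the following direct transcription (ours); Python's half-open slicing is used, so the index conventions may differ by one from the paper's inclusive notation.
\begin{lstlisting}[language=Python]
def rsubs_atom(sigma, i, j):
    """Case re = sigma of Alg. 2: skip i characters, then take j
    characters, returning (partial, still_skip, still_take)."""
    if j == 0:
        return set()
    if i > len(sigma):          # the whole atom is skipped
        return {("", i - len(sigma), j)}
    if i + j > len(sigma):      # the substring continues past sigma
        return {(sigma[i:], 0, j - len(sigma) + i)}
    return {(sigma[i:i + j], 0, 0)}  # substring fully inside sigma
\end{lstlisting}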
\section{Experimental Results}\label{sec:experiments}
$\tarsis$ has been compared with five other domains, namely the prefix (\dprefix), suffix (\dsuffix), char inclusion (\dincl), bricks (\dbricks) domains (all defined in~\cite{costantini2015}), and $\fa$.
Since the first four domains do not deal with all the operations presented in this paper (nor with intervals, but only with integers),
the comparisons presented in Sect.~\ref{sect:comparison} will focus on the precision of these operations on small examples. Then, in Sect.~\ref{sect:qual}, we tackle more complex and real-world-like programs to highlight the precision and performance differences of $\tarsis$
\wrt $\fa$.
All domains have been implemented in a prototype of a static analyzer for a subset of the Java language, similar to $\imp$ (Sect.~\ref{sec:bg}), plus the \code{assert} statement.
In particular, our analyzer raises a \textit{definite} alarm (\code{DA} for short) when a failing assert is met, namely when the assertion is definitely false, while it raises a \textit{possible} alarm (\code{PA} for short) when the assertion \textit{might} fail (i.e., the assertion evaluates to $\ctop_\bools$).
Comparisons have been performed by analyzing the code through the coalesced sum domain specified in Sect.~\ref{sect:impabssem} with trace partitioning~\cite{tracepartitioning}, plugging in the various string domains. All experiments have been performed on a HP EliteBook G6 machine, with an Intel Core i7-8565U @ 1.8GHz processor and 16 GB of RAM memory.
\begin{figure}[t]
\begin{subfigure}[b]{.5\textwidth}
\centering
\begin{lstlisting}
void substring() {
String res = "substring test";
if (nondet)
res = res + " passed";
else
res = res + " failed";
result = res.substring(5, 18);
assert (res.contains("g"));
assert (res.contains("p"));
assert (res.contains("f"));
assert (res.contains("d"));
}
\end{lstlisting}
\caption{Program \progname{subs}}
\label{code:substring}
\end{subfigure}
\begin{subfigure}[b]{.5\textwidth}
\centering
\begin{lstlisting}
void loop() {
String value = read();
String res = "Repeat: ";
while (nondet)
res = res + value + "!";
assert (res.contains("t"));
assert (res.contains("!"));
assert (res.contains("f"));
}
\end{lstlisting}
\caption{Program \progname{loop}}
\label{code:loop}
\end{subfigure}
\caption{Program samples used for domain comparison}
\label{code:comparisons}
\vskip-20pt
\end{figure}
\subsection{Precision of the various domains on test cases}\label{sect:comparison}
We start by considering programs \progname{subs} (Fig.~\ref{code:substring}) and \progname{loop} (Fig.~\ref{code:loop}).
\progname{subs} calls \code{substring} on the concatenation between two strings, where the first is constant and the second one is chosen in a non-deterministic way (i.e., the \code{nondet} condition is statically unknown, lines 3-6). \progname{loop} builds a string by repeatedly appending a suffix, which contains a user input (i.e., an unknown string), to a constant value. Tab.~\ref{table:approx} reports the value approximation of \code{res} for each abstract domain and analyzed program, as well as whether the abstract domain precisely dealt with the program assertions, when the first assertion of each program is met. For the sake of readability, $\tarsis$ and $\fa$ approximations are expressed as regexes.
\begin{table}[b]
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{r|cc|cc}
\textbf{Domain} & \multicolumn{2}{c}{\textbf{Program \progname{subs}}} & \multicolumn{2}{c}{\textbf{Program} \progname{loop}}\\
\hline
$\dprefix$ & $\code{ring test}$ & \xmark & $\code{Repeat: }$ & \xmark\\
$\dsuffix$ & $\epsilon$ & \xmark & $\epsilon$ & \xmark \\
$\dincl$ & $\left[\right]\left[\code{abdefgilnprstu }\right]$ & \checkmark & $\left[\code{:aepRt }\right]\left[\code{!:aepRt }\ctop\right]$& \xmark \\
$\dbricks$ & $\left[\left\{\code{ring test fai}, \code{ring test pas}\right\}\right](1,1)$& \xmark & $\left[\left\{\ctop\right\}\right](0,+\infty)$ & \checkmark \\
$\fa$ & $\code{ring test }(\code{pas} || \code{fai})$ & \checkmark & $\code{Repeat: }(\ctop)^*$ & \checkmark \\
$\tarsis$ & $(\code{ring test pas} || \code{ring test fai})$ & \checkmark & $\code{Repeat: }(\ctop\code{!})^*$ & \checkmark \\
\end{tabular}
\vskip5pt
\caption{Values of \code{res} at the first assert of each program}
\label{table:approx}
\vskip-25pt
\end{table}
When analyzing \progname{subs}, both $\dprefix$ and $\dsuffix$ lose precision since the string to append to \code{res} is statically unknown.
This leads, at line 7, to a partial substring of the concrete one with $\dprefix$, and to an empty string with $\dsuffix$. Instead, the \code{substring} semantics of $\dincl$ moves every character of the receiver into the set of possibly contained ones, thus the abstract value at line 7 is composed of an empty set of included characters, and a set of possibly included characters containing the ones of both strings. Finally, $\dbricks$, $\fa$ and $\tarsis$ are expressive enough to track any string produced by any concrete execution of \progname{subs}.
When evaluating the assertions of \progname{subs}, a \code{PA} should be raised on lines 9 and 10, since \textit{p} or \textit{f} might be in \code{res}, together with a \code{DA} on line 11, since \textit{d} is surely not contained in \code{res}. No alarm should be raised on line 8 instead, since \code{g} is part of the common prefix of both branches and thus will be included in the substring. Such behavior is achieved when using $\dbricks$, $\fa$, or $\tarsis$. Since the \code{substring} semantics of $\dincl$ moves all characters to the set of possibly contained ones, \code{PA}s are raised on all four assertions.
Since $\dsuffix$ loses all information about \code{res}, \code{PA}s are raised on lines 8-11 when using this domain. $\dprefix$ instead tracks the definite prefix of \code{res}, thus the \code{PA} at line 8 is avoided.
When analyzing \progname{loop}, we expect to obtain no alarm at line 6 (since character {\tt t} is always contained in the resulting string value), and \code{PA}s at lines 7 and 8. $\dprefix$ infers the string \code{Repeat: } as the prefix of \texttt{res},
keeping this value for the whole analysis of the program. This allows the analyzer to prove the assertion at line 6, but it raises \code{PA}s when it checks the ones at lines 7 and 8.
Again, $\dsuffix$ loses any information about \code{res} since the lub operation occurring at line 3 cannot find a common suffix between \code{"Repeat: "} and \code{"!"}, hence \code{PA}s are raised on lines 6-8. Since the set of possible characters contains $\ctop$, $\dincl$ can correctly state that any character might appear in the string. For this reason, two \code{PA}s are reported on lines 7 and 8, while no alarm is raised on line 6 (again, this is possible since the string used in the \code{contains} call has length 1). The alternation of $\ctop$ and \code{!} prevents the $\dbricks$ normalization algorithm from merging similar bricks. This eventually leads to exceeding the length threshold $\code{k}_L$, resulting in the $\left[\left\{\ctop\right\}\right](0,+\infty)$ abstract value. In such a situation, $\dbricks$ returns $\ctop_\bools$ on all \code{contains} calls, resulting in \code{PA}s on lines 6-8. The parametric widening of $\fa$ collapses the loop body, including the exclamation mark, into $\ctop$. In $\tarsis$, since the automaton representing \code{res} grows by two states at each iteration, the parametric widening defined in Sect.~\ref{sect:domandwid} collapses the whole content of the loop into a two-state loop recognizing $\ctop\code{!}$. The precise approximation of \code{res} in both domains enables the analyzer to detect that the assertion at line 6 always holds, while \code{PA}s are raised on lines 7 and 8.
In summary, $\dprefix$ and $\dsuffix$ failed to produce the expected results on both \progname{subs} and \progname{loop}, while $\dincl$ and $\dbricks$ produced exact results in one case (\progname{loop} and \progname{subs}, respectively), but not in the other. Hence, $\fa$ and $\tarsis$ were the only two domains that produced the desired behavior in these rather simple test cases.
\subsection{Evaluation on realistic code samples}
\label{sect:qual}
\begin{figure}[t]
\begin{subfigure}[b]{.5\linewidth}
\begin{lstlisting}
void toString(String[] names) {
String res="People: {";
int i=0;
while(i<names.length){
res=res+names[i];
if(i!=names.length-1)
res=res+",";
i=i+1;
}
res=res+"}";
assert(res.contains("People"));
assert(res.contains(","));
assert(res.contains("not"));
}
\end{lstlisting}
\caption{Program \progname{toString}}
\label{code:arrays}
\end{subfigure}
\begin{subfigure}[b]{.5\linewidth}
\begin{lstlisting}[extendedchars=true, escapeinside={(*}{*)}]
void count(boolean nondet) {
String str;
if(nondet) str="this is the thing";
else str="the throat";
int count=countMatches(str, "th")
assert(count>0);
assert(count==0);
assert(count==3);
}
\end{lstlisting}
\caption{Program \progname{count}}
\label{code:matches}
\end{subfigure}
\caption{Programs used for assessing domain precision}
\vskip-20pt
\end{figure}
\begin{table}[b]
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{r|cc|cc}
\textbf{Domain} & \multicolumn{2}{c}{\textbf{Program \progname{toString}}} & \multicolumn{2}{c}{\textbf{Program \progname{count}}}\\
\hline
$\dprefix$ & $\code{People: }\{$ & \xmark & $[0, +\infty]$ & \xmark \\
$\dsuffix$ & $\epsilon$ & \xmark & $[0, +\infty]$ & \xmark \\
$\dincl$ & $\left[\{\}\code{:Peopl }\right]\left[\{\}\code{:,Peopl }\ctop\right]$ & \xmark & $[0, +\infty]$ & \xmark \\
$\dbricks$ & $\left[\left\{\ctop\right\}\right](0,+\infty)$& \xmark & $[0, +\infty]$ & \xmark \\
$\fa$ & $\code{People: }\{ (\ctop)^*\ctop \}$ & \checkmark & $[2, 3]$ & \checkmark \\
$\tarsis$ & $\code{People: }\{\} || \code{People: }\{(\ctop\code{,})^*\ctop\}$ & \checkmark & $[2, 3]$ & \checkmark \\
\end{tabular}
\vskip5pt
\caption{Values of \code{res} and \code{count} at the first assert of the respective program}
\label{table:approx2}
\vskip-25pt
\end{table}
In this section, we explore two real-world code samples. Method \progname{toString} (Fig.~\ref{code:arrays}) transforms an array of names that come as string values into a single string. While it resembles the code of \progname{loop} in Fig.~\ref{code:loop} (thus, results of all the analyses show the same strengths and weaknesses), the assertions now check \code{contains} predicates with multi-character strings.
Method \progname{count} (Fig.~\ref{code:matches}) makes use of \progname{countMatches} (reported in Sect.~\ref{sec:motivating}) to prove properties about its return value. Since the analyzer is not inter-procedural, we inlined \progname{countMatches}~inside \progname{count}.
Tab.~\ref{table:approx2} reports the results of both methods (stored in \code{res} and \code{count}, respectively) evaluated by each analysis at the first assertion, as well as if the abstract domain precisely dealt with the program assertions.
As expected, when analyzing \progname{toString}, each domain showed results similar to those of \progname{loop}. In particular, we expect to obtain no alarm at line 11 (since \code{People} is surely contained in the resulting string), and two \code{PA}s at lines 12 and 13. $\dprefix$, $\dsuffix$, $\dincl$ and $\dbricks$ raise \code{PA}s on all three assert statements. $\fa$ and $\tarsis$ detect that the assertion at line 11 always holds. Thus, when using them, the analyzer raises \code{PA}s on lines 12 and 13, since the comma character is part of \code{res} only if the loop is iterated at least once, and $\ctop$ might match \code{not}.
If \progname{count} (with the inlined code from \progname{countMatches}) were to be executed, \code{count} would be either $2$ or $3$ when the first assertion is reached, depending on the choice of \code{str}. Thus, no alarm should be raised at line 6, while a \code{DA} should be raised on line 7, and a \code{PA} on line 8. Since $\dprefix$, $\dsuffix$, $\dincl$ and $\dbricks$ do not define most of the operations used in the code, the analyzer does not have information about the string on which \progname{countMatches} is executed, and thus abstracts \code{count} with the interval $[0, +\infty]$. Thus, \code{PA}s are raised on lines 6-8. $\fa$ and $\tarsis$ are instead able to detect that \code{sub} is present in all the possible strings represented by \code{str}. Thus, thanks to trace partitioning, the trace where the loop is skipped and \code{count} remains $0$ gets discarded. Then, when the first \code{indexOf} call happens, $[0, 0]$ is stored into \code{idx}, since all possible values of \code{str} start with \code{sub}. Since the call to \code{length} yields $[10, 17]$, all possible substrings from $[2, 2]$ (\code{idx} plus the length of \code{sub}) to $[10, 17]$ are computed (namely, \code{"e throat"}, \code{"is is th"}, \code{"is is the"}, \dots, \code{"is is the thing"}), and the resulting automaton is the one that recognizes all of them. Since the value of \code{sub} is still contained in every path of such automaton, the loop guard still holds and the second iteration is analyzed, repeating the same operations. When the loop guard is reached for the third time, the remaining substring of the shortest starting string (namely \code{"roat"}) recognized by the automaton representing \code{str} no longer contains \code{sub}: a trace where \code{count} equals $[2, 2]$ leaves the loop.
A further iteration is then analyzed, after which \code{sub} is no longer contained in any of the strings that \code{str} might hold. Thus, a second and final trace where \code{count} equals $[3, 3]$ will reach the assertions, and will be merged by interval lub, obtaining $[2, 3]$ as final value for \code{count}. This allows $\tarsis$ and $\fa$ to identify that the assertion at line 7 never holds, raising a \code{DA}, while the one at line 8 might not hold, raising a \code{PA}.
\subsection{Efficiency}
\begin{table}[t]
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{r|c|c|c|c}
\textbf{Domain} & \textbf{\progname{subs}} & \textbf{\progname{loop}} & \textbf{\progname{toString}} & \textbf{\progname{count}}\\
\hline
$\dprefix$ & 11 ms & 3 ms & 78 ms & 29 ms \\
$\dsuffix$ & 10 ms & 2 ms & 92 ms & 29 ms \\
$\dincl$ & 10 ms & 3 ms & 90 ms & 29 ms \\
$\dbricks$ & 13 ms & 3 ms & 190 ms & 28 ms \\
$\fa$ & 10 ms & 52013 ms & 226769 ms & 4235 ms \\
$\tarsis$ & 34 ms & 38 ms & 299 ms & 39 ms \\
\end{tabular}
\vskip5pt
\caption{Execution times of the domains on each program}
\label{table:times}
\end{table}
The detailed analysis of two test cases, and two examples taken from real-world code, showed that $\tarsis$ and $\fa$ are the only domains able to obtain precise results on them. We now discuss the efficiency of the analyses. Tab.~\ref{table:times} reports the execution times for all the domains on the case studies analyzed in this section. Overall, $\dprefix$, $\dsuffix$, $\dincl$, and $\dbricks$ are the fastest domains, with execution times usually below 100 ms. Thus, while these domains failed to prove some of the properties of interest, they are quite efficient and might be helpful for proving simple properties. $\tarsis$ execution times are higher but still comparable with them (about 50\% overhead on average). Instead, $\fa$ blows up on three out of the four test cases (and in particular on \progname{toString}). Hence, $\tarsis$ is the only domain that executes the analysis in a limited time while being able to prove all the properties of interest on these four case studies.
\section{Conclusion}\label{sec:conclusion}
In this paper we introduced $\tarsis$, an abstract domain for the sound abstraction of string values. $\tarsis$ is based on finite state automata paired with their equivalent regular expressions: a representation that allows precise modeling of complex string values. Experiments show that $\tarsis$ achieves great precision also on code that heavily manipulates string values, while the time needed for the analysis is comparable with that of other, simpler domains.
The analysis proposed in this paper is intra-procedural and we are currently working on extending it to an inter-procedural analysis.
Moreover, in order to further improve the performance of our analysis, sophisticated techniques such as abstract slicing~\cite{mastroeni2010,mastroeni2017} can be integrated to keep the size of the automata arising during abstract computations as low as possible, by focusing the analysis only on the string variables of interest. Finally, in this paper, we did not investigate the completeness property of $\tarsis$ w.r.t. the considered operations of interest. This would ensure that no loss of information related to $\hfa$ is due to the input abstraction process~\cite{arceri2019}. Our future directions include a deeper study of $\hfa$ completeness, and possibly the application of completion processes when incompleteness arises for a string operation~\cite{giacobazzi2000}.
\bibliographystyle{splncs04}
\section{Test Functions}
\label{app:franke}
In our experiments, we use the test functions proposed in \cite{Franke79}.
For 2D:
\begin{align*}
f_{2\text{D}}(x_1,x_2) & = \frac{3}{4}{\rm e}^{-\frac{(9x_1-2)^2+(9x_2-2)^2}{4}}+\frac{3}{4}{\rm e}^{-\frac{(9x_1+1)^2}{49}-\frac{9x_2+1}{10}} \\
&+\frac{1}{2} {\rm e}^{-\frac{(9x_1-7)^2+(9x_2-3)^2}{4}}-\frac{1}{5} {\rm e}^{-(9x_1-4)^2-(9x_2-7)^2},
\end{align*}
and 3D:
\begin{align*}
f_{3\text{D}}&(x_1,x_2,x_3) = \\&\frac{3}{4}{\rm e}^{-\frac{(9x_1-2)^2+(9x_2-2)^2+(9x_3-2)^2}{4}}+\frac{3}{4} {\rm e}^{-\frac{(9x_1+1)^2}{49}-\frac{9x_2+1}{10}-\frac{9x_3+1}{10}} \\
&+\frac{1}{2} {\rm e}^{-\frac{(9x_1-7)^2+(9x_2-3)^2+(9x_3-5)^2}{4}}-\frac{1}{5} {\rm e}^{-(9x_1-4)^2-(9x_2-7)^2-(9x_3-5)^2}.
\end{align*}
\section{Basis functions}
\label{app:pk}
We use Lagrange tensor product functions to interpolate between the nodes in quadrilateral and hexahedral elements. We provide the explicit formulation for $Q_{1}$ and $Q_{2}$, both in 2D; the 3D formulation follows analogously. The four bilinear bases are constructed from the 1D linear bases
\[
\alpha_1(t) = 1-t \qquad \text{and}\qquad\alpha_2(t) = t
\]
as the tensor products
\begin{align*}
\phi_1(u,v) = \alpha_1(u)\, \alpha_1(v),\qquad
\phi_2(u,v) = \alpha_1(u)\, \alpha_2(v),\\
\phi_3(u,v) = \alpha_2(u)\, \alpha_1(v),\qquad
\phi_4(u,v) = \alpha_2(u)\, \alpha_2(v).
\end{align*}
\noindent
Similarly the nine quadratic bases follow from the three quadratic polynomials
\[
\theta_1(t) = (1 - t) \, (1 - 2 t),\quad
\theta_2(t) = 4 t \, (1 - t), \quad
\theta_3(t) = t \, (2 t - 1)
\]
as
\begin{align*}
\phi_1(u,v) = \theta_1(u)\, \theta_1(v),\quad
\phi_2(u,v) = \theta_1(u)\, \theta_2(v),\quad
\phi_3(u,v) = \theta_1(u)\, \theta_3(v),\\
\phi_4(u,v) = \theta_2(u)\, \theta_1(v),\quad
\phi_5(u,v) = \theta_2(u)\, \theta_2(v),\quad
\phi_6(u,v) = \theta_2(u)\, \theta_3(v),\\
\phi_7(u,v) = \theta_3(u)\, \theta_1(v),\quad
\phi_8(u,v) = \theta_3(u)\, \theta_2(v),\quad
\phi_9(u,v) = \theta_3(u)\, \theta_3(v).
\end{align*}
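As a sanity check, the tensor-product construction above can be evaluated directly; the following minimal Python sketch (NumPy only) verifies the partition of unity and the interpolation property at the $3\times 3$ nodes.
\begin{lstlisting}[language=Python]
import numpy as np

def q2_basis(u, v):
    # The nine biquadratic Lagrange bases as a tensor product of the 1D
    # bases; row-major ordering: phi_{3i+j+1} = theta_{i+1}(u) theta_{j+1}(v).
    theta = lambda t: np.array([(1 - t) * (1 - 2 * t),
                                4 * t * (1 - t),
                                t * (2 * t - 1)])
    return np.outer(theta(u), theta(v)).ravel()

assert np.isclose(q2_basis(0.3, 0.7).sum(), 1.0)  # partition of unity
assert np.isclose(q2_basis(0.5, 1.0)[5], 1.0)     # phi_6 = theta_2(u) theta_3(v)
\end{lstlisting}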
\section{Basis construction}
\label{sec:basis}
We seek to construct a basis on $\Omega = \mathbf{g}(\param{\mathcal{M}})$ that has the following properties:
\begin{enumerate}
\item it is $C^0$ everywhere on $\Omega$, $C^1$ at regular edges and
vertices, and $C^\infty$ within each $H$ and $P$ (polynomials on hexahedra).
\item it has approximation order 3 on each $H$ and $P$.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{figs/illustrations/fig4}
\caption{Spline local grid (shown in dark), for an internal and a boundary quadrilateral. The color codes are as defined in \Cref{fig:complex}.}
\label{fig:hex-local-grid}
\end{figure}
\begin{figure}
\centering
\hfill
\includegraphics[width=.4\linewidth]{figs/illustrations/spline_center}\hfill
\includegraphics[width=.4\linewidth]{figs/illustrations/spline_corner}\hfill{}
\caption{Spline hex degrees of freedom for a central element and a corner one.}
\label{fig:spline-dofs}
\end{figure}
The unknown function $u$ on the domain $\Omega$ is approximated by $u_h = \sum_{i=1}^N u_i \phi_i$, where $\phi_i$ are the basis functions.
The support of each basis function is a union of a set of the images under $\mathbf{g}$ of cells in $\param{\mathcal{M}}$.
The actual representation of the basis, which allows us to perform per-element construction of the stiffness matrix, consists of three parts.
The first two parts are local: we define a \emph{local} set of dofs and a \emph{local} basis.
For hexahedral elements, there are several types of local polynomial bases, each coming with its set of local dofs, associated with a local \emph{control mesh} for the element. These basis functions are encoded as sets of polynomial coefficients.
For polyhedral elements, all local basis functions are weighted combinations of harmonic kernel functions and a triquadratic polynomial, so these are encoded as kernel centers, weights and polynomial coefficients.
The third part is the \emph{local-to-global} linear map that represents local dofs in terms of the global ones. Importantly, unlike most standard FEM
formulations, our local-to-global maps do not necessarily simply identify local dofs with global ones: some local dofs are linear combinations of global ones.
These maps are formally represented by $m \times N$ matrices, where $m$ is a small number of local dofs, and $N$ is the total number of global dofs.
However, as an element's local dofs depend only on nearby global dofs, these
matrices have a small number of nonzeros and can be encoded in a compact form.
In the following, we consider the construction of these three components (set of local basis
functions, set of local dofs, local-to-global map) for each of our three element
types. Before we can construct the basis for each element, however, hexahedral elements need to be classified into $\mathcal{S}$ (spline-compatible) and $\mathcal{Q}$.
\subsection{Spline-compatible hexahedral elements}
\label{sec:spline-compatible}
We define a hexahedron $H$ to be spline-compatible, if its one-ring cell neighborhood is a $3\times 3\times 3$ regular grid, possibly cut on one or more sides if $H$ is on the boundary, see \Cref{fig:hex-local-grid}.
\emph{The local dofs} of this element type form a $3\times 3 \times 3$
grid (for interior elements), with the element in the center (Figure~\ref{fig:spline-dofs} left);
for boundary elements, there are still 27 dofs, ensuring a full triquadratic polynomial reproduction.
If a single layer with 9 dofs is missing, we add an extra
degree of freedom for each face of the local $3 \times 3 \times 2$ grid
corresponding to the boundary. Other cases are handled in a similar manner; e.g. the configuration for a regular corner is shown in Figure~\ref{fig:spline-dofs}, right.
\emph{The basis functions} in this case are just the standard triquadratic
uniform spline basis functions for interior hexahedra. For the boundary case,
we use the knot vector $[0,0,0,1,2,3]$ in the direction perpendicular to the
boundary. \Cref{fig:spline-bases} shows an example of the bases in 2D, for an internal node on the left and for a boundary node on the right.
Finally, the \emph{local-to-global} map simply identifies local basis dofs with corresponding global ones.
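For reference, both the uniform interior basis and the boundary basis with knot vector $[0,0,0,1,2,3]$ can be evaluated with the standard Cox--de Boor recursion; a minimal Python sketch:
\begin{lstlisting}[language=Python]
def bspline(i, p, knots, t):
    # Cox-de Boor recursion for the i-th B-spline basis of degree p
    # (half-open knot spans, with the convention 0/0 := 0).
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline(i, p - 1, knots, t)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline(i + 1, p - 1, knots, t)
    return left + right

# Interior element: the uniform quadratic spline over knots [0, 1, 2, 3].
interior = bspline(0, 2, [0, 1, 2, 3], 1.5)  # peak value 0.75
# Boundary: the open knot vector yields a basis interpolatory at t = 0.
assert bspline(0, 2, [0, 0, 0, 1, 2, 3], 0.0) == 1.0
\end{lstlisting}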
\begin{figure}
\centering
\hfill
\includegraphics[width=.4\linewidth]{figs/illustrations/spline_bases1}\hfill
\includegraphics[width=.4\linewidth]{figs/illustrations/spline_bases2}\hfill{}
\caption{Plot of the spline bases for a regular 2D grid.}
\label{fig:spline-bases}
\end{figure}
Compared to a standard $Q_2$ element, the ratio of degrees of freedom to the number of elements is much lower (a single degree of freedom per element for splines), although the approximation order is the same.
\subsection{$Q_2$ hexahedral elements}
\label{seq:q2}
This element is used for all remaining hexahedra. It is a standard element, widely used in finite element codes. \emph{Local dofs} for this element are associated with the element vertices, edge midpoints, face centers, and cell centers (Figure~\ref{fig:basis-nodes}).
The \emph{local basis functions} for the element are obtained as the tensor product of the interpolating quadratic bases on the interval $[0,1]$, consisting of $(1-t)(1-2t)$, $4t(1-t)$, and $t(2t-1)$ (\Cref{app:pk}).
The only complicated part in the case of $Q_2$ elements is the definition of the
local-to-global map. For the two-dimensional setting, it is illustrated in Figure~\ref{fig:Q2-dofs}.
\begin{figure}
\centering
\includegraphics[width=0.35\linewidth]{figs/illustrations/spline_q2_nodes}
\caption{Local-to-global map for a $Q_2$ element (gray) adjacent to a single spline element (green).}
\label{fig:Q2-dofs}
\end{figure}
The difficulty in the construction of this map is due to the interface between spline elements and $Q_2$ elements, and the need to ensure continuity between the two.
In the two-dimensional case, suppose that a $Q_2$ element $Q\in\mathcal{Q}$ shares an edge with exactly one quad spline element $S\in\mathcal{S}$.
Let $u_{ij}$, $i,j= 1,\ldots, 3$, be the global dofs of the spline element,
and let $q_{ij}$, $i,j=1,\ldots, 3$, be the degrees of freedom of the $Q_2$ element, as shown in the picture.
In this case, we ensure $C^0$ continuity of the basis by expressing the
values of the polynomials on $Q \in \mathcal{Q}$ at the shared boundary points in terms of global degrees of freedom.
Since both the $Q_2$ and the spline basis restricted to an edge are quadratic polynomials, they only need to be equal on three distinct points of the edge to ensure continuity.
By noticing that the $Q_2$ basis is interpolatory at the nodes, it is enough to evaluate the spline basis at these edge nodes.
For the two-dimensional example in \Cref{fig:Q2-dofs}, the local-to-global map for the local dofs $q_{31}$, $q_{32}$ and $q_{33}$ along the edge (in blue) that the $Q_2$ element shares with the spline is obtained as follows:
\begin{equation}
\begin{split}
q_{31} & = \frac{1}{4} \left(u_{11} + u_{12} + u_{21} + u_{22} \right),\\
q_{32} &= \frac{3}{8} \left(u_{12} + u_{22}\right) + \frac{1}{16}\left(u_{11}+u_{21}+u_{13}+u_{23} \right),\\
q_{33} & = \frac{1}{4} \left(u_{12} + u_{13} + u_{22} + u_{23} \right).\\
\end{split}
\label{eq:locglobq2}
\end{equation}
In 3D, the construction is similar. We first identify all spline bases overlapping with a local dof $q_{ij}$ on the boundary of a $Q_2$ element (i.e., a vertex, edge, or face dof). To determine the weights of the local-to-global map, we evaluate each spline basis at the local dof $q_{ij}$ and use the result as its weight.
The remaining degrees of freedom of the $Q_2$ element are identified with global $Q_2$ degrees of freedom at the same locations. We note once again that at the center of cells in $\mathcal{Q}$ with neighboring cells in $\mathcal{S}$, there are \emph{two} dofs, one spline dof and one $Q_2$ dof. \Cref{fig:spline-q2-bases} shows an example of two basis functions on the transition from the regular part on the left to the ``irregular'' part on the right. We clearly see that on the regular part the bases are splines and on the irregular part they are the standard $Q_2$ basis functions; at the interface the functions are only $C^0$.
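As an illustration, the rows of the local-to-global matrix corresponding to \eqref{eq:locglobq2} can be assembled as follows; this is a sketch with hypothetical global indexing, where \texttt{u[i, j]} denotes the global index of the spline dof $u_{ij}$ and $N$ the total number of global dofs.
\begin{lstlisting}[language=Python]
import numpy as np
from scipy.sparse import lil_matrix

u = {(i, j): 3 * (i - 1) + (j - 1) for i in (1, 2, 3) for j in (1, 2, 3)}
N = 16
L = lil_matrix((3, N))  # one row per local edge dof q31, q32, q33

L[0, [u[1, 1], u[1, 2], u[2, 1], u[2, 2]]] = 1 / 4   # q31
L[1, [u[1, 2], u[2, 2]]] = 3 / 8                     # q32
L[1, [u[1, 1], u[2, 1], u[1, 3], u[2, 3]]] = 1 / 16
L[2, [u[1, 2], u[1, 3], u[2, 2], u[2, 3]]] = 1 / 4   # q33
assert np.allclose(L.sum(axis=1), 1.0)  # each row reproduces constants
\end{lstlisting}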
\begin{figure}
\centering
\hfill
\includegraphics[width=.2\linewidth]{figs/illustrations/spline_q2_1_mesh}\hfill
\includegraphics[width=.37\linewidth]{figs/illustrations/spline_q2_1}\hfill
\includegraphics[width=.37\linewidth]{figs/illustrations/spline_q2_2}
\caption{Plot of the bases on a junction between a regular (green) and an irregular (red) part for a regular 2D grid.}
\label{fig:spline-q2-bases}
\end{figure}
\subsection{Basis construction on polyhedral cells}
\label{sec:polyhedral}
The construction of the basis on the polyhedral cells is quite different from the construction of the basis on hexahedra. For hexahedra, the basis functions are defined on the parametric domain $\param{\mathcal{M}}$, and are remapped to $\Omega\subset \mathbb{R}^3$ via the geometric map.
For polyhedra, we construct the basis directly in physical space.
One possible option to construct the basis on polyhedral cells is to split each polyhedral cell into tetrahedra. This approach has two main disadvantages: (i) it requires the use of pyramids to ensure conformity to the neighboring hexahedra, and (ii) it is difficult to guarantee a sufficient element quality after subdivision. Instead, we follow the general approach of \cite{Martin:PFE:2008} with two important alterations designed to ensure third-order convergence.
Recall that all polyhedron faces are quadrilateral, and all
polyhedra are surrounded by hexahedra, specifically $Q_2$ hexahedra as their neighborhood is not regular. Moreover, since we always perform an initial refinement step, there are no polyhedral cells touching each other.
We use the degrees of freedom on the faces of these elements as degrees of freedom for the polyhedra, therefore the \emph{local-to-global} map in this case is trivial.
Each dof is already associated with a basis function $\phi_j$ defined on the hexahedra adjacent to the polyhedron.
We construct the extension of $\phi_j$ to the polyhedron $P$ from $k$ harmonic kernels $\psi_i(\mathbf{x}) = \norm{\mathbf{x}-\mathbf{z}_i}^{-1}$ centered at positions $\mathbf{z}_i$ outside the polyhedron and quadratic monomials $q_d(\mathbf{x})$, $d=1, \ldots, 10$, as
\begin{align}
\restr{\phi_j}{P}(\mathbf{x})
&= \sum_{i=1}^{k} w^j_i \psi_i(\mathbf{x}) +
\sum_{d=1}^{10} a^j_d q_d(\mathbf{x}) \label{eq:basisdef} \\
&= \vect{w}^j \cdot \vect{\psi}(\mathbf{x}) +
\vect{a}^j \cdot \vect{q}(\mathbf{x}), \nonumber
\end{align}
where $\vect{w}^j = (w^j_1, \dots, w^j_{k})^\mathsf{T}$, $\vect{\psi} = (\psi_1, \dots, \psi_{k})^\mathsf{T}$, $\vect{a}^j = (a^j_1, \dots, a^j_{10})^\mathsf{T}$, and $\vect{q} = (q_1, \dots, q_{10})^\mathsf{T}$. The coefficients $\vect{w}^j$ and $\vect{a}^j$ are $r \times k$ and $r \times 10$ matrices, respectively, with $r=1$ (scalar PDEs) or $r=2,3$ (vector PDEs).
Following \shortcite{Martin:PFE:2008}, the weights $w^j_i$ and $a^j_d$ are determined
using a least squares fit to match the values of the basis $\phi_j$ evaluated on a set of points sampled on the boundary of the polyhedron $P$.
In \cite{martin2011flexible}, it is shown that this construction automatically
guarantees reproduction of linear polynomials if $q_d$ are linear; the quadratic case is fully analogous.
However, this condition is insufficient for high-order convergence, because our basis is \emph{non-conforming}, that is, not $C^0$. In the context of the second-order PDEs we are considering, this means that it lacks $C^0$ continuity on the boundary of the polyhedron.
For this type of elements, additional \emph{consistency conditions} are required to ensure high-order convergence. These conditions depend on the PDE that we need to solve.
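For concreteness, a minimal sketch of evaluating \eqref{eq:basisdef} for a scalar PDE in 3D follows; the ordering of the 10 quadratic monomials is an assumption made here for illustration.
\begin{lstlisting}[language=Python]
import numpy as np

def poly_basis_value(x, centers, w, a):
    # phi_j|P(x) = sum_i w_i psi_i(x) + sum_d a_d q_d(x), with harmonic
    # kernels psi_i(x) = ||x - z_i||^-1 and the monomials q_d ordered as
    # (1, x, y, z, xy, xz, yz, x^2, y^2, z^2).
    x = np.asarray(x, dtype=float)
    psi = 1.0 / np.linalg.norm(x - np.asarray(centers), axis=1)
    x1, x2, x3 = x
    q = np.array([1, x1, x2, x3, x1 * x2, x1 * x3, x2 * x3,
                  x1 ** 2, x2 ** 2, x3 ** 2])
    return np.dot(w, psi) + np.dot(a, q)
\end{lstlisting}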
\begin{figure}
\centering
\hfill
\includegraphics[width=0.35\linewidth]{figs/illustrations/kernels_2d}\hfill
\includegraphics[width=0.45\linewidth]{figs/illustrations/kernels_3d}\hfill
\caption{The local basis for a polygon consists of the set of triquadratic polynomials $q_d$
and harmonic kernels $\psi_i$ centered at shown locations $\mathbf{z}_i$.}
\label{fig:poly-dofs}
\end{figure}
\paragraph{FEM theory detour.} \label{sec:poly-contraints} To achieve higher-order convergence, \emph{three} conditions need to be satisfied: (1) polynomial reproduction; (2) consistency, which we discuss in more detail below; and (3) quadrature accuracy.
We refer to standard FEM texts such as \cite{braess2007finite}
for details, as well as to virtual element method literature (e.g., \cite{de2016nonconforming} is closely related).
To satisfy the third condition, we use high-order quadrature on the polyhedron: we decompose it into tetrahedra and use Gaussian quadrature points in each tetrahedron (the decomposition is detailed in \Cref{sec:starshaped}).
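As a concrete instance, a classical 4-point rule on the tetrahedron is exact for quadratics; the specific rule is an assumption here, since any rule of sufficient order works.
\begin{lstlisting}[language=Python]
import numpy as np

# Classical 4-point rule on the reference tetrahedron, exact for degree 2.
a, b = (5 + 3 * np.sqrt(5)) / 20, (5 - np.sqrt(5)) / 20
bary = np.array([[a, b, b, b], [b, a, b, b], [b, b, a, b], [b, b, b, a]])
weights = np.full(4, 1.0 / 24.0)  # reference volume 1/6, split evenly

def integrate_over_tet(f, verts):
    # Approximate the integral of f over the tetrahedron with vertices
    # verts (4x3), mapping barycentric points to physical space.
    verts = np.asarray(verts, dtype=float)
    J = verts[1:] - verts[0]
    pts = bary @ verts
    return abs(np.linalg.det(J)) * sum(w * f(p) for w, p in zip(weights, pts))
\end{lstlisting}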
The first condition, polynomial reproduction, is ensured by construction of the basis above.
The second constraint, consistency, requires further elaboration. We first derive it for the Poisson
equation, and then summarize the general form.
We leave as future work the complete proof of the convergence properties of our method (cf. \cite{de2016nonconforming}),
which requires, in particular, a proper stability analysis.
Nevertheless, in Section~\ref{sec:evaluation} we provide numerical evidence that our method does converge at the expected rate, and that its conditioning is not affected in a significant way by the presence of nonconforming polyhedral elements.
The standard way to find the solution of a PDE for a finite element system
is to consider its weak form. For the Poisson equation, find $u$ such that
\begin{equation}
\int_\Omega \Delta u \, v = -
\int_\Omega \nabla u \cdot \nabla v = \int_\Omega f v,\qquad
\forall v
\end{equation}
\emph{Remark.} We omit, for readability, the integration
variable $\dd\mathbf{x}$. In the remaining formulas we use integration over
the physical space exclusively, in practice carried over to the parametric
space by adding the Jacobian of the geometric map.
Then, $u$ is approximated by $u_h=\sum_i u_i \phi_i$, and $v$ is taken to be in the space spanned by the basis functions $\phi_j$.
The stiffness matrix entries are obtained as $K_{ij} = \sum_{C} \int_{C} \nabla \phi_i \cdot\nabla\phi_j$, where the integral is computed per element $C$, leading to the discrete system $\mat{K} \mathbf{u} = \mathbf{f}$ (\Cref{app:fem}).
For general non-conforming elements, however, we cannot rely on this standard approach.
For example, if we consider piecewise-constant elements for the Poisson equation, the stiffness matrix would be all zeros.
However, \emph{for a given PDE}, one can construct converging non-conforming elements. One condition that is typically used is that the discrete matrix, constructed per element as above, gives us \emph{exact} values of the weak-form integral for all polynomials reproduced by the basis (cf. the $k$-consistency property in \cite{de2016nonconforming}).
As our basis reproduces triquadratic monomials (i.e., they are in the span of bases $\phi_i$), we have $q_d(\mathbf{x})=\sum_i q_d^i\phi_i(\mathbf{x})$. To ensure consistency, we require that any \emph{nonconforming basis function $\phi_j$} satisfies
\begin{equation}
-\int_{\mathbf{g}(\param{\mathcal{M}})} \Delta q_d \phi_j = \sum_i K_{ij} q_d^i
\label{eq:consistency}
\end{equation}
for all triquadratic monomials $q_d$.
To convert this equation to an equation for the unknown coefficients $w^j_i$ and
$a^j_d$, we observe that
\begin{equation}
\sum_i K_{ij} q_d^i =
\int_{\mathbf{g}(\param{\mathcal{M}})}\bigg(\sum_i q_d^i \nabla \phi_i\bigg) \cdot \nabla \phi_j =
\int_{\mathbf{g}(\param{\mathcal{M}})} \nabla q_d \cdot \nabla\phi_j
\end{equation}
due to the polynomial reproduction property. Separating the integral into the part over the hexahedra $\mathbf{g}(\param{\mathcal{M}}\setminus \param{P})$ and the part over the polyhedron $P=\mathbf{g}(\param{P})$, we write
\begin{align}
\sum_i K_{ij} q_d^i &=
C_H + \int_P \nabla q_d \cdot \nabla \Big(\vect{w}^j \cdot \vect{\psi} + \vect{a}^j \cdot \vect{q}\Big) \\
& = C_H + \vect{b}^\mathsf{T} \vect{w}^j + \vect{c}^\mathsf{T} \vect{a}^j \nonumber
\end{align}
where
\begin{equation*}
C_H = \sum_{\param{C}\in\param{\mathcal{M}}\setminus P} \int_{\mathbf{g}(\param{C})} \nabla q_d \cdot \nabla \phi_j,\quad
\vect{b} = \int_P \nabla q_d \cdot \nabla \vect{\psi}, \quad
\vect{c} = \int_P \nabla q_d \cdot \nabla \vect{q}.
\end{equation*}
Similarly, the left-hand side of \Cref{eq:consistency} is reduced to a
linear combination of $\vect{w}^j$ and $\vect{a}^j$.
This forms a set of additional constraints for the coefficients of the basis functions on the polyhedron.
To enforce them on each polyhedron, we solve a constrained least squares system for each nonconforming basis function and store the obtained coefficients.
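A minimal sketch of this step: the equality-constrained least squares problem can be solved through its KKT system, assuming a constraint matrix $\mat{C}$ and right-hand side $\vect{d}$ assembled from the conditions above.
\begin{lstlisting}[language=Python]
import numpy as np

def constrained_lsq(A, b, C, d):
    # Minimize ||A w - b||^2 subject to C w = d via the KKT system
    # [2 A^T A, C^T; C, 0] [w; lambda] = [2 A^T b; d].
    m, c = A.shape[1], C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((c, c))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    return np.linalg.lstsq(K, rhs, rcond=None)[0][:m]
\end{lstlisting}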
Importantly, the addition of constraints to the least squares system does not violate
the polynomial reproduction property on the polyhedron. This can be seen as follows.
Let $v_h$ be the linear combination of basis functions $\phi_i$ overlapping $P$ that yields a triquadratic monomial $q_d$ when restricted to $P$. Then $v_h$ is continuous on $\Omega$: the samples at the boundary points come from a quadratic function and therefore match
exactly the quadratic continuation to the adjacent hexahedra.
The consistency condition (Equation~\ref{eq:consistency}) applied to $v_h$ simply states that it satisfies the integration by parts formula, which it does as it is $C^0$ at the element boundaries, and smooth on the elements:
\[
-\sum_{C} \int_C \Delta q_d v_h = \sum_C \int_C \nabla q_d \cdot \nabla v_h .
\]
We conclude that $v_h$ is in the space defined by the consistency constraint, and imposing this constraint preserves polynomial reproduction. See \Cref{app:constraints} for the complete list of constraints for the Poisson equation.
More generally, for a linear PDE and for any polynomial $q$ (for vector PDEs, e.g., elasticity, this means that all components are polynomial) we require
\[
a(q_d, v_h) = a_h(q_d, v_h),
\quad\text{where}\quad
a(u, v) = \int_\Omega \mathcal{F}(x, u, \nabla u, \Delta u) v
\]
and $\mathcal{F}$ is a linear function of its arguments, depending on the PDE; $a_h$ is defined as a sum of per-element integrals over $\Omega$ after formal integration by parts of $\mathcal{F}$, to eliminate the second-order derivatives. For a conforming $C^0$ basis, this condition automatically follows from the integration by parts formulas, which are applicable.
We now split the two bilinear forms as $a=a^H+a^P$ and $a_h=a_h^H + a_h^P$, where $a^H$ and $a_h^H$ contain the integrals over the (known) hexahedral part, and $a^P$ and $a_h^P$ the integrals over the (unknown) polyhedral part. Thus, for a basis $\phi_j$ we obtain the following set of constraints
\begin{align*}
a^H(q_d,\phi_j) - a^H_h(q_d,\phi_j)
&= a^H\left(
q_d, \vect{w}^j \cdot \vect{\psi}(\mathbf{x}) + \vect{a}^j \cdot \vect{q}(\mathbf{x})
\right) \\
& -a^H_h\left(
q_d, \vect{w}^j \cdot \vect{\psi}(\mathbf{x}) + \vect{a}^j \cdot \vect{q}(\mathbf{x})
\right).
\end{align*}
For a scalar-valued PDE, we have the same number of constraints (5 in 2D and 9 in 3D) as monomials $q_d$, thus we are guaranteed to have a solution that respects the constraints for any $k > 0$.
For vector PDEs (e.g., elasticity), we impose additional constraints such that the
coefficients $\left( w^j_{i} \right)_\alpha$ are the same for all dimensions $\alpha = 1,\ldots,r$, $r=2$ or $r=3$, which simplifies the implementation but increases the number of required centers $\vect{z}_i$,
so that all constraints can be satisfied. More explicitly, for vector PDEs we require that the
constraints
\[
a(q^s_d \vect{e}_\alpha, \phi^s_j \vect{e}_\beta) = a_h(q^s_d \vect{e}_\alpha, \phi^s_j \vect{e}_\beta)
\]
for $\alpha,\beta = 1,\ldots,r$ are satisfied, with $q^s$ and $\phi_j^s$ denoting scalar polynomials
and scalar basis functions respectively, defined as in \eqref{eq:basisdef} for dimension 1, and $\vect{e}_\alpha$ is the unit vector for axis $\alpha$.
For dimensions 2 and 3, the number of monomials $q$ is $5$ and $9$ respectively. The number of constraints is given by $r^2q - q$, and thus we will need at least 15 $\vect{z}_i$ in 2D and 72 in 3D to ensure that the constraints are respected.
\subsection{Imposing boundary conditions}
We consider two standard types of boundary conditions: Dirichlet (fixed function values on the boundary) and Neumann (fixed normal derivatives at the boundary). Neumann (also known as natural) boundary conditions are handled in the context of the variational formulation of the problem as extra integral terms, in the case of inhomogeneous conditions. Homogeneous conditions do not require any special treatment and are imposed automatically in the weak formulation.
We assume that the Dirichlet conditions are given as a continuous function defined on the boundary of the domain. For all boundary dofs, we sample the boundary condition on the faces of the domain and perform a least-squares fit to retrieve the nodal values.
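A minimal sketch of this fitting step, with hypothetical names: \texttt{Phi} stores the boundary bases evaluated at the sample points, $\texttt{Phi}[s, j] = \phi_j(\vect{p}_s)$, and \texttt{g} the sampled Dirichlet datum.
\begin{lstlisting}[language=Python]
import numpy as np

def fit_dirichlet(Phi, g):
    # Least-squares fit of the nodal values u_j of the boundary dofs
    # from samples of the Dirichlet boundary condition.
    return np.linalg.lstsq(Phi, g, rcond=None)[0]
\end{lstlisting}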
\section{Limitations and concluding remarks}
\label{sec:conclusion}
We introduced Poly-Spline FEM, an integrated meshing and finite element method designed to take advantage of recent developments in hexahedral-dominant meshing, opening the doors to black-box analysis with a high-order basis and cubic convergence under refinement. Our approach uses the best possible basis for each element of the mesh and is amenable to continuous improvement as mesh generation methods and basis constructions improve.
For instance, in this setting, one can avoid costly manual mesh repair and improvement,
at the expense of modest increases in solution time, by switching to more expensive, but much less shape-sensitive elements when a hexahedron is badly shaped.
While our basis construction is resilient to bad element quality, the geometric map between the splines and the $Q_2$ elements might introduce distortion (and even inversions in pathological cases), lowering convergence rate. These effects could be ameliorated by optimizing the positions of the control points of the geometric map, which is an interesting avenue for future work.
Our current construction always requires an initial refinement step to avoid having polyhedra adjacent to other polyhedra or to the boundary. This limitation could be lifted by generalizing our basis construction, and would allow our method to process very large datasets that cannot be refined due to memory considerations.
Another limitation of our method is that the consistency constraints in our basis construction (\Cref{sec:poly-contraints}) are PDE-dependent, and thus require additional effort to be used with a user-provided PDE: a small and reusable investment compared to the cost of manually meshing with hexahedra every surface that one wishes to analyze using $Q_2$ elements.
The code can be found at \url{https://polyfem.github.io/} and
provides an automatic way to generate such constraints relying on both the local assembler and automatic differentiation.
Poly-Spline FEM is a practical construction in between unstructured $Q_2$ and fully-structured pure splines: it requires a smaller number of dofs than $Q_2$ (thanks to the spline elements) while preserving the cubic convergence rate. We believe that our construction will stimulate additional research in the development of heterogeneous FEM methods that exploit the regularity of spline bases and combine it with the flexibility offered by traditional FEM elements. To allow other researchers and practitioners to immediately build upon our construction, we will release our entire software framework as an open-source project.
\section*{Acknowledgements}
We are grateful to the NYU HPC staff for providing computing cluster service. This work was partially supported by the \grantsponsor{NSFC}{NSF CAREER}{} award \grantnum{NSFC}{1652515}, the NSF grant \grantnum{NSFC}{IIS-1320635}, the NSF grant \grantnum{NSFC}{DMS-1436591}, the NSF grant \grantnum{NSFC}{1835712}, the SNSF grant \grantnum{SNSF}{P2TIP2\_175859}, a gift from Adobe Research, and a gift from nTopology.
\section{Polyhedral basis constraints}
\label{app:constraints}
We restrict the detailed explanation to 2D; the three-dimensional case follows analogously.
Let $\vect{p}_i=(x_i, y_i)$, $i=1,\dots,s$, be the set of collocation points, that is, the points where we know the function values.
For the FEM basis $\phi_j$ that is nonzero on the polyhedral element $P$, we want to solve the least squares system $\vect{A w} = \vect{b}$, where
\begin{equation*}
\mat{A} =
\begin{pmatrix}
\psi_1(\vect{p}_1)& \dots& \psi_k(\vect{p}_1)& 1& x_1& y_1& x_1\, y_1& x_1^2& y_1^2 \\
\vdots& \ddots& \vdots& \vdots& \vdots& \vdots& \vdots& \vdots& \vdots&\\
\psi_1(\vect{p}_s)& \dots& \psi_k(\vect{p}_s)& 1& x_s& y_s& x_s\, y_s& x_s^2& y_s^2
\end{pmatrix},
\end{equation*}
$\vect{b}$ is the evaluation of the basis $\phi_j$ from the neighbouring elements at the collocation points, and $\vect{w}=(w_1, \dots, w_k, a_{00}, a_{10}, a_{01}, a_{11}, a_{20}, a_{02})$.
Now, to ensure consistency, we need that
\[
-\int_{\mathbf{g}(\param\mathcal{M})}\Delta q \phi_j = \int_{\mathbf{g}(\param\mathcal{M})}\nabla q \cdot \nabla\phi_j
\]
holds for any of the 5 monomials.
We now split the previous integral over the polygon $P$ and over the known non-polygonal part $\overline P = \mathbf{g}(\param\mathcal{M}) \setminus P$
\[
\int_{P}\Delta q \phi_j + \int_{P}\nabla q \cdot \nabla\phi_j =
-\int_{\overline P}\Delta q \phi_j - \int_{\overline P}\nabla q \cdot \nabla\phi_j.
\]
We remark that the right-hand side of this equation is known, since the bases on $\overline P$ are given; we call the five terms $c_{ij}$, following the same indices as $a_{ij}$ (e.g., $c_{20} =
-\int_{\overline P}\Delta x^2 \phi_j - \int_{\overline P}\nabla x^2 \cdot \nabla\phi_j$).
We now evaluate the left-hand side for the five 2D monomials
\begin{align*}
\int_{P}\Delta x \phi_j + \int_{P}\nabla x \cdot \nabla\phi_j &= \int_{P}\pdiff{x}{\phi_j},\\
\int_{P}\Delta y \phi_j + \int_{P}\nabla y \cdot \nabla\phi_j &= \int_{P}\pdiff{y}{\phi_j},\\
\int_{P}\Delta (xy) \phi_j + \int_{P}\nabla (xy) \cdot \nabla\phi_j &= \int_{P}y\pdiff{x}{\phi_j} + \int_{P}x\pdiff{y}{\phi_j},\\
\int_{P}\Delta x^2 \phi_j + \int_{P}\nabla x^2 \cdot \nabla\phi_j &= \int_{P} 2 \phi_j + \int_{P}2x\pdiff{x}{\phi_j},\\
\int_{P}\Delta y^2 \phi_j + \int_{P}\nabla y^2 \cdot \nabla\phi_j &= \int_{P} 2 \phi_j + \int_{P}2y\pdiff{y}{\phi_j}.
\end{align*}
By plugging the definition of $\phi_j$ over $P$ we obtain the following consistency constraints for the coefficients $a_{00}, a_{10}, a_{01}, a_{11}, a_{20}, a_{02}$:
\begin{multline*}
\sum_{i=1}^k w_i^j \int_P\pdiff{x}{\psi_i} + a_{10}\abs{P} + a_{11}\int_P y + 2a_{20}\int_P x = c_{10},\\
\sum_{i=1}^k w_i^j \int_P\pdiff{y}{\psi_i} + a_{01}\abs{P} + a_{11}\int_P x + 2a_{02}\int_P y = c_{01},\\
\sum_{i=1}^k w_i^j \int_P\Big(y\pdiff{x}{\psi_i} + x\pdiff{y}{\psi_i}\Big) + a_{10}\int_P y + a_{01}\int_P x \\ + a_{11}\int_P (x^2 + y^2) + 2(a_{20} + a_{02})\int_P xy = c_{11},\\
2\sum_{i=1}^k w_i^j \int_P\Big(\psi_i + x\pdiff{x}{\psi_i}\Big) + 2a_{00}\abs{P} + 4a_{10}\int_P x + 2a_{01}\int_P y \\ + 4a_{11}\int_P x y + 6a_{20}\int_P x^2 + 2a_{02}\int_P y^2 = c_{20},\\
2\sum_{i=1}^k w_i^j \int_P\Big(\psi_i + y\pdiff{y}{\psi_i}\Big) + 2a_{00}\abs{P} + 2a_{10}\int_P x + 4a_{01}\int_P y \\ + 4a_{11}\int_P x y + 2a_{20}\int_P x^2 + 6a_{02}\int_P y^2 = c_{02}.
\end{multline*}
\section{Evaluation}
\label{sec:evaluation}
\begin{figure*}
\centering
\parbox{0.38\linewidth}{
\includegraphics[width=0.45\linewidth]{figs/screenshots/penguin1}
\includegraphics[width=0.45\linewidth]{figs/screenshots/penguin2}
}\hfill{}
\parbox{0.2\linewidth}{
\includegraphics[width=0.9\linewidth]{figs/screenshots/torus1}\par
\includegraphics[width=0.9\linewidth]{figs/screenshots/torus2}
}\hfill{}
\parbox{0.38\linewidth}{
\includegraphics[width=0.45\linewidth]{figs/screenshots/sculpt1}
\includegraphics[width=0.45\linewidth]{figs/screenshots/sculpt2}
}\\[1.5ex]
\parbox{0.38\linewidth}{\centering 2D polygon.}\hfill
\parbox{0.2\linewidth}{\centering 3D hexahedral mesh.}\hfill
\parbox{0.38\linewidth}{\centering 3D hybrid mesh.}
\caption{Solution of the Poisson problem on different meshes.}
\label{fig:result_all}
\end{figure*}
\begin{figure*}
\centering
\makebox[\columnwidth][c]{
\rotatebox{90}{\parbox{0.15\linewidth}{\centering\footnotesize Error}}\hfill
\begin{overpic}[width=0.26\linewidth]
{figs/batch/batch_planar_l2_2d}
\put (50,60) {$L_2$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.26\linewidth]
{figs/batch/batch_planar_lp_2d}
\put (50,60) {$L_\infty$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.26\linewidth]{figs/batch/batch_volume_l2_3d}
\put (50,60) {$L_2$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.26\linewidth]{figs/batch/batch_volume_lp_3d}
\put (50,60) {$L_\infty$}
\end{overpic}\hfill{}
}\\
{\footnotesize Number of DOFs}
\caption{Scatter plot of the $L_2$ and $L_\infty$ error versus the number of dofs on the 2D (first two) and 3D (last two) dataset.}
\label{fig:scatter_plot}
\end{figure*}
We demonstrate the robustness of our method by solving the Poisson equation on a dataset of pure hex and hybrid meshes, consisting of 205 star-shaped polygonal meshes in 2D, 165 pure hexahedral meshes in 3D, and 29 star-shaped polyhedral meshes in 3D.
The dataset can be found at \url{https://cims.nyu.edu/gcl/papers/2019-Polyspline-Dataset.zip}.
All those meshes were automatically generated using \cite{Gao:2017,Gao:2017:RSS}. We show a selection of meshes from our dataset in Figures~\ref{fig:teaser} and \ref{fig:result_all}.
We evaluated the performance, memory consumption, and running time of our proposed spline construction compared with standard $Q_1$ and $Q_2$ elements. For our experiments, we compute the approximation error on a standard Franke's test function~\cite{Franke79} in 2D and 3D (\Cref{app:franke}).
Note that in all these experiments, we enforced the consistency constraints on the bases spanning the polyhedral elements, to ensure the proper convergence order.
The 2D experiments were run on a PC with an Intel\textregistered{} Core\texttrademark{} i7-5930K CPU @ 3.50GHz with 64 GB of memory, while the 3D dataset was run on an HPC cluster with a memory limit of 64 GB.
\paragraph{Absolute Errors.} \Cref{fig:scatter_plot} shows a scatter plot of the $L_2$ and $L_\infty$ errors on both 2D and 3D datasets, with respect to the number of bases created by each type of elements ($Q_1$, $Q_2$, Splines), after one step of polar refinement.
The plot shows that in 2D both the $L_2$ and $L_\infty$ errors are about $1.5$ orders of magnitude lower for our splines compared to $Q_1$, while keeping a similar number of dofs. In comparison, $Q_2$ has lower error, but requires a much larger number of dofs. In 3D the spread of both errors is much larger, and the gain in $L_\infty$ is less visible, but still present, compared to $Q_1$.
\begin{figure}
\centering
\makebox[\columnwidth][c]{
\includegraphics[width=0.5\columnwidth]{figs/memory/memory_peak_2d_nt}
\includegraphics[width=0.5\columnwidth]{figs/memory/memory_peak_3d_nt}
}
\caption{Peak memory for the direct solver as reported by Pardiso. Left: 2D results. Right: 3D results.}
\label{fig:memory}
\end{figure}
\paragraph{Memory.} A histogram of the memory consumption of the solver is presented in \Cref{fig:memory}. The figure shows the peak memory usage as reported by Pardiso \cite{pardiso-6.0a, pardiso-6.0b} when solving the linear system arising from the FEM.
Out of the 159 pure hexahedral models we tested, 33 went out of memory when solving using $Q_2$ elements, while only 2 are too big to solve with our spline bases. On the star-shaped hybrid meshes, one model is too big to solve for both $Q_2$ and our spline construction.
More detailed statistics are reported in \Cref{tab:statistics}. We remark that the error for our method is higher than for $Q_2$ because our method has fewer dofs (50\% fewer on average), since both meshes have the same number of vertices.
\paragraph{Time.} \Cref{fig:timings_regular} shows the assembly time and solve time for solving a Poisson problem on a unit square (cube) under refinement in two (three) dimensions. Note that both steps (assembly and solve) are performed in parallel. For the 2D experiment we used a 3.1~GHz Intel Core i7-7700HQ with 8 threads, while in 3D we used a 3.5~GHz Intel Core i7-5930K with 12 threads (both machines use hyper-threading). In \Cref{tab:statistics} we summarize the timings for the large dataset using a 2.6~GHz Intel Xeon E5-2690v4 with 8 threads. In all cases, the total time is dominated by the solving time.
\begin{figure*}
\centering
\makebox[\columnwidth][c]{
\rotatebox{90}{\parbox{0.15\linewidth}{\centering\footnotesize Error}}\hfill
\begin{overpic}[width=0.26\linewidth]{figs/convergence/conv_regular_l2_2d}
\put (50,60) {$L_2$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.26\linewidth]{figs/convergence/conv_regular_lp_2d}
\put (50,60) {$L_\infty$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.26\linewidth]{figs/convergence/conv_regular_l2_3d}
\put (50,60) {$L_2$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.26\linewidth]{figs/convergence/conv_regular_lp_3d}
\put (50,60) {$L_\infty$}
\end{overpic}\hfill{}
}\\
{\footnotesize Max edge length}
\caption{
Poisson equation convergence plots in the $L_2$ and $L_\infty$ norms on a regular grid in 2D (first two) and 3D (last two).
}
\label{fig:convergence_regular}
\end{figure*}
\begin{figure*}
\centering
\makebox[\linewidth][c]{
\rotatebox{90}{\parbox{0.15\linewidth}{\centering\footnotesize Error}}\hfill
\begin{overpic}[width=0.26\linewidth]{figs/convergence/conv_hybrid_l2_2d}
\put (50,60) {$L_2$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.26\linewidth]{figs/convergence/conv_hybrid_lp_2d}
\put (50,60) {$L_\infty$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.26\linewidth]{figs/convergence/conv_hybrid_l2_3d}
\put (50,60) {$L_2$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.26\linewidth]{figs/convergence/conv_hybrid_lp_3d}
\put (50,60) {$L_\infty$}
\end{overpic}\hfill{}
}\\
{\footnotesize Max edge length}
\caption{
Poisson equation convergence plots in the $L_2$ and $L_\infty$ norms for a hybrid mesh in 2D (first two) and 3D (last two). Meshes are shown in \Cref{fig:refinement_example}.
}
\label{fig:convergence_hybrid}
\end{figure*}
\begin{figure}
\centering
\makebox[\columnwidth][c]{
\rotatebox{90}{\parbox{0.3\linewidth}{\centering\footnotesize Time (s)}}\hfill
\begin{overpic}[width=0.52\linewidth]{figs/convergence/time_regular_2d}
\end{overpic}\hfill{}
\begin{overpic}[width=0.52\linewidth]{figs/convergence/time_regular_3d}
\end{overpic}
}\\
{\footnotesize Max edge length}
\caption{
Time required to assemble the stiffness matrix and solve the linear system on a regular grid in 2D (left) and 3D (right).
}
\label{fig:timings_regular}
\end{figure}
\paragraph{Convergence.} Figures~\ref{fig:convergence_regular} and~\ref{fig:convergence_h1_regular} show the convergence of spline elements vs $Q_1$ and $Q_2$ for the $L_2$, $L_\infty$, and $H_1$ norms, in the ideal case of a uniform grid, both in 2D and 3D.
This is in a sense the best-case scenario that can be expected for our spline construction: every element is regular and has a $3^2$ or $3^3$ neighborhood. In this situation, splines exhibit a convergence rate above $3.0$ in the $L_2$, $L_\infty$, and $H_1$ norms.
On a 2D test mesh mixing polygons and splines (model shown in \Cref{fig:refinement_example}, top), we achieved a convergence rate of 2.8 in $L_\infty$, and 3.1 in $L_2$ (\Cref{fig:convergence_hybrid}, left). \Cref{fig:convergence_hybrid} also displays the convergence we obtained on a very simple hybrid 3D mesh, starting from a cube marked as a polyhedron, to which we applied the polar refinement described in Section~\ref{sec:refinement}. On this particular mesh, the splines exhibited an $L_\infty$ convergence similar to $Q_2$, albeit producing an error that is somewhat larger.
\begin{figure}
\centering
\makebox[\columnwidth][c]{
\rotatebox{90}{\parbox{0.3\linewidth}{\centering\footnotesize Error}}\hfill
\begin{overpic}[width=0.52\linewidth]{figs/convergence/conv_regular_h1_2d}
\end{overpic}\hfill{}
\begin{overpic}[width=0.52\linewidth]{figs/convergence/conv_regular_h1_3d}
\end{overpic}
}\\
{\footnotesize Max edge length}
\caption{
Poisson equation convergence plot in $H_1$ norm on a regular grid in 2D (first plot) and 3D (second plot).
}
\label{fig:convergence_h1_regular}
\end{figure}
\paragraph{Consistency Constraints.} \Cref{fig:integral_constraints} shows the effect of our consistency constraint on the convergence of a polygonal mesh under refinement (the one shown in \Cref{fig:refinement_example}, top), with $Q_2$ elements used on the quadrilateral part. Without imposing any constraint on the bases overlapping the polygon, one can hope at best for a convergence of $\sim{}2.0$, whereas pure $Q_2$ elements should have a convergence rate of $3.0$. With a constraint ensuring linear reproduction for the bases defined on polyhedra, the convergence rate is still only $\sim{}2.5$.
Finally, with the constraints we describe in Section~\ref{sec:polyhedral} to ensure the bases reproduce triquadratic polynomials, we reach the expected convergence rate of $\sim{3.0}$.
\begin{figure}
\centering
\rotatebox{90}{\parbox{0.3\linewidth}{\centering\footnotesize Error}}
\begin{overpic}[width=0.52\linewidth]{figs/constraints/constraints_poly_lp_2d}
\put (50,60) {$L_\infty$}
\end{overpic}
\\
{\footnotesize Max edge length}
\caption{$L^\infty$ convergence for the different consistency constraints on the polyhedron of Figure~\ref{fig:refinement_example}.}
\label{fig:integral_constraints}
\end{figure}
\paragraph{Polyhedral Basis Resilience.} Our polyhedral bases are less susceptible to badly shaped elements than $Q_2$. We computed the $L_2$ and $L_\infty$ interpolation errors for the gradients of the Franke function for 14 badly shaped hexahedra; \Cref{fig:poly_error} shows some of them. The $L_2$ and $L_\infty$ maximum and average errors are 3 times smaller with our polyhedral basis.
\paragraph{Conditioning and Stability.} An important aspect of our new FE method is the conditioning of the resulting stiffness matrix: this quantity relates both to the stability of the method and to its performance when an iterative linear solver is used (important only for large problems, where direct solvers cannot be used due to their memory requirements). We compute the condition number of the Poisson stiffness matrix on a regular and a perturbed grid (Figure~\ref{fig:spectrum}).
In both cases, our discretization has a good condition number, slightly higher than pure linear elements, but lower than pure quadratic elements (while sharing the same cubic convergence property).
To evaluate the conditioning of the polyhedral bases we started from a base mesh of good quality, marked $5\%$ of the quads as polygons, and pushed one of the vertices inwards. Even for this extreme distortion of polyhedral elements, the conditioning remained similar to the case when no polyhedral elements are used on the same mesh.
\begin{figure}
\centering
\includegraphics[width=0.24\linewidth]{figs/screenshots/poly_0}\hfill{}
\includegraphics[width=0.24\linewidth]{figs/screenshots/poly_1}\hfill{}
\includegraphics[width=0.24\linewidth]{figs/screenshots/poly_2}\hfill{}
\includegraphics[width=0.24\linewidth]{figs/screenshots/poly_3}\par
\caption{Low-quality polyhedra used to evaluate the interpolation errors.}
\label{fig:poly_error}
\end{figure}
\begin{figure}
\centering
\rotatebox{90}{\parbox{0.3\linewidth}{\centering\footnotesize Condition number}}\hfill
\includegraphics[width=0.45\linewidth]{figs/convergence/spectrum_normal}\hfill{}
\includegraphics[width=0.45\linewidth]{figs/convergence/spectrum_extreme}\\
{\footnotesize Number of refinements}
\caption{
Evolution of the condition number of the stiffness matrix for the Poisson problem under refinement.
For each level of refinement we artificially marked 5\% of the quads as polyhedra and moved one random vertex along the diagonal by between 20\% and 40\%, as shown in blue in the inset figures.
Note that some of the curves coincide, namely $Q_1$ with $Q_1$ poly and $Q_2$ with $Q_2$ poly.
}
\label{fig:spectrum}
\end{figure}
\paragraph{Elasticity.}
While most of our testing was done for the Poisson equation, we have performed some testing of linear elasticity problems.
\Cref{fig:elasticity}, top, shows the solution of a linear elasticity problem on a pure hexahedral mesh. The outer loops of the knot are pulled outward, deforming the knot. The color in the figure represents the magnitude of the displacement vectors. On the bottom we show the result for a Young's modulus of $2\times10^5$.
Figure~\ref{fig:convergence_regular_elast} shows a plot for the linear elasticity PDE with Young's modulus 200 and Poisson's ratio 0.35 on a regular grid; similar results are obtained on a hybrid mesh, Figure~\ref{fig:convergence_hybrid_elast}.
The convergence plots for $Q_1$ and $Q_2$ are obtained by mixing regular $Q_1$/$Q_2$ bases with the polyhedral construction (Section~\ref{sec:polyhedral}).
\begin{figure}
\centering
\hfill{}
\includegraphics[width=0.45\linewidth]{figs/screenshots/elasticity}\hfill{}
\includegraphics[width=0.45\linewidth]{figs/screenshots/elasticity_sol}\par
\includegraphics[width=0.45\linewidth]{figs/screenshots/largeEI}\hfill
\includegraphics[width=0.45\linewidth]{figs/screenshots/largeE}\par
\caption{Displacements computed by solving linear elasticity on a pure hexahedral 3D model, using spline bases. Top: a complicated model with $\lambda=1$ and $\mu=1$; bottom: a bent bar with $\nu=0.35$ and a large Young's modulus $E=2\times10^5$.}
\label{fig:elasticity}
\end{figure}
\begin{figure}
\centering
\makebox[\columnwidth][c]{
\rotatebox{90}{\parbox{0.3\linewidth}{\centering\footnotesize Error}}\hfill
\begin{overpic}[width=0.52\linewidth]{figs/convergence/elast_regular_l2}
\put (50,60) {$L_2$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.52\linewidth]{figs/convergence/elast_regular_h1}
\put (50,60) {$H_1$}
\end{overpic}
}\\
{\footnotesize Max edge length}
\caption{
Linear elasticity convergence plot in $L_2$ and $H_1$ norm on a regular grid in 2D.
}
\label{fig:convergence_regular_elast}
\end{figure}
\begin{figure}
\centering
\makebox[\columnwidth][c]{
\rotatebox{90}{\parbox{0.3\linewidth}{\centering\footnotesize Error}}\hfill
\begin{overpic}[width=0.52\linewidth]{figs/convergence/elast_poly_l2}
\put (50,60) {$L_2$}
\end{overpic}\hfill{}
\begin{overpic}[width=0.52\linewidth]{figs/convergence/elast_poly_h1}
\put (50,60) {$H_1$}
\end{overpic}
}\\
{\footnotesize Max edge length}
\caption{
Linear elasticity convergence plot in $L_2$ and $H_1$ norm on a hybrid mesh in 2D.
}
\label{fig:convergence_hybrid_elast}
\end{figure}
\begin{table}
\centering
\makebox[\linewidth][c]{
{\footnotesize
\begin{tabular}{@{}c@{}l@{}rrr@{\quad}r@{\quad}rS[output-exponent-marker=\text{e}]S[output-exponent-marker=\text{e}]}
& & Num dofs & \multicolumn{1}{@{}c@{}}{Solver} & \multicolumn{1}{@{}c@{}}{Bases} & Assembly & \specialcell[c]{Memory \\ (\si{\mebi\byte})} & {$L_2$ Error} & {$L_\infty$ Error} \\
\toprule
\multirow{5}{*}{$Q_1$}
& mean & 174,335 & \ang{; 8;26} & \ang{; 15;40} & \ang{;0;37} & 1,132 & \num{7.60e-05} & \num{1.11e-03} \\
& std & 177,192 & \ang{; 20;29} & \ang{; 19;44} & \ang{;0;40} & 1,589 & \num{2.27e-04} & \num{2.69e-03} \\
& min & 3,035 & \ang{; 0; 0} & \ang{; 0;17} & \ang{;0;1 } & 5 & \num{5.57e-07} & \num{1.98e-05} \\
& median & 105,451 & \ang{; 1; 2} & \ang{; 9;8 } & \ang{;0;23} & 500 & \num{3.18e-05} & \num{3.39e-04} \\
& max & 926,938 & \ang{;182;32} & \ang{;101;27} & \ang{;5;20} & 9,329 & \num{2.88e-03} & \num{2.09e-02} \\
\midrule
\multirow{5}{*}{$Q_2^\star$ }
& mean & 552,583 & \ang{; 63;43} & \ang{;13;11} & \ang{;0;55} & 5,716 & \num{3.62e-06} & \num{6.34e-05} \\
& std & 355,783 & \ang{; 72;42} & \ang{;11;17} & \ang{;0;59} & 4,382 & \num{1.80e-05} & \num{2.00e-04} \\
& min & 21,525 & \ang{; 0; 5} & \ang{; 0;19} & \ang{;0; 3} & 94 & \num{5.85e-09} & \num{9.58e-08} \\
& median & 457,358 & \ang{; 34;17} & \ang{; 8;38} & \ang{;0;40} & 4,586 & \num{6.31e-07} & \num{1.06e-05} \\
& max & 1,709,712 & \ang{;289;19} & \ang{;52;4 } & \ang{;6;56} & 15,677 & \num{1.87e-04} & \num{1.50e-03} \\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{our$^\star$}}
& mean & 239,245 & \ang{;34;30} & \ang{;14;59} & \ang{;1;33} & 3,728 & \num{1.65e-05} & \num{2.88e-03} \\
& std & 178,979 & \ang{;62;19} & \ang{;12;34} & \ang{;1;15} & 3,787 & \num{5.57e-05} & \num{1.82e-02} \\
& min & 9,987 & \ang{;0;1} & \ang{; 0;21} & \ang{;0; 3} & 61 & \num{4.09e-08} & \num{8.04e-07} \\
& median & 189,880 & \ang{;9;13} & \ang{;10;13} & \ang{;1;11} & 2,391 & \num{3.46e-06} & \num{2.62e-04} \\
& max & 1,033,492 & \ang{;324;8} & \ang{;55;11} & \ang{;7; 9} & 15,681 & \num{5.85e-04} & \num{2.30e-01} \\
\bottomrule
\end{tabular}
}}
\caption{
Dataset of 3D pure hexahedra + star-shaped polyhedra (188 models in total). The memory is the total peak memory (in \si{\mebi\byte}) as reported by the solver Pardiso. \emph{$~^\star$does not include the models that went out of memory}.
From left to right: the total number of DOFs, the time required to solve the system, the time used to build the bases, the time employed to assemble the stiffness matrix, the peak memory, the $L_2$ error, and the $L_\infty$ error.
}
\label{tab:statistics}
\end{table}
\section{Brief Finite Element Introduction}
\label{app:fem}
Many common elliptic partial differential equations have the general form
\[
\mathcal{F}(\mathbf{x}, u, \nabla u, \Delta u) = f(\mathbf{x}), \qquad \mathbf{x}\in\Omega,
\]
subject to
\[
u(\mathbf{x})=d(\mathbf{x}),~\mathbf{x}\in\partial\Omega_D \qquad\text{and}\qquad
\nabla u(\mathbf{x}) \cdot \vect{N}(\mathbf{x}) = n(\mathbf{x}),~\mathbf{x}\in\partial\Omega_N
\]
where $\vect{N}(\mathbf{x})$ is the surface normal, $\partial\Omega_D$ is the Dirichlet boundary where the function $u$ is constrained (e.g., positional constraints) and $\partial\Omega_N$ is the Neumann boundary where the gradient of the function $u$ is constrained. The most common PDE in this class is the Poisson equation $-\Delta u = f$.
\paragraph{Weak Form} The first step in a finite element analysis consists of introducing the weak form of the PDE: find $u$ such that
\[
\int_\Omega\mathcal{F}(\mathbf{x}, u, \nabla u, \Delta u)\, v(\mathbf{x})\,\dd\mathbf{x} = \int_\Omega f(\mathbf{x})\, v(\mathbf{x})\,\dd\mathbf{x},
\]
holds for any \emph{test function} $v$ vanishing on the boundary. This reformulation has two advantages: (1) it simplifies the problem, and (2) it weakens the requirement on the function $u$. For instance, in the case of the Poisson equation, the strong form is well defined only if $u$ is twice differentiable, which is a difficult condition to enforce on a discrete tessellation. However, the weak form requires only that the second derivatives of $u$ are integrable, allowing discontinuous jumps. Using integration by parts (the boundary term vanishes since $v$ is zero on the boundary), it can be further relaxed to
\[
\int_\Omega \nabla u(\mathbf{x}) \cdot \nabla v(\mathbf{x})\,\dd\mathbf{x} = \int_\Omega f(\mathbf{x})\, v(\mathbf{x})\,\dd\mathbf{x},
\]
where only the gradient of $u$ needs to be integrable, that is $u\in H^1$, and can thus be represented using piecewise-linear basis functions.
\paragraph{Basis Functions} The key idea of a finite element discretization is to approximate the solution space via a \emph{finite} number of basis functions $\phi_i$, $i=1,\hdots, N$, which are independent of the PDE we are interested in.
The number of nodes (and basis functions) per element and their position is directly correlated with the order of the basis, see Figure~\ref{fig:basis-nodes}. We note that the nodes coincide with the mesh vertices only for linear basis functions.
Instead of solving the PDE, the goal becomes finding the coefficients $u_i$, $i=1,\hdots, N$ of the discrete function $u_h(\mathbf{x})=\sum_{i=1}^N u_i\phi_i(\mathbf{x})$ that approximates the unknown function $u$. For a linear PDE this results in a linear system $\mat{K}\vect{u}=\vect{f}$, where $\mat{K}$ is the $N\times N$ stiffness matrix, $\vect{f}$ captures the boundary conditions, and $\vect{u}$ is the vector of unknown coefficients $u_i$. For instance, for the Laplace equation the entries of the stiffness matrix are
\[
K_{ij} = \int_\Omega \nabla\phi_i(\mathbf{x}) \cdot \nabla\phi_j(\mathbf{x})\,\dd\mathbf{x}.
\]
\paragraph{Local Support}Commonly used basis functions are locally supported. As a result, most of the pairwise integrals are zero, leading to a sparse stiffness matrix. The pairwise integrals can be written as a sum of integrals over the elements (e.g., quads or hexes) on which both functions do not vanish. This representation enables so-called \emph{per-element assembly}: for a given element, a local stiffness matrix is assembled.
For instance, if an element $C$ has four non-zero basis functions $\phi_i$, $\phi_j$, $\phi_k$, $\phi_l$ (this is the case for a linear $Q_{1}$ quad), the local stiffness matrix $\mat{K}^L\in\mathbb{R}^{4\times 4}$ for the Poisson equation is
\[
K^L_{o,p} = \int_C \nabla\phi_n(\mathbf{x}) \cdot \nabla\phi_m(\mathbf{x})\,\dd\mathbf{x},
\]
where $o,p=1,\hdots,4$ and $m,n\in\{i,j,k,l\}$.
By using the mapping of local indices $(o,p)$ to global
indices $(m,n)$, the local stiffness matrix entries are summed to yield the global stiffness matrix entries.
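To make the local-to-global summation concrete, the following Python sketch (our own illustration, not code from any particular FEM package) assembles the global Poisson stiffness matrix for $Q_1$ elements on a regular grid of unit squares; the hard-coded $4\times4$ matrix is the standard bilinear-element stiffness on the unit square.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

# Standard Q1 local stiffness for -Laplace on the unit square,
# node order (0,0), (1,0), (1,1), (0,1).
K_local = np.array([[ 4, -1, -2, -1],
                    [-1,  4, -1, -2],
                    [-2, -1,  4, -1],
                    [-1, -2, -1,  4]]) / 6.0

def assemble(n):
    """Global stiffness on an n-by-n grid of unit squares."""
    N = (n + 1) ** 2                      # number of nodes
    rows, cols, vals = [], [], []
    node = lambda i, j: j * (n + 1) + i   # lexicographic node index
    for ej in range(n):
        for ei in range(n):
            # local-to-global index map for this element
            g = [node(ei, ej), node(ei + 1, ej),
                 node(ei + 1, ej + 1), node(ei, ej + 1)]
            for o in range(4):
                for p in range(4):
                    rows.append(g[o])
                    cols.append(g[p])
                    vals.append(K_local[o, p])
    # duplicate (row, col) pairs are summed on conversion,
    # which is exactly the scatter-add described above
    return sp.coo_matrix((vals, (rows, cols)), shape=(N, N)).tocsr()

K = assemble(8)
# Constants lie in the kernel: gradients of a partition of unity sum to 0.
print(abs(K @ np.ones(K.shape[0])).max())   # ~1e-15
\end{verbatim}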
\paragraph{Geometric Mapping} The final piece of a finite element discretization is the geometric mapping $\mathbf{g}$. The local integrals need to be computed on every element. The element stiffness matrix entries are computed as integrals over a \emph{reference} element $\hat{C}$ (e.g., a regular unit square/cube) through change of variables
\[
\int_C \nabla\phi_n(\mathbf{x}) \cdot \nabla\phi_m(\mathbf{x}) \,\dd\mathbf{x} =
\int_{\hat{C}}
(\mathrm{D}\mathbf{g}^{\mathsf{-T}} \nabla\hat\phi_n(\mathbf{x})) \cdot
(\mathrm{D}\mathbf{g}^{\mathsf{-T}} \nabla\hat\phi_m(\mathbf{x})) \abs{\mathrm{D}\mathbf{g}} \,\dd\mathbf{x},
\]
where $\mathrm{D}\mathbf{g}$ is the Jacobian matrix of the geometric mapping $\mathbf{g}$, and $\hat\phi=\phi\circ \mathbf{g}$ are the bases defined on the reference element $\hat{C}$. While usually $\mathbf{g}$ is expressed as a linear combination of the $\phi_i$, leading to isoparametric elements, the choice of $\mathbf{g}$ is independent of the basis.
\paragraph{Quadrature} All integrals are computed numerically by means of quadrature points and weights, which translate the integrals into weighted sums. Although there are many strategies to generate quadrature data (e.g., Gaussian quadrature), all of them integrate polynomials exactly up to a given degree, to ensure an appropriate approximation order. For instance, if we use one quadrature point in the element's center with weight 1, we can integrate constant functions exactly.
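As a small self-contained illustration (our own, with monomial integrands), the following checks the classical exactness property: an $n$-point Gauss--Legendre rule integrates polynomials of degree up to $2n-1$ exactly, and fails at degree $2n$.
\begin{verbatim}
import numpy as np

def gauss_01(n):
    """n-point Gauss-Legendre rule mapped from [-1,1] to [0,1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (x + 1.0), 0.5 * w

for n in (1, 2, 3):
    x, w = gauss_01(n)
    for d in range(2 * n + 1):
        quad = np.sum(w * x ** d)
        exact = 1.0 / (d + 1)            # integral of x^d over [0,1]
        ok = abs(quad - exact) < 1e-12   # exact up to degree 2n-1
        print(f"n={n} degree={d}: {'exact' if ok else 'not exact'}")
\end{verbatim}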
\paragraph{Right-hand Side} The setup of the right-hand side $\vect{b}$ is done in a similar manner: its entries are $b_i = \int_\Omega \phi_i(\mathbf{x}) f(\mathbf{x})\,\dd\mathbf{x}$.
Dirichlet boundary conditions are treated as constrained degrees of freedom. The Neumann boundary conditions are imposed by setting
\[
b_j = \int_{\partial\Omega_N} \phi_j(\mathbf{x})\, n(\mathbf{x})\,\dd\mathbf{x}
\]
for any node $j$ on $\partial\Omega_N$.
\begin{figure}
\centering
\includegraphics[width=0.22\linewidth]{figs/illustrations/Q1_2d}\hfill
\includegraphics[width=0.22\linewidth]{figs/illustrations/Q2_2d}\hfill
\includegraphics[width=0.22\linewidth]{figs/illustrations/Q1_3d}\hfill
\includegraphics[width=0.22\linewidth]{figs/illustrations/Q2_3d}\par
\parbox{0.22\linewidth}{\centering $Q_{1}$}\hfill
\parbox{0.22\linewidth}{\centering $Q_{2}$}\hfill
\parbox{0.22\linewidth}{\centering $Q_{1}$}\hfill
\parbox{0.22\linewidth}{\centering $Q_{2}$}\par
\caption{Node position for the linear and quadratic bases in two and three dimensions.}
\label{fig:basis-nodes}
\end{figure}
As with the stiffness matrix assembly, the basis and node construction for the right-hand side is performed locally.
\section{Geometric map construction}
\label{sec:geommap}
The geometric map is a map from $\param{\mathcal{M}}$ to $\Omega\subset\mathbb{R}^3$, defined per element.
Its primary purpose is to allow us to construct basis functions $\param{\phi}_i$ on reference domains (i.e., the elements of $\param{\mathcal{M}}$ that are unit cubes), and then to remap them
to the physical space as $\phi_i = \param{\phi}_i \circ \mathbf{g}^{-1}$.
As the local basis on the polyhedral elements is constructed directly in the physical space, $\mathbf{g}$ is the identity on these elements.
The requirements for the geometric map are distinct for the spline and $Q_2$ elements; they are met by using the spline basis itself on $\mathcal{S}$ and trilinear interpolation for $Q_2$ elements.
Because of the geometric mapping $\mathbf{g}$, for the quadratic spline, the basis $\phi_i$ does not reproduce polynomials in the physical space; nevertheless, the approximation properties of the basis are retained \cite{bazilevs2006isogeometric}.
For $Q_2$ elements, Arnold et al.~\shortcite{arnold2002approximation} show that \emph{bilinear} maps are sufficient, and in fact allow retaining reproduction of triquadratic polynomials in the physical space.
This is very important for the basis construction on polyhedral elements, as polynomial reproduction on these elements depends on reproduction of polynomials on the polyhedron boundary.
\paragraph{Computing the geometry map.}
If we assume that the input only has vertex positions $\vect{v}_i$ for $\mathcal{M}$, we solve the equations $\mathbf{g}(\param{\mathbf{x}}_i) = \vect{v}_i$, which is a linear system of equations in terms of the coefficients of $\mathbf{g}$ in the basis we choose.
In the trilinear basis, the system is trivial, as the coefficients coincide with the values at $\param{\mathbf{x}}_i$, and these are simply set to $\vect{v}_i$.
For the triquadratic basis, this is not the case, and a linear system needs to be solved. If the system is under-determined, we find the least-norm solution.
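As a toy illustration of this least-norm fit (invented data, and a 1D quadratic map standing in for the triquadratic one), \texttt{numpy.linalg.lstsq} returns the minimum-norm solution of the under-determined interpolation system:
\begin{verbatim}
import numpy as np

# Hypothetical 1D example: a quadratic map g(x) = c0 + c1 x + c2 x^2
# constrained at only two parametric points (under-determined).
x_hat = np.array([0.0, 1.0])              # parametric points
v = np.array([0.0, 2.0])                  # prescribed positions
V = np.vander(x_hat, 3, increasing=True)  # rows [1, x, x^2]

c, *_ = np.linalg.lstsq(V, v, rcond=None) # minimum-norm solution
print(c)                                  # coefficients of g
print(V @ c - v)                          # residual ~ 0
\end{verbatim}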
\section{Introduction}
\label{sec:intro}
The numerical solution of partial differential equations is ubiquitous in computer graphics and engineering applications, ranging from the computation of UV maps and skinning weights, to the simulation of elastic deformations, fluids, and light scattering.
The finite element method (FEM) is the most commonly used discretization of PDEs, especially in the context of structural and thermal analysis, due to its generality and rich selection of off-the-shelf commercial implementations. Ideally, a PDE solver should be a ``black box'': the user provides as input the domain boundary, boundary conditions, and the governing equations, and the code returns an evaluator that can compute the value of the solution at any point of the input domain. This is surprisingly far from being the case for all existing open-source or commercial software, despite the research efforts in this direction and the large academic and industrial interest.
To a large extent, this is due to treating meshing and FEM basis construction as two disjoint problems. The FEM basis construction may make a seemingly innocuous assumption (e.g., on the geometry of elements) that leads to exceedingly difficult requirements for meshing software. For example, commonly used bases for tetrahedra are sensitive to the tetrahedron shape, so tetrahedral mesh generators have to guarantee good element shape everywhere: a difficult task which, for some surfaces, does not have a fully satisfactory solution.
Alternatively, if few assumptions are made on mesh generation (e.g., one can use elements that work on arbitrary polyhedral domains), the basis and stiffness matrix constructions can become very expensive.
This state of affairs presents a fundamental problem for applications that require fully automatic, robust processing of large collections of meshes of varying sizes, an increasingly common situation as large collections of geometric data become available. Most importantly, this situation arises in the context of machine learning on geometric and physical data, where a neural network could be trained using large numbers of simulations and used to efficiently compute an approximate solution \cite{Chen:Neural:2018,kostrikov2017surface}. Similarly, shape optimization problems often require solving PDEs in the inner optimization loop on a constantly changing domain \cite{Panetta:2015}.
\paragraph{Overview.}
We propose an integrated pipeline, considering meshing and element design as a single challenge: we make the tradeoff between mesh quality and element complexity/cost \emph{local}, instead of making an a priori decision for the whole pipeline. We generate high quality, simple, and regularly arranged elements for most of the volume of the shape, with more complex and poor quality polyhedral shapes filling the remaining gaps \cite{Sokolov2016,Gao:2017}. Our idea is to match each element to a basis construction, with well-shaped elements getting the simplest and most efficient basis functions and with complex polyhedral element formulations used only when necessary to handle the transitions between regular regions, which are the ones that are topologically and geometrically more challenging.
A spline basis on a regular lattice has major advantages over traditional FEM elements, since it has the potential to be \emph{both} accurate and efficient: it has a single degree of freedom per element, except at the boundary, yet it has full approximation power corresponding to the degree of the spline. This observation is one of the foundations of \emph{isogeometric analysis} in 3D \cite{hughes:2005:isogeometric,Cottrell:2009}.
Unfortunately, it is easy to define and implement only for fully regular grids, which is not practical for most input geometries. The next best thing is spline bases on
\emph{pure hexahedral} meshes: while smooth constructions for polar configurations exist \cite{Toshniwal:2017}, a solution applicable to general hexahedral meshes whose interior singular curves meet is still elusive, restricting this construction to simple shapes. Padded hexahedral meshes \cite{Marechal:2009} are necessary to ensure a good boundary approximation for both regular and polycube~\cite{Tarini:2004} hexahedral meshing methods, but they unfortunately cannot be used by these constructions since their interior curve singularities meet in the padding layer.
We propose a hybrid construction that sidesteps these limitations: we use spline elements only on fully regular regions, and fill the elements that are touching singular edges, or that are not hexahedra, with local constructions (harmonic elements for polyhedra, triquadratic polynomial elements for hexahedra). This construction further relaxes requirements for meshing, since it works on general hexahedral meshes (without any restriction on their singularity structure) but also directly supports \emph{hex-dominant} meshes, which can be robustly generated with modern field-aligned methods \cite{Gao:2017,Sokolov2016}. These meshes consist mostly of well-shaped hexahedra with locally regular mesh structure, but also contain other general polyhedra. Our construction takes advantage of this high regularity, adding a negligible overhead over the spline FEM basis only for the sparse set of non-regular elements.
We demonstrate that our proposed \emph{Poly-Spline} FEM retains, to a large extent, both the approximation and performance benefits of splines, at the cost of increased basis construction complexity, and, at the same time, works for a class of meshes that can be robustly generated for most shapes with existing meshing algorithms.
Our method exhibits cubic convergence on a large data set, for a degree-of-freedom budget comparable to trilinear hexahedral elements, which have only quadratic convergence. To the best of our knowledge, this paper is the first FEM method exploiting the advantages of a spline basis that has been validated on a large collection of complex geometries.
\section{Algorithm Overview}
\label{sec:overview}
In this section, we introduce the main definitions we use in our algorithm description, and outline the structure of the algorithm. We refer to \Cref{app:fem} for a brief introduction to the finite element method and the setup of our mathematical notation.
\paragraph{Input complex and subcomplexes.}
The input to our algorithm is a 3D polyhedral complex $\mathcal{M}$, with vertices $\mathbf{v}_i\in \mathbb{R}^3$, $i=1,\ldots,N_V$, consisting of polyhedral cells $C_i$, $i=1,\ldots,N_C$, most of which are hexahedra. \Cref{fig:complex} shows a two-dimensional example of such a complex.
The edges, faces, and cells of the mesh are defined combinatorially, that is, edges are defined by pairs of vertices, faces by sequences of edges, and cells by closed surface meshes formed by faces.
We assume that 3D positions of vertices are also provided as input and that $\mathcal{M}$ is three-manifold, i.e., that there is a way to identify vertices, edges, faces,
and cells with points, curves, surface patches and simple volumes, such that their union is a three-manifold subset of $\mathbb{R}^3$.
We assume that for any hexahedron there is at most one non-hexahedral cell sharing one of its faces or edges, which can be achieved by refinement. We also assume that no two polyhedral cells are adjacent, and that no polyhedron touches the boundary, which can also be achieved by merging polyhedral cells and/or refinement.
This preprocessing step (i.e., one step of uniform refinement) is discussed in Section~\ref{sec:refinement}. As a consequence of our refinement, \emph{all faces of $\mathcal{M}$ are quadrilateral}.
One of the difficulties of using general polyhedral meshes for basis constructions is that, unlike the case of,
for example, pure tetrahedral meshes, there is no natural way to realize all elements of the mesh in 3D just from vertex positions (e.g., for a tetrahedral mesh, linear interpolation for faces and cells is natural). This requires constructing bases on an explicitly defined parametric domain associated with the input complex.
For this purpose, we define a certain number of complexes related to the original complex $\mathcal{M}$ (Figure~\ref{fig:complex}).
There are two goals for introducing these: defining the parametric domain for the basis, and defining the \emph{geometric map}, which specifies how the complex is realized in three-dimensional \emph{physical space}.
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{figs/illustrations/fig2}
\caption{Complexes involved in our construction. In green we show $\mathcal{S}$, in red $\mathcal{Q}$, and in blue $\mathcal{P}$.}
\label{fig:complex}
\end{figure}
\begin{itemize}[leftmargin=*]
\item $\mathcal{H} \subseteq \mathcal{M}$ is the hexahedral part of $\mathcal{M}$, consisting of hexahedra $H$.
\item $\mathcal{P} = \mathcal{M}\setminus\mathcal{H}$ is the non-hexahedral part of $\mathcal{M}$, consisting of polyhedra $P$.
\item $\mathcal{S} \subseteq \mathcal{H}$ is the complex consisting of spline-compatible hexahedra
$S$ defined in \Cref{sec:spline-compatible}.
\item $\mathcal{Q} = \mathcal{H} \setminus \mathcal{S}$ is the spline-incompatible pure-hexahedral part of $\mathcal{M}$.
\end{itemize}
\noindent
Note that the sub-complexes of $\mathcal{M}$ are nested: $\mathcal{S} \subseteq \mathcal{H} \subseteq \mathcal{M}$.
In the context of finite elements, the distinction between \emph{parametric space} and \emph{physical space} is critical:
the bases on the hexahedral part of the mesh are defined in terms of parametric space coordinates, where all hexahedra are unit cubes; this makes it possible to define simple, accurate, and efficient bases.
However, the derivatives in the PDE are taken with respect to physical space variables, and the unknown functions are naturally defined on the physical space. Remapping these functions to the parametric space is necessary to discretize the PDE using our basis.
We define parametric domains $\param{\mathcal{M}}$, $\param{\mathcal{H}}$, $\param{\mathcal{S}}$, and $\param{\mathcal{Q}}$ corresponding to ${\mathcal{M}}$, ${\mathcal{H}}$, ${\mathcal{S}}$, and ${\mathcal{Q}}$, respectively.
$\param{\mathcal{H}}$ consists of unit cubes $\param{H}$, one per hexahedron $H$ with corresponding faces identified, and $\param{\mathcal{S}}$ and $\param{\mathcal{Q}}$ are its subcomplexes. The complete parametric space $\param{\mathcal{M}}$ is obtained by
adding a set of polyhedra for $\mathcal{P}$, defined using the geometric map
as described below. For polyhedra, physical and parametric space coincide.
\paragraph{Geometric map and complex embedding.}
The input complex, as is typical for mesh representations, does not define a complete geometric realization of the complex: rather, it only includes vertex positions and element connectivity.
We define a complete geometric realization as the \emph{geometric map} $\mathbf{g}\colon \param{\mathcal{M}} \rightarrow \mathbb{R}^3$, from the parametric domain $\param{\mathcal{M}}$ to the physical space.
We use $\param{\mathbf{x}}$ for points in the parametric domain, and $\mathbf{x}$ for points in the physical space, and denote the image of the geometric map by $\Omega = \mathbf{g}(\param{\mathcal{M}})$ (\Cref{fig:geom_mapping}).
The definition requires bootstrapping: $\mathbf{g}$ is first defined on $\param{\mathcal{H}}$.
For example, the simplest geometric map $\mathbf{g}$ on $\param{\mathcal{M}}$ can be obtained by trilinear interpolation:
$\mathbf{g}$ restricted to a unit cube $\param{H} \subset \param{\mathcal{M}}$ is a trilinear interpolation of the positions of the vertices of its associated hexahedron $H$.
We make the following assumption about $\mathbf{g}(\param{\mathcal{H}})$: the map is bijective on the faces of $\mathcal{H}$, corresponding to the
boundary of any polyhedral cell $P$, and the union of the images of these
faces does not self-intersect and encloses a volume $P'$. Section~\ref{sec:refinement} explains how this is ensured.
Then we complete $\param{\mathcal{M}}$ by adding the volume $P'$ as the
parametric domain for $P$. We add this volume to the parametric domain $\param{\mathcal{M}}$, identifying corresponding faces with faces in $\param{\mathcal{H}}$, and defining
the geometric map to be the identity on these domains.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figs/illustrations/gmapping}
\caption{Illustration of the geometric mapping.}
\label{fig:geom_mapping}
\end{figure}
The simplest trilinear map is adequate for elements of the mesh outside the regular part $\mathcal{S}$, but is insufficient for accuracy on the regular part, as discussed below.
We consider a more complex definition of $\mathbf{g}$ ensuring $C^1$ smoothness across interior edges and faces of $\mathcal{S}$, described in Section~\ref{sec:geommap}, after we describe our basis construction.
Our construction is \emph{isoparametric}, that is, it uses the same basis for the geometric map as for the solution.
In other words, on $\mathcal{Q}$ we use the standard tri\-qua\-dra\-tic geometric map that maps each reference cube $[0,1]^3$ to the actual hex-element in the mesh. On $\mathcal{S}$ we use a $C^1$ spline mapping, explained in \Cref{sec:geommap}. On the polyhedral part, the geometric map is the identity, thus all quantities are defined directly on the physical domain.
\paragraph{Overview of the basis and discretization construction.}
Given an input complex $\mathcal{M}$, we construct a set of bases $\param{\phi}_i \colon \param{\mathcal{M}} \to \mathbb{R}$, $i=1,\ldots,N$, such that:
\begin{itemize}[leftmargin=*]
\item the restriction of basis function $\param{\phi}_i$ to spline compatible hexahedral domains $\param{S} \in \param{\mathcal{S}}$ is a spline basis function;
\item the restriction to hexahedra $\param{Q} \in \param{\mathcal{Q}}$ is a standard
triquadratic ($Q_2$) element function;
\item the restriction to polyhedra $\param{P} \in \param{\mathcal{P}}$ (or $P \in \mathcal{P}$) is a harmonic-based
nonconformal, third-order accurate basis function.
\end{itemize}
\noindent
The degrees of freedom (dofs) corresponding to basis functions $\param{\phi}_i$ are associated with:
\begin{itemize}[leftmargin=*]
\item each hexahedron either in $\mathcal{S}$ or adjacent to a spline-compatible one (\emph{spline cell dofs});
\item each boundary vertex, edge, or face of $\mathcal{S}$ (\emph{spline boundary dofs}); these are needed to have correct approximation on the boundary;
\item each vertex, edge, face, and cell of $\mathcal{Q}$ (\emph{triquadratic element degrees of freedom}).
\end{itemize}
The total number of degrees of freedom is denoted by $N$.
While most of the construction is independent of the choice of PDE
(we assume it to be second-order), with the notable exception of the
consistency condition for polyhedral elements, we use the Poisson equation
to be more specific.
Note that hexahedra adjacent to $\mathcal{S}$, but not in $\mathcal{S}$ (i.e., hexahedra in $\mathcal{Q}$)
get both spline dofs and triquadratic element dofs: such a cell may have $\geq 28$ dofs instead of 27.
Polyhedral cells are not assigned separate degrees of freedom: the basis functions
with support overlapping polyhedra are those associated with dofs at incident hexahedra.
We assemble the standard stiffness matrix for an elliptic PDE, element-by-element, performing integration on the hexahedra $\param{H}$ of $\param{\mathcal{M}}$ and polyhedra $P$.
The entry $K_{ij}$ of the stiffness matrix $\mat{K}$ for the Poisson equation is computed as follows:
\begin{equation}
K_{ij} = \sum_{\param{C} \in \param{\mathcal{M}}} \int_{\mathbf{g}(\param{C})}
\nabla \phi_i(\mathbf{x}) \cdot \nabla\phi_j(\mathbf{x}) \,\dd\mathbf{x},
\label{eq:stiffness}
\end{equation}
where $\phi_i = \param{\phi}_i \circ \mathbf{g}^{-1}$.
The actual integration is performed on the elements in the parametric domain
$\param{\mathcal{M}}$, using a change of variables $\mathbf{x} = \mathbf{g}(\param{\mathbf{x}})$ for every element:
\begin{equation}
K_{ij} = \sum_{\param{C} \in \param{\mathcal{M}}} \int_{\param{C}} \nabla \param{\phi}_i(\param{\mathbf{x}})^\mathsf{T} \, \mat{A}(\param{\mathbf{x}}) \, \nabla \param{\phi}_j(\param{\mathbf{x}}) \, \abs{\mathrm{D}\mathbf{g}}\, \dd\param{\mathbf{x}}
\label{eq:stiffness-param}
\end{equation}
where $\mat{A}(\param{\mathbf{x}})$ is the metric tensor of the geometric map $\mathbf{g}$ at $\param{\mathbf{x}}$, given by $\mathrm{D}\mathbf{g}^{-1} \mathrm{D}\mathbf{g}^{\mathsf{-T}}$, with $\mathrm{D}\mathbf{g}$ being the Jacobian of $\mathbf{g}$.
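The sketch below (our own 2D illustration, with an invented element) evaluates the local Poisson stiffness of a single bilinearly mapped quad through the parametric domain, following \eqref{eq:stiffness-param}; the check that the rows of the local matrix sum to zero reflects the fact that constants lie in the kernel of the operator.
\begin{verbatim}
import numpy as np

# Corners of a distorted quad (illustrative), matched to the
# reference-square corners (0,0), (1,0), (1,1), (0,1).
P = np.array([[0.0, 0.0], [1.2, 0.1], [1.0, 1.1], [-0.1, 0.9]])

def shape_grads(xh, yh):
    """Rows are the reference-space gradients of the bilinear bases."""
    return np.array([[-(1 - yh), -(1 - xh)],
                     [   1 - yh,       -xh],
                     [       yh,        xh],
                     [      -yh,    1 - xh]])

# 2x2 tensor-product Gauss rule on [0,1]^2 (weight 1/4 per point)
pts = 0.5 + np.array([-1.0, 1.0]) * 0.5 / np.sqrt(3.0)

K = np.zeros((4, 4))
for xh in pts:
    for yh in pts:
        G = shape_grads(xh, yh)      # 4 x 2
        Dg = P.T @ G                 # Jacobian of the bilinear map
        Gp = G @ np.linalg.inv(Dg)   # physical gradients; Gp Gp^T = G A G^T
        K += 0.25 * (Gp @ Gp.T) * abs(np.linalg.det(Dg))

print(np.abs(K.sum(axis=1)).max())   # rows sum to ~0
\end{verbatim}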
In the next sections, we describe the construction of the basis on each element type, the geometric map, and the stiffness matrix construction.
\section{Mesh preprocessing and refinement}
\label{sec:refinement}
Without loss of generality, we restrict the meshing discussion to 2D, as the algorithm introduced in this section extends naturally to 3D.
For the sake of simplicity, in this discussion the term polygon refers to non-quadrilateral elements.
As previously mentioned, our method can be applied to hybrid meshes without two adjacent polygons and without polygons touching the boundary, which we ensure with one step of refinement. While our construction could be extended to support these configurations, we favored refinement due to its simplicity. Refining polygonal meshes is an interesting problem on its own: while there is a canonical way to refine quads, there are multiple ways to refine a polygon. We propose the use of polar refinement~(\Cref{sec:polar}), which has the added benefit of allowing us to resample large polygons to obtain a uniform element size. However, to avoid self-intersections between edges during the refinement, we require each polygon to be star-shaped. This condition is often, but not always, satisfied by existing hybrid meshers: we thus introduce a simple merging and splitting procedure to convert hybrid meshes into star-shaped polyhedral meshes (\Cref{sec:starshaped}), and then detail our refinement strategy (\Cref{sec:polar}).
Another advantage of restricting ourselves to star-shaped polygons is that partitioning them into triangles (tetrahedra in 3D, respectively) is trivial: one introduces a point in the kernel and connects it to all the boundary faces. This step is required to generate quadrature points for the numerical integration (\Cref{sec:polyhedral}): the quality of the partitioning is usually low, but this is irrelevant for this purpose.
\subsection{Mesh preprocessing}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figs/illustrations/merging}
\caption{Our algorithm iteratively merges polygons (gray polygon in the first image), until the barycenter of the merged polygon is inside its kernel (gray polygon in the second).}
\label{fig:merging}
\end{figure}
\label{sec:starshaped}
We propose a simple and effective algorithm to convert polygonal meshes into star-shaped polygonal meshes, by combining existing polygons until they are star-shaped (and possibly splitting them if they contain a concave part of the boundary).
For every non-star-shaped polygon, we compute its barycenter and connect it to all its vertices (Figure \ref{fig:merging}, left). This procedure generates a set of intersecting segments (red in Figure~\ref{fig:merging}), which we use to grow the polygon by merging it with the faces incident to each intersecting segment. The procedure is repeated until no more intersections are found, which in our experiments usually happens within one or two iterations. If we reach a concave boundary during the growing procedure, it might be impossible to obtain a star-shaped polygon by merging alone: in these cases, we triangulate the polygon and merge the resulting triangles into star-shaped polygons where possible.
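A minimal 2D sketch of the stopping test (the example polygon is invented): a point lies in the kernel of a simple counterclockwise polygon exactly when it lies to the left of, or on, every directed boundary edge, so checking whether the barycenter is in the kernel reduces to one cross-product sign test per edge.
\begin{verbatim}
import numpy as np

def in_kernel(poly, p, eps=1e-12):
    """True if p is in the kernel of a simple CCW polygon: the kernel
    is the intersection of the left half-planes of all directed edges."""
    poly = np.asarray(poly, float)
    for a, b in zip(poly, np.roll(poly, -1, axis=0)):
        e, w = b - a, p - a
        if e[0] * w[1] - e[1] * w[0] < -eps:  # p strictly right of edge
            return False
    return True

def barycenter_in_kernel(poly):
    """The merging loop's stopping criterion."""
    poly = np.asarray(poly, float)
    return in_kernel(poly, poly.mean(axis=0))

# An invented non-convex but star-shaped "arrow" polygon (CCW)
arrow = [(0.0, 0.0), (2.0, 1.0), (0.0, 2.0), (0.5, 1.0)]
print(barycenter_in_kernel(arrow))   # True
\end{verbatim}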
\subsection{Polar refinement}
\label{sec:polar}
Each star-shaped polygon is refined by finding a point in its kernel
(Figure \ref{fig:refinement_procedure}, a), connecting it to all its vertices (b), splitting each edge with mid-point subdivision and connecting the midpoints to the point in the kernel (c), and finally adding rings of quadrilaterals around the boundary (d). Figure~\ref{fig:refinement_example} shows an example of polar refinement in two and three dimensions. The more splits are performed on the edges, the more elements are added. This is a useful feature to homogenize the element size in case the polygons were expanded too much during the mesh preprocessing stage. In our implementation, we split the edges evenly, ensuring that the shortest segment has a length as close as possible to the average edge length of the input mesh.
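The following is a compact sketch of one way to realize such a refinement in 2D (a simplified variant of our own, with invented parameters, meant only to illustrate the ring structure): the boundary is subdivided at edge midpoints, scaled copies of the subdivided boundary are placed between the kernel point and the boundary, consecutive rings are connected by quadrilaterals, and the innermost ring is kept as a small central polygon.
\begin{verbatim}
import numpy as np

def polar_refine(poly, center, n_rings=2):
    """Rings of quads between scaled copies of the midpoint-subdivided
    boundary of a star-shaped polygon, plus one central polygonal cell."""
    poly = np.asarray(poly, float)
    c = np.asarray(center, float)
    mids = 0.5 * (poly + np.roll(poly, -1, axis=0))
    ring = np.empty((2 * len(poly), 2))   # vertices + edge midpoints
    ring[0::2], ring[1::2] = poly, mids
    m = len(ring)

    scales = np.linspace(1.0, 1.0 / (n_rings + 1), n_rings + 1)
    points = np.vstack([c + s * (ring - c) for s in scales])

    quads = []
    for r in range(n_rings):              # connect ring r to ring r+1
        for j in range(m):
            a, b = r * m + j, r * m + (j + 1) % m
            quads.append([a, b, b + m, a + m])
    central = list(range(n_rings * m, (n_rings + 1) * m))
    return points, quads, central

hexagon = [(np.cos(t), np.sin(t))
           for t in np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)]
pts, quads, central = polar_refine(hexagon, (0.0, 0.0))
print(len(pts), len(quads), len(central))   # 36 24 12
\end{verbatim}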
\begin{figure}
\centering
\includegraphics[width=0.24\linewidth]{figs/illustrations/refinement_a}\hfill
\includegraphics[width=0.24\linewidth]{figs/illustrations/refinement_b}\hfill
\includegraphics[width=0.24\linewidth]{figs/illustrations/refinement_c}\hfill
\includegraphics[width=0.24\linewidth]{figs/illustrations/refinement_d}
%
\caption{Polar refinement for polygons.}
\label{fig:refinement_procedure}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.3\linewidth]{figs/screenshots/wiggly0}\hfill
\includegraphics[width=0.3\linewidth]{figs/screenshots/wiggly1}\hfill
\includegraphics[width=0.3\linewidth]{figs/screenshots/wiggly2}\\[1ex]
%
\includegraphics[width=0.3\linewidth]{figs/screenshots/cube1}\hfill
\includegraphics[width=0.3\linewidth]{figs/screenshots/cube2}\hfill
\includegraphics[width=0.3\linewidth]{figs/screenshots/cube3}
\caption{Example of polar refinement for a polygon and a polyhedron. The bottom view is a cut-through of the actual 3D mesh.}
\label{fig:refinement_example}
\end{figure}
\section{Related Work}
\label{sec:related}
When numerically solving PDEs using the finite element method, one has to discretize the spatial domain into finite elements and define shape functions on these elements. Since shape functions, element types, and mesh generation are closely related, we discuss the relevant approaches in tandem.
For complex spatial domains, the discretization is frequently based on the Delaunay triangulation \cite{Shewchuk:Tri:1996} or Delaunay tetrahedrization \cite{Si:TetGen:2015}, respectively, since those tessellations can be computed in a robust and automatic manner. Due to their simplicity and efficiency, linear shape functions over triangular or tetrahedral elements are often the default choice for graphics applications~\cite{hughes:fem}, although they are known to suffer from locking for stiff PDEs, such as nearly incompressible elastic materials~\cite{hughes:fem}.
This locking problem can be avoided by using bilinear quadrangular or trilinear hexahedral elements ($Q_1$ elements), which have the additional advantage of yielding a higher accuracy for a given number of elements \cite{FEAD1992,Benzley95}. Triquadratic hexahedral elements ($Q_2$) provide even higher accuracy and faster convergence under mesh refinement (cubic convergence in the $L_2$-norm for $Q_2$ vs.\ quadratic convergence for $Q_1$), but their larger number of degrees of freedom (27 vs.\ 8 per element) leads to high memory consumption and computational cost.
The main idea of isogeometric analysis (IGA) \cite{hughes:2005:isogeometric,Cottrell:2009,Engvall:2017:IGA} is to employ the same spline basis for defining the CAD geometry as well as for performing numerical analysis. Using quadratic splines on hexahedral elements results in the same cubic convergence order as $Q_2$ elements, but at the much lower cost of \emph{one} degree of freedom per element (comparable to $Q_1$ elements). This efficiency, however, comes at the price of a very complex implementation for non-regular hexahedral meshes. Moreover, generating IGA-compatible meshes from a given general boundary surface is still an open problem \cite{aigner:2009:swept,martin:2010:volumetric,li:2013:surface}.
Concurrent work~\cite{Wei2018} introduces a construction that can handle irregular pure hex meshes, with tensor-product cubic splines used on regular parts. However, we focus on handling general polygonal meshes and we use quadratic splines (note that our approach can be easily extended to cubic polynomials if desired).
A standard method for volumetric mesh generation is through hierarchical subdivision of an initial regular hexahedral mesh, leading to so-called octree meshes \cite{marechal2009,Ito2009,Zhang2013}. The T-junctions resulting from adaptive subdivision can be handled by using T-splines \cite{Sederberg:TSL:2004,Veiga:TIGA:2011} as shape functions. While this meshing approach is very robust, it has problems representing geometric features that are not aligned with the principal axes.
Even when giving up splines or T-splines for standard $Q_1$/$Q_2$ elements, the automatic generation of the required hexahedral meshes is problematic. Despite the progress made in this field over the last decade, \emph{automatically} generating pure hexahedral meshes that (i) have sufficient element quality, (ii) are not too dense, and (iii) align to geometric features is still unsolved. Early methods based on paving or sweeping \cite{Owen:2000,Yamakawa:2003,Staten:UPP:2005,Shepherd2008} require complicated handling of special cases and generate too many singularities. Polycube methods~\cite{HexmeshSGP2011,li:2013:surface,livesu2013polycut,huang2014,fang2016all,Fu:2016}, field-aligned methods \cite{NieserSGP11,Huang2011,Li2012,Jiang2013}, and the surface foliation method \cite{LEI2017758} are interesting research avenues, but they are currently not robust enough and often fail to produce a valid mesh.
However, if the strict requirement of producing hexahedral elements only is relaxed, field-aligned methods \cite{Sokolov2016,Gao:2017} can robustly and automatically create hex-dominant polyhedral meshes, that is, meshes consisting mostly, but not exclusively, of hexahedral elements. The idea is to build local volumetric parameterizations aligned with a specified directional field, and to construct the mesh from traced isolines of that parameterization, inserting general polyhedra where necessary. Their drawback is that the resulting \emph{hex-dominant} meshes are not directly supported by most FEM codes.
One option is to split these general polyhedra into standard elements, leading to a mixed FEM formulation. For instance, the field-aligned meshing of Sokolov et al.~\shortcite{Sokolov2016} extracts meshes that are composed of hexahedra, tetrahedra, prisms, and pyramids. However, the quality of those split elements is hard to control in general. An interesting alternative is to avoid the splitting of polyhedra and instead incorporate them into the simulation, for instance through mimetic finite differences~\cite{Lipnikov:MFD:2014}, the virtual element method~\cite{Veiga:VEM:2013}, or polyhedral finite elements~\cite{Manzini:NPPFEM:2014}. The latter employ generalized barycentric coordinates as shape functions, such as mean value coordinates \cite{Floater:2005:MVC,Ju:2005:MVC}, harmonic coordinates \cite{Joshi:2007:HCF}, or minimum entropy coordinates \cite{Hormann:2008:MEC}. Of those options, harmonic coordinates seem most suitable since they generalize both linear tetrahedra and trilinear hexahedra to general non-convex polyhedra \cite{Martin:PFE:2008,Bishop:2014}. While avoiding splitting or remeshing hex-dominant meshes, the major drawback of polyhedral elements is the high cost of computing and integrating their shape functions.
In the above methods the meshing stage either severely restricts admissible shape functions, or the element type puts (too) strong requirements on the meshing. In contrast, we use the most efficient elements where possible and the most flexible elements where required, which enables the use of robust and automatic hex-dominant mesh generation.
\section{Introduction}
Various magnetic fields are known to exist in the universe.
In the context of cosmology, magnetic fields in galaxies,
galactic clusters and void regions are important.
The magnetic fields in galaxies and clusters are
$B_{\rm gal} \sim10^{-6}$G~\cite{Wielebinski:2005,Beck:2012}.
The void magnetic field
is reported to be stronger than $B_{\rm void}\gtrsim 10^{-15}$G
with an uncertainty of a few orders based on blazar observations~\cite{Neronov:1900zz, Tavecchio:2010mk, Essey:2010nd, Finke:2013bua}.
However, the origin of these magnetic fields is a long-standing open
problem in astrophysics and cosmology.
Candidate mechanisms for the generation of these magnetic fields
fall into two main classes~\cite{Giovannini:2003yn,Kandus:2010nw,Durrer:2013pga}. One class includes astrophysical processes
which exploit plasma motions to produce magnetic fields in
comparatively local regions, while it may be difficult for these mechanisms to work in void regions.
The other consists of cosmological processes which generate magnetic fields
spread over the universe in the very early universe.
The magnetic field produced by the latter class of models
can directly dilute into the void magnetic field
and also seed the galactic and cluster magnetic fields
if the strength is sufficient.
The scenario of the primordial magnetic field
naturally explains the hierarchy between
$B_{\rm gal}$ and $B_{\rm void}$ because the adiabatic compression
and the dynamo mechanism may amplify it in galaxies and clusters
while the magnetic field is expected to dilute
due to the cosmic expansion in void regions.
However, the primordial magnetic field is constrained by CMB observations as
$B_{p} \lesssim 10^{-9}$G (see, e.g. \cite{Yamazaki:2012pg}, and references therein).
Here, we focus on the magnetic fields with a primordial origin,
especially an inflationary origin.
Inflation is a widely accepted paradigm of
the very early universe and it can produce cosmological perturbations
from quantum fluctuations. Since the initial density perturbation
which seeds the large scale structures observed in the present universe
originates from inflation, it is an attractive idea that the observed
magnetic fields are also attributed to inflation.
Although many models in which magnetic fields are generated
during inflation have been proposed so far~\cite{Turner:1987bw,Ratra:1991bn,Garretson:1992vt,Davis:2000zp,Finelli:2000sh,Ferreira:2013sqa},
these ``inflationary magnetogenesis'' models suffer from several problems.
It is known that the strong coupling problem, the back reaction problem
and the curvature perturbation problem spoil inflationary magnetogenesis models~\cite{Demozzi:2009fu,Fujita:2012rb,Suyama:2012wh,Barnaby:2012xt,Fujita:2013qxa}.
In the following section, we will explain these problems.
Inflationary magnetogenesis targets the magnetic field that is stronger than
the blazar lower bound $B_{\rm void}\gtrsim 10^{-15}$G
because the void magnetic field is not amplified after reheating and reflects
the primordial amplitude.
\footnote{See, however, ref.~\cite{Campanelli:2007tc,Saveliev:2013uva}
in which the inverse cascade is discussed.}
Although $10^{-15}$G is very weak in comparison with, for example, earth's magnetism
($\sim 0.2$--$0.7$\,G), a remarkably strong magnetic field
at the end of inflation is required. This is because the magnetic
field dilutes in proportion to $a^{-2}$ in the expanding universe.
Furthermore, on super-horizon scales during inflation, the electric field
is stronger than the magnetic field, which has to grow rapidly against
the $a^{-2}$ dilution. As we will see below, this extremely strong
electric field makes magnetogenesis difficult.
\section{Sketch of inflationary magnetogenesis}
The basics of inflationary magnetogenesis can be understood
by reviewing a model. Let us sketch the kinetic coupling model
(or $IFF$ model)~\cite{Ratra:1991bn}
as an example.
The model action is
\begin{equation}
S = \int {\rm d}\eta {\rm d}^3 x \sqrt{-g}
\left[
-\frac{1}{4} I^2(\eta) F_{\mu\nu}F^{\mu\nu}
\right],
\quad
\left(
F_{\mu\nu}\equiv \partial_\mu A_\nu - \partial_\nu A_\mu
\right),
\label{Model Action}
\end{equation}
where $\eta$ is the conformal time, $A_\mu$ is a gauge field and $I(\eta)$ is originally considered
as a function of a scalar field but is treated as a function of time. To solve the EoM of $A_\mu$ analytically,
$I(\eta)$ is usually assumed as
\begin{equation}
I(\eta)=
\left\{
\begin{array}{cc}
\left(\eta/\eta_\ff \right)^n & (\eta<\eta_\ff)\\
1 & (\eta\ge\eta_\ff)
\end{array}\,, \right.
\label{I}
\end{equation}
where ``f'' denotes the end of inflation and $n$ is a constant.
Without a time variation of $I(\eta)$, no fluctuation of $A_\mu$
would be generated, because of the conformal invariance~\cite{Turner:1987bw}.
The EoM of $A_i$ is given by
\begin{equation}
\left[\partial_\eta^2 +k^2-\frac{n(n-1)}{\eta^2}\right](I\mcA_k)=0
\,,
\label{EoM of A}
\end{equation}
where $\mcA_k$ is the mode function of $A_i$ expanded in the polarization
vectors.
If $n<0$, $I(\eta)\ll 1$ during inflation and loop effects due to
the coupling to charged fermions cannot be ignored.
Then a reliable calculation is hardly possible.
This is known as the strong coupling problem~\cite{Demozzi:2009fu}.
Thus we choose $n>0$ and obtain the solution on super-horizon scales:
\begin{equation}
| I\mcA_k(\eta) | =
\frac{\Gamma(n-1/2)}{\sqrt{2\pi k}}\left( \frac{-k\eta}{2}\right)^{1-n}
,
\quad \left(-k\eta \ll 1,\ n>\frac{1}{2} \right).
\label{Sol of A}
\end{equation}
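As a quick numerical cross-check of eq.~\eqref{Sol of A} (our own sketch, with illustrative parameter values), one can integrate eq.~\eqref{EoM of A} from Bunch--Davies initial conditions deep inside the horizon and compare the late-time amplitude of $I\mcA_k$ with the super-horizon formula:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gamma

k, n = 1.0, 3.0                       # illustrative mode and power

def rhs(eta, y):
    # y = (Re u, Im u, Re u', Im u') with u = I * A_k
    u, du = y[0] + 1j * y[1], y[2] + 1j * y[3]
    ddu = -(k**2 - n * (n - 1) / eta**2) * u
    return [du.real, du.imag, ddu.real, ddu.imag]

eta0, eta1 = -2e2, -1e-2              # deep inside -> far outside horizon
u0 = np.exp(-1j * k * eta0) / np.sqrt(2 * k)   # Bunch-Davies vacuum
du0 = -1j * k * u0
sol = solve_ivp(rhs, (eta0, eta1),
                [u0.real, u0.imag, du0.real, du0.imag],
                rtol=1e-10, atol=1e-12)

u_end = sol.y[0, -1] + 1j * sol.y[1, -1]
analytic = gamma(n - 0.5) / np.sqrt(2 * np.pi * k) \
           * (-k * eta1 / 2) ** (1 - n)
print(abs(u_end) / analytic)          # -> 1 on super-horizon scales
\end{verbatim}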
In the expanding universe, the power spectra of electromagnetic fields are given by
\begin{equation}
\mcP_E (\eta,k) \equiv \frac{k^3 |\partial_\eta \mcA_k|^2}{\pi^2 a^4},\qquad
\mcP_B (\eta,k) \equiv \frac{k^5 |\mcA_k|^2}{\pi^2 a^4}.
\label{P of EB}
\end{equation}
It should be noted that the magnetic field is diluted in proportion to $a^{-2}$.
Substituting eq.~\eqref{Sol of A} into eq.~\eqref{P of EB}, one finds
that the resultant magnetic field at present is
\begin{equation}
\mcP^{1/2}_B (\eta_{\rm now},k)
=
\frac{\Gamma(n-\frac{1}{2})}{2^{\frac{3}{2}-n}\pi^{\frac{3}{2}}}
(a_\ff H)^{n-1} k^{3-n}
\sim
10^{23n-80} {\rm G}\times
\left( \frac{\rho_\inf^{1/4}}{10^{16}\GeV} \right)^{n-1}
\left( \frac{k}{1{\rm Mpc}^{-1}} \right)^{3-n},
\label{current B}
\end{equation}
in the case of instant reheating. Here, $\rho_\inf$ is the inflation energy scale.
Therefore $n \gtrsim 3$ is necessary to produce a magnetic field
of sufficient strength, $B(\eta_{\rm now}) > 10^{-15}$G,
at the $1$ Mpc scale.
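The quoted threshold follows from simple arithmetic (our own check): at $k=1\,{\rm Mpc}^{-1}$ and $\rho_\inf^{1/4}=10^{16}\,\GeV$, eq.~\eqref{current B} reaches $10^{-15}\,$G when $23n-80=-15$, i.e. $n=65/23\simeq 2.8$.
\begin{verbatim}
def B_now(n, rho4=1e16, k=1.0):
    """Order-of-magnitude estimate of eq. (current B), in Gauss."""
    return 10.0 ** (23 * n - 80) * (rho4 / 1e16) ** (n - 1) * k ** (3 - n)

for n in (2.0, 2.5, 3.0):
    print(n, B_now(n))        # 1e-34, ~3e-23, 1e-11 Gauss
print((80.0 - 15.0) / 23.0)   # n at which B reaches 1e-15 G: ~2.83
\end{verbatim}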
\section{Problems}
\subsection{Back reaction problem}
In the previous section, we assumed that inflation continues
regardless of the generation of the electromagnetic fields.
However, if the energy density of the electromagnetic fields
overtakes that of inflaton, the inflation dynamics and/or
the generation of electromagnetic fields are altered~\cite{Demozzi:2009fu}.
This is the so-called back reaction problem.
Before evaluating the electromagnetic energy density,
it is important to realize that,
on super-horizon scales, the electric field is much stronger than
the magnetic field in the kinetic coupling model:
\begin{equation}
\left|
\frac{\mcP_E}{\mcP_B}
\right|
=
\left|
\frac{\partial_\eta \mcA_k}{k \mcA_k}
\right|^2
\simeq
\frac{1}{|k\eta|^2}
=e^{2N_k}\gg1,
\qquad ({\rm super\ horizon})
\end{equation}
where $N_k \equiv -\ln|k\eta|$ is the e-folding number of the $k$ mode.
Thus we can focus on the electric field.
Its energy density at the end of inflation is given by
\begin{equation}
\rho_\em(\eta_\ff) \simeq \frac{1}{2} \int^{a_\ff H}_{k_\mi}\frac{\dd k}{k}
\mcP_E (\eta_\ff, k)
\simeq
H^4
\left[\frac{e^{2(n-2)N_\tot}-1}{2n-4}\right]
\,,
\end{equation}
where $N_\tot$ is {\it not} the total e-folds of inflation {\it but}
the total e-folds of magnetogenesis (i.e. the time duration where
$I(\eta)\propto \eta^n$) and
$k_\mi$ is the mode which exits the horizon at the onset of magnetogenesis.
$H$ is the Hubble parameter during inflation.
Note that we drop a numerical factor for simplicity.
One can see that for $n>2$, $\rho_\em$ becomes huge due to the IR contribution.
Demozzi et al.~\cite{Demozzi:2009fu} show that by requiring
$\rho_\em < \rho_\inf$, the magnetic field produced in the kinetic coupling
model with the power-law kinetic function, $I(\eta)\propto \eta^n$, cannot exceed $10^{-32}$G
today.
\footnote{They assume $N_\tot=75$ and $H=10^{-6}\Mpl$.}
It is far smaller than the observational lower bound.
Although their result is striking, it does not mean
inflationary magnetogenesis is generally excluded
because their analysis is based on the specific model.
In ref.~\cite{Fujita:2012rb}, nevertheless,
the authors conduct a model independent argument
in which the strong coupling problem and the back reaction problem
are taken into account. They derive an universal upper bound on
the inflation energy scale:
\begin{equation}
\rho^{1/4}_{{\rm inf} }
<
6 \times 10^{11} \GeV
\left( \frac{B(\eta_{\rm now})}{10^{-15}G} \right)^{-2}.
\label{eq:main result}
\end{equation}
Therefore the back reaction problem implies that
low energy inflation is favored.
In addition, this constraint can be translated into the bound
on the tensor-to-scalar ratio:
$r<10^{-19}(B/10^{-15}{\rm G})^{-8}$.
Thus, if the background gravitational waves are detected in the future,
inflationary magnetogenesis is excluded.
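This translation can be reproduced in a few lines (our own arithmetic, using the standard relations $\rho_\inf = 3\Mpl^2 H^2$ and $\mcP_T = 2H^2/(\pi^2\Mpl^2)$ together with $\mcP_\zeta \simeq 2.2\times 10^{-9}$): saturating eq.~\eqref{eq:main result} at $B(\eta_{\rm now})=10^{-15}$G indeed gives $r\sim 10^{-19}$.
\begin{verbatim}
import numpy as np

Mpl = 2.4e18                  # reduced Planck mass [GeV]
P_zeta = 2.2e-9               # observed curvature power spectrum

rho4 = 6e11                   # saturated bound on rho_inf^(1/4) [GeV]
H2 = rho4**4 / (3 * Mpl**2)   # from rho_inf = 3 Mpl^2 H^2
P_T = 2 * H2 / (np.pi**2 * Mpl**2)   # tensor power spectrum
print(P_T / P_zeta)           # r ~ 1e-19, as quoted
\end{verbatim}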
\subsection{Curvature perturbation problem}
The curvature perturbation problem refers to the fact that inflationary magnetogenesis
can be constrained by the curvature perturbation
induced by the generated electromagnetic fields~\cite{Barnaby:2012xt}.
The electromagnetic fields produced during inflation behave
as isocurvature perturbations and source
the adiabatic perturbation~\cite{Suyama:2012wh,Fujita:2013qxa}:
\begin{equation}
\zeta^\em(t,\bm{x}) = -\frac{2H}{\epsilon \rho_{\rm inf}} \int^t_{t_0} dt' \rho_\em (t',\bm{x}),
\label{zeta}
\end{equation}
where $t$ is the cosmic time and $\epsilon$ is the slow-roll parameter.
Regarding the curvature perturbation,
not only the amplitude of the power spectrum $\mcP_\zeta$ but also
the non-linearity parameters, $f_{\rm NL}$ and $\tau_{\rm NL}$,
are observationally measured.
Then those parameters induced by the electromagnetic fields
should not exceed the observed values:
\begin{equation}
\mcP^\obs_\zeta > \mcP^\em_\zeta,
\qquad
f_{\rm NL}^\obs > f_{\rm NL}^\em,
\qquad
\tau_{\rm NL}^\obs > \tau_{\rm NL}^\em.
\quad
\end{equation}
Considering $\mcP^\obs_\zeta > \mcP^\em_\zeta$ in a model-independent way,
the authors in ref.~\cite{Suyama:2012wh} put the lower bound on
the inflation energy scale:
\begin{equation}
\rho_\inf^{1/4} > 3\times 10^{13}\GeV
\left( \frac{B(\eta_{\rm now})}{10^{-15}G} \right)^{1/2}.
\end{equation}
Apparently, combined with eq.~\eqref{eq:main result},
this constraint eliminates inflationary magnetogenesis models in general.
Nonetheless, it should be noted that in ref.~\cite{Suyama:2012wh} the authors assume
that inflation is single slow-roll, that the correlation length of
the void magnetic field is $1$ Mpc at present, and that the amplitude of $\mcP_\zeta$ at the minimal scale of inflation is the same as that at the CMB scale,
$\mcP_\zeta(k_{\rm CMB})=\mcP_\zeta(k_{\rm max})$.
On the other hand, without these assumptions, $\mcP^\em_\zeta, f_{\rm NL}^\em$ and $\tau_{\rm NL}^\em$ are explicitly calculated
and compared with the Planck result~\cite{Ade:2013uln}
within the framework of the kinetic coupling model in ref.~\cite{Fujita:2013qxa}.
Interestingly, it is found that the constraint from $\tau_{\rm NL}$ is the
most stringent in the single slow-roll inflation case,
while the bound from the back reaction problem becomes the tightest when the single slow-roll assumption is relaxed (see fig.~\ref{fig:B}).
Unfortunately, in both cases, the allowed strength of the magnetic field
is far smaller than the observational lower limit.
\begin{figure}[htb]
\hspace{-2mm}
\includegraphics[width=75mm]{B-inf-100.eps}
\hspace{3mm}
\includegraphics[width=75mm]{B-curv-100.eps}
\caption
{The upper limit on the current strength of the magnetic field
for $n\ge2$
in the kinetic coupling model.
It is assumed that
the inflaton generates all of the observed curvature perturbation
in the left panel, while that assumption is relaxed and instead
$\epsilon = 10^{-2}$ is adopted in
the right panel.
The total duration of the electromagnetic field generation is set as
$N_\tot = 100$.
The shaded regions represent the restrictions from
gravitational waves (blue) and big bang nucleosynthesis (red), respectively.
}
\label{fig:B}
\end{figure}
\section{Summary and discussion}
Since the magnetic fields in the universe
are observed and their properties are constrained
($B_{\rm gal} \sim10^{-6} {\rm G}, B_{\rm void}\gtrsim 10^{-15}$G),
theoretical attempts to explain their origin are strongly motivated.
However, in spite of long-standing efforts and
numerous papers, a successful quantitative model of
magnetogenesis has not yet been established.
In this paper, we explore inflationary magnetogenesis
where the electromagnetic fields are generated during inflation.
The idea that inflation produces the primordial magnetic field
as well as the density perturbation looks natural.
However, as we discussed above, inflationary magnetogenesis
suffers from several problems
and no promising model is known so far.
To determine whether inflationary magnetogenesis is possible
or not, two approaches can be considered. One is building a viable model
and explicitly proving its existence. The other is
conducting a model-independent argument which generally
constrains the possibility or gives guidance for model building.
As the general discussion of the strong coupling and back reaction
problems~\cite{Fujita:2012rb} implies that
low energy inflation is favored for magnetogenesis,
a new general argument will provide novel insight.
For example, it seems that a model-independent argument on the curvature perturbation problem without the above assumptions can be made.
\footnote{See ref.~\cite{TFSY}}
Note that we presume that the void magnetic field is generated purely during inflation and that no additional amplification occurs.
However, there is a chance that the magnetic field is amplified
during reheating era or its dilution due to the cosmic expansion
is partially compensated by the inverse cascade.
Therefore even if {\it pure} inflationary magnetogenesis is excluded,
the inflationary origin of the cosmic magnetic field
combined with post-inflation dynamics may be viable.
\newpage
\Acknowledgements
This work was supported by the World Premier International
Research Center Initiative
(WPI Initiative), MEXT, Japan. T.F. acknowledges the support by Grant-in-Aid
for JSPS Fellows No.248160.
\section{Introduction}\label{sec:introduction}
Radiative energy transport within a medium plays a role in a wealth of astrophysical scenarios. When the photon mean free path is short compared with the typical scale height of the medium, irrespective of the photon wavelength, this problem can be modelled by the diffusion approximation.
By assuming local thermodynamic equilibrium (LTE), the definition of a harmonic mean opacity coefficient -- the so-called Rosseland mean \citep{1925ApJ....61..424R} -- emerges naturally from the evaluation of the integrated radiative flux. Rosseland stated that the derivation of the analytical expression for this ``characteristic function of the medium regarding its power of transmitting radiation \textit{en bloc}'' is not difficult, yet the actual calculation is. This is because one usually has to account in detail for a wide variety of absorbers. Typical applications of Rosseland mean opacities are stellar structure and evolution calculations, which span orders of magnitude in temperature and density and require the inclusion of numerous absorption and scattering processes. As an example, we consider a low mass star in the late phase of its evolution on the Asymptotic Giant Branch (AGB). Data for electron thermal conductivity, a basic energy transport process in the inert C-O core, are available from \citet{2007ApJ...661.1094C}. Tabulated high temperature opacity data (ranging from a few thousand to a few hundred million Kelvin) are provided by the OPAL collaboration \citep{1996ApJ...464..943I} and the Opacity Project \citep[OP, ][]{2005MNRAS.362L...1S}. Both groups provide their data publicly through web interfaces\footnote{OPAL: \texttt{http://www-phys.llnl.gov/Research/OPAL/}}$^,$\footnote{OP: \texttt{http://opacities.osc.edu}} where in general single element abundances can be varied. OPAL also allows one to produce so-called Type II tables: starting from a given element mixture, the abundances of two elements (e.\,g. C and O) are enhanced by adding constant mass fractions in each case. Apart from H$_2$, however, no molecular contributions to the Rosseland mean are taken into account in either the OPAL or OP databases. Molecules become the dominant opacity source at temperatures lower than about $4000$-$5000\,\mathrm{K}$, which occur in the outer envelope and atmosphere of a star on one of the giant branches -- either the Red Giant Branch (RGB) or the AGB --, or in protoplanetary accretion disks. At even lower temperatures (below approximately $1500\,\mathrm{K}$), dust can be responsible for the bulk of opacity.
One of the first papers giving a detailed discussion about low temperature mean opacities (and also summarising earlier efforts on this topic) was \citet{1983ApJ...272..773A}. This work evolved further and resulted in the extensive database of \citet{1994ApJ...437..879A} and the updated version of \citet{2005ApJ...623..585F} -- henceforth AF94 and F05, respectively -- that has become a standard for low temperature opacities in the past few years. These data are based on scaled solar metal compositions (plus some tables for enhanced alpha element abundances). Returning to our AGB star example, the problem is that the chemical composition in the envelope of such an object varies. Products of the ongoing nucleosynthesis in the stellar interior are brought to the surface by a series of mixing events. The entire mechanism is dubbed Third Dredge-up \citep[TDU, for a review see][]{1999ARA&A..37..239B}. The main burning products dredged up are freshly synthesised carbon and elements produced by the slow neutron capture process. Alterations to the element mixture result in significantly different opacity coefficients. The corresponding implications for the stellar evolution calculations were emphasised by \citet{1983ApJ...272..773A}. The authors provide some examples of how the Rosseland mean opacity changes when the number ratio of carbon to oxygen atoms (C/O) varies. The important role of the C/O ratio is due to the fact that of all the abundant molecules, carbon monoxide (CO) has the highest binding energy \citep{1934ApJ....79..317R}. The partial pressure of CO hardly varies with the C/O ratio in a plasma at constant temperature and pressure. In a chemical equilibrium situation, the less abundant species of carbon and oxygen are almost completely locked in the CO molecule. The remaining fraction of the more numerous atoms is free to form other molecules. The absolute number of free atoms (i.\,e. not bound in CO) of either oxygen or carbon primarily determines the magnitude and characteristics of the Rosseland mean opacity since CO contributes on average less than other molecular species (such as H$_2$O, TiO, C$_2$H$_2$, and HCN).
Although the above facts have been in principle known for a long time, they have not been accounted for in stellar evolution models, due to the lack of adequate opacity data. From the tabulated low temperature opacities that can be found in the literature \citep[other than AF94 and F05, e.\,g. ][partly monochromatic and focused on special objects such as brown dwarfs and extrasolar planets]{1993A&A...274..818N,2003A&A...410..611S,2006astro.ph..5666W,2007ApJS..168..140S,2008ApJS..174..504F}, only \citet{2000A&A...358..651H} considered a case in which $\mbox{C/O}=1.8$ in simulating winds of carbon-rich AGB stars. \citet{2007ApJ...666..403D} investigated, by way of example, single element abundance variations and their influence on low and high temperature opacities. Subsequently, the sensitivity of stellar evolution models to these changes was analysed; both effective temperatures and stellar lifetimes were found to depend clearly on opacity changes due to abundance variations.
A dramatic improvement in evaluating molecular opacities with a varying amount of carbon was achieved by \citet{2002A&A...387..507M} who illustrated how a correct treatment of opacity can affect stellar evolution tracks by using analytical AGB star models. The principal outcomes were that observations of carbon stars can be reproduced more accurately than before and the models appear to imply that the carbon star phase is shortened considerably with a consequent reduction in the stellar yields. The opacities were estimated by chemical equilibrium calculations and analytical fits for molecular contributions to the Rosseland mean, which were combined by a simple linear summation. This was the main problem of this approach. The non-linear nature of the Rosseland mean by definition renders the method of adding up opacity contributions a fragile approximation. Moreover, the derived molecular contributions are not unique, since they cannot be determined in an unambiguous way. The Rosseland mean emphasises transparent spectral regions, and the gaps between absorption lines from a specific molecule can be filled by other species depending on the chemistry. Thus, this interplay crucially influences the total opacity. Finally, not all relevant molecular absorbers have been taken into account in the work described above.
From this overview, we conclude that the stellar evolution community requires a complete, homogeneous database containing Rosseland mean opacity coefficients with varied abundances of carbon. The aim of this work is to fill this gap. We also included abundance variations in nitrogen, since in low and intermediate mass stars mechanisms might activate the CN cycle and trigger the reconversion of carbon isotopes into nitrogen. The corresponding mechanism acting in stars of mass lower than $4\,\mathrm{M_{\sun}}$ is dubbed the Cool Bottom Process \citep{1995ApJ...447L..37W}, which is a slow, deep circulation at the bottom of the convective envelope. Its origin is still unknown, although a number of driving mechanisms have been proposed (e.\,g. rotationally induced instabilities, magnetically induced circulation, gravity waves or thermohaline mixing -- we refer to \citealp{2007ApJ...671..802B} for an overview). In more massive AGB stars ($M>4\,\mathrm{M_{\sun}}$), the analogous process is known as Hot Bottom Burning \citep{1973ApJ...185..209I}.
A preliminary version of the opacity data presented in the following was applied successfully by \citet{2007ApJ...667..489C,2008AIPC.1001....3C} and \citet{2008arXiv0805.3242L}. \citet{2007ApJ...667..489C} considered a low metallicity ($Z=1\times10^{-4}$) stellar model with $M=2\,\mathrm{M_{\sun}}$. By using the new opacity coefficients, the model was able to reproduce observational data more accurately than earlier models in terms of physical properties (such as for instance the effective temperature) and the abundance pattern of the heavy elements. Information that is complementary to the data considered here can be found in \citet{2007AIPC.1001...11L}.
The paper is structured as follows. In Sect.~\ref{sec:toolandmethod}, we describe the tools used to generate our opacity tables. Section~\ref{sec:datasources} summarises all data adopted in our calculations, i.\,e. abundances as well as atomic and molecular opacity data. We describe and justify the design of our tables in Sect.~\ref{sec:tabledesign}. We discuss the results in detail in Sect.~\ref{sec:results}, before concluding and providing a perspective on future work in Sect.~\ref{sec:conclusions}.
\section{Tool and method}\label{sec:toolandmethod}
To generate the data presented in this work, we used the COMA code developed by \citet{Aringer2000}. For a description of improvements since the initial version, we refer to a forthcoming paper of Aringer et al. (in preparation). Assuming local thermodynamic equilibrium, the program solves for the ionisation and chemical equilibrium (using the method of \citealp{1973A&A....23..411T}) at a given temperature and density (or pressure) combination for a set of atomic abundances. From the resulting partial pressures for the neutral atoms, ions, and molecules, the continuous and line opacity is calculated at the desired wavelengths using the data listed in Sect.~\ref{sec:datasources}. The main purpose of the COMA code was originally to provide monochromatic absorption coefficients for dynamical model atmospheres of cool giants \citep{2003A&A...399..589H}. However, it has been used in a wide range of applications, for example in the process of calculating low and high resolution spectra \citep[e.\,g. ][respectively]{2002A&A...395..915A,2008arXiv0805.3242L} and line profile variations \citep{2005A&A...437..273N}.
Once the monochromatic opacities $\kappa_\nu(T,\rho)\equiv\kappa(\nu,T,\rho)$ are known ($T$ stands for the temperature and $\rho$ for the density), the calculation of a mean opacity coefficient such as the Rosseland mean is straightforward. One has to perform a weighted integration of $\kappa_\nu$
over the relevant frequency range. For the Rosseland mean $\kappa_\mathrm{R}=\kappa_\mathrm{R}(T,\rho)$, the relation is given by
\begin{equation}
\label{eq:rosselandmean}
\frac{1}{\kappa_\mathrm{R}}=\int _0 ^{\infty} \frac{1}{\kappa _\nu} \frac{\partial B_\nu(T)}{\partial T}d\nu\ \Big/ \int _0 ^{\infty} \frac{\partial B_\nu(T)}{\partial T} d\nu\mbox{,}
\end{equation}
where the weighting function is the partial derivative of the Planck function with respect to the temperature. The main subject of this paper is to study not only the dependence of $\kappa_\mathrm{R}$ on the thermodynamic quantities $T$ and $\rho$, but also its dependence on the chemical composition.
In practice, the integration over $\nu$ in Eq.~\ref{eq:rosselandmean} must be performed on a predefined discrete frequency (or wavelength) grid. We use a grid that is based on the one described by \citet{1992A&A...261..263J}, but we extend the wavelength range to have boundaries at $200{,}000\,\mathrm{cm}^{-1}$ ($500\,\AA=0.05\,\mu\mathrm{m}$) and $200\,\mathrm{cm}^{-1}$ ($50\,\mu\mathrm{m}$). This results in a total number of 5645 wavelength points at which we calculate the opacity. \citet{1998A&A...337..477H} proposed to use a number of opacity sampling points for the accurate modelling of atmospheres of late-type stars that is roughly four times larger than the value used here. The same is true for the number of wavelength points in F05, who use more than $24{,}000$ points. However, the error introduced when using a lower resolution is generally small compared with other uncertainty sources (see Sect.~\ref{sec:uncertainties}), and a smaller number of grid points has the advantage of a reduced computing time. We note that the opacity sampling technique remains -- regardless of the precise number of points -- a statistical method. To arrive at a realistic and complete description of the spectral energy distribution, one would need a far higher resolution ($R\simeq200{,}000$). In any case, the grid is sufficiently dense for a rectangle rule to be adequate for carrying out the wavelength integration. This can be justified by comparing the numerically obtained value of the normalisation factor on the right-hand side of Eq.~\ref{eq:rosselandmean} with its analytical value. The formal integration limits in the definition of the Rosseland mean have to be replaced by cut-off wavelengths. These are determined by the weighting function $\partial B_\nu(T)/\partial T$ that constrains the relevant wavelength range. Like the Planck function itself, the maximum of its derivative shifts to longer wavelengths with decreasing temperature and vice versa. At the upper wavelength limit adopted here ($50\,\mu\mathrm{m}$) and for the lowest temperature at which we generate data (about $1600\,\mathrm{K}$), the weighting function has decreased to less than $1/100$ of its maximum value. Accordingly, at the high temperature edge the weighting function at the same wavelength has dropped to almost $1/10{,}000$ of its maximum value. Since we do not include grain opacity in our calculations, which would require going to even longer wavelengths (as in F05), we definitely cover the relevant spectral range for the calculation of $\kappa_\mathrm{R}$ within the adopted parameter range. The decline in the weighting function towards shorter wavelengths (or higher frequencies) is far steeper, so that the above argument is also fulfilled at the low-wavelength cut-off.
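To make the numerical procedure explicit, the following short Python sketch evaluates Eq.~\ref{eq:rosselandmean} with a rectangle rule on a logarithmic frequency grid. The input opacity (a grey continuum plus a single Gaussian line) and all function names are illustrative assumptions on our part and are not taken from the COMA code:
\begin{verbatim}
import numpy as np

H = 6.62607015e-27   # Planck constant [erg s]
C = 2.99792458e10    # speed of light [cm/s]
KB = 1.380649e-16    # Boltzmann constant [erg/K]

def dB_dT(nu, T):
    """Temperature derivative of the Planck function B_nu(T), CGS."""
    x = H * nu / (KB * T)
    # dB/dT = (2 h^2 nu^4 / c^2 k T^2) * e^x / (e^x - 1)^2
    return (2.0 * H**2 * nu**4 / (C**2 * KB * T**2)
            * np.exp(x) / np.expm1(x)**2)

def rosseland_mean(nu, kappa_nu, T):
    """Rectangle-rule estimate of kappa_R from monochromatic data."""
    w = dB_dT(nu, T)
    dnu = np.gradient(nu)     # local grid spacing as quadrature weights
    return np.sum(w * dnu) / np.sum(w / kappa_nu * dnu)

# 5645 points between 50 micron and 500 Angstrom, as in the text.
nu = np.logspace(np.log10(C / 50e-4), np.log10(C / 500e-8), 5645)
# Illustrative opacity: flat continuum plus one strong Gaussian line.
kappa = 1e-4 + 10.0 * np.exp(-0.5 * ((nu - 3e14) / 3e12) ** 2)
print(rosseland_mean(nu, kappa, T=3000.0))
\end{verbatim}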
\section{Data sources}\label{sec:datasources}
In the following, we briefly summarise the sources of the data entering the opacity calculations. The basic ingredient in this procedure is the relative amount of elements contained in the mixture for which we would like to know the opacity. In this work, we chose to use the set of recommended values for solar element abundances compiled by \citet{2003ApJ...591.1220L}, which imply a solar C/O ratio of $0.501$. The values for the elements C, N, and O are close to the values given by \citet{2007SSRv..130..105G}; using their abundances results in (C/O)$_\odot=0.537$. The authors derive these values from a 3D hydrodynamic model of the solar atmosphere, a technique that caused a downward revision of the solar CNO abundances in recent years. However, these values are still disputed. For instance, \citet{2008ApJ...682L..61C} used spectro-polarimetric methods to argue for an oxygen abundance that is higher than claimed by \citet{2007SSRv..130..105G} and closer to previously accepted values (e.\,g. \citealp{1998SSRv...85..161G} with (C/O)$_\odot=0.490$) that agree with values derived from helioseismology (e.\,g. \citealp{2008PhR...457..217B}).
However, the data presented depend to first order only on the relative amounts of carbon and oxygen, which do not differ significantly among the various abundance sets mentioned above. Therefore, and because C/O is a variable quantity in our tables, the current data can serve as an approximation for applications that use abundances other than \citet{2003ApJ...591.1220L}, until we generate further data.
\subsection{Continuous opacity}
\begin{table}
\caption{Continuous opacity sources}
\centering
\label{table:continuum}
\begin{tabular}{l l}
\hline\hline
Ion and process & Reference\\
\hline
\ion{H}{i} b-f and f-f & \citet{1961ApJS....6..167K}\\
H$^-$ b-f & \citet{1966MNRAS.132..255D}\\
H$^-$ f-f & \citet{1966MNRAS.132..267D}\\
H+H (Quasihydrogen) & \citet{1968ApJ...153..987D}\\
H$_2^+$ f-f & \citet{1965ApJS....9..321M}\\
H$_2^-$ f-f & \citet{1964ApJ...139..192S}\\
\ion{C}{i}, \ion{Mg}{i}, \ion{Al}{i}, \ion{Si}{i}, \ion{He}{i} f-f & \citet{1970MmRAS..73....1P}\\
He$^-$ f-f & \citet{1965ApJ...141..811S}\\
continuous $e^-$ scattering & \citet{1978stat.book.....M}\\
Rayleigh scattering from \ion{H}{i} & Dalgarno, quoted by \citet{1964SAOSR.167...17G}\\
Rayleigh scattering from H$_2$ & \citet{1962ApJ...136..690D}\\
\hline
\end{tabular}
\end{table}
The routines for the calculation of the continuum opacity in COMA are based on an earlier version of the MARCS code \citep{1992A&A...261..263J}. The latest MARCS release was described by \citet{2008A&A...486..951G}. We adopt the format of their Table 1 to ease comparison with their updated set of continuous opacity sources. The data that we use (and list in Table~\ref{table:continuum}) are not as extensive. However, the most relevant sources are included, and in the low temperature region of the presented tables in particular, the molecular contribution to $\kappa_\mathrm{R}$ dominates over the continuum by several orders of magnitude.
\subsection{Atomic lines}
Atomic line data are taken from the VALD database \citep{2000BaltA...9..590K}\footnote{VALD: \texttt{http://ams.astro.univie.ac.at/\~{}vald/}}, where we use updated version 2 data (from January 2008). For the atomic lines, we adopt full Voigt profiles derived from the damping constants listed in VALD. The only exception is hydrogen, for which we use an interpolation in tabulated line profiles from \citet{1995yCat.6082....0S}. The atomic partition functions are taken from the work of \citet{1981ApJS...45..621I}, although the data for boron have been modified \citep{Gorfer2005}. The number of atomic lines included is $16{,}059{,}201$. Split into the respective ionisation stages, the numbers are: \ion{}{I} $4{,}028{,}995$, \ion{}{II} $5{,}347{,}990$, and \ion{}{III} $6{,}682{,}216$. For the ionisation energies, we refer to \citet{Stift2000} since we use the same reference data quoted there in the current work.
\subsection{Molecular data}
\begin{table}
\begin{minipage}[t]{\columnwidth}
\caption{Molecular line data}
\label{table:molecules}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{l c r c}
\hline\hline
Molecule & Thermodynamic & Number of lines & Line \\
& data
\footnote{
(1) \citealt{1984ApJS...56..193S},
(2) \citealt{1985A&A...148...93R},
(3) \citealt{VidlerTennyson2000},
(4) \citealt{2002JChPh.11711239B},
(5) \citealt{1988A&AS...74..145I},
(6) \citealt{2003ApJ...594..651D}
}
& & data
\footnote{
(7) \citealt{1994ApJS...91..483G},
(8) \citealt{Jorgensen1997},
(9) \citealt{1974A&A....31..265Q},
(10) \citealt{LanghoffBauschlicher1993},
(11) \citealt{1998cpmg.conf..321S},
(12) \citealt{2006MNRAS.368.1087B},
(13) \citealt{2006MNRAS.367..400H},
(14) \citealt{Schwenke1997},
(15) \citealt{1998A&A...330.1109A},
(16) \citealt{1995SPIE.2471..105R},
(17) \citealt{2005JQSRT..96..139R},
(18) \citealt{Tipping2007},
(19) \citealt{Plez2007},
(20) \citealt{Littleton1987},
(21) \citealt{BauschlicherRam2001},
(22) \citealt{1989ApJ...343..554J}
}
\\
\hline
CO & 1 & 131\,859 & 7 \\
CH & 2 & 229\,134 & 8 \\
C$_2$ & 1 & 360\,887 & 9 \\
SiO & 2 & 85\,788 & 10 \\
CN & 1 & 2\,533\,040 & 8 \\
TiO & 1 & 22\,724\,670 & 11 \\
H$_2$O & 3 & 27\,693\,367 & 12 \\
HCN/HNC & 4 & 33\,454\,021 & 13 \\
OH & 1 & 36\,601 & 14 \\
VO & 1 & 3\,171\,552 & 15 \\
CO$_2$ & 2 & 1\,032\,266 & 16 \\
SO$_2$ & 5 & 29\,559 & 17 \\
HF & 1 & 462 & 18 \\
HCl & 1 & 447 & 17 \\
ZrO & 1 & 16\,391\,195 & 19 \\
YO & 1 & 975 & 20 \\
FeH & 6 & 116\,300 & 6 \\
CrH & 1 & 13\,824 & 21 \\
\hline
C$_2$H$_2$ & - & opacity sampling & 22 \\
C$_3$ & - & opacity sampling & 22 \\
\hline
\end{tabular}
\end{minipage}
\end{table}
In the calculation of low temperature opacities, molecules play a critical role. We list the data that we used to calculate the tables containing Rosseland mean opacity coefficients in Table~\ref{table:molecules}. References to the thermodynamic data (i.\,e. the partition function) and the line data for each molecule (displayed in the first column) are as indexed in columns two and four, respectively. The number of lines entering the calculation is given in the third column (the original lists contained more lines). For some molecules, there is more than one line list available. The line lists that we use were selected in the course of other projects (cited at the beginning of Sect.~\ref{sec:toolandmethod}) and have proven to deliver viable results. In the case of the OH line list, the measured data from the HITRAN database contains more lines than that from \citet{Schwenke1997}. We performed some test calculations after supplementing the database with OH data from HITRAN. The change in our results was, however, hardly perceptible. As long as the overall opacity distribution is reproduced reasonably well, the line positions do not have to be precisely correct when calculating $\kappa_\mathrm{R}$ (as opposed to applications relying on high-resolution spectral synthesis).
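To illustrate the Doppler-profile assumption, the sketch below samples a single fictitious CO-like line; the line position, molecular mass and frequency grid are invented for this example and merely show how thermal and microturbulent broadening combine:
\begin{verbatim}
import numpy as np

C = 2.99792458e10     # speed of light [cm/s]
KB = 1.380649e-16     # Boltzmann constant [erg/K]
AMU = 1.66053907e-24  # atomic mass unit [g]

def doppler_profile(nu, nu0, T, mass_amu, xi_kms=2.5):
    """Normalised Doppler profile; xi is the microturbulent velocity."""
    width = np.sqrt(2.0 * KB * T / (mass_amu * AMU)
                    + (xi_kms * 1e5) ** 2)
    dnu_D = nu0 * width / C
    return np.exp(-((nu - nu0) / dnu_D) ** 2) / (dnu_D * np.sqrt(np.pi))

# Fictitious CO-like line at 2.3 micron, T = 3000 K.
nu0 = C / 2.3e-4
nu = np.linspace(nu0 - 5e9, nu0 + 5e9, 201)
phi = doppler_profile(nu, nu0, T=3000.0, mass_amu=28.0)
print((phi * np.gradient(nu)).sum())  # ~1: the profile is normalised
\end{verbatim}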
\begin{table*}
\caption{Metallicities and enhancement factors}
\label{table:metallicities}
\centering
\begin{tabular}{l | c c c c c c c c c | c c c}
\hline\hline
\multicolumn{1}{l}{Metallicity (Z)} & \multicolumn{9}{c}{\element[][12]{C} enhancement factors} & \multicolumn{3}{c}{\element[][14]{N} enhancement factors}\\
\hline
0.04& 1.0 & 1.2 & 1.5 & 1.8 & 2.0 & 2.2 & 3.0 & & & 1.0 & 1.2 & 1.5 \\
0.02& 1.0 & & 1.5 & 1.8 & 2.0 & 2.2 & 3.0 & 5.0 & & 1.0 & 1.5 & 2.0 \\
0.01& 1.0 & & 1.5 & 1.8 & 2.0 & 2.2 & 5.0 & 10.0 & & 1.0 & 1.5 & 3.0 \\
0.008& 1.0 & & & 1.8 & 2.0 & 2.2 & 5.0 & 10.0 & 20.0 & 1.0 & 2.0 & 4.0 \\
0.006& 1.0 & & & 1.8 & 2.0 & 2.2 & 5.0 & 10.0 & 30.0 & 1.0 & 2.5 & 5.0 \\
0.005& 1.0 & & & 1.8 & 2.0 & 2.2 & 5.0 & 10.0 & 30.0 & 1.0 & 2.5 & 5.0 \\
0.004& 1.0 & & & 1.8 & 2.0 & 2.2 & 5.0 & 10.0 & 30.0 & 1.0 & 2.5 & 5.0 \\
0.003& 1.0 & & & 1.8 & 2.0 & 2.2 & 6.0 & 20.0 & 70.0 & 1.0 & 3.0 & 8.0 \\
0.002& 1.0 & & & 1.8 & 2.0 & 2.2 & 6.0 & 20.0 & 70.0 & 1.0 & 3.0 & 8.0 \\
0.001& 1.0 & & & 1.8 & 2.0 & 2.2 & 8.0 & 35.0 & 150.0 & 1.0 & 4.0 & 16.0 \\
0.0003& 1.0 & & & 1.8 & 2.0 & 2.2 & 12.0 & 75.0 & 500.0 & 1.0 & 7.0 & 50.0 \\
0.0001& 1.0 & & & 1.8 & 2.0 & 2.2 & 18.0 & 150.0 & 1500.0 & 1.0 & 12.0 & 150.0 \\
0.00003& 1.0 & & & 1.8 & 2.0 & 2.2 & 27.0 & 350.0 & 5000.0 & 1.0 & 22.0 & 500.0 \\
0.00001& 1.0 & & & 1.8 & 2.0 & 2.2 & 40.0 & 750.0 & 15000.0 & 1.0 & 40.0 & 1500.0 \\
\hline
\end{tabular}
\end{table*}
For the molecules C$_2$ and CN, we introduced some modifications to the line data. \citet{Jorgensen1997} suggested a scaling of the $gf$ values for C$_2$ in a certain wavelength region (details are given in \citealp{2001A&A...371.1065L}). We followed this suggestion but also investigated the effect on $\kappa_\mathrm{R}$ of not applying this scaling (see Sect.~\ref{sec:uncertainties}). The modifications to the CN line list are not crucial to the calculation of the Rosseland mean. We corrected the line positions of approximately $18{,}000$ lines using measurements listed in the catalogue of \citet{2005cns..book.....D}. The data for the molecules C$_3$ and C$_2$H$_2$ from \citet{Jorgensen1997} are only available in the form of an opacity sampling. For the generation of this data, a microturbulent velocity of $2.5\,\mathrm{km\,s^{-1}}$ was adopted. As for the calculation of the molecular line opacity, we assume Doppler profiles in each case, since there is little information about the damping constants of molecular transitions. Additionally, for the species dominating the overall opacity and in regions close to bandheads, even the wings of the strongest lines will contribute less to the opacity than the Doppler cores of the numerous overlapping neighbour lines.
The chemical equilibrium constants used in the calculation of the molecular partial pressures are those from \citet{1973A&A....23..411T} with updates described in \citet{1996A&A...315..194H}. In the case of C$_3$, we utilised data published by \citet{1981ApJS...45..621I}. All molecules (a total number of 314 species) enter into the equation of state.
\section{Table design}\label{sec:tabledesign}
We tabulate the logarithm of the Rosseland mean opacity $\log \kappa_\mathrm{R}\,\mathrm{[cm^2\,g^{-1}]}$ as a function of $\log T \mathrm{[K]}$, the logarithm of the gas temperature, and $\log R\,\mathrm{[g\,cm^{-3}\,K^{-3}\,10^{18}]}$, where $R\equiv\rho/T_6^3$. $\rho$ and $T_6$ represent the density in units of $\mathrm{g\,cm^{-3}}$ and the temperature in millions of Kelvin, respectively. The ranges covered are $3.2 \leq \log T\,\mathrm{[K]} \leq 4.05$ with a step size of $0.05$, and $-7.0 \leq \log R \leq 1.0$ with a step size of $0.5$. The low temperature cut-off was set to be the temperature at which grains may become the major opacity source (cf. e.\,g. F05). However, typical AGB stellar evolution models do not attain such low temperatures. We would like to emphasise that dust formation is usually not an equilibrium process \citep{1988A&A...206..153G}, and thus an a priori tabulation of grain opacities can be affected by large uncertainties. At the highest value of $\log T$, the contribution of molecules to the mean opacity vanishes and a smooth transition to high temperature opacity data is possible (see Sect. \ref{sec:comparison}). Data are available for 14 different values of $Z$ (cf. Table~\ref{table:metallicities}), which denotes the total mass fraction of all elements heavier than helium. The metallicity spans a range from $Z=1\times10^{-5}$ via the metallicity regime of the Magellanic Clouds \citep{2000glg..book.....V,2000PASP..112..529V}, where the grid is a bit denser in terms of $Z$, and the solar value up to a super-solar metallicity ($Z=0.04$). We calculated tables for three different mass fractions of hydrogen ($X\in\left\{0.5,0.7,0.8\right\}$). Thus, the data cover the presumable primary field of application, namely the outer layers of an AGB star, where such values of $X$ occur. Initially, we calculated a master table for each metallicity, in which the metal composition was linearly scaled from the abundances given by \citet{2003ApJ...591.1220L} to arrive at the respective $Z$ value. From these abundances, we then enhanced the mass fractions of \element[][12]{C} and \element[][14]{N} in steps that depended on the initial metallicity (see Table~\ref{table:metallicities}). Since this produced an increase in the overall metallicity $Z$, we followed the OPAL approach and reduced the mass fraction of \element[][4]{He} to fulfil $X+Y+Z=1$.
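The bookkeeping described above is simple enough to state explicitly. The following sketch (our own notation, not code distributed with the tables) converts a $(T,\rho)$ pair to the table variable $\log R$ and performs the helium rebalancing after a carbon enhancement; the numbers in the example calls are illustrative:
\begin{verbatim}
import math

def log_R(T_kelvin, rho_cgs):
    """Table variable log10(R) with R = rho / (T / 10^6 K)^3."""
    return math.log10(rho_cgs / (T_kelvin * 1e-6) ** 3)

def enhance_carbon(X, Y, Z, X_C, factor):
    """Scale the 12C mass fraction and reduce 4He so that X+Y+Z = 1.

    Returns the new (X, Y, Z, X_C)."""
    dX_C = X_C * (factor - 1.0)
    return X, Y - dX_C, Z + dX_C, X_C * factor

# Illustrative values: should fall inside the tabulated range of log R.
print(log_R(3000.0, 1e-8))                       # about -0.43
print(enhance_carbon(0.70, 0.28, 0.02, 0.0035, 2.0))
\end{verbatim}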
The number of enhancement factors is constrained by a trade-off between numerical accuracy and computational costs in the generation and application of the tables. Due to the special role of carbon in conjunction with oxygen (see Introduction), the number of enhancement factors is higher than for nitrogen.\footnote{Here we deviate from the original design used in \citet{2007ApJ...667..489C} and \citet{2007AIPC.1001...11L}, where we adopted 5 enhancement factors for both carbon and nitrogen.} We calculated opacities for 7 different mass fractions of \element[][12]{C}. The starting point was the mass fraction that results from scaling all element abundances to the metallicity under consideration. All other carbon mass fractions resulted from multiplying this mass fraction by factors chosen as follows. For each metallicity, we use the factors 1.8, 2.0, and 2.2. The C/O ratio emerging from the adopted scaled solar abundances is about 0.5. Thus, multiplying the initial $X(\mathrm{\element[][12]{C}})$ by 2.0, we arrive at $\mbox{C/O}\simeq1$, where the molecular opacity in general reaches a minimum at low temperatures. As one can see for example in Fig.~\ref{fig:kappa-c-logT-logRm1p5}, the molecular absorption increases sharply on both sides of this minimum, while, towards much higher and lower C/O ratios, there is some kind of saturation. To resolve this sharp turnaround, we included the factors 1.8 and 2.2 (corresponding to $\mbox{C/O}\simeq0.9$ and $1.1$, respectively). In stellar evolution models of low metallicity AGB stars this is probably of minor importance because the transition to a carbon star is usually rapid and occurs within a few dredge-up episodes \citep[e.\,g. ][]{2007A&A...469..239M}. The respective highest enhancement factor is related to the initial metallicity. From the work of \citet{2003PASA...20..314A} and references therein and the first application of our data by \citet{2007ApJ...667..489C,2008AIPC.1001....3C}, we derived information about the final carbon abundances reached in AGB stars. These determined the maximum enhancement factors in our tables for the metallicities under consideration (for instance $Z=1\times10^{-2},1\times10^{-3},1\times10^{-4}$). At all other metallicities, we derived the highest factors using a roughly linear relation between $\log Z$ and $\log f_\mathrm{C,max}$ (where we denote the enhancement factor as $f$). The remaining two factors for carbon are distributed almost equidistantly on a logarithmic scale between $f=2$ and $f_\mathrm{C,max}$ at the lower metallicities. For high $Z$ with low final enhancements, we shifted factors from the carbon-rich regime to the region in which $\mbox{C/O}<1$. For \element[][14]{N}, we introduced two additional factors beyond the initial abundance. The expected final overabundance of nitrogen is much lower than for carbon, and we set $f_\mathrm{C,max}/f_\mathrm{N,max}=10$ for the lowest values of $Z$, decreasing this value for increasing $Z$. The intermediate enhancement factor for nitrogen was set to approximately bisect the logarithmic interval between the two other factors. An overview of all enhancement factors at each metallicity is given in Table~\ref{table:metallicities}.
We have not yet considered varied alpha element abundances, although one expects an increased value of [O/Fe] at low metallicities (e.\,g. \citealp{2000A&A...364L..19H} and references therein). The reason is the sharp increase in the number of tables if one retains the previously outlined data structure. In its current version, the database contains $3\times7\times3=63$ tables per metallicity. Varying the abundance of oxygen also influences the C/O ratio, which is the decisive quantity for the molecular opacity at low temperatures. This in turn requires alterations to the enhancement factors of carbon, since one wishes to retain at least one point where $\mbox{C/O}=1$, even for an enhanced oxygen abundance. Establishing a scheme with a minimal number of enhancement factors for three elements (or element groups) is therefore not straightforward if one is attempting to retain as much information as possible with respect to the role of the C/O ratio. In place of enhancement factors, one could add constant amounts of one element (group) in terms of a mass fraction, as is done in the OPAL Type II tables. Only the application of the data in its current form will provide us with information about the feasibility of our approach and whether the data should be arranged in a different way. Future work will be dedicated to these questions.
\section{Results and discussion}\label{sec:results}
\begin{figure*}
\centering
\resizebox{\textwidth}{!}{\includegraphics{0576fg01.eps}}
\caption{Contents of the opacity database illustrated by a few showcases. Top left panel: Rosseland mean opacities at constant $\log R$ for $Z=0.02$ as a function of $\log T$ for different values of $X$. The qualitative behaviour is fairly independent of $X$ and $Z$ and varies smoothly with $\log R$ at fixed abundances of carbon and nitrogen (the latter is not enhanced in any of the panels shown in this figure). Bottom left panel: The molecular opacity decreases when lowering the metallicity, but the structure with a bump at low temperatures due to the molecular contribution to the opacity is conserved. Top right panel: For the case where carbon is enhanced, the molecules also produce high opacities at low $\log T$, although the shape of the curve differs noticeably from the standard case. Bottom right panel: The reason for the different structure in the opacities is that different molecules contribute to the opacity in the carbon-rich case (white boxes), in contrast to the oxygen-rich case (dark-grey boxes). The CO molecule contributes in both cases as well as CN, although at different orders of magnitude in each case. The extension of the boxes provides an indication of the temperature range where the sources contribute but does not contain information about the order of magnitude of the contribution. The total contribution of molecules and atoms is assessed by leaving out these opacity sources in the calculations. All curves have been smoothed using cubic splines. See text for details.}
\label{fig:coma-x-z-showcase-4up}
\end{figure*}
The calculation of Rosseland mean opacities for scaled solar metal mixtures has been discussed extensively in many papers (see Introduction for citations). Since we find good agreement with data from other groups (see Sect.~\ref{sec:comparison}), we only briefly restate the main points of this procedure. In Fig.~\ref{fig:coma-x-z-showcase-4up}, we provide an overview of the contents of the database using examples. We refer to individual panels in the following paragraphs. Generally speaking, molecules cause the mean opacity to vary dramatically as a function of temperature in a similar way for each hydrogen mass fraction $X$ and metallicity $Z$ (left panels of Fig.~\ref{fig:coma-x-z-showcase-4up}) that we include in our database. Beyond a temperature corresponding to $\log T=3.6$ to $3.7$ (depending on $Z$ and $\log R$), the contribution of molecules to $\kappa_\mathrm{R}$ vanishes and only continuous sources and atomic lines block the radiation field.
AF94 provided an in-depth discussion about which type of opacity is dominant in different regions of the parameter space. They presented a detailed treatment of the main molecular opacity sources, which were in this case water (H$_2$O) -- accounting for the large bump at low temperatures -- and titanium oxide (TiO). They also provided information about the monochromatic absorption coefficients of these molecules. Beside H$_2$O and TiO, there are a number of other molecules that deliver non-negligible contributions to the Rosseland opacity in the oxygen-rich case \citep[cf. ][]{2007AIPC.1001...11L}. At higher temperatures, CN and CO contribute to $\kappa_\mathrm{R}$, and VO, OH, and SiO (ordered by decreasing importance) should also be taken into account. Calculations based on these 7 molecules result in opacity coefficients that are accurate to about 10 per cent compared with the full dataset when the metal mixture is oxygen-rich. The further inclusion of CrH and YO reduces this error to below 3 per cent on average. The temperature ranges in which different molecules contribute to $\kappa_\mathrm{R}$ indicated in the bottom right panel of Fig.~\ref{fig:coma-x-z-showcase-4up} are estimated by omitting the respective molecules in the calculation of $\kappa_\mathrm{R}$ and by checking where the change in this quantity exceeds 5 per cent with respect to the complete dataset. However, the limits derived from this criterion vary with the value of $\log R$ under consideration. We use $\log R=-1.5$, which is typical of the envelopes of AGB stars (Sergio Cristallo, private communication).
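The omission test just described can be written down schematically. In the sketch below, the function \texttt{rosseland\_mean} is a toy stand-in for a full opacity code such as COMA -- its temperature windows and amplitudes are invented purely for illustration -- and only the surrounding logic mirrors the procedure described above:
\begin{verbatim}
def rosseland_mean(T, log_R=-1.5, exclude=frozenset()):
    """Toy stand-in for a full opacity code; numbers are invented."""
    windows = {"H2O": (1600.0, 2800.0, 2.0),  # (T_low, T_high, amp)
               "TiO": (2200.0, 4000.0, 1.0)}
    kappa = 1e-3                              # continuum floor
    for mol, (lo, hi, amp) in windows.items():
        if mol not in exclude and lo <= T <= hi:
            kappa += amp
    return kappa

def contributes(molecule, T, threshold=0.05):
    """True where omitting the molecule changes kappa_R by > 5%."""
    full = rosseland_mean(T)
    reduced = rosseland_mean(T, exclude={molecule})
    return abs(full - reduced) / full > threshold

temps = [10 ** (3.2 + 0.05 * i) for i in range(18)]  # table grid
print([round(T) for T in temps if contributes("H2O", T)])
\end{verbatim}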
\begin{figure*}
\centering
\resizebox{\textwidth}{!}{\includegraphics{0576fg02.eps}}
\caption{The Rosseland opacity as a function of temperature for a varying carbon content in the metal mixture. We show curves for two metallicities, $Z=0.02$ and the lowest metallicity in this database $Z=0.00001$, at a value of $\log R=-3.0$. The full line represents the solar scaled metal mixture with $\mbox{C/O}\simeq0.5$. An increase in the carbon mass fraction first causes a drop in $\kappa_\mathrm{R}$ at low temperatures as C/O approaches 1 ($X(\mathrm{\element[][12]{C}})\times2.0$, dotted line), because more oxygen atoms get bound in CO and fewer other molecular opacity carriers can be formed. When C/O rises beyond 1 this trend is reversed and the mean opacity increases again due to the formation of carbon-bearing molecules. At higher temperatures the opacity grows monotonically with the carbon content due to CO, CN and atomic carbon. See also Fig.~\ref{fig:kappa-c-logT-logRm1p5}. All curves have been smoothed using cubic splines.
}
\label{fig:coma-c-enhancment-Zhilo-logRm3p0}
\end{figure*}
\begin{figure*}[!ht]
\centering
\resizebox{\textwidth}{!}{\includegraphics{0576fg03.eps}}
\caption{Effects of an increased nitrogen abundance on the Rosseland mean opacity at various $\log R$ values relative to the case without any nitrogen enhancement. The filled and empty symbols refer to intermediate and maximum enhancement factors of \element[][14]{N}, respectively. For $Z=0.02$ (left panels), these are $1.5$ and $2$, whereas for $Z=0.00001$ (right panels), we have $40$ and $1500$. The top panels show the oxygen-rich case (default carbon abundance). Here the increase in opacity is due to the CN molecule. The bottom panels refer to maximally enhanced \element[][12]{C} (i.\,e. the carbon-rich case), where HCN also contributes to the mean opacity at lower temperatures. Note the different scales on the y-axis.
}
\label{fig:coma-nitrogen-relative-4}
\end{figure*}
An increase in the carbon content of the metal mixture while the amount of oxygen remains at its original value (i.\,e. an increase in the C/O ratio) has distinct effects on the Rosseland mean opacity. We refer to the Introduction and the previous section for a description of the respective mechanism. In the transition from the oxygen-rich regime (Fig.~\ref{fig:coma-x-z-showcase-4up}, top left) to the carbon-rich regime (Fig.~\ref{fig:coma-x-z-showcase-4up}, top right), one can distinguish between two cases. At lower temperatures (below $\log T=3.4$; we refer in the following to the case shown in Fig.~\ref{fig:coma-c-enhancment-Zhilo-logRm3p0}, i.\,e. $\log R=-3.0$), the opacity first decreases, due to the above-described property of CO, which causes the following mechanism. As C/O increases from its initial value of about $0.5$ and approaches 1, more oxygen becomes bound in CO and fewer oxygen atoms are free to form molecules with a large overall absorption such as H$_2$O. Close to $\mbox{C/O}=1$ (not necessarily at exactly equal amounts of C and O), the opacity reaches a minimum as the partial pressures of oxygen-bearing molecules drop substantially, while the abundances of carbon-bearing molecules only begin to rise to significant levels. As the amount of carbon continues to increase, thus raising C/O beyond 1, the opacity increases due to the formation of polyatomic carbon-bearing molecules such as C$_2$H$_2$ or HCN. These polyatomic molecules are obviously most relevant at lower temperatures. In addition, C$_3$ and C$_2$ produce a bump in the opacity at high carbon enrichments.
The situation at higher temperatures (up to $\log T=3.7$) is, however, different. Besides CO, the contribution of which remains almost constant, only CN and C$_2$ are relevant opacity sources (cf. Fig.~\ref{fig:coma-x-z-showcase-4up}, bottom right panel), while other molecules that are important to the Rosseland opacity are dissociated at these temperatures. The partial pressures of these molecules rise monotonically with the carbon abundance and thus the opacity at high $\log T$ increases in the same manner. At intermediate temperatures (around $\log T=3.4$), where many different opacity sources contribute, the aforementioned mechanisms compete and $\kappa_\mathrm{R}$ becomes a more complex function of C/O.
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{\includegraphics{0576fg04.eps}}
\caption{Comparison of COMA (filled circles) and F05 (empty triangles) values at a metallicity of $Z=0.02$. The top panel shows absolute values, whereas in the bottom panel the differences between F05 and COMA values are indicated on a relative scale. Here and in the following figures, $\kappa_\mathrm{COMA}$ refers to data contained in our database, while the respective comparison values are always labelled $\kappa_\mathrm{R}$. The overall agreement down to $\log T=3.5$ is gratifying. The growing discrepancies towards lower temperatures are most probably due to the deviating set of molecular data in the respective calculations. The steep increase in $\kappa_\mathrm{R}$ in the F05 data below $\log T=3.25$ is due to dust grains that are not accounted for in our data. Below $\log T=3.5$, F05 use a smaller temperature spacing than we do. The regions between the grid points can be reasonably well reconstructed by a cubic spline interpolation in $\log T$ (dotted lines in top panel). In the bottom panel, we compare values at the COMA grid points only.
}
\label{fig:coma-f05-full-sc}
\end{figure}
Briefly summarised, changes in the chemistry alter the Rosseland mean opacity, and variations in the C/O ratio have the most pronounced effect. An oxygen-rich composition ($\mbox{C/O}<1$) results in a different group of molecules accounting for the opacity than in the carbon-rich case ($\mbox{C/O}>1$). Furthermore, carbon-bearing molecules have different spectral appearances from the oxygen-bearing ones and thus cause $\kappa_\mathrm{R}$ to show a different functional behaviour. \citet{2000A&A...358..651H} provided examples of monochromatic absorption coefficients for carbon-bearing molecules. The only two molecules that contribute significantly for either chemistry are CO and CN.
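The qualitative role of CO can be captured by a simple counting argument, sketched below in the idealised limit of complete CO association (an assumption made purely for illustration): essentially all of the less abundant of C and O is locked in CO, and only the excess of the more abundant species remains free for other molecules.
\begin{verbatim}
def free_atoms(n_C, n_O):
    """Idealised limit: CO locks up min(n_C, n_O) of each species."""
    n_CO = min(n_C, n_O)
    return n_C - n_CO, n_O - n_CO   # free carbon, free oxygen

n_O = 1.0
for c_over_o in (0.5, 0.9, 1.0, 1.1, 2.0):
    free_C, free_O = free_atoms(c_over_o * n_O, n_O)
    print(f"C/O = {c_over_o:3.1f}: free C = {free_C:.1f}, "
          f"free O = {free_O:.1f}")
\end{verbatim}
The printed numbers show that the free-atom budget, and with it the pool of potential molecular absorbers, passes through a minimum at $\mbox{C/O}=1$, in line with the opacity minimum discussed above.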
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{\includegraphics{0576fg05.eps}}
\caption{Comparison of COMA values with OP and OPAL for $Z=0.02$. The top panel shows absolute values, whereas in the bottom panel the differences in the overlapping region are again indicated on a relative scale. The high temperature data do not include the relevant molecular opacity sources. Therefore, the discrepancies increase the lower the temperature becomes. OP data range down to $\log T=3.5$, whereas the OPAL tables end at $\log T=3.75$. The transition between low and high temperature data can be made close to the upper temperature border of the COMA data, where agreement between the values shown here is fairly good. See also Figs.~\ref{fig:coma-op-opal-relative-logR} and \ref{fig:coma-op-relative-logR-Z1E-5}.
}
\label{fig:coma-op-opal-full}
\end{figure}
As emphasised earlier, the presented data are primarily relevant to the envelopes of evolved low mass stars. Chemical composition variations due to the TDU acting in AGB stars concern mostly the enrichment in carbon. However, the dredged-up carbon in the envelope can be fed back into the CN cycle, which partly converts \element[][12]{C} to \element[][14]{N} (see Introduction). By varying the abundance of nitrogen (more precisely \element[][14]{N}), we add a further dimension to our data tables. These alterations have more direct consequences for the behaviour of the Rosseland mean in the sense that an increase in nitrogen always causes an increase in the opacity (in contrast to an increase in carbon, which can lower $\kappa_\mathrm{R}$ for a certain parameter range; see above). Nitrogen is present in only two of the molecules considered here, i.\,e. CN and HCN, and can directly influence the Rosseland opacity only via these compounds. Other molecules containing nitrogen indirectly affect the opacity by altering the molecular partial pressures in chemical equilibrium. In the oxygen-rich case, the effect of an increase in \element[][14]{N} is relatively moderate, because the partial pressure of CN is in general low. In the carbon-rich case, the abundance of CN is however much higher, HCN is also present in significant amounts, and, as a consequence, the opacity can increase considerably. These properties of $\kappa_\mathrm{R}$ are illustrated in Fig.~\ref{fig:coma-nitrogen-relative-4} for the two enhancement factors of \element[][14]{N}. In the top panels, we present results for two different metallicities without any carbon enhancement, where the increase in opacity is due to CN only. In the respective bottom panels, the carbon abundance has been enhanced to its maximum value and HCN causes a considerable rise in $\kappa_\mathrm{R}$ at lower temperatures. In the high temperature range, some minor contributions from atomic nitrogen to the opacity are evident.
The results discussed above are condensed into the form of 14 separate files, one for each metallicity.\footnote{The tables are only available in electronic form at the CDS via anonymous ftp to {\tt cdsarc.u-strasbg.fr (130.79.128.5)} or via {\tt http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/}. Alternatively, the files will be provided on request.} Each file consists of a header indicating the abundances used, the initial metallicity, the initial mass fractions for \element[][12]{C}, \element[][14]{N}, and the alpha elements, and a look-up table for the true data block. The data block itself consists of 63 rectangular data arrays, where $\log \kappa_\mathrm{R}$ is tabulated as a function of $\log T$ and $\log R$. The tables are ordered such that the mass fraction $X(\mbox{\element[][12]{C}})$ varies the most rapidly, followed by the hydrogen mass fraction and $X(\mbox{\element[][14]{N}})$. For future compatibility, a data field for the alpha element enhancement factor was introduced into the look-up table.
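Given the ordering just described, the position of a single $(\log T, \log R)$ block inside one metallicity file can be computed directly. The sketch below assumes exactly this ordering; the function is our own illustration and is not part of the distributed files:
\begin{verbatim}
def subtable_index(i_N, i_X, i_C, n_C=7, n_X=3):
    """Index of a (log T, log R) block inside one metallicity file.

    i_C runs over the 7 carbon enhancement factors (fastest),
    i_X over the 3 hydrogen mass fractions,
    i_N over the 3 nitrogen enhancement factors (slowest).
    """
    return (i_N * n_X + i_X) * n_C + i_C

# 3 x 3 x 7 = 63 subtables per file; indices run from 0 to 62.
print(subtable_index(0, 0, 0), subtable_index(2, 2, 6))
\end{verbatim}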
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{\includegraphics{0576fg06.eps}}
\caption{Comparison of COMA values with OP and OPAL. Our opacity coefficients are systematically higher for a metallicity of $Z=0.02$. This constant offset vanishes for lower metallicities (see Fig.~\ref{fig:coma-op-relative-logR-Z1E-5}), which implies that atomic opacities (lines and continuum) of the metals are the cause. For high values of $\log R$ where the pressure broadening of the spectral lines becomes important, we find increasingly divergent results. The plots appear to imply that the low and high temperature data should be merged around $\log T=4.0$ (solid line).
}
\label{fig:coma-op-opal-relative-logR}
\end{figure}
\subsection{Comparison with other data}\label{sec:comparison}
We compare our tables based on a scaled solar metal mixture with data from F05 based on the same abundances as in this work. A direct comparison with AF94 is not possible because there are, of course, no tables based on the \citet{2003ApJ...591.1220L} abundances. We refer to F05 for a comparison of AF94 and F05. In the figures, we always depict data from our database as $\kappa_\mathrm{COMA}$, while the respective comparison values are labelled $\kappa_\mathrm{R}$. Despite the numerous differences between the COMA and F05 approaches, we find reasonable agreement between both sets of data. For the case shown in Fig.~\ref{fig:coma-f05-full-sc} ($Z=0.02$), the difference between the COMA and F05 values does not exceed 15 per cent for temperatures as low as $\log T=3.5$. The discrepancies at lower temperatures are higher (up to 35 per cent) and can in fact be ascribed to several factors. First and foremost, the use of different sets of molecular data in the calculations (cf. our Table~\ref{table:molecules} and their Tables~3 and 4) produces a deviation in the resulting mean opacity coefficients. Second, we adopt a microturbulent velocity of $2.5\,\mathrm{km\,s^{-1}}$, while F05 use $2.0\,\mathrm{km\,s^{-1}}$. The choices for this parameter are (within a certain range that is found for atmospheres of low mass giants) somewhat arbitrary and cause perceptible changes in $\kappa_\mathrm{R}$, especially at lower temperatures. Third, F05 use a denser wavelength grid for the evaluation of $\kappa_\mathrm{R}$. We discuss these issues in more detail in Sect.~\ref{sec:uncertainties}. From a comparison of Fig.~\ref{fig:coma-f05-full-sc} with Figs.~\ref{fig:coma-h2o-c2-relative-logT} (showing a comparable order of magnitude of the deviations) and \ref{fig:coma-xi-f05res-relative-logT}, it is, however, clear that the numerous differences in the physical input data are responsible for the major part of the discrepancies. The resolution and the microturbulent velocity do not influence $\kappa_\mathrm{R}$ quite as much. The large deviations in the data at the lowest temperatures are due to grain opacity, which we do not take into account in our calculations, but dust is usually not formed under equilibrium conditions (as assumed by F05, see Introduction). Moreover, F05 adopted a finer grid in $\log T$ below 3.5. For the oxygen-rich case, a cubic spline interpolation (see Fig.~\ref{fig:coma-f05-full-sc}, dotted lines) on the coarser grid we adopted (and also used by AF94) provides reasonably accurate values.
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{\includegraphics{0576fg07.eps}}
\caption{Comparison of COMA with OP at a low metallicity ($Z=0.00001$) with scaled solar abundances of carbon and nitrogen, and with C and N enhanced to the maximum values (which results in a higher metallicity). The data agree quite well; the discrepancies remain within 5 per cent around $10{,}000\mathrm{\,K}$ (solid line). Again, at high $\log R$ the deviations are higher, evidently due to a different treatment of the pressure broadening of spectral lines. From a comparison between OP and OPAL (see text) we draw the same conclusions for OPAL data.}
\label{fig:coma-op-relative-logR-Z1E-5}
\end{figure}
\begin{figure*}
\centering
\resizebox{\textwidth}{!}{\includegraphics{0576fg08.eps}}
\caption{Evolution of the Rosseland mean as a function of the C/O ratio for different metallicities ($Z=0.02$ and $Z=0.00001$) and a representative value of $\log R=-1.5$. To obtain opacities at enhanced carbon abundances in-between grid points we recommend a linear interpolation scheme in $\log \kappa_\mathrm{R}$ and $\log X(\mbox{\element[][12]{C}})$ as indicated by the lines. The largest relative errors have to be expected at low metallicities and temperatures between $X(\mbox{\element[][12]{C}})\times2.2$ and the successive enhancement factor. For a description of the mechanism that makes $\kappa_\mathrm{R}$ drop to a minimum value at $\mbox{C/O}=1$ and then rise again we refer to Sect.~\ref{sec:results}.
}
\label{fig:kappa-c-logT-logRm1p5}
\end{figure*}
The comparison with high temperature data such as that from OPAL or OP is limited to the temperature regions where the tables overlap. Moreover, it is this region where a transition between low and high temperature opacities has to be made for applications covering a wide temperature range. OP data stretch down to $\log T=3.5$, whereas the OPAL tables end at $\log T=3.75$. The comparison for a standard scaled solar composition in Fig.~\ref{fig:coma-op-opal-full} shows a growing deviation for lower temperatures because neither OPAL nor OP includes molecular absorbers (except H$_2$). This plot indicates that in the region between $\log T=3.8$ and the high temperature end of the COMA data, a smooth transition to high temperature data is possible. Again, from the magnitude of the differences, we conclude that these are due to different physical input data rather than other parameters (see Sect.~\ref{sec:uncertainties}). To assess which temperature region lends itself to such a crossover, we plot the logarithmic difference between our opacities and those of OPAL and OP, respectively, as a function of $\log R$ in Fig.~\ref{fig:coma-op-opal-relative-logR}. The deviations are in general moderate and the closest agreement is found around temperatures of $10{,}000\,\mbox{K}$. A few comments about the differences are required here. For a metallicity of $Z=0.02$, the values from COMA are systematically higher than both OPAL and OP. This discrepancy almost vanishes at the lowest metallicity considered ($Z=0.00001$, Fig.~\ref{fig:coma-op-relative-logR-Z1E-5}). It must therefore be related to the metallicity, which leaves either the metal lines or the differing continuum opacity as the cause. As for the first possibility, OPAL and OP use a restricted set of the most abundant elements, while we use all data that are contained in VALD. Whether this can cause the differences is a question to be answered by further analysis. Moreover, neither the OPAL nor the OP data include line broadening due to microturbulence, although this alone cannot account for the difference in $\kappa_\mathrm{R}$.
Beside the constant offset, we recognise a growing discrepancy with increasing $\log R$, which is presumably linked to the pressure broadening of the atomic lines. As mentioned earlier, the damping constants are those from VALD. Additionally, the adopted tabulated hydrogen line profiles could play a role, since hydrogen lines contribute significantly to the Rosseland mean at high temperatures (cf. e.\,g. AF94). A deeper investigation of this issue is still to be completed, although the differences remain overall satisfactorily small.
For a mixture enriched in carbon and nitrogen, we also performed tests. We switched to the lowest metallicity in our database where the effects should be most pronounced due to the high enhancement factors. Here we restrict ourselves to a comparison with OP data since these can be produced quite easily using the software contained in the OPCD (version 3.3). Based on the same mass fractions that we used in COMA, we generated opacity tables with the aforementioned program set and compared them with ours. The results are shown in Fig.~\ref{fig:coma-op-relative-logR-Z1E-5}. We refer to the discussion in the above paragraph concerning the differences but we emphasise again that the quantity $\log \kappa_\mathrm{R}/\kappa_\mathrm{COMA}$ measuring the differences in the logarithmic opacity in dex remains within a reasonable range, i.\,e. $\Delta<0.05\,\mbox{dex}$. In these plots, it is evident that the transition from the COMA coefficients to OP data should occur around a temperature of $\log T=4.0$. From the work of F05 who provided more details about the relation between the low and high temperature opacities, and \citet{2004MNRAS.354..457S} who completed an in-depth comparison between OP and OPAL, we conclude that the above statement also holds for OPAL data.
\subsection{Interpolation}\label{sec:interpolation}
The interpolation routines form the interface between the opacity tables and the various application codes in which they are used. Concerning the temperature and $\log R$, it has become something of a standard to interpolate in these dimensions using cubic splines \citep[e.\,g. ][]{1993MNRAS.265L..25S,1996MNRAS.279...95S} or quadratic fits (in the Fortran subroutines from Arnold I. Boothroyd\footnote{\texttt{http://www.cita.utoronto.ca/\~{}boothroy/kappa.html}}). The results are quite satisfactory, but we want to emphasise that problems can occur in the carbon-rich case. At low values of $\log R$ and a high carbon enhancement, a cubic spline interpolation in the $\log T$ dimension might overshoot and produce spurious results. We strongly advise always checking separately the quality of the fit for each table (or relevant parts thereof) used.
The problem now is how to account for the element enhancements. As outlined in Sect.~\ref{sec:results}, the special role of the C/O ratio divides the parameter range into two regimes at $\mbox{C/O}=1$ at low temperatures, and $\kappa_\mathrm{R}$ is not a smooth function of the carbon content at this point (Fig.~\ref{fig:kappa-c-logT-logRm1p5}). To resolve the sharp turnaround in the Rosseland mean, we require some grid points close to $\mbox{C/O}=1$. Overall, the number of enhancement factors is too low and the grid too coarse to apply any other interpolation scheme than a linear one. As shown in Fig.~\ref{fig:kappa-c-logT-logRm1p5} for the solar case and at low metallicity, linear interpolation in $\log \kappa_\mathrm{R}$ and $\log X(\mbox{\element[][12]{C}})$ delivers quite gratifying results beyond a certain temperature, where molecules cease to play an important role in determining the value of the Rosseland opacity. The lower the temperature becomes, the sharper the turnaround in the functional behaviour of $\kappa_\mathrm{R}$. Due to the sudden drop in opacity when the amounts of carbon and oxygen atoms become approximately equal, linear interpolation misses a certain fraction of the information. At high metallicities the situation is not so serious, although the case shown in the right panel of Fig.~\ref{fig:kappa-c-logT-logRm1p5} ($Z=0.00001$) reveals this shortcoming clearly. Once the element mixture is carbon-rich ($\log \mbox{C/O}>0$), the opacity first increases sharply but flattens at high carbon enhancement values. Due to the construction of our tables (see Sect.~\ref{sec:tabledesign}), the spacing of the enhancement factors in the carbon-rich regime increases at lower metallicities. Between the $X(\mbox{\element[][12]{C}})\times2.2$ and the successive enhancement factor, an additional grid point would be favourable. For the case of nitrogen enhancement, linear interpolation in both $\log \kappa_\mathrm{R}$ and $\log X(\mbox{\element[][14]{N}})$ is a good approximation, because the relation between the nitrogen content and the opacity has a simpler behaviour.
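In code, the recommended scheme amounts to linear interpolation of $\log \kappa_\mathrm{R}$ in $\log X(\mbox{\element[][12]{C}})$ between adjacent grid points, as in the following sketch; the grid values in the example are invented and merely mimic the dip near $\mbox{C/O}=1$:
\begin{verbatim}
import bisect, math

def interp_log_kappa(x_C, grid_x_C, grid_log_kappa):
    """Linear interpolation of log kappa_R in log X(12C)."""
    lx = [math.log10(x) for x in grid_x_C]
    t = math.log10(x_C)
    i = max(1, min(bisect.bisect_left(lx, t), len(lx) - 1))
    f = (t - lx[i - 1]) / (lx[i] - lx[i - 1])
    return (1.0 - f) * grid_log_kappa[i - 1] + f * grid_log_kappa[i]

# Invented values at fixed (T, R): kappa_R dips near C/O = 1
# (i.e. near the enhancement factor 2.0).
x_grid = [1.0, 1.8, 2.0, 2.2, 5.0]        # carbon enhancement factors
k_grid = [-2.0, -2.8, -3.4, -2.6, -1.9]   # log kappa_R on the grid
print(interp_log_kappa(1.9, x_grid, k_grid))
\end{verbatim}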
\subsection{Sources of uncertainties}\label{sec:uncertainties}
The definition of the Rosseland mean opacity in Eq.~\ref{eq:rosselandmean} leaves only some ambiguity about how to evaluate this quantity numerically. However, the considerable uncertainties in published opacity coefficients originate in the data entering the calculations. In the case of low temperature opacities, in particular, a large amount of physical data of differing quality must be combined into one quantity. The summary in the following paragraphs is not exhaustive but discusses the accuracy of the data presented here and elsewhere.
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{\includegraphics{0576fg09.eps}}
\caption{
Changes in the Rosseland opacity when using different sets of solar element abundances, for instance \citet[][GS98]{1998SSRv...85..161G} and \citet[][GAS07]{2007SSRv..130..105G}. Data shown here are for the carbon-rich case and $Z=0.02$ at $\log R=-3.0$. The C/O ratios were set to match the values contained in our database using \citet[][L03]{2003ApJ...591.1220L} abundances resulting from $X(\mathrm{\element[][12]{C}})\times2.2,3.0,5.0$. GAS07 is in many respects very similar to L03 (e.\,g. regarding the C, N and O abundances and the share of these elements in $Z$) and thus results in almost identical opacity coefficients. The abundances given by GS98 deviate much more from L03. The differences become manifest in the opacities at high temperatures, and also at low $\log T$ by means of the chemical equilibrium. Other uncertainties discussed in the text have a comparable order of magnitude.
}
\label{fig:coma-abundances-relative-logT}
\end{figure}
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{\includegraphics{0576fg10.eps}}
\caption{Uncertainties in the molecular data that affect the mean opacity. Top panel: Using a different line list for water, which is the main source of opacity at low temperatures in the oxygen-rich case, causes considerable changes in $\kappa_\mathrm{R}$. As a test, we substituted the BT2 water line list with the one from SCAN and plot the relative changes compared to the default setup. Bottom panel: For the database described in this paper, we scale the C$_2$ line strengths in a certain wavelength region (see text). Not applying this correction results in notable differences in the Rosseland mean opacity of carbon-rich mixtures. ($X=0.7$, $Z=0.02$, no and maximum carbon enhancement in the top and bottom panel, respectively.)}
\label{fig:coma-h2o-c2-relative-logT}
\end{figure}
\subsubsection{Adopting different solar element abundances}
Hitherto, it has been emphasised that low temperature Rosseland opacities are to a large extent determined firstly by the total metallicity $Z$ of the element mixture and secondly by the C/O ratio. The individual element abundances play a role as well but do not change the opacity on an order-of-magnitude scale when the aforementioned parameters are kept fixed. Concerning the oxygen-rich case, we refer to a discussion of this topic given by \citet{2007ApJ...666..403D}. For the carbon-rich case, we calculated, as an example, opacity coefficients starting from solar element abundances other than \citet[][L03]{2003ApJ...591.1220L}, namely \citet[][GS98]{1998SSRv...85..161G} and \citet[][GAS07]{2007SSRv..130..105G}. The results of our comparison are shown in Fig.~\ref{fig:coma-abundances-relative-logT} (for $Z=0.02$ at $\log R=-3.0$). The solar abundances given by GAS07 are very similar to those of L03, and it is therefore unsurprising to find virtually identical opacity coefficients for our test case. The situation is different when we consider the GS98 abundances, which provide higher values for C, N, and O than L03 and GAS07. These elements also make up a higher fraction of $Z$ than in the other cases. In turn, when the metals are scaled to obtain $Z=0.02$, metals apart from C, N, and O are present in lower amounts than in the L03 case, and thus contribute a smaller fraction to the opacity at high and intermediate temperatures (beyond $\log T=3.5$). At the lowest temperatures, more carbon-bearing molecules, such as C$_3$ and C$_2$H$_2$, are likely to form and produce a higher value of $\kappa_\mathrm{R}$, partially compensating for the lower atomic opacity contribution at intermediate temperatures. The sensitivity of our results to the adopted starting abundances is thus limited. The size of the differences with respect to the standard case is similar to that of the other uncertainties discussed here. Hence, our data can be used to approximate the Rosseland opacity coefficients for a different set of scaled solar abundances, as long as the CNO abundances do not deviate significantly from the values given in L03.
\subsubsection{Uncertainties in molecular data}
We first consider the molecular line data. For many molecules, more than one line list is available. The Rosseland mean, as a global quantity, is insensitive to the precision of the line positions as long as the overall opacity distribution is reproduced well. However, there are cases where data from different sources result in altered overall opacity coefficients. As an example, we consider the contribution to $\kappa_\mathrm{R}$ of water, the major low temperature opacity source in the oxygen-rich case. In this work we use the recently published BT2 water line list \citep{2006MNRAS.368.1087B}. An alternative would have been the SCAN database line list from \citet{2001A&A...372..249J}. For the solar metallicity case, we calculated a table utilising this list and illustrate the results in the top panel of Fig.~\ref{fig:coma-h2o-c2-relative-logT}. The discrepancy in the resulting values is as high as 30 per cent. Pronounced differences between the two lists lie in the region where the weighting function in the definition of the Rosseland mean has its maximum, and we ascribe the deviating values of $\kappa_\mathrm{R}$ to this fact.
As an example of uncertain line data in the carbon-rich case, we mention the modifications to the C$_2$ line data of \citet{1974A&A....31..265Q}. To reproduce carbon star spectra, \citet{2001A&A...371.1065L} proposed a scaling of the $gf$ values in the infrared region \citep[suggested by ][]{Jorgensen1997} based on a comparison with other line lists. More precisely, they scaled the line strengths by a factor of $0.1$ beyond $1.5\,\mathrm{\mu m}$ and left them unchanged below $1.15\,\mathrm{\mu m}$. In between, they assumed a linear transition. We adopt this method for the calculation of our opacity tables. Had we not applied this modification to the line strengths, $\kappa_\mathrm{R}$ would have increased by roughly 25 per cent at $Z=0.02$ with maximum carbon enhancement (Fig.~\ref{fig:coma-h2o-c2-relative-logT}, bottom panel). The error in these data will have a more significant effect at low metallicities, where one expects a higher enrichment in carbon. From the calculation of mean opacities, we observe a clear need for new and improved C$_2$ line data.
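The wavelength dependence of this scaling is simple enough to state explicitly; the following sketch (our formulation, which assumes the transition to be linear in wavelength) implements it in Python:
\begin{verbatim}
# Sketch of the adopted C2 gf-value scaling: factor 1 below 1.15 micron,
# factor 0.1 beyond 1.5 micron, and a transition in between that we
# assume to be linear in wavelength.
def c2_scale(lam_micron):
    if lam_micron <= 1.15:
        return 1.0
    if lam_micron >= 1.5:
        return 0.1
    return 1.0 + (0.1 - 1.0) * (lam_micron - 1.15) / (1.5 - 1.15)
\end{verbatim}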
Besides the problems with existing data, there are also hitherto unconsidered molecules that are suspected of providing non-negligible contributions to the opacity. The prime example is C$_2$H, which could be an important opacity source in carbon stars, although to date no line data exist for this molecule (we refer to \citealp{1995ASPC...78..347G} for an overview).
Another decisive set of input parameters are the chemical equilibrium constants, usually denoted by $K_p$. Each constant is in fact a temperature-dependent function relating the partial pressure of a molecule to the product of the partial pressures of the molecule's constituents \citep[cf. e.\,g.][]{1973A&A....23..411T}. \citet{2000A&A...358..651H} pointed out that the literature values for equilibrium constants from different sources can differ strongly at low temperatures. The critical point here is that one has to pay attention not only to the main opacity carriers but also to less abundant molecules competing with them for the same atomic species. \citet{2000A&A...358..651H} referred to TiO and TiO$_2$ as examples but also reported other molecules for which order-of-magnitude differences in the partial pressures were found using different sets of $K_p$ data. The data that we use are documented in Sect.~\ref{sec:datasources}.
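To make the role of these constants explicit, consider a diatomic molecule AB. In one common convention (conventions differ between sources, which is part of the problem noted above), the equilibrium constant relates the partial pressures as
\[
K_p^{\rm AB}(T) = \frac{p_{\rm A}\, p_{\rm B}}{p_{\rm AB}}\, ,
\]
so that an order-of-magnitude error in $K_p$ translates directly into an order-of-magnitude error in the partial pressure $p_{\rm AB}$ of the molecule at fixed constituent pressures, and hence in its opacity contribution.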
The above examples underline that accurate molecular line data are not only desirable for high resolution applications but also important for the calculation of mean opacities. In general, all data used in calculating the Rosseland mean, whether line data or accompanying data such as partition functions, equilibrium constants and continuum sources, must always undergo critical evaluation.
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{\includegraphics{0576fg11.eps}}
\caption{Influence of variations in resolution or microturbulence on the mean opacity. Top panel: The calculation of $\kappa_\mathrm{R}$ requires knowledge of the monochromatic opacity at a large number of wavelength points. A high spectral resolution assures convergence but induces high computational costs. We use a lower resolution than F05, but the changes in the opacity coefficients are rather moderate, especially when compared to other sources of uncertainty. Bottom panel: We adopted a microturbulent velocity of $\xi=2.5\mathrm{\,km\,s^{-1}}$ throughout this work. Changing this value to $\xi=2.0\mathrm{\,km\,s^{-1}}$, as in F05, causes $\kappa_\mathrm{R}$ to drop. ($X=0.7$, $Z=0.02$, no carbon enhancement.)}
\label{fig:coma-xi-f05res-relative-logT}
\end{figure}
\subsubsection{Numerics and parameters}
Apart from the imprecision of the physical input data, there are other factors influencing the resulting opacity coefficients. One error source, for instance, is the wavelength grid on which the opacities are calculated and the integration is performed to derive the opacity mean. Compared to F05, we use a considerably lower spectral resolution. To assess the uncertainties due to this difference, we simulated the resolution of F05, recalculated one of our tables, and compared the results to those for the original case. The differences found were relatively small, as shown in Fig.~\ref{fig:coma-xi-f05res-relative-logT} (upper panel). Since the error is small compared to the other effects described above, we consider the use of a lower resolution justifiable, as it considerably reduces the required CPU time.
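The resolution dependence is easy to reproduce schematically. The following Python sketch (ours, with a synthetic monochromatic opacity; constants in cgs units) evaluates the Rosseland mean as a harmonic mean weighted by the temperature derivative of the Planck function and compares two grid resolutions:
\begin{verbatim}
# Sketch: Rosseland mean as a weighted harmonic mean over frequency,
#   1/kappa_R = Int (1/kappa_nu)(dB_nu/dT) dnu / Int (dB_nu/dT) dnu.
# The synthetic kappa_nu mimics a continuum plus narrow lines; real
# opacities carry much denser line structure, hence the grid matters.
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs

def dB_dT(nu, T):
    x = H * nu / (KB * T)
    ex = np.exp(x)
    return (2 * H**2 * nu**4 / (C**2 * KB * T**2)) * ex / (ex - 1.0)**2

def kappa_R(nu, kappa_nu, T):
    w = dB_dT(nu, T)
    return np.trapz(w, nu) / np.trapz(w / kappa_nu, nu)

T = 3000.0
for npts in (2000, 200000):                 # coarse vs fine grid
    nu = np.linspace(1e13, 5e14, npts)
    kappa_nu = 1e-2 + np.abs(np.sin(nu / 3e11))**20
    print(npts, kappa_R(nu, kappa_nu, T))
\end{verbatim}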
On the other hand, additional physical parameters enter the calculation of $\kappa_\mathrm{R}$, such as the microturbulent velocity $\xi$, which influences the width of the line profiles. The spectral lines are broadened according to the adopted value for $\xi$, which is somewhat arbitrary. Throughout this work, we used a value of $\xi=2.5\mathrm{\,km\,s^{-1}}$ for the generation of our data. Results from previous works on spectra of late-type stars \citep[e.\,g.][]{2002A&A...395..915A,2004A&A...422..289G,2008arXiv0805.3242L} have shown that this is a reasonable assumption. In the work of F05, however, $\xi$ was set to equal $2.0\mathrm{\,km\,s^{-1}}$. Both options are well within the range of values found for AGB star atmospheres (e.\,g. \citealp{1990ApJS...72..387S}). In Fig.~\ref{fig:coma-xi-f05res-relative-logT} (lower panel), we show the results of a test using the F05 value. Since the spectral lines possess a smaller equivalent width at a reduced value of $\xi$, the mean opacity is lower than for the COMA default case.
\subsubsection{Application of the data}
Beyond generating tables, there are further possibilities for introducing uncertainties when applying the data. First, there is the technical problem of interpolating the tabulated values. Compared to previously available data, the situation is worse because there are two more dimensions along which to interpolate, namely the varying amounts of carbon and nitrogen. However, on the basis of the above discussion, it is unlikely that more sophisticated interpolation algorithms would produce improved accuracy. This problem can, in principle, be solved by increasing the amount of computer power.
Far more worrying, and the largest error source of all, is the potential misapplication of the data. Strictly speaking, the scope of the tables containing Rosseland mean opacity coefficients is restricted to regions where the diffusion approximation for the radiative transfer is fulfilled. In terms of the optical depth, this means $\tau\gg1$ at all wavelengths. One of the main applications of our data will be the outermost parts of an AGB star evolution model. The outer boundary condition is usually set somewhere in the atmosphere ($\log T\leq3.6$), where by definition $\tau\leq1$. In some situations, the Rosseland mean might still be a good approximation for evaluating the radiative energy transport. In general, however, it is necessary to use a non-grey radiative transfer method because, due to the molecular absorbers, the spectral energy distribution is strongly wavelength-dependent. We refer to the work of \citet{2003A&A...399..589H}, who demonstrated the shortcomings of a grey treatment of the radiative transfer for dynamical model atmospheres. \citet{2007AIPC..948..195H} investigated the effect of non-grey surface boundary conditions on the evolution of low mass stars and reported noticeable changes to RGB evolution tracks. We thus want to emphasise that our mean opacity tables are meant to provide an interim solution until the modelling of non-grey radiative transfer in stellar evolution calculations becomes feasible.
\section{Conclusions}\label{sec:conclusions}
We have presented a grid of low temperature Rosseland mean opacity tables that take into account variations in the single element abundances of carbon and nitrogen. When the carbon content of a metal mixture is gradually enhanced, the molecular contribution to the opacity changes significantly due to the altered chemistry. Even within a given regime (i.\,e. oxygen-rich or carbon-rich), the relative amount of carbon to oxygen has pronounced effects on $\kappa_\mathrm{R}$. More distinctive, however, is the comparison between the oxygen-rich and carbon-rich regimes. Different molecules serve as opacity sources in either case and thus result in a qualitatively and quantitatively different Rosseland mean opacity as a function of temperature and density. Changes in the nitrogen abundance also alter the opacity coefficients via certain nitrogen-bearing molecules.
The tables are designed such that their incorporation into existing codes that have utilised AF94 or F05 data should be straightforward. Our data cover a wide metallicity range, and the overabundances of carbon and nitrogen are adjusted in each case. We are confident that with these data we provide a tool to simulate the final phases in the evolution of low and intermediate mass stars in more detail. Our data include the effects of the ongoing nucleosynthesis and mixing events in AGB stars in terms of the opacity. As shown in previous papers \citep[e.\,g.][]{2007ApJ...667..489C}, the incorporation of our tables into stellar evolution codes alters the physical properties of the stellar models. Once the star becomes carbon-rich, molecules form that are more opaque than those in an oxygen-rich regime. This in turn results in a steeper temperature gradient. A consequence is, for instance, a decrease in the effective temperature of stellar evolution models. The stellar radius increases, and the average mass-loss rate increases and erodes the envelope mass at a faster rate. \citet{2003PASA...20..389S} showed that a change in the envelope mass (as well as a change in the core mass) affects fundamental properties of AGB stars, e.\,g. the strength of the thermal pulses and the total amount of mass dredged up. It will also be interesting to see how different mass-loss prescriptions interact with the newly calculated opacity coefficients, since these issues are physically closely coupled.
In the future, we plan to extend our tables to contain data about the enrichment in alpha elements. We must emphasise, however, that the data provided in the course of the current and future work must be seen as a transitional solution to the treatment of molecular opacity in AGB star envelopes and atmospheres. Due to the band structure of molecular absorption, mean opacities will yield inaccurate results. The past results from static and dynamical model atmosphere calculations demonstrate the importance of a frequency-dependent radiative transfer. Our data promise to bridge the gap until these methods are employed in stellar evolution models.
Finally, we emphasise that for forthcoming extensions of this database, it would be desirable to obtain extensive response from the community. Comments and criticisms that can lead to an improvement in the quality of the data are highly welcome.
\begin{acknowledgements}
MTL and BA acknowledge funding by the Austrian Research Fund FWF (projects {P-18171} and {P-19503}). BA received financial support from the University of Padova (Progetto di Ricerca di Ateneo {CPDA052212}). MTL has been supported by the Austrian Academy of Sciences (DOC programme). We also want to thank Sergio Cristallo who pointed out the need for the data presented in this work and provided us with many useful comments. Christian St\"utz is thanked for delivering an updated and ready-to-use set of VALD data.
\end{acknowledgements}
\section{Introduction}
\label{Introduction.sec}
The Antarctic Plateau has great potential for astronomical
observations as it is extremely cold and dry and has a calm and
tenuous atmosphere---attributes that are particularly favourable for
optical, infrared and {\bf submillimeter} observations. For
photometry, there is the advantage of reduced scintillation as a
result of the decreased high-altitude turbulence above the Antarctic
plateau \citep{ken06}. In addition, there is the {\bf possibility} of
long, continuous observations uninterrupted by the usual diurnal
cycle, giving access to a time-series regime otherwise only available
from space \citep{ken06a, mos07, rau08}.
Astronomers have been interested in the Antarctic plateau for over
thirty years (see \citet{ind05} for a historical account). The first
stellar photometry from Antarctica was conducted at the South Pole by
\citet{tay88} in the late 1980s. More recently, Strassmeier and
colleagues \citep{str08} have conducted continuous time-series
observations from Dome\,C\ with sIRAIT, a small f/12 optical
telescope. The ASTEP South project has reported 1592 hours of
observations of the South Celestial Pole from Dome\,C\ during 2008 using a
10 cm refractor and $4096\times4096$ pixel CCD camera \citep{cro09}.
At optical wavelengths most of the attention is now focussed on the Concordia
station at Dome C, where excellent cloud-cover statistics, low free-atmosphere
seeing and a relatively thin turbulent boundary layer have been measured, as
summarised in e.g., \citet{sto05}. For example, \citet{law04} investigated the
seeing at Dome\,C\ and reported that the median seeing above the boundary layer
is $0.27''$, and for 25\% of the time is as low as $0.15''$. Thus, for some observations even a small
telescope at Dome\,C\ can be as effective as a much larger one at the best
temperate observatories.
Dome\,A\ is located at longitude $77^{\circ}06'57''$E, latitude
$80^{\circ}25'08''$S, and is 1271 km directly inland from the Chinese Zhongshan
Station. The elevation of Dome\,A\ is 4093 meters above sea level, the temperature
is quite low (sometimes below $-80 ^{\circ}$C), the surface wind speeds are
even lower than those at Dome\,C, and the precipitable water vapor is extraordinarily low \citep{kul09}.
The seeing at Dome\,A\ from above the boundary layer may be better than that
at Dome\,C, and the boundary layer itself should be thinner---perhaps as low as
15 meters for much of the time.
Recently, \citet{sau09} compared Domes A, B, C and F, and Ridges A and
B from the point of view of cloud cover, boundary layer thickness,
auroral emission, free-atmosphere seeing, precipitable water vapor and
surface temperature---and concluded that Dome\,A\ is possibly the best
site on earth currently being used for astronomical observations.
Because of the potential opportunities offered by Dome\,A\ for astronomy and
other scientific studies, a permanent station, {\it Kunlun station}, was
established at Dome\,A\ by the 25$^{\rm th}$ Chinese expedition team in 2009 January.
In early 2007, Chinese astronomers began the development of CSTAR, with
an extensive testing program \citep{zhou09}. In November of that year, the
CSTAR system was shipped to Antarctica as part of the Plateau Observatory
(PLATO) \citep{law08, yang09}, and commissioned at Dome\,A\ in
2008 January. CSTAR began observations from Dome\,A\ in 2008 March, as soon as the
sky became sufficiently dark. During the year, we were typically able to return
one image per day and roughly one third of the photometric catalog (less
than 3000 bright stars) via Iridium satellite. CSTAR worked well until early
August, when the PLATO power system shut down. The 25$^{\rm th}$ Chinese expedition
team returned to Dome\,A\ in 2009 January, retrieving all the data including
images and catalogs that were stored on a large hard disk. The Dome\,A\ data are
valuable as they have many potential uses, from site characterisation to the
study of astronomical sources such as variable stars, supernovae, gamma-ray
bursts and extra-solar planets. A separate paper analysing the weather
statistics of the Dome\,A\ site from CSTAR data is in preparation \citep{zou10}.
This paper is organized as follows. A description of CSTAR
and the data reduction method are presented in \S 2. The
observations from Dome\,A\ are described in \S 3. In \S 4, we describe the
data processing of these observations to produce
the catalog. In \S 5, we discuss the photometric accuracy of the
catalog. A final summary is presented in \S 6.
\section{CSTAR}
\subsection{The instrument}
The CSTAR program is conducted under the auspices of the Chinese Center for
Antarctic Astronomy. The telescopes were built by the Nanjing Institute of
Astronomical Optics \& Technology (NIAOT) \citep{yuan08}, while the hardware
and software of the data acquisition system were developed by the National
Astronomical Observatories, Chinese Academy of Sciences (NAOC) \citep{zhou09}.
CSTAR consists of four 14.5cm Schmidt telescopes, each with a different
filter: g, r, i and open. The FOV is $4.5^{\circ}\times 4.5^{\circ}$ (20\,deg$^2$).
Each telescope is equipped with an
Andor DV435 1k$\times$1k CCD, giving a pixel size of about 15$''$ in the sky.
A 750 GB hard disk is used as the main storage, with all the software including
the Windows XP operating system installed in a 4 GB Compact Flash memory.
An 8 GB solid state disk is used in each computer to back up the most important
data. The computer system can work at temperatures down to $-30^{\circ}$C, and
is installed inside the PLATO instrument module where it is kept well above
this temperature by the PLATO thermal management system \citep{luo08}. The
CCD cameras and electronics are installed outside on the CSTAR telescopes,
where the ambient temperature can be lower than $-80^{\circ}$C.
Before its deployment to Dome\,A, the CSTAR data acquisition system was tested
at Kalasu on the high plateau of Pamir in China, close to Tajikistan.
At an elevation of 4450 meters and with
temperatures that plunged to $-18 ^{\circ}$C during the tests, Kalasu provides
an excellent environment for pre-deployment testing of astronomical
instruments for Dome\,A. The complete CSTAR system was also tested at the NAOC's
Xinglong observatory at the beginning of September of 2007. Details of these
tests are presented in \citet{zhou09}.
In \citet{zhou09} we also describe the data acquisition procedure and
automatic photometric pipeline used to perform aperture photometry on
all the point sources in our images. The apertures for the photometry
are 3, 4 and 5 pixels in radius and the inner and outer radius of the
sky annulus is 10 and 20 pixels, respectively.
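A minimal sketch of such fixed-aperture photometry (our illustration in Python, not the actual pipeline of \citet{zhou09}; the zero point \texttt{zp} is an arbitrary placeholder):
\begin{verbatim}
# Minimal fixed-aperture photometry: circular aperture of radius r,
# sky estimated as the median in a 10-20 pixel annulus.
import numpy as np

def aperture_mag(image, x0, y0, r=3.0, r_in=10.0, r_out=20.0, zp=25.0):
    yy, xx = np.indices(image.shape)
    dist = np.hypot(xx - x0, yy - y0)
    sky = np.median(image[(dist >= r_in) & (dist <= r_out)])
    flux = np.sum(image[dist <= r] - sky)
    return zp - 2.5 * np.log10(max(flux, 1e-10))
\end{verbatim}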
The typical FWHM of
the point spread function (PSF) of a stellar image taken by CSTAR\#1
is between 1.5 and 2.5 pixels. {\bf The spot diagram under laboratory
conditions, that is $20^{\circ}$C and 1 atm pressure, is shown in
\citet{yuan08}. The optical system may have suffered some misalignment from vibration
during the almost 1300km sled traverse from Zhongshan Station to Dome A.
It is also possible that there were some optical alignment changes due
to the low ambient temperature at Dome A, although this was allowed for in
the design. An image from CSTAR\#1, A5CH5029.fit, is shown in Figure~\ref{fig:9}
to illustrate the realized optical quality at Dome A, where $\sim90\%$ of the light energy is encircled in
2 pixels.}
\begin{figure*}
\center
\includegraphics[scale=0.4,angle=0]{fig9.ps}
\caption[]{$50\times50$ pixel sub-images of the image A5CH5029.fit are
used to show the PSF at the center, the four corners and the four sides of the
field of view.}
\label{fig:9}
\end{figure*}
\section{Observations at Dome A}
CSTAR observed robotically at Dome\,A\ from 2008 March 4 until 2008 August
8 (see Figure \ref{fig:10}), taking an image with each telescope whenever it was
dark enough to do so. The exposure time was changed from 20 sec to
30 sec on April 4. The CCD is a frame transfer CCD, which allows us to operate
without a mechanical shutter. We read out the image after
each exposure and perform a real-time data reduction to produce
an initial catalog. A deadtime of $\sim$2.61 sec
results in times between exposures of 22.61 sec, or 32.61 sec
after April 4. Images were not taken if the sky was too bright.
For a variety of technical
reasons only CSTAR\#1 produced images of good optical quality during 2008.
There are 16918, 50575, 68157, 110357, 58711 and 5749 of these
images from March, April, May, June, July and August,
respectively. The total exposure times were 67.4, 311.4, 378.7,
613.1, 326.2 and 31.9 hours, respectively, in these months, giving a
total exposure time of 1728 hours. We plot the distribution of
integrated exposure time for each month in Figure~\ref{fig:1}. The
integrated exposure time in June is the longest and is over one third
of the total exposure time, while the exposure time in August is the
shortest due to the shutdown of the PLATO power system.
\begin{figure*}
\center
\includegraphics[scale=0.5,angle=0]{fig10.ps}
\caption[]{CSTAR during a mid-winter full moon at Dome A, Antarctica,
on 2008 June 17. This image was taken by a camera on the roof of
PLATO's Instrument Module \citep{law08}. The four CSTAR telescopes are
in the box on the tripod, pointing away from the camera. For scale, the
top of the CSTAR box is about 1.8m above the snow.}
\label{fig:10}
\end{figure*}
\begin{figure*}
\center
\includegraphics[scale=0.7,angle=0]{fig1.ps}
\caption[]{The integrated CSTAR exposure time for each month during 2008.}
\label{fig:1}
\end{figure*}
\begin{figure*}
\center
\includegraphics[scale=0.7,angle=-90]{fig2.ps}
\caption[]{Comparison of the USNO-B1 i-band magnitudes and CSTAR instrumental
magnitudes for 48 stars. The average offset is
$\Delta i=4.16\pm0.12$\ mag.}
\label{fig:2}
\end{figure*}
\section{Data Reduction}
We have developed a pipeline for image processing and photometry
that includes bias and flat-field correction, and then performs aperture
photometry \citep{zhou09}. This work was undertaken during the testing of CSTAR at
NAOC's Xinglong station. The quality of the images taken at Dome\,A\ was
significantly better than that obtained at Xinglong, due to the lower
CCD temperature and improved sky conditions.
\subsection{Flat fielding}
\label{sec:flat}
{\bf We constructed a flat-field image, shown in Figure~\ref{fig:3},
from a median of images taken with a high sky background, after
removal of stars using sigma-clipping. To correct for variations in the flat-field
over large spatial scales, we selected more than 50,000 images taken under
conditions of good transparency and looked for systematic changes in the
brightnesses of stars during their daily circular motion around the South Pole.
This allowed us to create an additional ``residual flat-field'' correction that
improved our photometric accuracy.}
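A minimal sketch of such a star-rejecting median stack (our illustration in Python; the actual procedure is part of the CSTAR pipeline and may differ in detail):
\begin{verbatim}
# Sketch: flat-field from a sigma-clipped median of frames with high
# sky background. Pixels deviating from the per-pixel median (stars)
# are masked before the final median is taken.
import numpy as np

def make_flat(frames, nsig=3.0, niter=3):
    stack = np.array(frames, dtype=float)
    stack /= np.median(stack, axis=(1, 2), keepdims=True)  # per-frame sky
    mask = np.zeros(stack.shape, dtype=bool)
    for _ in range(niter):
        marr = np.ma.array(stack, mask=mask)
        med = np.ma.median(marr, axis=0).filled(np.nan)
        sig = marr.std(axis=0).filled(np.inf)
        mask = np.abs(stack - med) > nsig * sig             # reject stars
    flat = np.ma.median(np.ma.array(stack, mask=mask), axis=0).filled(1.0)
    return flat / np.median(flat)                           # unit median
\end{verbatim}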
\begin{figure*}
\center
\includegraphics[width=85mm]{fig3.ps}
\caption{The flat-field image used for CSTAR\#1.}
\label{fig:3}
\end{figure*}
{\bf For this first release of the catalog we have not checked the
variation of the flat-field during the year, nor with CCD temperature.
The next release will
include improved flat-fielding using images taken under photometric
conditions with high sky background (i.e., during twilight or bright
moon).}
\subsection{Absolute flux calibration}
\label{sec:cali}
{\bf An image, A5CH5029, was taken under relatively good photometric
conditions (17:50:29 UT, 2008 May 5) and was used as a standard
image for calibrating magnitude offsets. For every other image, we
derived a single magnitude offset, to be applied to all the stars on
the image, from the mean of the magnitude offsets of a selection of
bright stars.}
The USNO-B catalog \citep{mon03} contains stellar magnitudes in the
optical passbands B1, B2, R1, R2 and I2 for over one billion objects
over the whole sky. The catalog is complete down to $V = 21$~mag, with
a positional accuracy of $0.2''$ at J2000 and a photometric accuracy
of better than 0.3 magnitudes in the five colors. Since the USNO-B
catalog contains such well calibrated magnitudes of the point sources
in our observed field, we elected to use these objects for our flux
calibration. Fortunately, \citet{mon03} have derived a formula for
transforming the USNO-B1.0 magnitudes to those appropriate to the
Sloan Digital Sky Survey (SDSS) filters, based on $\sim 450$ deg$^2$ of
SDSS Early Data Release. Since the CSTAR filters were chosen to be
very similar to the SDSS filters, we can use this formula directly.
We determine
$\overline{i_{\rm CSTAR}-i_{\rm USNO}}=4.16\pm0.12$ using 48 field
stars. The differences between the CSTAR instrumental magnitudes and the SDSS
magnitudes of these stars are shown in Figure~\ref{fig:2}. The
offset was used to transform the CSTAR instrumental magnitudes
to SDSS i-band magnitudes. {\bf To reduce statistical measurement
errors, we only selected the brightest stars for the offset calibration.
We plan to do further work on choosing well-calibrated standard stars
for our next run of the data reduction pipeline.}
\begin{figure*}
\center
\includegraphics[scale=0.7,angle=0]{fig5.ps}
\caption[]{The light curve of HIP 48752 =
GSC 9518:379 at $09^{h}57^{m}43.3^{s}, -89^{\circ}47^{'}02.2^{''}$, an 8.2 mag
star. The gaps
at the top of the Figure are due to twilight.}
\label{fig:5}
\end{figure*}
\subsection{Time calibration}
The CSTAR\#3 computer included a GPS receiver to maintain time
synchronisation and this time was intended to be distributed to the
other CSTAR computers. However, there was a communication problem
between CSTAR\#3 and the other computers throughout the year. As a
result, the timing of CSTAR\#1 ran independently and at its own rate
for the entire observation period. While this would normally create
an intractable data reduction problem---particularly when determining
the epochs of transient events and eclipses---in this case it is
easily corrected. CSTAR is fixed in position and points at the South
Celestial Pole. Every star traces out a circle on the CCD, and the
position of each star can be used as a clock. To identify the stars,
we calculate the rotation and shift of every image relative to a
standard image using the positions of all the bright stars. The mean
rotation angle relative to the standard image is then used to derive
the rate of drift of computer time. In this way, precise timing can
be determined. We find that the computer clock typically runs
1.3885714 seconds/day slower than the real local
time. Figure~\ref{fig:6} shows the time difference between the
computer time and the real time obtained from the star positions
during a period of 140 days of observation. The computer time on 2008
January 25 should be correct, because the computer time was manually
set to GPS time (UT) on that date. In the catalog we present the
corrected Julian time at the mid-exposure point of every image. This
time should be accurate to {\bf a few seconds}.
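A minimal sketch of this clock (ours, in Python): the field rotates rigidly about the celestial pole at the sidereal rate, so the rotation angle of each image relative to the standard image gives the true elapsed time, and a linear fit of (computer time $-$ star time) against star time yields the drift rate:
\begin{verbatim}
# Sketch: computer-clock drift from field rotation. theta_deg[i] is the
# rotation angle of image i relative to the standard image; the sky
# rotates 360 degrees per sidereal day.
import numpy as np

SIDEREAL_DAY = 86164.0905                       # seconds

def clock_drift(t_computer, theta_deg):
    theta = np.unwrap(np.radians(theta_deg))    # remove 360-deg wraps
    t_star = theta / (2 * np.pi) * SIDEREAL_DAY # true elapsed time
    resid = np.asarray(t_computer) - t_star
    a, b = np.polyfit(t_star, resid, 1)         # resid = a*t_star + b
    return a * 86400.0                          # drift in seconds/day
# A negative value means the computer clock runs slow, as found for
# CSTAR#1 (about -1.39 s/day).
\end{verbatim}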
\begin{figure*}
\center
\includegraphics[scale=0.5,angle=0]{fig6.ps}
\caption[]{The difference between computer time and the true local time
throughout the observational period. The true local time was obtained from
the star positions on the images.}
\label{fig:6}
\end{figure*}
\subsection{Photometric accuracy}
Besides the statistical errors in the star and background brightness
measurements, there are several systematic errors that may affect the derived
stellar fluxes. The main systematic errors are:
1) Residual flat-field and bias correction errors. {\bf Light curves
of all of the stars that had no significant variation were used to
improve the flat-field correction map. However, a single flat-field
and bias image were used for all observations, with no attempt to
allow for temperature variations or instrumental drifts. This may
have introduced systematic errors, since each star traces out a circle
on the CCD.}
2) Point Spread Function (PSF) variation. The CSTAR telescopes have a large
field-of-view, $> 20$ square degrees, making it impossible for the optical design to keep the PSF
uniform over all parts of the image. Since we used a fixed aperture to measure
the magnitudes of the stars, there will be some measurement error from the PSF
variation depending on the star location within the image.
3) Under-sampling. Each pixel of the CSTAR CCD is
15$''$, so the light from a star might fall on only one or two pixels, with
resultant problems from intra-pixel sensitivity variations.
4) Aurorae and thin cirrus clouds. These introduce an
inhomogeneity in the sky background, especially during full moon.
In the case of cirrus, there will be a resultant variable
extinction across the field-of-view. The photometric calibration can therefore
differ from one star to another. The sky background for each star is also difficult
to estimate in the presence of cirrus.
The systematic photometric error exceeds other sources of error when observing bright
stars. As an example, we plot the light curve of a bright star (HIP 48752 =
GSC 9518:379) of i$\,=\,$8.2~mag at
($09^{h}57^{m}43.3^{s}, -89^{\circ}47^{'}02.2^{''}$) in Figure~\ref{fig:5}.
In the Figure, the light curves for every 10 days, with an arbitrary magnitude
offset, are shown. From examining the best quality images, the RMS of the light
curve variation is about 0.003 mag. We expect that the systematic error should
be smaller than 0.003 mag.
To estimate the overall real measurement error for
each field star in the image, we compared every image to a standard
image. The standard image, A5CH5029, was taken under relatively good
photometric conditions (17:50:29 UT, 2008 May 5). By comparison of the
calibrated magnitudes of all the stars in the two images, we can
plot the RMS error as a function of the magnitude of
the stars.
As an example we show in Figure~\ref{fig:7} the distribution of the
measurement difference in magnitude for every star obtained from
image A62M5104 and the standard image A5CH5029. We divided the
magnitude range into small bins of 0.2 mag, then calculated the
$1\sigma$ dispersion of the magnitude as the ``real measurement
error'' in each magnitude interval. The errors as a function of magnitude shown in
Figure~\ref{fig:7} were then applied to all the measurements in the catalog for that image.
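A sketch of this binning procedure (our illustration in Python):
\begin{verbatim}
# Sketch: 1-sigma photometric error per 0.2-mag bin, from the magnitude
# differences dmag between an image and the standard image A5CH5029.
import numpy as np

def binned_sigma(mag, dmag, width=0.2):
    edges = np.arange(mag.min(), mag.max() + width, width)
    centers, sigmas = [], []
    for lo in edges[:-1]:
        sel = (mag >= lo) & (mag < lo + width)
        if sel.sum() > 2:                       # need a few stars per bin
            centers.append(lo + width / 2)
            sigmas.append(np.std(dmag[sel]))
    return np.array(centers), np.array(sigmas)
\end{verbatim}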
\begin{figure*}
\center
\includegraphics[scale=0.7,angle=0]{fig7.ps}
\caption[]{Estimate of the photometric error for all the point sources
of A62M5104. The error bars represent the 1$\sigma$
errors. The two horizontal lines represent $\Delta$mag=0.1, while the vertical
line shows the corresponding limiting magnitude of 13.02 at S/N=10.}
\label{fig:7}
\end{figure*}
We defined the magnitude limit of our images as the magnitude which has a
$1\sigma$ RMS error of 0.1 mag, which corresponds to a S/N=10.
\begin{figure*}
\center
\includegraphics[scale=0.7,angle=0]{fig8.ps}
\caption[]{The limiting magnitudes for one whole day (24 hours) of
images on May 12, 2008.}
\label{fig:8}
\end{figure*}
As a byproduct of estimating the measurement error, the magnitude
limit is also obtained for each image. In Figure~\ref{fig:8} we plot
the limiting magnitude distribution for all of
the images taken over the 24 hours during 2008 May 12. The limiting magnitude
changes with time because of variations in atmospheric extinction
and sky brightness.
The final output from our data reduction is a catalog of star magnitudes for each image.
The contents of the catalog are arranged as follows:
The first two columns are RA and DEC, respectively; the following
columns are the magnitudes and errors in aperture photometry obtained with
radii of 3, 4, and 5 pixels respectively. In the header of the
catalog some additional information is provided:
the CCD temperature, the date and corrected time (UT) at the exposure midpoint, exposure time
(seconds), sky brightness (ADU), filter number and number of sources found in the image.
\begin{table*}
\begin{center}
\caption[]{Photometry of several sources in three different apertures. The catalog header is: ``$-$59 2008 Jun 02 22:50:42.20 20 i 10398 154.954086''
\label{tab:cata}}
\begin{tabular}{c|cc|cc|cc|cc}
\hline\hline \noalign{\smallskip} \multicolumn{1}{c}{Number}&\multicolumn{1}{c}{RA}&\multicolumn{1}{c}{DEC} & \multicolumn{1}{c}{$M 1$} &\multicolumn{1}{c}{$\sigma 1$} &\multicolumn{1}{c}{$M 2$} &\multicolumn{1}{c}{$\sigma 2$}&\multicolumn{1}{c}{$M 3$} &\multicolumn{1}{c}{$\sigma 3$} \\
\multicolumn{1}{c}{}&\multicolumn{1}{c}{(J2000)}&\multicolumn{1}{c}{(J2000)} &\multicolumn{1}{c}{(r=3~pixel)}&\multicolumn{1}{c}{} &\multicolumn{1}{c}{(r=4~pixels)}&\multicolumn{1}{c}{} &\multicolumn{1}{c}{(r=5~pixels)}&\multicolumn{1}{c}{} \\ \hline \noalign{\smallskip}
277 & 23:23:46.274 & -89:25:17.81& 11.095 & 0.022& 11.011& 0.022& 10.838& 0.025\\
278 & 10:43:24.023 & -88:42:00.78& 11.099 & 0.026& 11.048& 0.022& 11.013& 0.022\\
279 & 16:13:39.187 & -87:44:30.11& 11.100 & 0.026& 11.030& 0.022& 11.014& 0.022\\
280 & 14:09:08.706 & -89:07:12.63& 11.100 & 0.026& 11.035& 0.022& 10.987& 0.022\\
281 & 13:46:15.127 & -88:26:01.94& 11.100 & 0.026& 11.013& 0.022& 10.885& 0.025\\
282 & 17:54:27.175 & -89:42:21.70& 11.103 & 0.026& 11.065& 0.022& 11.032& 0.022\\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
{NOTE. The header parameters are decoded as: (1) $-59$: CCD temperature in Celsius; (2) 2008 Jun 02 22:50:42.20: date and corrected time (UT) at the exposure midpoint; (3) 20: exposure time in seconds; (4) i: filter; (5) 10398: the number of sources detected
in the image; (6) 154.954086: day of the year during 2008. The catalogs can be
downloaded from National Astronomical Observatories Science Data
Center, Chinese Academy of Science at http://archive.bao.ac.cn/en/cstar.
}
\end{table*}
\section{Summary and catalog availability}
The CSTAR Point Source Catalog first release contains 1728 hours of i-band photometry data
taken by CSTAR\#1 at Dome A, Antarctica, between 2008 March 4 and 2008 August
8. The data are from a fixed field-of-view of $4.5^{\circ}\times 4.5^{\circ}$
centered on the South Celestial Pole, with an image taken approximately every 30 s. Aperture photometry was used to derive the magnitudes of each
of the 10,000 stars that were typically identified in each of the 300,000 images. The
data have been flux calibrated using 48 standard stars to link to the USNO-B1.0
photometric system. The CSTAR catalog is available at http://archive.bao.ac.cn/en/cstar.
\section*{Acknowledgments}
This study has been supported by the Chinese National Natural Science
Foundation through grants 10873016, 10803007, 10473012, 10573020,
10633020, 10673012, and 10603006, and by the National Basic Research
Program of China (973 Program), No.~2007CB815403. This research is
also supported by the Chinese PANDA International Polar Year project,
the Polar Research Institute of China (PRIC), and the international
science and technology cooperation projects 2008DFA20420 of the
Ministry of Science and Technology of China. The PLATO observatory
was supported by the Australian Research Council and the Australian
Antarctic Division. Iridium communications were provided by the US
National Science Foundation and the United States Antarctic
Program. The authors wish to thank all members of the 2008 and 2009
PRIC Dome A expeditions for their heroic effort in reaching the site
and for providing invaluable assistance to the expedition astronomers
in setting up and servicing the PLATO observatory and its associated
instrument suite. Additional financial contributions have been made by
the institutions involved in this collaboration.
\newcommand{\sectiono}[1]{\section{#1}\setcounter{equation}{0}}
\newcommand{\subsectiono}[1]{\subsection{#1}\setcounter{equation}{0}}
\newcommand\bOm{\bar\Omega}
\newcommand\tOm{\widetilde\Omega}
\newcommand{\OmS}{\Omega_{\rm S}}
\newcommand{\bOmS}{\bar\Omega_{\rm S}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\title{Generalized quiver mutations \\ and single-centered indices}
\preprint{
CERN-PH-TH/2013-221\\
HRI/ST/1304\\
arXiv:1309.7053v2}
\author{Jan Manschot$^{1}$, Boris Pioline$^{2,3}$, Ashoke Sen$^{4}$
\\
$^1$ {\it Institut Camille Jordan, Universit\'e Claude Bernard Lyon 1, \\ 43 boulevard du 11 novembre 1918, 69622 Villeurbanne cedex, France}\\
$^2$ {\it CERN PH-TH,
Case C01600, CERN, CH-1211 Geneva 23, Switzerland}\\
$^3$ {\it Laboratoire de Physique Th\'eorique et Hautes
Energies, CNRS UMR 7589, \\
Universit\'e Pierre et Marie Curie,
4 place Jussieu, 75252 Paris cedex 05, France} \\
$^4$ Harish-Chandra Research Institute,
Chhatnag Road, Jhusi, Allahabad 211019, India
\\
\vspace*{2mm} \email{
[email protected], [email protected],
[email protected]}
\vspace*{-3mm}
}
\abstract{Quiver quantum mechanics is invariant under Seiberg
duality. A mathematical consequence is that the cohomology of the
Higgs branch moduli space is invariant under mutations of the quiver. The Coulomb
branch formula, on the other hand, conjecturally expresses the
Poincar\'e / Dolbeault polynomial of the Higgs branch moduli space in terms of
certain quantities known as single-centered indices. In this work we
determine the transformations of these single-centered
indices under mutations. Moreover, we generalize these mutations to
quivers whose nodes carry single-centered indices different from unity. Although the Higgs branch
description of these generalized quivers is currently unknown, the Coulomb branch formula
is conjectured to be invariant under generalized mutations.}
\begin{document}
\tableofcontents
\section{Introduction and summary} \label{sintro}
Originally introduced in order to describe D-branes at orbifold singularities \cite{Douglas:1996sw},
quiver quantum mechanics has become a powerful tool for determining the spectrum of BPS states both in four-dimensional gauge theories with $\mathcal{N}=2$ global
supersymmetry
\cite{Fiol:2000pd,Fiol:2000wx,Alim:2011ae,Alim:2011kw,Cecotti:2012se,Xie:2012gd,Galakhov:2013oja,Cordova:2013bza,Chuang:2013wt,Cirafici:2013bha} and in four-dimensional type II string vacua with the same amount of
local supersymmetry \cite{Douglas:2000ah,Fiol:2000wx,Denef:2002ru,
Denef:2007vg,Aganagic:2010qr}. Physically, quiver quantum mechanics encodes the low energy dynamics of open strings stretched between D-brane constituents, and
BPS bound states are identified as cohomology classes on the Higgs branch. Mathematically, the latter is interpreted as the moduli space of semi-stable quiver representations \cite{zbMATH00720513}.
For quivers without oriented loops, such that the superpotential vanishes, the Higgs branch cohomology can be computed systematically \cite{1043.17010}. Equivalently, it can be computed on the Coulomb branch, by studying the quantum mechanics of a set of point-like charged particles associated with the nodes of the quiver, and interacting by Coulomb and Lorentz-type forces according to the number of arrows between any two nodes \cite{Denef:2002ru}. The classical moduli space of such multi-centered solutions is a finite dimensional compact symplectic space \cite{deBoer:2008zn}, and the corresponding supersymmetric quantum mechanics \cite{Manschot:2011xc,Kim:2011sc,Lee:2011ph} can be solved using localization techniques \cite{Manschot:2010qz,Manschot:2011xc,Manschot:2013sya} (see \cite{Pioline:2013wta} for a recent review). Agreement between the two approaches for any choice of stability condition (equivalently, Fayet-Iliopoulos or FI
parameters) was demonstrated recently
in \cite{Sen:2011aa,Manschot:2013sya}.
For quivers with loops, the situation is much more involved: on the Higgs branch side, there is currently no systematic way to compute the cohomology of a quiver with generic superpotential,
except for Abelian quivers which can be treated by ad hoc methods \cite{Bena:2012hf,Lee:2012sc,Lee:2012naa,Manschot:2012rx}. On the Coulomb branch side, the BPS phase space is in general no longer compact, due to the occurrence of scaling solutions \cite{Denef:2007vg,Bena:2006kb} where three or more constituents approach each other at arbitrarily small distance. While the symplectic volume of this phase space is still finite \cite{deBoer:2008zn,Manschot:2011xc},
the number of associated Coulomb branch states fails to match the number of states on the Higgs branch, by an exponential amount \cite{Denef:2007vg}. Based on the observation on simple cases that the discrepancy originates solely from the middle cohomology
(more precisely, the Lefschetz singlet part thereof) and is insensitive to wall-crossing \cite{Bena:2012hf}, it was proposed in \cite{Manschot:2012rx} that the
isomorphism between the Coulomb and Higgs branch could be restored by
postulating the existence of new Coulomb branch constituents, behaving
as point-like particles
carrying composite charge $\gamma$ and internal degrees of freedom with index
$\OmS(\gamma)$, insensitive to the choice of stability condition.
Conjecturally, the Poincar\'e-Laurent polynomial
of the quiver moduli space (defined in \eqref{epol} below)
is expressed
in terms of these invariants, known as single-centered indices (or indices
associated with pure Higgs, or intrinsic Higgs states)
through the Coulomb branch formula (see \eqref{essp1}).
Defining and computing the single-centered indices $\OmS(\gamma)$ directly
remains an open problem.
While there is no general prescription for computing the
Poincar\'e-Laurent polynomial of
a quiver with generic superpotential, it is known to be invariant
under specific transformations of the quiver known as mutations
\cite{zbMATH05573998,ks,zbMATH05848698}. Quiver mutation was
first introduced in the context of ADE quivers \cite{Bernstein:1973},
and is one of the basic principles of the theory of cluster algebras \cite{2001math4151F}.
In terms of the quiver quantum mechanics descriptions of BPS bound
states, mutations are a manifestation of Seiberg
duality \cite{Feng:2001bn,Beasley:2001zp,Berenstein:2002fi,Feng:2002kk,Mukhopadhyay:2003ky,Herzog:2004qw,Vitoria:2007ff}, and arise when the splitting between BPS and anti-BPS
states is varied \cite{Denef:2000nb,Aganagic:2010qr,Andriyash:2010yf,Cordova:2013bza}.
This happens in particular when the moduli are varied around a point where
one of the constituents of the bound state becomes massless, and is responsible
for the monodromy transformation of the BPS spectrum \cite{Aganagic:2010qr,Andriyash:2010yf}. A natural question is to determine the action of mutations on the
single-centered invariants $\OmS(\gamma)$ appearing in the Coulomb branch formula.
From the point of view of the Coulomb branch formula, however, quiver
moduli spaces are but a very special case where the basis vectors
associated to the nodes of the quiver carry unit
index, $\OmS(\gamma_i)=1$ and $\OmS(\ell\gamma_i)=0$ if $\ell>1$ (mathematically, the nodes represent spherical objects in the derived category of representations). Formally, one could very well keep the same quiver topology but associate different indices $\OmS(\gamma_i)$ to the
basis vectors and multiples thereof, and use the Coulomb branch formula to produce a set
of symmetric
Laurent polynomials satisfying the standard wall-crossing
properties. We refer to such quivers with non-standard single-centered indices
as generalized quivers, and to the corresponding Laurent polynomials as
generalized quiver invariants.
Ref. \cite{Manschot:2011xc} showed that, in the case of quivers without closed loops,
such generalized quivers appear in wall-crossing
formulas for Donaldson-Thomas invariants \cite{ks,MR2951762}.
Whether or not the generalized quiver invariants correspond to
the Poincar\'e/Dolbeault polynomial of a putative moduli space is unclear
to us at this stage, but we can ask whether invariance under mutations
can be extended to this set of polynomials. A suggestive
fact is that mutations can also be defined for
cluster algebras with skew-symmetrizable -- as opposed to skew-symmetric -- exchange matrix,
which are naturally represented by quivers with multiplicity \cite{zbMATH05145743,
2003math11245F,Labardini:2013}.
Another reason to expect such a generalization is the physical `Fermi flip' picture of mutation
developed in the context of split attrator flows in supergravity in \cite{Andriyash:2010yf}. Namely,
in the vicinity of certain walls in moduli space (conjugation walls in the language of \cite{Andriyash:2010yf},
or walls of the second kind in the language of \cite{ks}), the representation of a BPS state of total charge $\gamma=\gamma_j+N \gamma_k$ as a halo of particles carrying charges $\ell_i\gamma_k$ with $\ell_i>0$
orbiting around a core of charge $\gamma_j$ can become invalid, and needs to be replaced by a halo of particles carrying charges $-\ell_i\gamma_k$ with $\ell_i>0$
around a core of
charge $\gamma_j+M_j \gamma_k$, for some positive integer $M_j$
\cite{Andriyash:2010yf}.
This is possible when the particles of charge $\ell\gamma_k$
behave as fermions (i.e. carry positive\footnote{Due to the supermultiplet structure
a state with positive index behaves as a fermion while forming a bound state \cite{Denef:2002ru}.}
index), so that the Fermi vacuum can be replaced
by the filled Fermi sea.
In this paper, we shall argue
that this picture applies just as well for generalized quivers with oriented loops, and naturally
suggests that the Laurent polynomials produced by the Coulomb branch formula are invariant
under a generalized mutation transformation. Before stating this transformation,
we need to set up some notations.
\subsection{Review of quiver invariants and the Coulomb branch formula}
Consider a quiver with $K$ nodes, dimension vector $(N_1,\cdots N_K)$,
stability (or Fayet-Iliopoulos, or FI) parameters $(\zeta_1,\cdots \zeta_K)$ satisfying $\sum_{i=1}^K N_i \zeta_i=0$, and $\gamma_{ij}$ arrows from the $i$-th node to the $j$-th node.
We denote such a quiver by $\mathcal{Q}(\gamma;\zeta)$, where $\gamma$
is a vector $\gamma=\sum_{i=1}^K
N_i\gamma_i$ in a $K$-dimensional lattice $\Gamma$ spanned by basis vectors $\gamma_i$
associated to each node. We shall denote by $\Gamma^+$ the collection of lattice vectors
of the form $\sum_i n_i\gamma_i$ with $n_i\ge 0$; clearly all physical quivers are
described by some vector $\gamma\in\Gamma^+$.
We introduce a bilinear symplectic product (the Dirac-Schwinger-Zwanziger, or DSZ product)
on $\Gamma$ via
$\langle \gamma_i, \gamma_j\rangle=\gamma_{ij}$.
To define the quiver moduli space, we introduce complex
variables $\phi_{\ell k, \alpha, ss'}$ for every pair $\ell,k$ for which
$\gamma_{\ell k}>0$. Here $\alpha$ runs over $\gamma_{\ell k}$ values,
$s$ is an index labelling the
fundamental representation of $U(N_\ell)$ and $s'$ is an index representing
the anti-fundamental representation of $U(N_{k})$. The moduli space $\mathcal{M}(\gamma;\zeta)$ of
classical vacua is the space of solutions to the D-term and F-term constraints,
\begin{equation} \label{emodi1}
\begin{split}
\sum_{k , s,t,s'\atop \gamma_{\ell k}>0} \phi_{\ell k, \alpha, ss'}^* \, T^a_{st} \,
\phi_{\ell k,\alpha,t s'} - \sum_{k ,s,t,s'\atop \gamma_{k\ell}>0}
\phi_{k\ell, \alpha, s's}^*
\, T^a_{st} \,
\phi_{k\ell,\alpha,s't} = \zeta_\ell \, {\rm Tr}\,(T^a)\quad \forall \, \ell, \, a \, , & \\
{\partial W\over \partial \phi_{\ell k,\alpha,ss'}}=0\, \ , &
\end{split}
\end{equation}
modded out by the natural action of the gauge group $\prod_\ell U(N_\ell)$.
Here $T^a$'s are the generators of the $U(N_\ell)$ gauge group, and $W$ is a
generic gauge invariant superpotential holomorphic in the variables
$ \phi_{\ell k, \alpha, ss'}$. For a generic potential, $\mathcal{M}(\gamma;\zeta)$ is a compact
algebraic variety, which is smooth if the vector $\gamma$ is primitive.
Let $Q(\gamma; \zeta;y)$ be the Poincar\'e-Laurent polynomial
of the quiver moduli space $\mathcal{M}(\gamma;\zeta)$,
\begin{equation} \label{epol}
Q(\gamma; \zeta;y) = \sum_{p=0}^{2d} b_p(\mathcal{M})\, (-y)^{p-d}
\end{equation}
where $d$ is the complex
dimension of $\mathcal{M}$
and the $b_p(\mathcal{M})$'s are the topological Betti numbers of $\mathcal{M}$.
The Coulomb branch formula
for $Q(\gamma; \zeta;y)$,
which we denote by $Q_{\rm Coulomb}(\gamma; \zeta;y)$, takes the form \cite{Manschot:2011xc,
Manschot:2012rx,Manschot:2013sya}
\begin{equation}
\label{essp1}
\begin{split}
Q_{\rm Coulomb}(\gamma; \zeta;y) =& \sum_{m|\gamma}
\frac{\mu(m)}{ m} {y - y^{-1}\over y^m - y^{-m}}
{\bar Q}_{\rm Coulomb}(\gamma/m; \zeta;y^m) \ , \\
{\bar Q}_{\rm Coulomb}(\gamma; \zeta;y) =&
\sum_{n\ge 1}\sum_{\{\alpha_i\in \Gamma^+\} \atop \sum_{i=1}^n \alpha_i =\gamma}
\frac{g_{\rm Coulomb}\left(\{\alpha_1, \cdots, \alpha_n\},
\{c_1,\cdots c_n\};y\right)}
{ |{\rm Aut}(\{\alpha_1,\cdots, \alpha_n\})|}
\\ &\quad
\prod_{i=1}^n \left\{\sum_{m_i\in\mathbb{Z}\atop m_i|\alpha_i}
{1\over m_i} {y - y^{-1}\over y^{m_i} - y^{-m_i}}\,
\Omega_{\rm tot}(\alpha_i/m_i;y^{m_i})
\right\}
\, ,
\end{split}
\end{equation}
where $\mu(m)$ is the M\"obius function,
$|{\rm Aut}(\{\alpha_1,\cdots \alpha_n\})|$ is a symmetry factor given by
$\prod_k s_k!$ if among the set $\{\alpha_i\}$ there are
$s_1$ identical vectors $\tilde \alpha_1$, $s_2$ identical vectors
$\tilde\alpha_2$ etc., and $m|\alpha$ means that $m$ is a common divisor of
$(n_1,\cdots , n_K)$ if $\alpha =\sum_\ell n_\ell \gamma_\ell$.
The sums over $n$ and $\{\alpha_1,\cdots \alpha_n\}$ in the second
equation label all possible ways of
expressing $\gamma$ as (unordered) sums of elements $\alpha_i$ of $\Gamma^+$.
The coefficients
$c_i$ are determined in terms of the FI parameters $\zeta_i$ by $c_i
=\sum_\ell A_{i\ell} \zeta_\ell$ whenever $\alpha_i=\sum_\ell A_{i\ell}\gamma_\ell$.
From the restrictions $\sum_i \alpha_i
=\gamma$ and $\sum_\ell N_\ell \zeta_\ell=0$ it follows that
$\sum_i c_i=0$.
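For concreteness, the M\"obius inversion in the first line of \eqref{essp1} is straightforward to implement; the following Python sketch (ours, with $\gamma$ encoded by its dimension vector and ${\bar Q}_{\rm Coulomb}$ supplied as a callable) is a minimal version:
\begin{verbatim}
# Sketch: Q from Qbar via the Moebius sum in the first line of the
# Coulomb branch formula. Qbar(Nm, arg) must return the Laurent
# polynomial Qbar(gamma/m; arg) as a sympy expression in arg.
from math import gcd
from functools import reduce
import sympy as sp

y = sp.symbols('y')

def moebius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def Q_from_Qbar(N, Qbar):
    g = reduce(gcd, N)            # gcd of the dimension vector
    total = 0
    for m in sp.divisors(g):
        Nm = tuple(n // m for n in N)
        total += sp.Rational(moebius(m), m) \
                 * (y - 1/y) / (y**m - y**(-m)) * Qbar(Nm, y**m)
    return sp.simplify(total)
\end{verbatim}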
The functions $g_{\rm Coulomb}(\{\alpha_1,\cdots, \alpha_n\};
\{c_1,\cdots c_n\};y)$, known as Coulomb indices,
can be computed from the sum over
collinear solutions to Denef's equations for multi-centered black hole
solutions \cite{Manschot:2011xc}. The functions $\Omega_{\rm tot}(\alpha;y)$
are expressed
in terms of the single-centered BPS invariants $\OmS$ through
\begin{equation} \label{essp2}
\Omega_{\rm tot}(\alpha;y) = \OmS(\alpha;y) +
\sum_{\{\beta_i\in \Gamma^+\}, \{m_i\in\mathbb{Z}\}\atop
m_i\ge 1, \, \sum_i m_i\beta_i =\alpha}
H(\{\beta_i\}; \{m_i\};y) \, \prod_i
\OmS(\beta_i;y^{m_i})
\, .
\end{equation}
The $H(\{\beta_i\}; \{m_i\};y)$ are determined recursively using the
minimal modification hypothesis described in \cite{Manschot:2012rx},
and $\OmS(\alpha;y)$ are
expected to be $y$-independent constants for quivers with generic superpotential.
A fully explicit recursive algorithm for computing the Coulomb indices $g_{\rm Coulomb}$
and $H$-factors was given in \cite{Manschot:2013sya}.
In \cite{Manschot:2012rx} we also proposed a formula for the Dolbeault polynomial
\begin{equation}
Q(\gamma; \zeta; y;t) \equiv \sum_{p,q} h^{p,q}(\mathcal{M}) \, (-y)^{p+q-d} \, t^{p-q}\, ,
\end{equation}
where $h^{p,q}(\mathcal{M})$ are the Hodge numbers of $\mathcal{M}$. The formula takes the same
form as \eqref{essp1}, \eqref{essp2}, with the only difference that $\OmS$ is
allowed to depend on $t$, and
the arguments $y$ and $y^m$ inside $Q_{\rm Coulomb}$, ${\bar Q}_{\rm Coulomb}$, $\Omega_{\rm tot}$
and $\OmS$ are replaced by $y;t$ and $y^m;t^m$ respectively.\footnote{Eventually
we drop the $y$-dependence of $\OmS$ for quivers with generic superpotential.}
The Coulomb indices $g_{\rm Coulomb}$ and the functions $H$ remain
unchanged and
independent of $t$.
\subsection{Generalized quivers and generalized mutations} \label{sgenmut}
We are now ready to state our main result.
As mentioned above,
the Coulomb branch formula given
in eqs.\eqref{essp1}, \eqref{essp2} leads to a set of symmetric Laurent polynomials satisfying the standard wall-crossing formula, for any choice of symmetric Laurent polynomials
$\OmS(\gamma;y;t)$. For ordinary quivers with a generic superpotential, the single-centered
invariants satisfy
\begin{equation}
\label{OmSquivers}
\OmS(n_i\gamma_i+n_j \gamma_j;y;t)=\begin{cases} 1 & \mbox{if}\ n_i=1, n_j=0\\
1 & \mbox{if}\ n_i=0, n_j=1\\
0 & \mbox{otherwise}
\end{cases}
\end{equation}
for any linear combination of two basis vectors $n_i\gamma_i+n_j \gamma_j$.
We refer to quivers equipped with more general choices of the single-centered invariants $\OmS(\gamma;y;t)$,
subject to the condition that they vanish unless $\gamma\in\Gamma^+$,
as `generalized quivers'.
For such a generalized quiver, we introduce a
generalized mutation $\mu_k^{\varepsilon}$ (where $\varepsilon=1$ for a right mutation,
and $\varepsilon=-1$ for a left mutation) with respect to the $k$-th node, through
the following transformation
rules of the basis vectors $\gamma_i$,
DSZ matrix $\gamma_{ij}$, stability parameters $\zeta_i$,
and dimension vector $N_i$:
\begin{equation}
\label{mutDSZgen0}
\begin{split}
\gamma'_i=&\begin{cases}
-\gamma_k & \hbox{if $i=k$} \\
\gamma_i + M \, {\rm max}(0,\varepsilon \gamma_{ik})\, \gamma_k& \hbox{if $i\neq k$}
\end{cases}
\\
\gamma'_{ij} =&
\begin{cases}
-\gamma_{ij} & \mbox{if}\quad i=k \quad \mbox{or}\quad j=k \\
\ \gamma_{ij} + M\, {\rm max}(0, \gamma_{ik} \gamma_{kj})\, {\rm sign}(\gamma_{kj}) &
\mbox{if} \quad i,j\neq k
\end{cases}
\\
\zeta'_i=& \begin{cases}
-\zeta_k & \hbox{if $i=k$} \\
\zeta_i+M\, \text{max}(0, \varepsilon \gamma_{ik}) \, \zeta_k & \hbox{if $i\neq k$},
\end{cases}
\\
N'_i=&\begin{cases}
-N_k+ M \sum_{j\neq k} N_j \, {\rm max}(0,\varepsilon \gamma_{jk}) & \hbox{if $i=k$} \\
N_i & \hbox{if $i\neq k$}
\end{cases}
\end{split}
\end{equation}
where $M$ is an integer defined by
\begin{equation}
\label{eq:M}
M \equiv \sum_{\ell\geq 1}
\sum_{n,s} \ell^2 \, \Omega_{n,s}(\ell \gamma_k) \ ,\qquad
\OmS(\ell\gamma_k;y;t) = \sum_{n,s } \Omega_{n,s}(\ell\gamma_k) y^n t^s\ .
\end{equation}
These transformation laws guarantee that
\begin{equation}
\gamma\equiv \sum_i N_i \gamma_i = \sum_i N_i'\gamma_i'\, .
\end{equation}
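These transformation rules are easy to implement on a computer. The following sketch (Python with numpy; function and variable names are ours) encodes \eqref{mutDSZgen0} and verifies the conservation property just stated on random data:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def mutate(k, eps, M, dsz, zeta, N):
    # generalized mutation mu_k^eps of eq. (mutDSZgen0); dsz[i][j] = gamma_{ij};
    # basis[i] = components of gamma'_i on the original basis {gamma_i}
    n = len(N)
    E = np.eye(n, dtype=int)
    dszp = np.array([[-dsz[i][j] if k in (i, j) else
                      dsz[i][j] + M*max(0, dsz[i][k]*dsz[k][j])*int(np.sign(dsz[k][j]))
                      for j in range(n)] for i in range(n)])
    zetap = np.array([-zeta[k] if i == k else
                      zeta[i] + M*max(0, eps*dsz[i][k])*zeta[k] for i in range(n)])
    Np = np.array([-N[k] + M*sum(N[j]*max(0, eps*dsz[j][k])
                                 for j in range(n) if j != k)
                   if i == k else N[i] for i in range(n)])
    basis = np.array([-E[k] if i == k else E[i] + M*max(0, eps*dsz[i][k])*E[k]
                      for i in range(n)])
    return dszp, zetap, Np, basis

# sum_i N'_i gamma'_i = sum_i N_i gamma_i for random antisymmetric DSZ matrices
for trial in range(100):
    A = rng.integers(-5, 6, size=(4, 4)); dsz = A - A.T
    N = rng.integers(0, 4, size=4)
    k, M = int(rng.integers(0, 4)), int(rng.integers(1, 4))
    _, _, Np, basis = mutate(k, +1, M, dsz, np.zeros(4), N)
    assert (Np @ basis == N).all()
\end{verbatim}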
We conjecture that the Laurent polynomials produced by the Coulomb branch formula are invariant
under the generalized mutation transformation:\footnote{The second equation in \eqref{mutinv0} may be surprising at first, but
physically it
reflects the fact that in the transformed quiver, states with
charge vectors $\ell\gamma_k$ are considered as anti-BPS states and are no longer counted
in the BPS index. On the other hand, states with charge vector $-\ell\gamma_k$, which are
considered anti-BPS in the original quiver and not counted, are taken to be BPS in the new
quiver.}
\begin{equation}
\label{mutinv0}
Q_{\rm Coulomb}(\gamma; \zeta; y; t) =
\begin{cases} Q_{\rm Coulomb}'(\gamma; \zeta'; y; t) &\hbox{if $\gamma\not\parallel \gamma_k$}\\
Q_{\rm Coulomb}'(-\gamma;\zeta;y;t) &\hbox{if $\gamma\parallel \gamma_k$}\,,
\end{cases}
\end{equation}
under the conditions that
\begin{itemize}
\item[i)] $\Omega_{n,s}(\ell\gamma_k)$ are non-negative
integers satisfying $\Omega_{n,s}(\ell\gamma_k)=\Omega_{-n,-s}(\ell\gamma_k)$, and they
vanish
for $\ell$ large enough, so that the integer $M$ is well defined,
\label{condi}
\begin{equation} \label{eposcon0}
\Omega_{n,s}(\ell\gamma_k)\geq 0\ \forall\ \ell>0\ ,\quad
\Omega_{n,s}(\ell\gamma_k) = 0 \, \, \text{for}
\, \, \ell > \ell_{\rm Max}\, ,
\end{equation}
\item[ii)]
the stability parameter $\zeta_k$ has sign $-\varepsilon$,
\begin{equation}
\label{einequal}
\varepsilon\, \zeta_k<0\ ,
\end{equation}
\item[iii)] the single-centered indices transform
as\footnote{It is easy to verify that the rational invariants ${\bar Q}_{\rm Coulomb}$ and $\bOmS$ satisfy
the same mutation transformation rules as $Q_{\rm Coulomb}$ and $\OmS$ respectively.}
\begin{equation}
\label{egench00}
\OmS(\alpha;y;t) =
\begin{cases}
\OmS'\left(\alpha+ M {\rm max}(0, \varepsilon \langle \alpha,\gamma_k\rangle)\, \gamma_k
; y;t\right) & \hbox{for $\alpha \not\parallel \gamma_k$} \\
\OmS'\left( - \alpha; y;t\right) & \hbox{for $\alpha\parallel \gamma_k$}
\end{cases}\, .
\end{equation}
\end{itemize}
In \eqref{mutinv0}, it is understood that in computing the l.h.s. we have to express
$\gamma$ as $\sum_iN_i\gamma_i$ treating $\gamma_i$'s as the basis vectors and apply
the Coulomb branch formula \eqref{essp1}, \eqref{essp2} while in
computing the r.h.s. we have to express
$\gamma$ as $\sum_i N'_i\gamma'_i$ treating $\gamma'_i$'s as the basis vectors and then
apply the Coulomb branch formula \eqref{essp1}, \eqref{essp2}.
Since the left and right mutations $\mu_k^\pm$ are inverses of each other, we shall
restrict our attention to right mutations only and set
\begin{equation}
\varepsilon=1
\end{equation}
henceforth.
Several remarks about our generalized mutation conjecture are in order:
\begin{enumerate}
\item
For ordinary quivers,
$\OmS(\ell\gamma_k)=\delta_{\ell,1}$,
hence $M=1$ and
the above relations reduce to mutations of ordinary quivers with superpotential (the
action on the superpotential can be found in \cite{zbMATH05573998}).
\item For quivers
obtained from cluster algebras with skew-symmetrizable exchange matrix (i.e.\ an integer matrix
$\hat \gamma_{ij}$ such that $\gamma_{ij}\equiv \hat\gamma_{ij}/d_j$ is antisymmetric for some positive integers $d_i$), the action on $\gamma_{ij}$ coincides with the mutation rule specified in
\cite{2003math11245F,zbMATH05145743} for $M=d_k$.
\item Mutation invariance in general imposes additional restrictions
on the single-centered invariants $\OmS(\gamma;y;t)$, beyond the vanishing of $\OmS(\gamma;y;t)$ for $\gamma\notin\Gamma^+$ with respect to the original quiver.
Indeed, if we denote by $\Gamma'^+$
the set of vectors $\gamma=\sum_i n_i'\gamma_i' \in\Gamma$ where all $n_i'$ are non-negative,
then the transformation rule \eqref{egench00} requires that $\OmS(\alpha;y;t)$ should
vanish if the mutated vector $\alpha'\equiv \alpha+ M {\rm max}(0, \langle \alpha,\gamma_k\rangle)\, \gamma_k$ does not lie in $\Gamma'^+$, even if $\alpha\in\Gamma^+$ (excluding
the case $\alpha\parallel\gamma_k$)\footnote{The same reasoning applies to the Dolbeault-Poincar\'e polynomial: $Q_{\rm Coulomb}(\gamma)=0$ if $\gamma\notin \Gamma^+$ or $\gamma\notin\Gamma'^+$.}.
Similarly, $\OmS'(\alpha')$ should vanish if $\alpha'\in \Gamma'^+$ but $\alpha\notin\Gamma^+$. Another consequence of the generalized mutation symmetry is that $\OmS(\gamma_j + \ell\gamma_k)$ must vanish for all $\ell\ne 0$. Indeed, for negative $\ell$, the vector $\alpha=\gamma_j + \ell\gamma_k$ fails to lie in $\Gamma^+$,
while for positive $\ell$, the mutated vector $\alpha'=\gamma_j + M {\rm max}(\gamma_{jk},0)\, \gamma_k+\ell\gamma_k=\gamma_j' - \ell\gamma'_k$ fails to lie in $\Gamma'^+$.
If the $\OmS$'s fail to satisfy these constraints, they still define
a generalized quiver but generalized mutation symmetry does not apply.
Indeed
it is unclear a priori if there exists a set
of single-centered invariants $\OmS(\gamma;y;t)$ which is consistent with the above
constraints arising from arbitrary sequences of mutations. Finding a Higgs branch-type realization of such generalized quivers invariant under mutations would allow one to give an affirmative
answer to this question.
\item A useful way to state the property \eqref{mutinv0}
is to construct the generating functions
\begin{eqnarray}\displaystyle
\label{defFF}
{\cal F}(\vec N; \zeta; q; y; t) &\equiv& \sum_{N_k} Q_{\rm Coulomb}\left(\sum_{i\ne k} N_i \gamma_i
+ N_k \gamma_k; \zeta; y; t\right) q^{N_k} \, , \nonumber \\
{\cal F}'(\vec N; \zeta'; q; y; t) &\equiv& \sum_{N_k} Q_{\rm Coulomb}'\left(\sum_{i\ne k} N_i \gamma_i'
+ N_k \gamma_k'; \zeta'; y; t\right) q^{N_k} \, ,
\end{eqnarray}
where, on the left-hand side, $\vec N$ denotes the truncated dimension vector
\begin{equation}
\vec N \equiv (N_1, \cdots N_{k-2}, N_{k-1}, N_{k+1}, N_{k+2}, \cdots )\ .
\end{equation}
Mutation invariance for all values of $N_k$ is then equivalent to the functional identity
\begin{equation}
{\cal F}(\vec N; \zeta; q; y; t) =
\begin{cases} q^{\sum_{i\ne k} M N_i
{\rm max}(\gamma_{ik},0)} {\cal F}'(\vec N; \zeta'; q^{-1}; y; t) & \text{for}\ \vec N\ne \vec 0 \\
{\cal F}'(\vec 0; \zeta'; q; y; t) & \text{for}\ \vec N=\vec 0
\end{cases}
\end{equation}
We conjecture that under the assumption \eqref{eposcon0}, both sides of this equation
are in fact polynomials in $q$.
\item
While the conditions i) -- iii) are necessary for mutation invariance
of the Dolbeault polynomials $Q_{\rm Coulomb}(\gamma;\zeta;y;t)$,
it is possible to relax condition i) if one is interested only
in the numerical invariants $Q_{\rm Coulomb}(\gamma;\zeta;y=1;t=1)$.
In that case we conjecture that it is sufficient that the
generating function ${\cal F}(\vec N;\zeta;q;1;1)$ be a polynomial in
$q$, invariant under $q\to 1/q$ (up to an overall power
$q^{\sum_{j\neq k} M N_j {\rm max}(\gamma_{jk},0)}$). This allows some of
the $\OmS(\ell\gamma_k;1;1)$'s to be negative.
For example, for
the generalized Kronecker quiver (Example 1 in \S\ref{sgen}),
one may take $\OmS(\gamma_k;1;1)=-1$,
$\OmS(2\gamma_k;1;1)=1$, and $\OmS(\ell\gamma_k;1;1)=0$ for all other
$\ell$. Then the generalized mutation $\mu_2^+$ has $M=3$ and preserves the
numerical invariants $Q(\gamma;\zeta;1;1)$.
Example 2(g) of \S\ref{sgen} gives another example of this phenomenon for a three-node
quiver.
\end{enumerate}
Although we do not have a general proof that the Coulomb branch formula is indeed
invariant under such generalized mutations, we shall check
it in many examples of
ordinary and generalized quivers, with or without oriented loops. In some cases, mutation
invariance allows one to determine the complete set of single-centered
indices. Another useful property of mutations is
that in special cases they can reduce the total rank of the quiver, which typically
reduces the computation time of the Coulomb branch formula considerably.
\subsection{Outline}
The rest of the paper is organised as follows.
In \S\ref{sgenphys} we describe the physical origin
of the generalized
mutation transformation rules,
the transformation properties of
single-centered indices under generalized mutation
and the choice of FI
parameters given in \eqref{einequal}. In
\S\ref{sord} we test the ordinary
mutation symmetry of the Coulomb branch formula through several
examples. In \S\ref{sgen} we repeat this exercise for generalized mutations.
\section{Motivation for the generalized mutation conjecture} \label{sgenphys}
As mentioned in the introduction, quiver quantum mechanics describes the dynamics of open strings stretched between the various BPS constituents of a given bound state. In particular,
it depends on a choice of half-space $\mathcal{H}$ in the central charge plane, such that all states whose
central charge lies in $\mathcal{H}$ are deemed to be BPS, while those in the opposite half-plane are anti-BPS. As the choice of $\mathcal{H}$ is varied, it may happen that one of the constituents, with charge $\gamma_k$, crosses the boundary of $\mathcal{H}$ and falls on the anti-BPS side, while its CPT-conjugate with charge $-\gamma_k$ enters the BPS side.\footnote{We assume that the spectrum is such that no other BPS
state crosses the boundary of $\mathcal{H}$ at the same time.} Equivalently, this may take place
for a fixed choice of $\mathcal{H}$ under a variation of the asymptotic moduli (staying away from walls of marginal stability). Such a wall is sometimes known as a wall of the second kind \cite{ks}, or as a
conjugation wall \cite{Andriyash:2010yf}. Such walls are encountered in particular when varying the moduli around a point where the central charge associated to one of the BPS constituents vanishes; see Figure \ref{figB2} for an example which can serve as guidance
for the discussion below.
\begin{figure}
\centerline{\includegraphics[height=10cm]{MutationB2Grid}}
\caption{Spectrum of a generalized Kronecker quiver with $\gamma_{12}=1$,
$\OmS(\gamma_1)=1, \OmS(\gamma_2)=2$ as the central charge
$Z(\gamma_2)=\rho\, e^{i\theta}$ rotates clockwise around 0, keeping
$0<\rho\ll 1$ and $Z(\gamma_1)+\tfrac{1}{\pi}(\tfrac{\pi}{6}-\theta) Z(\gamma_2)=e^{i\pi/2}$
fixed. The BPS half-space ${\rm Im}(Z)>0$ is kept fixed during the deformation.
Occupied charges
are depicted by an arrow in the central charge plane, decorated with
the corresponding BPS index in square brackets.
A conjugation wall is crossed in going from a) to b) and c) to d), while walls of marginal
stability are crossed in going from b) to c) and d) to e). The spectrum in e) is identical to the
spectrum in a), up to a monodromy $\gamma_1\mapsto\gamma_1+2\gamma_2$. In more detail:
a) $0<\theta<\pi/2$: the spectrum consists of 4 occupied charges
$(\gamma_1,\gamma_1+\gamma_2,\gamma_1+2\gamma_2,\gamma_2)$ and
BPS indices $(1,2,1,2)$, respectively. b) $-\pi/2<\theta<0$: $\gamma_2$
is now anti-BPS. The spectrum of the mutated quiver consists of 4
occupied charges
$(-\gamma_2,\gamma_1,\gamma_1+\gamma_2,\gamma_1+2\gamma_2)$
and indices $(2,1,2,1)$. c) $-\pi<\theta<-\pi/2$: the phases of
the two charges
$(\gamma_1+2\gamma_2,-\gamma_2)$ swap and they no
longer form any BPS bound state. d) $-3\pi/2<\theta<-\pi$: $\gamma_2$
re-enters the BPS half space and the spectrum of the twice-mutated
quiver contains two occupied charges
$(\gamma_2,\gamma_1+2\gamma_2)$ with indices $(2,1)$ and no bound state.
e) $-2\pi<\theta<-3\pi/2$: the phases of the two charges
$(\gamma_2,\gamma_1+2\gamma_2)$ swap again and the spectrum of the
twice-mutated quiver consists of 4 occupied charges
$(\gamma_1+2\gamma_2,\gamma_1+3\gamma_2,\gamma_1+4\gamma_2,\gamma_2)$
with indices $(1,2,1,2)$. }\label{figB2}
\end{figure}
Clearly, as the state with charge $-\gamma_k$ enters the BPS half-space, it cannot be viewed as a bound state of the BPS constituents with charges $\gamma_i$, and must therefore be considered as elementary. Consequently the vector $-\gamma_k$ must be taken as a new basis
vector, and the other basis vectors must be changed as well so that
the charges carried by the BPS states can be expressed as positive linear combinations of the
basis vectors. Invariance under mutation is the statement that the same
BPS states can be described either as bound states of the original BPS constituents with charge $\gamma_i$,
or of the new BPS constituents with charge $\gamma'_i$.
For this equivalence to hold, it is not necessary
that the indices associated with
the constituents satisfy the constraint \eqref{OmSquivers} --
indeed this constraint is generically not obeyed for bound states in gauge theory
\cite[Section 3.2]{Cordova:2013bza} and in supergravity (such as in
the D6-D0 system, studied in more detail in \cite[Appendix B]{Manschot:2010qz}).
Instead, we shall allow the indices $\OmS(\gamma_j)$ of the BPS constituents to be arbitrary symmetric Laurent polynomials in the two parameters $y$ and $t$, with support on non-negative
dimension vectors $\gamma\in\Gamma^+$.
We refer to the polynomials
$Q_{\rm Coulomb}(\gamma;\zeta;y;t)$ produced by the Coulomb branch formula \eqref{essp1} as generalized quiver invariants. We also assume that
$\OmS(\gamma_j+\ell \gamma_k)$ vanishes for $\ell\ge 1$, and that the integers
$\Omega_{n,s}(\ell\gamma_k)$ defined through \eqref{eq:M}
are all non-negative and vanish for $\ell$ large enough.
The necessity of the first
condition was discussed in the last but one paragraph of \S\ref{sgenmut}, whereas the
necessity of the
second condition will become clear below.
Figure \ref{figB2} provides an example of a generalized quiver, associated to a rank 2 cluster algebra
with a non-symmetric (but skew-symmetrizable) exchange matrix with Dynkin diagram $B_2$ (see \cite{Alexandrov:2011ac} for a similar example with Dynkin diagram $G_2$).
In the rest of this section
we shall describe the motivation behind
the generalized mutation conjecture
\eqref{mutDSZgen0}-\eqref{egench00}
for the generalized quiver invariants.
\subsection{Semi-primitive Coulomb formula and Fermi flip} \label{secflip}
In order to motivate the action of mutations on the basis of BPS states, we shall focus on
dimension vectors $\gamma= \, \gamma_j + N\gamma_k$
with support only on two nodes, the mutating node $k$ and any adjacent
node $j$, hence effectively dealing with a Kronecker quiver with $\gamma_{jk}$ arrows
and dimension vector $(1,N)$.
Due to our assumption that
$\OmS(\gamma_j+\ell \gamma_k)=0$ for non-zero $\ell$, states
carrying charge $\gamma_j+N\, \gamma_k$
can only arise in the original quiver
as bound states of a center of charge $\gamma_j$ with other centers carrying
charges $\ell_i\gamma_k$ with $\ell_i>0$.
Assuming $\zeta_k<0<\zeta_j$, these states exist whenever
$\gamma_{jk}>0$, and arise physically as halos of particles of charge $\ell\gamma_k$ orbiting around a core of charge $\gamma_j$ \cite{Denef:2007vg}. Their
indices are given by the semi-primitive Coulomb branch
formula \cite{Denef:2007vg,Dimofte:2009bv,Manschot:2010qz},
\begin{equation}
\label{eqor}
\begin{split}
Z=&\sum_N Q_{\rm Coulomb}(\gamma_j+N \gamma_k;\zeta;y;t) \, q^N \\
=& \OmS(\gamma_j;y;t) \,
\prod_{\ell\ge 1} \prod_{J=1}^{\ell\gamma_{jk}}\prod_n \prod_s \left( 1 + q^\ell t^s y^n (-y)^{2J - \ell \gamma_{jk}
-1}\right)^{\Omega_{n,s}(\ell\gamma_k)} \, .
\end{split}
\end{equation}
This implies that only a finite number of charge vectors $\gamma_j+N \gamma_k$ have non-zero index, namely those with $0\leq N\leq M\, \gamma_{jk}$ where
\begin{equation}
\label{defM}
M \equiv \sum_{\ell= 1}^{\ell_{\rm Max}} \sum_{n,s} \ell^2 \, \Omega_{n,s}(\ell \gamma_k) \, .
\end{equation}
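For orientation, the content of \eqref{eqor} and \eqref{defM} is easily made explicit with a short sympy sketch (names ours). For the two-node data of Figure \ref{figB2}, i.e.\ $\gamma_{jk}=1$ and $\Omega_{0,0}(\gamma_k)=2$ with all other $\Omega_{n,s}$ vanishing, one finds $Z=(1+q)^2$, i.e.\ indices $(1,2,1)$ for $N=0,1,2$ and nothing beyond $N=M\gamma_{jk}=2$, in agreement with panel a) of the figure:
\begin{verbatim}
import sympy as sp
q, y, t = sp.symbols('q y t')

def halo_Z(gamma_jk, Omega):
    # eq. (eqor) with OmS(gamma_j;y;t) set to 1;
    # Omega[(ell, n, s)] = Omega_{n,s}(ell*gamma_k), all other entries zero
    Z = sp.Integer(1)
    for (ell, n, s), mult in Omega.items():
        for J in range(1, ell*gamma_jk + 1):
            Z *= (1 + q**ell * t**s * y**n * (-y)**(2*J - ell*gamma_jk - 1))**mult
    return sp.expand(Z)

print(halo_Z(1, {(1, 0, 0): 2}))     # q**2 + 2*q + 1
\end{verbatim}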
Physically $Q_{\rm Coulomb}(\gamma_j+N \gamma_k;\zeta;y;t)$ can be interpreted as the number of states
corresponding to the excitations of the fermionic oscillators of
charges $\ell_i\gamma_k$
in \eqref{eqor} acting on the fermionic vacuum with charge $\gamma_j$. As pointed out in
\cite{Andriyash:2010yf}, the same multiplet of states can be obtained from the filled Fermi sea
of charge $\gamma'_j=\gamma_j+M \gamma_{jk} \gamma_k$ by acting with fermionic oscillators of charges $\ell_i\gamma'_k=-\ell_i\gamma_k$,
provided they carry the same indices
\begin{equation} \label{esemireln}
\Omega'_{n,s}(\ell \gamma'_k) = \Omega_{n,s}(\ell \gamma_k)\ ,\qquad
\OmS'(\gamma_j';y;t) =\OmS(\gamma_j;y;t) \ .
\end{equation}
The particles of charge $\ell\gamma'_k$ and $\gamma'_j$ and the corresponding
indices can be associated to the nodes of a new
(generalized) quiver.
In this alternative description,
the bound states with charge $\gamma_j+N \gamma_k=\gamma_j'+(M\gamma_{jk}-N) \gamma_k'$
are described in terms of a halo of particles of charges $\ell_i\gamma'_k$ orbiting around a
core of charge $\gamma'_j$. To see the equivalence of the two descriptions,
one can start from the halo partition function
\begin{eqnarray}\displaystyle
\label{eqor1}
Z'&\equiv& \sum_N Q_{\rm Coulomb}'(\gamma_j'+(M \gamma_{jk} -N) \gamma_k';\zeta';y;t) \, q^N \nonumber\\
&=& q^{M\gamma_{jk} } \sum_{N'} Q_{\rm Coulomb}'(\gamma_j'+N' \gamma_k';\zeta';y;t) \, q^{-N'} \nonumber\\
&=& q^{M\gamma_{jk} } \, \OmS'(\gamma_j';y;t) \,
\prod_{\ell\ge 1} \prod_{J=1}^{\ell\gamma_{jk}}\prod_n \prod_s \left( 1 + q^{-\ell} t^s y^n (-y)^{2J - \ell \gamma_{jk}
-1}\right)^{\Omega_{n,s}'(\ell\gamma_k')}\, ,
\end{eqnarray}
where we have used the fact that
$\gamma'_{jk} = -\gamma_{jk}<0$ and $\zeta'_k>0$. Taking out the
factor of $ q^{-\ell} t^s y^n (-y)^{2J - \ell \gamma_{jk}
-1}$ from each term inside the product in \eqref{eqor1}, using \eqref{esemireln}
and making a change of variable $J\to \ell\gamma_{jk}-J+1$,
this can be rewritten as
\begin{eqnarray}\displaystyle
Z'&=& q^{M\gamma_{jk} - \gamma_{jk} \sum_{\ell,n,s} \ell^2 \Omega_{n,s}(\ell\gamma_k)} \,
t^{\gamma_{jk} \sum_{\ell,n,s} \ell\, s\, \Omega_{n,s}(\ell\gamma_k)}
y^{\gamma_{jk} \sum_{\ell,n,s} \ell\, n\, \Omega_{n,s}(\ell\gamma_k)}\nonumber \\
&&
\OmS(\gamma_j;y;t) \,
\prod_{\ell\ge 1} \prod_{J=1}^{\ell\gamma_{jk}}\prod_n \prod_s \left( 1 + q^{\ell} t^{-s} y^{-n}
(-y)^{2J - \ell \gamma_{jk}
-1}\right)^{\Omega_{n,s}(\ell\gamma_k)}\, .
\end{eqnarray}
The exponent of $q$ in the first factor on the right hand side vanishes due to \eqref{defM},
while the exponents of $t$ and $y$ in the second and third factors vanish due to the
Hodge duality symmetry $\Omega_{n,s}(\ell\gamma_k) = \Omega_{-n,-s}(\ell\gamma_k)$.
The same symmetry allows us to replace the $t^{-s} y^{-n}$ term inside the product by
$t^{s} y^{n}$. Thus we arrive at
\begin{equation}
Z'=
\OmS(\gamma_j;y;t) \,
\prod_{\ell\ge 1} \prod_{J=1}^{\ell\gamma_{jk}}\prod_n \prod_s \left( 1 + q^{\ell} t^{s} y^{n}
(-y)^{2J - \ell \gamma_{jk}
-1}\right)^{\Omega_{n,s}(\ell\gamma_k)}\, ,
\end{equation}
reproducing \eqref{eqor} whenever $\gamma_{jk}>0$.
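The chain of manipulations above can also be checked by brute force: with $\Omega'_{n,s}=\Omega_{n,s}$ and $\OmS'(\gamma_j')=\OmS(\gamma_j)$, eq.\eqref{eqor1} states that $Z'=q^{M\gamma_{jk}}\, Z|_{q\to 1/q}$, so the equality $Z'=Z$ amounts to the palindromy of $Z$ as a polynomial in $q$. This can be confirmed by continuing the halo partition sketch above (sample data ours; the Hodge duality symmetry $\Omega_{n,s}=\Omega_{-n,-s}$ is essential):
\begin{verbatim}
# Fermi flip: q^(M*gamma_jk) * Z(1/q) = Z, cf. eqs. (eqor), (eqor1), (defM)
gjk, Om = 2, {(1, 0, 0): 1}                       # sample data, M = 1
Z = halo_Z(gjk, Om)                               # 1 - q*(y + 1/y) + q**2
M = sum(l*l*m for (l, n, s), m in Om.items())
assert sp.expand(q**(M*gjk) * Z.subs(q, 1/q) - Z) == 0
\end{verbatim}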
If instead $\gamma_{jk}<0$ (keeping $\zeta_k<0<\zeta_j$) then the first quiver does not
carry any bound state of the center carrying charge $\gamma_j$ with centers carrying
charges $\ell_i\gamma_k$ with $\ell_i>0$. Thus $Q_{\rm Coulomb}(\gamma_j+N\gamma_k)$ vanishes
for $N>0$. The mutated quiver describing
centers of charges $\gamma_j'=\gamma_j$
and $\ell_i\gamma_k'=-\ell_i\gamma_k$, with indices $\OmS(\gamma_j;y;t)$ and
$\OmS(\ell_i\gamma_k;y;t)$ respectively, has
$\gamma'_{jk}>0$, $\zeta_j'<0<\zeta'_k$,
and therefore also no bound states of charge $\gamma_j'+N\gamma_k'$ for $N>0$.
The partition functions $Z=Z'=\OmS(\gamma_j;y;t)$ are therefore again the same on both sides.
This shows that, under the assumptions $\zeta_k<0< \zeta_j$ and \eqref{eposcon0}, the
semi-primitive Coulomb branch formula is invariant under the transformation
\begin{eqnarray}\displaystyle \label{esemigen}
&& \gamma_k'= -\gamma_k\ ,\quad
\gamma_j'=\gamma_j+M\, \text{max}(0, \gamma_{jk})\, \gamma_k \quad \hbox{for $j\ne k$},
\nonumber \\
&& \OmS(\gamma_j) = \OmS'(\gamma_j'), \quad
\OmS'(\ell\gamma'_k; y; t) = \OmS(\ell \gamma_k; y; t) \, \quad \forall \ell
\, .
\end{eqnarray}
This is a special case of the generalized mutation rules \eqref{mutDSZgen0}-\eqref{egench00},
providing the initial motivation for the conjectured invariance under the
generalized mutation transformation.
In the next subsections, we comment on
aspects of the generalized mutation rules which are not obvious consequences of
the semi-primitive case.
\subsection{Transformation rule of single-centered indices}
Let us now comment on the transformation rule \eqref{egench00} of
$\OmS(\alpha)$. The first equation for $\alpha=\gamma_j$ as well as the second equation
follow from the analysis of the Kronecker quiver given
above,\footnote{While this paper was in preparation, this observation
was also made in Ref. \cite{Cordova:2013bza}.} but we shall now justify why this is needed for
general $\alpha$.
Consider two generalized
quivers which are identical in all respects except that for some specific charge vector
$\alpha$, the first quiver has $\OmS(\alpha)=0$ while the second quiver has some
non-zero $\OmS(\alpha;y;t)$. Let us denote by $Q(\gamma)$ and $\hat Q(\gamma)$ the
Coulomb branch formul\ae\ for these two quivers. Now consider the difference
$\hat Q(\alpha+\ell\gamma_k) - Q(\alpha+\ell\gamma_k)$ for some positive integer $\ell$.
This difference must come from a bound state configuration of a center of charge
$\alpha$ with a set
of centers carrying charges parallel to $\gamma_k$. The index associated with this
configuration is encoded in the partition function $Z$ given in \eqref{eqor} with
$\gamma_j$ replaced by $\alpha$. Now consider the mutated version of both quivers with
respect to the $k$-th node. The difference
$\hat Q'(\alpha+\ell\gamma_k) - Q'(\alpha+\ell\gamma_k)$ must agree with
$\hat Q(\alpha+\ell\gamma_k) - Q(\alpha+\ell\gamma_k)$. Our previous analysis showing the
equality of $Z$ and $Z'$ guarantees
that this is achieved if we assume that the mutated quivers are identical
except for one change:
$\OmS'\left(\alpha+ M {\rm max}(0, \langle \alpha,\gamma_k\rangle)\, \gamma_k
; y;t\right)$ is zero in the first mutated quiver but is
equal to $\OmS(\alpha;y;t)$ for the second mutated quiver. The extra states in the second
quiver then appear from the bound state of a center carrying charge
$\alpha+ M\, \gamma_k\, {\rm max}(0, \langle \alpha,\gamma_k\rangle)$
and other states with charges
proportional to $-\gamma_k$.
This in turn justifies the transformation law of $\OmS$ given in the
first equation of \eqref{egench00}.
This transformation law is also consistent with the requirement that a monodromy,
exemplified in Figure \ref{figB2},
leaves invariant the physical properties of the BPS spectrum.
Since the monodromy transformation is induced by successive
application of two mutations, one with a node carrying charge proportional to $\gamma_k$ and
then with a node carrying charge proportional to $-\gamma_k$, the
transformation law \eqref{egench00} under a mutation implies that under a monodromy
we have $\tilde \Omega_{\rm
S}(\alpha+M\langle\alpha,\gamma_k\rangle \gamma_k)=\OmS(\alpha)$, where we denoted
by $\tilde \Omega_{\rm
S}$ the single centered indices after the monodromy transformation. On the other hand
a monodromy maps a BPS bound state with constituent
charges $\alpha$ to one with charges $\tilde
\alpha=\alpha+M\langle\alpha,\gamma_k\rangle\, \gamma_k$, while other physical
quantities such as the central charges and symplectic inner products remain
invariant. Moreover, the physical equivalence of the bound states
before and after the monodromy requires that the single centered
indices transform as $\tilde \Omega_{\rm
S}(\tilde\alpha)=\OmS(\alpha)$.
This agrees with the monodromy transformation law of $\OmS$
obtained by application of two successive mutations.
\subsection{Dependence on the choice of FI parameters} \label{sfichoice}
Note that while \eqref{einequal} fixes the sign of $\zeta_k$, it leaves
unfixed the signs and the magnitudes of the other $\zeta_i$'s as long as they
satisfy $\sum_i N_i\zeta_i=0$. Since for different choices of the FI parameters
we have different $Q_{\rm Coulomb}$ and $Q_{\rm Coulomb}'$, \eqref{mutinv0} apparently gives different
consistency relations for different choices of FI parameters. We shall now outline
a proof that once the mutation invariance has been tested for one choice
of FI parameters, its validity for other choices of FI parameters subject to the
restriction \eqref{einequal} is automatic.
We shall carry out the proof in steps.
First consider a vector $\gamma\in\Gamma^+\backslash \Gamma'^+$ (i.e. such that $\gamma=\sum_i n_i \gamma_i=\sum_i n_i'\gamma_i'$ with
non-negative $n_i$'s, but with some negative $n_k'$).
In this case $Q'(\gamma)$ (and the rational invariant
$\bar Q'(\gamma)$) vanishes in all chambers and hence $Q(\gamma)$ and
$\bar Q(\gamma)$ must
also vanish in all chambers.
We shall now prove that it is enough to check that
$\bar Q(\gamma)$ vanishes in any one chamber, by induction
on the rank $r=\sum n_i$.\footnote{Note that the
rank depends on whether we are using the original or the mutated quiver.
Here rank will refer to the rank in the original quiver.}
Suppose that we have
verified the vanishing of $\bar Q(\gamma)$ for all $\gamma\in \Gamma^+\backslash \Gamma'^+$ with rank $\le r_0$
for some integer $r_0$.
Now consider a $\gamma\in
\Gamma^+\backslash \Gamma'^+$ with rank $r=r_0+1$, and suppose
that $\bar Q(\gamma)$ vanishes in some chamber $c_+$. If we now go across a wall of
$c_+$ then the jump in $\bar Q(\gamma)$ across the wall will be given by the sum of products
of $\bar Q(\alpha_i)$ for appropriate charge vectors $\alpha_i$ satisfying
$\sum_i\alpha_i=\gamma$. Now in the original quiver each of the $\alpha_i$'s has
rank at most
$r_0$. Furthermore at least one of the $\alpha_i$'s must be in
$\Gamma^+\backslash \Gamma'^+$; to see this note that when we express $\gamma=\sum_i \alpha_i$
in the $\gamma'_i$ basis
the coefficient of $\gamma'_k$ is negative, and hence at least one of the $\alpha_i$'s
expressed in the $\gamma'_i$ basis has negative coefficient of $\gamma'_k$. Thus the
corresponding $\bar Q(\alpha_i)$ vanishes by assumption, causing the net jump in
$\bar Q(\gamma)$ to vanish. Thus the vanishing of $\bar Q(\gamma)$ in one chamber
implies its vanishing in all chambers.
Similarly, if $\gamma\in\Gamma'^+\backslash \Gamma^+$, the same argument shows that
the vanishing of $Q'(\gamma)$ in one chamber is sufficient to
ensure the vanishing in all chambers.
Now suppose that we have already established the vanishing of $Q(\gamma)$ for
$\gamma\in \Gamma^+\backslash \Gamma'^+$ and of $Q'(\gamma)$ for $\gamma\in \Gamma'^+\backslash \Gamma^+$ in all the chambers
subject to the restriction
\eqref{einequal}.
We now consider a general charge vector $\gamma$.
Our goal will be to show that to test the equivalence of $Q(\gamma)$ and
$Q'(\gamma)$, it is enough to verify this in one chamber for each $\gamma$.
We shall carry out this proof by induction.
Let us suppose that
we have
established the equality of $Q(\gamma)$ and $Q'(\gamma)$ for all $\gamma$
(except for $\gamma\parallel \gamma_k$)
of
rank $\le r_0$ in the $\gamma_i$ basis
in all chambers subject to the restriction
\eqref{einequal}.
We shall then prove that for a
charge vector $\gamma$ of rank $r_0+1$, the equality of
$Q(\gamma)$ and $Q'(\gamma)$ in any one chamber $c_+$ implies their
equality in all chambers.
For this consider a wall of marginal stability that forms a boundary of $c_+$.
Then as we approach this wall we can find a pair of primitive charge vectors
$\alpha_1$ and $\alpha_2$ such that $\gamma=M_1\alpha_1+M_2\alpha_2$ for
positive integer $M_1$ and $M_2$ and furthermore the FI parameters associated
with the vectors $\alpha_1$ and $\alpha_2$
change sign across the wall.
Using the wall-crossing formula, the jump in $Q(\gamma)$ across the wall
can be expressed as a sum of products
of $\bar Q(m\alpha_1+n\alpha_2)$ for integer $m,n$ in appropriate chambers relevant for
those quivers.
Similarly the jump
in $Q'(\gamma)$ can be expressed as a sum of products of $\bar Q'(m\alpha_1+n\alpha_2)$
for positive integer $m,n$ in the same chambers using the same wall-crossing formula.
Now since $m\alpha_1 + n\alpha_2$, being a constituent of the charge vector $\gamma$,
must have rank $\le r_0$
in the original quiver, the equality of $Q(m\alpha_1+n\alpha_2)$
and $Q'(m\alpha_1+n\alpha_2)$ in any chamber holds by assumption. This shows that the net jumps
in $Q(\gamma)$ and $Q'(\gamma)$ across the wall agree and hence $Q(\gamma)=
Q'(\gamma)$ on the other side of the wall.
There are two possible caveats in this argument. First we have to assume that
none of the constituents
carrying charge $m\alpha_1+n\alpha_2$
has charge proportional to $\gamma_k$ since the equality of $Q(\gamma)$ and
$Q'(\gamma)$ does not hold for these charge vectors. This is guaranteed as long as we
do not cross the $\zeta_k=0$ wall, i.e.\ as long as we obey the constraint \eqref{einequal}.
Second, we have implicitly assumed that for every possible set
of constituents\footnote{Here by constituent we do not mean only single-centered
constituents but also bound systems whose single-centered
constituents remain at finite separation
as we approach the wall. The index carried by such a constituent of charge $\alpha$ is
given by $Q(\alpha)$ in the appropriate chamber.}
in the first quiver there is a corresponding set of constituents in the
second quiver carrying the same index
and vice versa. This is not true in general since there may be constituents in the first
quiver whose image in the second quiver may contain one or more $\alpha_i$'s with negative
coefficient of $\gamma_k'$ and hence is not a part of the second quiver.
These are the $\alpha_i$'s belonging to $\Gamma^+\backslash \Gamma'^+$.
The reverse is also possible. However since the vanishing
of $Q(\alpha_i)$ for all $\alpha_i\in \Gamma^+\backslash \Gamma'^+$ and
of $Q'(\alpha_i)$ for all $\alpha_i\in \Gamma'^+\backslash \Gamma^+$
has already been established,
these possible non-matching contributions vanish identically
and we get the equality of $Q(\gamma)$ and $Q'(\gamma)$ in all chambers.
This establishes that, for any $\gamma\in\Gamma$,
the equality of $Q(\gamma)$ and $Q'(\gamma)$ in all chambers
follows from the equality in any given chamber.
We end by giving a physical motivation for
the restriction on the FI parameters
given in \eqref{einequal}. As explained earlier, in
${\cal N}=2$ supersymmetric theories where quiver invariants capture the index
of BPS states, the mutation $\mu^+_k$ takes place on walls
where the central charge $Z(\gamma_k)$ leaves the half-plane distinguishing
BPS states from anti-BPS states, while $Z(-\gamma_k)$ enters the same half-plane.
This clearly
requires that in the complex plane the ray of $Z(\gamma_k)$ lies to the extreme left of the
ray of any other $Z(\gamma)$ inside the BPS half-plane. Now the FI
parameter associated with $\gamma_k$ for a particular quiver of total charge $\gamma$ is
given by
\begin{equation}
\zeta_k = \text{Im} (Z(\gamma_k) / Z(\gamma)) \, .
\end{equation}
The condition on $Z(\gamma_k)$ mentioned above requires that $\zeta_k$ is negative.
However it does not specify its magnitude, nor the
magnitude or signs of the $\zeta_i$'s carried by the other
constituents, as those depend both on the phases of $Z(\gamma_i)$ and on their
magnitudes. Thus we see from this physical consideration that if mutation is to be a symmetry,
it must hold under the condition \eqref{einequal} with no further constraint on the
other $\zeta_i$'s.
\section{Examples of ordinary quiver mutations} \label{sord}
In this section we shall test mutation invariance of the Coulomb branch formula
for ordinary quivers.
For this we take $\OmS(\gamma)$ to satisfy \eqref{OmSquivers}
and use the transformation law \eqref{egench00} of $\OmS(\gamma)$ under mutation.
We also
use mutation invariance to
compute single-centered indices for various quivers where a
direct analysis of the Higgs branch is forbidding. Since ordinary mutation is known
to be a symmetry of the quiver Poincar\'e polynomial, the analysis of this section can be
interpreted as a test of the Coulomb branch formula \eqref{essp1}, \eqref{essp2} and the
transformation rule \eqref{egench00} for single-centered indices.
\bigskip
\noindent {\bf Example 1}: Consider a 3-node quiver with charge vectors $\gamma_1$,
$\gamma_2$ and $\gamma_3$ associated with the nodes satisfying
\begin{equation}
\gamma_{12}=a, \quad \gamma_{23} = b, \quad \gamma_{31} = c, \qquad
\zeta_1< 0, \quad \zeta_2, \zeta_3>0, \quad a,b,c>0\, .
\end{equation}
Then mutation with respect to the node 1 generates a new quiver with basis vectors
\begin{equation} \label{e3.8}
\gamma_1'=-\gamma_1, \quad \gamma_2'=\gamma_2, \quad \gamma_3'
= \gamma_3 + c\, \gamma_1,
\end{equation}
DSZ matrix
\begin{equation}
\gamma'_{12}=-a, \quad \gamma'_{23}=b - ac,
\quad \gamma'_{31} = -c,
\end{equation}
FI parameters
\begin{equation}
\zeta_1'= - \zeta_1, \quad \zeta_2' = \zeta_2, \quad \zeta_3'=\zeta_3 + c\, \zeta_1\, ,
\end{equation}
and dimension vector
\begin{equation}
\gamma = N_1' \gamma_1' + N_2' \gamma_2' + N_3' \gamma_3', \qquad
N_1' = c N_3 - N_1, \quad N_2' = N_2, \quad N_3' = N_3\, .
\end{equation}
The original and mutated quivers are depicted in Fig.\ref{f1}.
Mutation invariance \eqref{mutinv0} requires
\begin{equation} \label{eqcmut}
Q(N_1, N_2, N_3) =Q'(N_1', N_2', N_3')\ ,
\end{equation}
where the l.h.s. is the shorthand notation for
$Q_{\rm Coulomb}\left(\sum_{i=1}^3 N_i \gamma_i;\zeta; y;t\right) $
while the r.h.s. is the shorthand notation for
$Q_{\rm Coulomb}'\left(\sum_{i=1}^3 N_i' \gamma_i';\zeta'; y;t\right) $,
computed with $\gamma_i'$ as the basis vectors and hence
$\gamma'_{ij}$ as the DSZ products.
We shall also use
$\OmS(N_1, N_2, N_3)$ to denote $\OmS(\sum_{i=1}^3 N_i \gamma_i)$
and $\OmS'(N_1', N_2', N_3')$ to denote $\OmS'(\sum_{i=1}^3 N_i' \gamma_i')$.
Eq.\eqref{egench00} then gives
\begin{equation} \label{eomtrs}
\OmS(N_1, N_2, N_3) = \OmS'(cN_3 - N_1 - \text{max} (0, N_3 c - N_2 a), N_2, N_3)
= \OmS'(\text{min}(N_3 c, N_2 a)-N_1, N_2, N_3)\, .
\end{equation}
\begin{figure}
\begin{center}
$$
\xymatrix{
\gamma_1 \ar[rrrr]|{a} & & & & \gamma_2 \ar[lldd]|{b} \\ &&&& \\
& & \gamma_3 \ar[lluu]|{c} & & } \qquad
\xymatrix{
\gamma_1' \ar[rrdd]|{c} & & & & \gamma_2' \ar[llll]|{a} \\ &&&& \\
& & \gamma_3' \ar[rruu]|{ac-b} & & }
$$
\end{center}
\caption{The original quiver (left) and the mutated quiver (right) of examples 1 and 2.
\label{f1}}
\end{figure}
Let us choose
\begin{equation}
a=3, \quad b=4, \quad c=5, \quad \zeta_1 = -5.71, \quad \zeta_2=2.56 \, N_1/N_2+.01/N_2, \quad
\zeta_3=3.15 \, N_1/N_3 -.01/N_3\, .
\end{equation}
Then we get
\begin{eqnarray}\displaystyle
&& \gamma'_{12}=-3, \quad \gamma'_{23}=-11, \quad \gamma'_{31}=-5, \nonumber \\
&& \zeta_1'=5.71, \quad \zeta_2'=2.56\, N_1/N_2+.01/N_2, \quad \zeta_3'=3.15\, N_1/N_3 - 28.55
-.01/N_3\, .
\end{eqnarray}
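These data are reproduced by the mutate sketch of \S\ref{sgenmut}, with node 1 corresponding to $k=0$ and the FI parameters evaluated at $\vec N=(1,1,1)$, where they sum to zero:
\begin{verbatim}
dsz  = [[0, 3, -5], [-3, 0, 4], [5, -4, 0]]   # a=3, b=4, c=5
zeta = [-5.71, 2.57, 3.14]                    # FI values at N=(1,1,1), sum = 0
dszp, zetap, Np, basis = mutate(0, +1, 1, dsz, zeta, [1, 1, 1])
print(dszp[0][1], dszp[1][2], dszp[2][0])     # -3 -11 -5
print(Np)                                     # [4 1 1], i.e. N1' = 5 N3 - N1
print(zetap)                                  # approx [ 5.71  2.57 -25.41]
\end{verbatim}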
Some of the relations following from \eqref{eomtrs} are
\begin{equation} \label{eomsome}
\OmS(N,1,1)=\OmS'(3-N,1,1) \quad \Rightarrow \quad
\OmS(N,1,1)=0 = \OmS'(N,1,1) \quad \text{for} \quad N\ge 3\, .
\end{equation}
We shall now check the invariance of the Coulomb branch formula under mutation.
Eq.\eqref{eqcmut} gives
\begin{equation} \label{emut3node}
Q(N,1,1) = Q'(5-N,1,1) \quad \text{for} \quad 0\le N\le 5, \quad Q(N,1,1)=0 = Q'(N,1,1)\quad
\text{for} \quad N\ge 6
\, .
\end{equation}
Now explicit evaluation gives
\begin{eqnarray}\displaystyle
Q(1,1,1) &=& 1/y^4 + 2/y^2 + 3+ 2 y^2 + y^4
+ \OmS(1,1,1), \nonumber \\
Q'(4,1,1) &=&
1/y^4 + 2/y^2 + 3+ 2 y^2 + y^4 +
\OmS'(2,1,1)
- (y + y^{-1})
\OmS'(3,1,1)
+ \OmS'(4,1,1) \, .\nonumber \\
\end{eqnarray}
Using \eqref{eomsome} we see that $Q(1,1,1)$ and $Q'(4,1,1)$ agree.
Next we compute
\begin{eqnarray}\displaystyle
Q(2,1,1) &=&
-(y^{-3} + 2 y^{-1} + 2 y + y^3) - (y^{-1} + y) \OmS(1,1,1)
+
\OmS(2,1,1)\nonumber \\
Q'(3,1,1) &=& \OmS'(1,1,1) -(y^{-3} + 2 y^{-1} + 2 y + y^3) - (y^{-1} + y)
\OmS'(2,1,1) + \OmS'(3,1,1) \, . \nonumber \\
\end{eqnarray}
Again using \eqref{eomsome} we see that $Q(2,1,1)$ and $Q'(3,1,1)$ agree.
Similarly we have
\begin{eqnarray}\displaystyle
Q(3,1,1) &=&
1 + \OmS(1,1,1) + \OmS(3,1,1) - (y + y^{-1}) \OmS(2,1,1)\, , \nonumber \\
Q'(2,1,1) &=& 1 + \OmS'(2,1,1) - (y + y^{-1}) \OmS'(1,1,1)\, .
\end{eqnarray}
These two agree as a consequence of \eqref{eomsome}.
We also have
\begin{eqnarray}\displaystyle
Q(4,1,1) &=& \OmS(2,1,1) + \OmS(4,1,1)
- (y + y^{-1}) \OmS(3,1,1)
\nonumber \\
Q'(1,1,1) &=&\OmS'(1,1,1)\, ,
\end{eqnarray}
which are in agreement as a consequence of \eqref{eomsome}.
Finally we have
\begin{eqnarray}\displaystyle
Q(5,1,1) &=& \OmS(3,1,1) +\OmS(5,1,1) - (y + y^{-1}) \OmS(4,1,1)\, , \nonumber \\
Q'(0,1,1) &=& 0\, ,
\end{eqnarray}
\begin{eqnarray}\displaystyle
Q(0,1,1) &=& -(y^{-3} + y^{-1} + y + y^3) \, , \\
Q'(5,1,1) &=& -(y^{-3} + y^{-1} + y + y^3) +\OmS'(3,1,1) - (y+y^{-1}) \OmS'(4,1,1)
+ \OmS'(5,1,1) \, . \nonumber
\end{eqnarray}
Again these equations are in agreement due to \eqref{eomsome}.
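The cancellations above can also be verified mechanically. A small sympy sketch (symbols ours) checks the first two pairs, using only \eqref{eomsome} together with $\OmS(0,1,1)=0$, which follows from \eqref{OmSquivers}:
\begin{verbatim}
import sympy as sp
y = sp.symbols('y')
O1, O2 = sp.symbols('O1 O2')              # OmS(1,1,1) and OmS(2,1,1)
OmS  = {0: 0, 1: O1, 2: O2, 3: 0, 4: 0, 5: 0}
OmSp = {N: OmS.get(3 - N, 0) for N in range(6)}   # eq. (eomsome)

c = y**-4 + 2*y**-2 + 3 + 2*y**2 + y**4
assert sp.simplify((c + OmS[1])                             # Q(1,1,1)
       - (c + OmSp[2] - (y + 1/y)*OmSp[3] + OmSp[4])) == 0  # Q'(4,1,1)

d = y**-3 + 2*y**-1 + 2*y + y**3
assert sp.simplify((-d - (y + 1/y)*OmS[1] + OmS[2])         # Q(2,1,1)
       - (OmSp[1] - d - (y + 1/y)*OmSp[2] + OmSp[3])) == 0  # Q'(3,1,1)
\end{verbatim}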
We have not tested the vanishing of $Q(N,1,1)$ and $Q'(N,1,1)$ for $N\ge 6$
due to the increase in the computational time, but we shall test similar relations
involving other quivers later.
So far we have not used any explicit results for $\OmS$ or $\OmS'$. We
now note that $\OmS'(1,1,1)$ vanishes since the corresponding $\gamma'_{ij}$'s fail to
satisfy the triangle inequality. The single-centered index
$\OmS(1,1,1;t)=9$ is easily
computed from the results in \cite{Bena:2012hf,Manschot:2012rx,Lee:2012naa}.
Thus we have
\begin{equation} \label{eresom1}
\OmS'(1,1,1;t)=\OmS(2,1,1;t)=0,
\quad
\OmS'(2,1,1;t)=\OmS(1,1,1;t)=9\, .
\end{equation}
It will be interesting to check the prediction for $\OmS'(2,1,1)$ by direct computation.
Note that in general $\OmS(\gamma)\neq \OmS'(\gamma)$.
For example $4\gamma_1'+\gamma_2'+\gamma_3'=
\gamma_1+\gamma_2+\gamma_3$
and $\OmS'(4,1,1) \neq
\OmS(1,1,1)$.
\bigskip
\noindent{\bf Example 2:} We again consider a 3-node quiver with
\begin{equation}
a=2, \quad b=2, \quad c=2, \quad \zeta_1 = -3.1 , \quad \zeta_2= N_1/N_2+.2/N_2, \quad
\zeta_3=2.1\, N_1/N_3 -.2/N_3\, ,
\end{equation}
and mutate with respect to the node 1.
Then we get
\begin{eqnarray}\displaystyle
&& \gamma'_{12}=-2, \quad \gamma'_{23}=-2, \quad \gamma'_{31}=-2, \nonumber \\ &&
\zeta_1'=3.1, \quad \zeta_2'=N_1/N_2+.2/N_2, \quad \zeta_3'=2.1\, N_1/N_3 - 6.2 - .2/N_3\, ,
\end{eqnarray}
\begin{equation} \label{entrs}
N_1'=2 N_3-N_1, \quad N_2'=N_2, \quad N_3'=N_3\, .
\end{equation}
Eq.\eqref{eomtrs} gives
\begin{equation} \label{econs1}
\OmS(N_1, N_2, N_3) = \OmS'( \text{min}(2N_3, 2 N_2)- N_1, N_2, N_3)\, .
\end{equation}
On the other hand since
the new quiver is the same as the old one with the arrows reversed and
different FI parameters, and since $\OmS$ is independent of the FI parameters
we have
\begin{equation} \label{ecycle}
\OmS'(N_1 ,N_2,N_3)
= \OmS(N_3,N_2,N_1)\, .
\end{equation}
Furthermore cyclic invariance of the quiver
implies that $\OmS(N_1,N_2,N_3)$
is invariant under cyclic permutations of $(N_1,N_2, N_3)$. Using these relations we
can severely constrain the values of $\OmS$.
For example we have\footnote{The fact that $\OmS(N,1,1)$ vanishes is
consistent with the fact that in the chamber
$\zeta_2>0,\zeta_1\to 0^-$ the moduli space is a codimension $Na$ surface in
$\mathbb{P}^{b-1}\times G(N,c)$, with dimension $1-N^2$.}
\begin{equation} \label{eomform}
\OmS(N,1,1) = \OmS'(2-N,1,1)=\OmS(1,1,2-N)=\OmS(2-N,1,1) \, ,
\end{equation}
and as a consequence
\begin{equation} \label{eomform2}
\OmS(N,1,1)=0 \quad
\text{for} \quad N\ge 2\, .
\end{equation}
More generally we get
\begin{equation} \label{econs2}
\OmS(N_1, N_2,N_3) = 0 \quad \text{for} \quad N_1\ge \text{min}(2N_2, 2 N_3)\, .
\end{equation}
Together with cyclic symmetry this implies that a necessary condition
for getting non-vanishing $\OmS(N_1,N_2, N_3)$ is that
each $N_i$ should be strictly less than twice each of the other two $N_i$'s.
Using cyclic symmetry we can take $N_1$ to be the largest of $(N_1, N_2, N_3)$.
The mutation rule \eqref{econs1} then equates $\OmS(N_1,N_2, N_3)$
to $\OmS'(N_1', N_2, N_3)=\OmS(N_3,N_2, N_1')$
with $N_1'\le N_1$. The equality sign holds only if $N_1=N_2=N_3$. Thus unless
$N_1=N_2=N_3$ we can repeatedly use mutation and cyclic symmetry to reduce the
rank of the quiver until the maximum $N_i$ becomes greater than or equal to twice
the minimum $N_i$, and then $\OmS$ vanishes by \eqref{econs2}. Thus the only non-vanishing
$\OmS$ in this case are $\OmS(N,N,N)$.
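The reduction argument of the last paragraph is easily automated. The sketch below (pure Python, names ours) rotates the largest $N_i$ to the front, applies \eqref{econs1} together with \eqref{ecycle}, declares $\OmS$ to vanish when \eqref{econs2} or membership in $\Gamma^+$ is violated, and confirms on a scan that only $(N,N,N)$ escapes the constraints:
\begin{verbatim}
def forced_to_vanish(N, max_steps=100):
    # returns True if OmS(N1,N2,N3) = 0 follows from (econs1), (econs2),
    # (ecycle) and cyclic symmetry; (N,N,N) is the only unconstrained case
    N = list(N)
    for _ in range(max_steps):
        if any(n < 0 for n in N):
            return True                        # outside Gamma^+
        if N[0] == N[1] == N[2]:
            return False                       # OmS(N,N,N) unconstrained here
        if max(N) >= 2*min(N):
            return True                        # eq. (econs2) + cyclicity
        while N[0] != max(N):                  # rotate largest entry to front
            N = N[1:] + N[:1]
        N1, N2, N3 = N
        N = [N3, N2, min(2*N2, 2*N3) - N1]     # eq. (econs1) with (ecycle)
    raise RuntimeError('no decision reached')

assert all(forced_to_vanish((a, b, c)) == (not a == b == c)
           for a in range(1, 8) for b in range(1, 8) for c in range(1, 8))
\end{verbatim}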
We know from \cite{Bena:2012hf} that in the Abelian case, $\OmS(1,1,1;t)=1$.
We now proceed to test the invariance of the Coulomb branch formula under mutation.
From the general equation $Q(N_1,N_2,N_3)=Q'(2N_3-N_1,N_2,N_3)$ that follows
from \eqref{entrs}, we get in particular
\begin{equation} \label{eqform}
Q(N,1,1) = Q'(2-N, 1,1) \quad \Rightarrow \quad Q(N,1,1)=0=Q'(N,1,1) \quad \text{for} \quad N\ge 3\, .
\end{equation}
Explicit calculation gives
\begin{eqnarray}\displaystyle
&& Q(1,1,1) = 1 + \OmS(1,1,1), \quad Q'(1,1,1)=1+\OmS'(1,1,1)\, , \nonumber \\
&& Q(2,1,1) = \OmS(2,1,1), \quad Q'(0,1,1)=0 \, , \nonumber \\
&& Q(0,1,1) = -(y+y^{-1}), \quad Q'(2,1,1) = -(y+y^{-1}) + \OmS'(2,1,1)\, , \nonumber \\
&& Q(3,1,1) = \OmS(3,1,1), \quad Q(4,1,1) = \OmS(4,1,1), \nonumber \\
&& Q'(3,1,1) = \OmS'(3,1,1), \quad
Q'(4,1,1) = \OmS'(4,1,1) \, .
\end{eqnarray}
These results are all consistent with \eqref{eqform} after we use eqs.\eqref{eomform},
\eqref{eomform2}.
More generally, for any 3-node quiver with $a,b>0$ and
$c=2$, the Abelian representation $(1,1,1)$
is mapped by a mutation on node 1 to an Abelian representation.
We know from the analysis of $\OmS$ for $\vec N=(1,1,1)$ given
in \cite{Bena:2012hf,Lee:2012sc,Lee:2012naa,Manschot:2012rx}, that the only
non-vanishing $\OmS$ arise for $a=b\geq 2$.
In this case \eqref{eomtrs} gives
\begin{equation}
\OmS(N,1,1) = \OmS'(2-N,1,1)\, .
\end{equation}
In particular $\OmS(1,1,1)=\OmS'(1,1,1)$.
On the other hand since in each of these cases
the arrow
multiplicities computed using \eqref{e3.8}
are just reversed under the mutation,
the equality of $\OmS(1,1,1)$ and $\OmS'(1,1,1)$ follows
automatically, confirming the transformation laws of $\OmS$ under
mutation.
Using this we can verify the equality
of $Q(1,1,1)$ and $Q'(1,1,1)$.
\bigskip
\noindent{\bf Example 3:}
Next we consider the 4-node quiver
\begin{equation}\label{eq:quivrepK}
\xymatrix{
\gamma_1 \ar[rrr]|{a} & & & \gamma_2 \ar[dd]|{b} \\
& & & \\ \gamma_4 \ar[uu]|{d} & & &
\gamma_3 \ar[lll]|{c}}
\end{equation}
with multiplicities of the arrows $a=5$, $b=5$, $c=2$ and $d=1$. We
choose for the FI parameters
\begin{equation}\label{efichoice}
\vec \zeta=\left(\frac{25\, N_4 + .1}{N_1}, \frac{17\, N_4+.2}{N_2},\frac{3\, N_4-.3}{N_3},-45\right)\, .
\end{equation}
We now perform a mutation at node 4. The mutated
quiver is:
\begin{equation}\label{eq:quivrepK2}
\xymatrix{
\gamma_1' \ar[rrr]|{a} \ar[dd]|{d} & & & \gamma_2' \ar[dd]|{b} \\
& & & \\ \gamma'_4 \ar[rrr]|{c} & & &
\gamma'_3 \ar[llluu]|{cd}}
\end{equation}
with
\begin{equation}
\gamma_1'=\gamma_1, \quad \gamma_2'=\gamma_2, \quad
\gamma'_3=\gamma_3+2\gamma_4, \quad \gamma_4'=-\gamma_4\, ,
\end{equation}
\begin{equation} \label{ezetamut}
\vec\zeta' = \left(\frac{25\, N_4+.1}{N_1}, \frac{17\, N_4+.2}{N_2},\frac{3\, N_4-.3}{N_3}-90,45\right)\, ,
\end{equation}
\begin{equation}
N_1'=N_1, \quad N_2'=N_2, \quad N_3'=N_3, \quad N_4'=c\, N_3 - N_4 = 2\, N_3-N_4\, .
\end{equation}
Note that the multiplicity $c$ is chosen such that the Abelian representation
$\vec N=(1,1,1,1)$ is mapped to the Abelian representation $\vec N'=(1,1,1,1)$.
More generally Eq. \eqref{mutinv0} implies
\begin{equation}
Q(N_1,N_2, N_3, N_4) = Q'(N_1, N_2, N_3, c N_3 - N_4) = Q'(N_1, N_2, N_3, 2 N_3 - N_4)\, .
\end{equation}
Thus we should have
\begin{eqnarray}\displaystyle \label{ecomp}
&& Q(1,1,1,0) = Q'(1,1,1,2), \quad
Q(1,1,1,1) = Q'(1,1,1,1), \quad Q(1,1,1,2) = Q'(1,1,1,0), \nonumber \\
&&
Q(1,1,1,N) = 0=Q'(1,1,1,N) \quad \hbox{for \quad $N\ge 3$} \, .
\end{eqnarray}
In order to test this we need to first study the transformation law of $\OmS$. Eq.\eqref{egench00}
gives
\begin{equation}
\begin{split}
\OmS(N_1,N_2,&N_3,N_4) = \OmS'(N_1,N_2,N_3,c N_3 - N_4 - \text{max}(c N_3 - d N_1, 0))
\\
=& \OmS'(N_1,N_2,N_3, \text{min}(c N_3, dN_1) - N_4)
= \OmS'(N_1,N_2,N_3, \text{min}(2 N_3, N_1) - N_4)\, .
\end{split}
\end{equation}
This gives in particular
\begin{eqnarray}\displaystyle \label{e4node1}
&& \OmS(1,1,1,1)=\OmS'(1,1,1,0), \quad \OmS(1,1,1,0)=\OmS'(1,1,1,1)\nonumber \\
&& \OmS(1,1,1,N) = 0 = \OmS'(1,1,1,N) \quad \text{for} \quad N\ge 2\, .
\end{eqnarray}
We now proceed to verify \eqref{ecomp}. One finds using \eqref{essp1}:
\begin{eqnarray}
\label{eq:invQ4}
Q(1,1,1,0) &=& 1/y^8 + 2/y^6 + 3/y^4 + 4/y^2 +5+ 4 y^2 + 3 y^4 + 2 y^6 + y^8\nonumber \\
Q(1,1,1,1)&=&y^{-8}+3y^{-6}+5y^{-4}+7y^{-2}+9+7y^{2}+5y^{4}+3y^{6}+y^{8}
+\OmS(1,1,1,1) \nonumber\\
Q(1,1,1,2)&=&y^{-6}+2y^{-4}+3y^{-2}+4+3y^{2}+2y^{4}+y^{6}
+ \OmS(1,1,1,1) + \OmS(1,1,1,2) \nonumber \\
Q(1,1,1,3) &=& \OmS(1,1,1,2) + \OmS(1,1,1,3) \nonumber \\
Q(1,1,1,4) &=& \OmS(1,1,1,3) +\OmS(1,1,1,4)\, ,
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:invQ4m}
Q'(1,1,1,2) &=& 1/y^8 + 2/y^6 + 3/y^4 + 4/y^2 + 5+ 4 y^2 + 3 y^4 + 2 y^6 + y^8
\nonumber \\
&& + \OmS'(1,1,1,1) + \OmS'(1,1,1,2) \nonumber \\
Q'(1,1,1,1)&=&y^{-8}+3y^{-6}+5y^{-4}+7y^{-2}+9+7y^{2}+5y^{4}+3y^{6}+y^{8}\nonumber \\
&&+\OmS'(1,1,1,0)+\OmS'(1,1,1,1) \\
Q'(1,1,1,0)&=&y^{-6}+2y^{-4}+3y^{-2}+4+3y^{2}+2y^{4}+y^{6}+ \OmS'(1,1,1,0)\, .\nonumber
\end{eqnarray}
Compatibility of these expressions with
\eqref{eq:invQ4}, \eqref{ecomp} follows directly from \eqref{e4node1} and \eqref{e4node2}.
In particular the last two equations of
\eqref{eq:invQ4} are consistent with \eqref{ecomp}, \eqref{e4node1}.
We can also test the vanishing of $Q'(1,1,1,N)$ for $N\ge 3$. For $\zeta'$ given by
\eqref{ezetamut} with $(N_1,N_2,N_3,N_4)=(1,1,1,2-N)$, we get
\begin{eqnarray}\displaystyle
Q'(1,1,1,3) &=& \OmS'(1,1,1,2)+\OmS'(1,1,1,3)\, , \nonumber \\
Q'(1,1,1,4) &=& \OmS'(1,1,1,3)+ \OmS'(1,1,1,4)\ .
\end{eqnarray}
These vanish using \eqref{e4node1}.
Note that in the above analysis we have not explicitly used
the values of $\OmS$ and $\OmS'$ or tested \eqref{e4node1}.
{}From direct analysis of 3-node and 4-node cyclic
quivers given in \cite{Bena:2012hf,Lee:2012sc,Lee:2012naa,Manschot:2012rx}
we know that $\OmS(1,1,1,0;t)=0$
(as there is no loop) and $\OmS(1,1,1,1;t)=4$.
Thus we have
\begin{equation} \label{e4node2}
\OmS'(1,1,1,1;t) = \OmS(1,1,1,0;t)=0, \quad \OmS'(1,1,1,0;t) = \OmS(1,1,1,1;t)=4\, .
\end{equation}
The value of $\OmS'(1,1,1,0;t)$ given in
\cite{Bena:2012hf,Lee:2012sc,Lee:2012naa,Manschot:2012rx}
agrees with the result given
above.
Vanishing of $\OmS'(1,1,1,1;t)$ can be
seen by direct analysis of the Higgs branch moduli space of this quiver.
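As a final bookkeeping check, the relations \eqref{ecomp} can be verified directly from \eqref{eq:invQ4}, \eqref{eq:invQ4m} after imposing \eqref{e4node1} and \eqref{e4node2}; a short sympy sketch (the symbol O below stands for $\OmS(1,1,1,1)=\OmS'(1,1,1,0)$):
\begin{verbatim}
import sympy as sp
y, O = sp.symbols('y O')          # O = OmS(1,1,1,1) = OmS'(1,1,1,0)
P9 = sum((9 - 2*abs(k))*y**(2*k) for k in range(-4, 5))
P5 = sum((5 - abs(k))*y**(2*k) for k in range(-4, 5))
P4 = sum((4 - abs(k))*y**(2*k) for k in range(-3, 4))
Q  = {0: P5, 1: P9 + O, 2: P4 + O}   # eq. (eq:invQ4), with OmS(1,1,1,2) = 0
Qp = {2: P5, 1: P9 + O, 0: P4 + O}   # eq. (eq:invQ4m), with OmS'(1,1,1,1) = 0
assert all(sp.expand(Q[N] - Qp[2 - N]) == 0 for N in (0, 1, 2))
\end{verbatim}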
\section{Examples of generalized quiver mutations} \label{sgen}
In this section we
test the conjectured invariance of the Coulomb branch formula for generalized
quivers where the condition \eqref{OmSquivers} is relaxed.
\bigskip
\noindent {\bf Example 1}:
\medskip
We consider the generalized Kronecker quiver with $m\equiv \gamma_{12}>0$ arrows
from node 1 to node 2,
with $\OmS(k\gamma_1;y;t)$
and $\OmS(\ell \gamma_2;y;t)$
given by arbitrary symmetric Laurent polynomials and
$\OmS(\gamma)=0$ otherwise.
In
the chamber $\zeta_1<0<\zeta_2$ the total index for charge $\gamma$ coincides
with $\OmS(\gamma)$ as there are no bound states with two or more centers.
The index
in the other chamber $\zeta_1>0>\zeta_2$, which we shall denote by
$Q(N_1, N_2)$, can be obtained using the wall-crossing formula.
We shall define, as in \eqref{essp1},
\begin{eqnarray}\displaystyle
\bOmS(\gamma;y; t) &=& \sum_{m|\gamma} {1\over m} {y-y^{-1}\over y^m - y^{-m}} \,
\OmS(\gamma/m; y^m; t^m)\, , \nonumber \\
{\bar Q}_{\rm Coulomb}(\gamma;y; t) &=& \sum_{m|\gamma} {1\over m} {y-y^{-1}\over y^m - y^{-m}} \,
Q_{\rm Coulomb}(\gamma/m; y^m; t^m)\, ,
\end{eqnarray}
and drop the arguments $y$ and $t$ from $\bOmS$ to avoid clutter.
Using the shorthand notation $Q(p,q)$ for
$Q_{\rm Coulomb}(p\gamma_1+q\gamma_2;\zeta;y;t)$ etc.
the wall-crossing formula then takes the form
\begin{equation} \label{ewallc}
\prod_{p,q\atop p/q \downarrow} \exp\left[ \bar Q(p,q) e_{p,q}\right]
= \exp\left[\sum_\ell \bOmS(\ell\gamma_2) e_{0,\ell}\right]
\exp\left[\sum_k \bOmS(k\gamma_1) e_{k,0}\right] \,
\, ,
\end{equation}
where $e_{p,q}$ are elements of an algebra satisfying the commutation relation
\begin{eqnarray}\displaystyle
&& \left[ e_{p,q}, e_{p',q'}\right] = \kappa(\gamma, \gamma') \, e_{p+p',q+q'}, \quad
\gamma\equiv p\gamma_1+q\gamma_2, \quad \gamma'\equiv p'\gamma_1+q'\gamma_2, \nonumber\\
&&
\kappa(\gamma, \gamma')\equiv {(-y)^{\langle\gamma, \gamma'\rangle} -
(-y)^{-\langle\gamma, \gamma'\rangle}\over y - y^{-1}}\, .
\end{eqnarray}
The product over $p,q$
runs over non-negative integers $p,q$ and
the symbol $p/q\downarrow$ on the left hand side of \eqref{ewallc} indicates that the product is
ordered such that the ratio $p/q$ decreases from left to right.
If $p/q=p'/q'$ then the order is irrelevant since
$e_{p,q}$ and $e_{p',q'}$ will commute. Taking the $p=0$ terms on the left hand side
to the right hand side and using the fact that $\bar Q(0,\ell)=\bOmS(\ell\gamma_2)$,
we can express \eqref{ewallc} as
\begin{equation} \label{ewallcmod}
\begin{split}
\prod_{p,q\atop p\ne 0, \, p/q \downarrow}\!\!\!\! \exp\left[ \bar Q(p,q) e_{p,q}\right]
=& \exp\left[\sum_\ell \bOmS(\ell\gamma_2) e_{0,\ell}\right] \,
\exp\left[\sum_k \bOmS(k\gamma_1) e_{k,0}\right] \,
\exp\left[-\sum_\ell \bOmS(\ell\gamma_2) e_{0,\ell}\right] \, .
\end{split}
\end{equation}
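Identities of this kind are straightforward to test on a computer: setting $e_{p,q}=X_{p,q}/(y-y^{-1})$, where $X_a X_b = (-y)^{\langle a,b\rangle}\, X_{a+b}$ defines the associative quantum torus, reproduces the commutator above, and all exponentials truncate to finite sums once charges with $p+q$ above a cutoff are dropped. The following sympy sketch (conventions and names ours) verifies \eqref{ewallc} for the ordinary Kronecker quiver with $\gamma_{12}=1$, whose spectrum in the chamber $\zeta_1>0>\zeta_2$ consists of $\gamma_1$, $\gamma_1+\gamma_2$ and $\gamma_2$ with unit indices, i.e.\ the pentagon identity:
\begin{verbatim}
import sympy as sp
from collections import defaultdict

y = sp.symbols('y')
G12, CUT = 1, 3        # Kronecker quiver gamma_12 = 1; drop charges p+q > CUT

def mul(A, B):
    # associative quantum torus X_a X_b = (-y)^<a,b> X_{a+b}, truncated
    C = defaultdict(lambda: sp.Integer(0))
    for (p, q), u in A.items():
        for (r, s), v in B.items():
            if p + r + q + s <= CUT:
                C[(p + r, q + s)] += u*v*(-y)**(G12*(p*s - q*r))
    return dict(C)

def expo(A):
    # exp(A); A supported on nonzero charges, hence nilpotent modulo cutoff
    R, T = {(0, 0): sp.Integer(1)}, {(0, 0): sp.Integer(1)}
    for n in range(1, CUT + 1):
        T = {k: v/n for k, v in mul(T, A).items()}
        for k, v in T.items():
            R[k] = sp.cancel(R.get(k, 0) + v)
    return R

def E(a, b):
    # exp(sum_l barQ(l) e_{la,lb}) for a primitive charge with index Q = 1,
    # where barQ(l) = (1/l)(y - 1/y)/(y^l - y^-l) and e_g = X_g/(y - 1/y)
    return expo({(l*a, l*b): sp.Rational(1, l)/(y**l - y**(-l))
                 for l in range(1, CUT + 1) if l*(a + b) <= CUT})

lhs = mul(mul(E(1, 0), E(1, 1)), E(0, 1))   # ordered with p/q decreasing
rhs = mul(E(0, 1), E(1, 0))
assert all(sp.simplify(lhs.get(c, 0) - rhs.get(c, 0)) == 0
           for c in set(lhs) | set(rhs))
\end{verbatim}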
Under generalized mutation with respect to the node 2, we have
$\gamma'_{12}=-\gamma_{12}$ and $\zeta'_1<0<\zeta'_2$. The effect of reversal of the sign
of $\zeta_i$'s will be to change the order of the products on both sides of
\eqref{ewallcmod}. On the other hand the effect of changing the
sign of $\gamma_{12}$ is
that the corresponding generators $e'_{p,q}$ which replace $e_{p,q}$ in \eqref{ewallcmod}
will satisfy a commutation relation similar to
that of $e_{p,q}$ but with an extra minus sign on the right hand side.
This means that $-e'_{p,q}$'s will satisfy the same commutation relations as $e_{p,q}$'s.
Thus we can write an equation similar to that of \eqref{ewallcmod} with the
order of products reversed on both sides, $\bar Q(p,q)$ replaced by $\bar Q'(p,q)$ and
$e_{p,q}$ replaced by $-e_{p,q}$:
\begin{equation} \label{ewallcpmod}
\begin{split}
\prod_{p,q\atop p\ne 0, \, p/q \uparrow} &\exp\left[ -\bar Q'(p,q) e_{p,q}\right]
= \\&
\exp\left[\sum_\ell \bOmS(\ell\gamma_2) e_{0,\ell}\right] \, \exp\left[-\sum_k
\bOmS(k\gamma_1) e_{k,0
}\right] \, \exp\left[-\sum_\ell \bOmS(\ell\gamma_2) e_{0,\ell}\right]
\, .
\end{split}
\end{equation}
Taking the inverse of this has the effect of reversing the order of the products and
changing the
signs of $e_\gamma$'s in the exponent. The resulting equation is identical to that of
\eqref{ewallcmod}
with $\bar Q(p,q)$ replaced by $\bar Q'(p,q)$,
showing that $\bar Q(p,q)=\bar Q'(p,q)$ \cite{Manschot:2010qz}.
Mutation invariance however
requires us to prove a different equality, namely
$\bar Q'(p,q) = \bar Q(p, M\gamma_{12}p-q)$ where
\begin{equation}
M \equiv \sum_\ell \ell^2 \OmS(\ell\gamma_2; y=1; t=1)\, .
\end{equation}
To proceed, we shall assume that as a consequence of \eqref{ewallcmod} we have
\begin{equation} \label{evanish}
Q(p,q) = 0 \quad \text{for} \quad q> M\gamma_{12}p\, .
\end{equation}
Later we shall prove this relation. Assuming this to be true, we define
$p'=p, \quad q'=M\gamma_{12}p-q$ (or equivalently $p=p'$, $q= M\gamma_{12}p'-q'$)
which are both non-negative for $p\ge 0$,
$0\le q\le M\gamma_{12}p$, and note that the ratios $p'/q'$ appear in increasing order if the ratios $p/q$
appear in decreasing order. Then we can express \eqref{ewallcmod} as
\begin{eqnarray}\displaystyle \label{ewallcnew}
&& \prod_{p',q'\atop p'\ne 0, \, p'/q' \uparrow} \exp\left[ \bar Q(p',M\gamma_{12}p'-q') e_{p',M\gamma_{12}p'-q'}\right]
\nonumber \\ &=& \exp\left[\sum_\ell
\bOmS(\ell\gamma_2) e_{0,\ell}\right] \,
\exp\left[\sum_k \bOmS(k\gamma_1) e_{k,0}\right] \, \exp\left[-\sum_\ell
\bOmS(\ell\gamma_2) e_{0,\ell}\right]
\, .
\end{eqnarray}
Since $p',q'$ are dummy indices we can change them to $p,q$ on the
left hand side.
Furthermore notice that the $e_{p,M\gamma_{12}p-q}$'s and the
$-e_{p,q}$'s generate isomorphic
algebras as $p,q$ vary. Thus we can replace $e_{p,M\gamma_{12}p-q}$
by $-e_{p,q}$ on both sides without changing the basic content of the
equations. This gives
\begin{equation}
\begin{split} \label{ewallcnewp}
\prod_{p,q\atop p\ne 0, \, p/q \uparrow}& \exp\left[ -\bar Q(p,M\gamma_{12}p-q) e_{p,q}\right] \\
=& \exp\left[-\sum_\ell \bOmS(\ell\gamma_2) e_{0,-\ell}\right]\,
\exp\left[-\sum_k \bOmS(k\gamma_1) e_{k,M\gamma_{12}k}\right] \,
\exp\left[\sum_\ell \bOmS(\ell\gamma_2) e_{0,-\ell}\right]
\, .
\end{split}
\end{equation}
Thus the proof of mutation
symmetry $\bar Q'(p,q)=\bar Q(p,M\gamma_{12}p-q)$ reduces to proving the equality of
the right hand sides of \eqref{ewallcpmod} and \eqref{ewallcnewp}. This is the task we shall
undertake now. For this we define
\begin{equation} \label{edefuv}
U \equiv \exp\left[\sum_\ell
\bOmS(\ell\gamma_2) e_{0,\ell}\right] , \qquad
V \equiv \exp\left[-\sum_\ell \bOmS(\ell\gamma_2) e_{0,-\ell}\right]\, ,
\end{equation}
and express eqs.\eqref{ewallcpmod} and \eqref{ewallcnewp} as
\begin{equation} \label{eonea}
\prod_{p,q\atop p\ne 0, \, p/q \uparrow} \exp\left[ -\bar Q'(p,q) e_{p,q}\right]
= \prod_k \exp\left[-
\bOmS(k\gamma_1) U \, e_{k,0
} U^{-1} \right]\, ,
\end{equation}
and
\begin{equation} \label{etwob}
\prod_{p,q\atop p\ne 0, \, p/q \uparrow} \exp\left[ -\bar Q(p,M\gamma_{12}p-q) e_{p,q}\right]
= \prod_k \exp\left[- \bOmS(k\gamma_1) V \, e_{k,M\gamma_{12}k} V^{-1}\right] \, .
\end{equation}
Note that the order of terms in the product over $k$ on the right hand sides of these two equations
is irrelevant since the terms for different $k$ commute.
Thus the equality of the right hand sides of the two expressions requires us to prove that
$U e_{k,0} U^{-1}=V e_{k, M\gamma_{12}k} V^{-1}$.
Now suppose we combine all the factors on either side of \eqref{eonea} and \eqref{etwob}
using the Baker-Campbell-Hausdorff formula, and consider the coefficients of $e_{1,s}$
in the exponent.
On the left hand sides of \eqref{eonea} and \eqref{etwob}, these are determined in terms of $\bar Q'(1,q)$ and $\bar Q(1, M\gamma_{12}-q)$
respectively. Since we have already proved the equality of $\bar Q'(1,q)$ and
$\bar Q(1,M\gamma_{12}-q)$ with the help of semi-primitive wall-crossing formula, we see that
the coefficients of $e_{1,s}$ in the exponent on the left hand sides are equal. On the
other hand since $U e_{k,0} U^{-1}$ and $V e_{k, M\gamma_{12}k} V^{-1}$ are linear combinations
of $e_{k, q}$, on the right hand sides the coefficient of $e_{1,s}$ in the exponents
are given by the terms proportional to $U e_{1,0}U^{-1}$ and
$V e_{1,M\gamma_{12}} V^{-1}$, respectively.
Thus the equality of the coefficients of $e_{1,s}$ in the exponent of
the two left hand sides implies that
\begin{equation}
U e_{1,0} U^{-1} = V e_{1,M\gamma_{12}} V^{-1}\, .
\end{equation}
Now note that if we had considered a
Kronecker quiver with nodes carrying charges
$k\gamma_1$ for fixed $k$ and $\ell\gamma_2$ for different $\ell>0$,
the semi-primitive wall-crossing formula would have given
the equality of this with a quiver whose nodes carry charges
$k\gamma_1 + M\gamma_{12}k\gamma_2$
and $-\ell\gamma_2$ for dimension vector $(1,N)$. On the other hand such a quiver is equivalent
to the one we are considering with $\OmS(r\gamma_1)=0$ for $r\ne k$, and we can use
\eqref{eonea}, \eqref{etwob} for such a quiver.
In this case
$\bar Q(p,q)$ and $\bar Q'(p, M\gamma_{12}p-q)$ would vanish for $1\le p\le k-1$ and for $p=k$ they would
be equal due to the generalized mutation invariance of the rank $(1,N)$ quiver. On the
right hand sides of the corresponding eqs.\eqref{eonea} and \eqref{etwob} the $e_{k,q}$ in
the exponent come from the $U e_{k,0} U^{-1}$ and $V e_{k,M\gamma_{12}k} V^{-1}$
terms, with $U$ and $V$ given by the same expressions \eqref{edefuv} as for the original
quiver. Thus we conclude that
\begin{equation}
U e_{k,0} U^{-1} = V e_{k,M\gamma_{12}k} V^{-1}\, .
\end{equation}
Since this is valid for every $k$, we see that the right hand sides of \eqref{eonea} and
\eqref{etwob} are equal for the original quiver.
This in turn proves the equality of the left hand sides and hence the
desired relation
\begin{equation}
\bar Q(p,M\gamma_{12}p-q) = \bar Q'(p,q)\, .
\end{equation}
Finally, we prove \eqref{evanish} as follows.
From the
analysis of the rank $(1,N)$ case we know that $\bar Q'(1,q)$ vanishes for $q>M\gamma_{12}$.
With the help of \eqref{eonea} we can translate this to a statement that
$U e_{1,0} U^{-1}$ is a linear combination of $e_{1,q}$ for $0\le q\le M\gamma_{12}$. Generalizing this
to the quiver whose nodes carry charges $k\gamma_1$ and $\gamma_2$ we can
conclude that $U e_{k,0} U^{-1}$ is a linear combination of $e_{k,q}$ for $0\le q\le M\gamma_{12}k$.
Eq.\eqref{eonea} then shows that $\bar Q'(p,q)$
vanishes for $q>M\gamma_{12}p$. The equality of $\bar Q(p,q)$ and $\bar Q'(p,q)$, established below
\eqref{ewallcpmod} independently of the validity of generalized mutation symmetry,
then leads to \eqref{evanish}.
We shall now test this for some specific choices of single-centered indices, namely
\begin{equation}
\OmS(\gamma_1)=p_1, \quad
\OmS(\gamma_2)=q_1,\quad \OmS(2\gamma_2)=q_2\ , \quad p_1,q_1,q_2\ge 0\, ,
\end{equation}
with all other single-centered indices vanishing. Generalized mutation invariance
with respect to the node 2 requires that the
generating function
\begin{equation}
{\cal F}(N_1,q; y;t) = \sum_{N_2\geq 0} Q_{\rm Coulomb}(N_1 \gamma_1 + N_2\gamma_2; \zeta; y;t)\, q^{N_2}
\end{equation}
satisfies the functional equation
\begin{equation}
\label{funsemiprim}
q^{m N_1 M} {\cal F}(N_1,1/q;y;t) = {\cal F}(N_1,q;y;t)\ ,
\end{equation}
where $M\equiv q_1+4 q_2>0$, as follows from $M=\sum_\ell \ell^2\, \OmS(\ell\gamma_2;1;1)$. This equation holds for $N_1=1$
by assumption.
Using the generalized semi-primitive formulae established in \cite{Manschot:2010qz}, we can test this property for $N_1=2$
or $N_1=3$. For simplicity we restrict to $N_1=2$, $q_2=0$, $1\leq m\leq 3$ and set $y=t=1$. We have computed
${\cal F}(2,q)$ for the values of $(m,p_1,q_1,q_2)$ displayed in table \ref{fig_kron}, and found
that \eqref{funsemiprim} was indeed obeyed.
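As an illustration, \eqref{funsemiprim} can be checked mechanically row by row. The following sketch (a hypothetical verification script using {\tt sympy}, not the code that was used to generate the table) tests the row $m=1$, $(p_1,q_1,q_2)=(2,3,0)$, for which $M=3$:
\begin{verbatim}
from sympy import symbols, expand, simplify

q = symbols('q')

# Row m=1, (p1,q1,q2)=(2,3,0) of the table, so M = q1 + 4*q2 = 3
m, N1, M = 1, 2, 3
F2 = q*(3 - 6*q + 14*q**2 - 6*q**3 + 3*q**4)   # F(2,q) at y=t=1

# Functional equation: q^(m*N1*M) * F(2,1/q) == F(2,q)
lhs = expand(q**(m*N1*M) * F2.subs(q, 1/q))
assert simplify(lhs - expand(F2)) == 0
\end{verbatim}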
\begin{table}
$$
\begin{array}{|c|c|r|r|}\hline
m &p_1,q_1,q_2 & {\cal F}(1,q) & {\cal F}(2,q)\\ \hline
1 & 1,1,0 & 1+q & 0\\
1 & 1,2,0 & (1+q)^2 & 0\\
1 & 1,3,0 & (1+q)^3 & q^3 \\
1 & 2,1,0 & 2(1+q) & q \\
1 & 2,2,0 & 2(1+q)^2 & 2q(1-q+q^2)\\
1 & 2,3,0 & 2(1+q)^3 & q(3-6q+14q^2-6q^3+3q^4)\\
1 & 3,1,0 & 3(1+q) & 3q\\
1 & 3,2,0 & 3(1+q)^2 & 6q(1-q+q^2)\\
1 & 3,3,0 & 3(1+q)^3 & 3q(3-6q+13q^2-6q^3+3q^4)\\
\hline
2 & 1,1,0 & (1-q)^2 & q(1+q^2)\\
2 & 1,2,0 & (1-q)^4 & q(2-4q+22q^2-20q^3+22q^4-4q^5+2q^6)\\
2 & 2,1,0 & 2(1-q)^2 & 2q(3-2q+3q^2)\\
2 & 2,2,0 & 2(1-q)^4 & 4q(3-8q+29q^2-28q^3+29q^4-8q^5+3q^6)\\
\hline
3 & 1,1,0 & (1+q)^3 & q(3-6q+13q^2-6q^3+3q^4)\\
3 & 1,2,0 & (1+q)^6 & 2q(3-15q+85 q^2-165 q^3+351 q^4-337 q^5+351 q^6 + \dots + 3 q^{10})\\ \hline
\end{array}
$$
\caption{Generating functions of $Q_{\rm Coulomb}(\gamma_1+N\gamma_2)$ and $Q_{\rm Coulomb}(2\gamma_1+N\gamma_2)$ for the generalized Kronecker quiver with $\OmS(\gamma_1)=p_1,\OmS(\gamma_2)=q_1,\OmS(2\gamma_2)=q_2$. The symmetry under $q\to 1/q$ shows mutation invariance in these cases.
\label{fig_kron}}
\end{table}
In this case, we can also test whether the conditions \eqref{eposcon0} can be relaxed.
Let us set $q_2=0$, $m=1$ for simplicity, and try
$q_1=-1$.
The semi-primitive partition function
\begin{equation}
{\cal F}(1,q)=\frac{p_1}{1+q}
\end{equation}
is multiplied by $q$
under $q\to 1/q$
but its rank 2 counterpart, computed using the formulae in \cite{Manschot:2010qz}, is not
multiplied by $q^2$ under $q\to 1/q$:
\begin{equation}
{\cal F}(2,q)=\frac{p_1 q (1-p_1-(p_1+1)q^2)}{2(1-q)^2(1-q^4)}\ .
\end{equation}
This illustrates the importance of the assumption that the mutating node
must carry positive $\OmS$.
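To make the failure explicit, set $p_1=1$; then
\begin{equation}
{\cal F}(2,q)= -\frac{q^3}{(1-q)^2(1-q^4)}\, , \qquad
{\cal F}(2,1/q) = -{\cal F}(2,q)\, ,
\end{equation}
whereas \eqref{funsemiprim} with $m=1$, $N_1=2$, $M=q_1=-1$ would require
${\cal F}(2,1/q)=q^2\, {\cal F}(2,q)$.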
\bigskip
\noindent{\bf Example 2:} We consider a three node quiver of rank $(N_1, N_2, N_3)$
with
$\gamma_{12}=\gamma_{32}=a=1$ and
$\gamma_{31}=c=2$, and take the invariants
$\OmS(\ell\gamma_1)$ and $\OmS(\ell\gamma_3)$ to be generic functions of $\ell$, $y$ and $t$,
and $\OmS(\ell\gamma_2;y;t)$ for the various integers $\ell$
to be specific functions of $y$ and $t$ described below.
All other $\OmS(\gamma;y;t)$'s will be taken to vanish.
For the FI parameters, we take
\begin{equation} \label{efayet}
\zeta_1=(3\, N_2+.1)/N_1, \quad \zeta_2=-8, \quad \zeta_3=(5\, N_2-.1)/N_3\, .
\end{equation}
Under mutation with respect to the node 2, we get
\begin{equation}
\gamma_1' = \gamma_1 + M\, \gamma_2, \quad
\gamma_2'=-\gamma_2, \quad
\gamma_3'=\gamma_3 + M\, \gamma_2\, ,
\end{equation}
\begin{equation}
\gamma'_{12}=-a\ ,\quad \gamma'_{23}=a\ ,\quad \gamma'_{31}=c
\end{equation}
where
\begin{equation}
M = \sum_{\ell\ge 1} \ell^2 \OmS(\ell \gamma_2; y=1; t=1)\, .
\end{equation}
Then
\begin{equation}
N_1 \gamma_1 + N_2 \gamma_2 + N_3\gamma_3=
N_1 \gamma_1' + (M N_1+ M N_3- N_2) \gamma_2' + N_3 \gamma'_3\, .
\end{equation}
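This follows by substituting the mutated charges:
\begin{equation}
N_1 \gamma_1' + N_2'\, \gamma_2' + N_3 \gamma_3'
= N_1\gamma_1 + \left(M N_1 + M N_3 - N_2'\right)\gamma_2 + N_3\gamma_3\, ,
\end{equation}
so that matching the coefficient of $\gamma_2$ fixes $N_2' = M N_1 + M N_3 - N_2$.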
The $\OmS'$'s for the mutated quiver are given by
\begin{equation} \label{eommut}
\OmS'(\ell\gamma_1';y;t) = \OmS(\ell\gamma_1; y; t), \quad \OmS'(\ell\gamma_3';y;t)
= \OmS(\ell\gamma_3; y; t), \quad \OmS'(\ell\gamma_2';y;t) = \OmS(\ell\gamma_2; y; t)\, .
\end{equation}
Finally the FI parameters of the mutated quiver are
\begin{equation}
\zeta_1' = (3\, N_2+.1)/N_1 - 8 M, \quad \zeta_3' = (5\, N_2-.1)/N_3- 8 M , \quad \zeta_2'
=8\, .
\end{equation}
As before we
denote $Q_{\rm Coulomb}(N_1\gamma_1+N_2\gamma_2+N_3\gamma_3;\zeta;y;t)$ by
$Q(N_1, N_2, N_3)$ and similarly for the mutated quiver.
Also $\OmS(\gamma)$ without any other argument will denote $\OmS(\gamma;y;t)$.
The expected relationship
between $Q$ and $Q'$ then takes the form:
\begin{equation} \label{eqqp}
Q(N_1, N_2, N_3) = Q'(N_1, M N_1 + M N_3 - N_2, N_3)\, .
\end{equation}
We shall now consider several choices for the single-centered indices $\OmS(\ell\gamma_2; y; t)$.
\medskip
\noindent{\bf (a):} $\OmS(\gamma_2; y; t) = 2, \quad \OmS(\ell\gamma_2; y; t) = 0$ for $\ell>1$.
In this case $M=2$, and the relation \eqref{eqqp} takes the form
\begin{equation} \label{ereqd}
Q(N_1, N_2, N_3) = Q'(N_1, 2 N_1 + 2 N_3 - N_2, N_3)\, .
\end{equation}
Explicit calculation gives
\begin{eqnarray}\displaystyle
&& Q(1,2,1) = -(y^{-1}+ y) (y^{-2}+4 + y^2) \, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\ &&
Q'(1,2,1) = -(y^{-1}+ y) (y^{-2}+4 + y^2) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,3,1) = 2 (y^{-2}+1 + y^2) \, \OmS(\gamma_1)\OmS(\gamma_3), \quad
Q'(1,1,1) = 2 (y^{-2}+1 + y^2) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,4,1) = - (y^{-1}+ y) \, \OmS(\gamma_1)\OmS(\gamma_3), \quad
Q'(1,0,1) = - (y^{-1}+ y) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,3,2) = (y^{-6} + y^{-4} + y^{-2} + 1 + y^2 + y^4+
y^6) \nonumber \\
&& \qquad \qquad \times
\bigg\{\OmS(\gamma_3; y; t)^2 - \OmS(\gamma_3; y^2; t^2) -
2 (y^{-1} + y)\, \OmS(2\gamma_3; y; t)\bigg\} \, \OmS(\gamma_1; y; t)
\nonumber\\
&& Q'(1,3,2) = (y^{-6} + y^{-4} + y^{-2} + 1 + y^2 + y^4+
y^6) \nonumber \\
&& \qquad \qquad \times
\bigg\{\OmS'(\gamma_3'; y; t)^2 - \OmS'(\gamma_3'; y^2; t^2) -
2 (y^{-1} + y)\, \OmS'(2\gamma_3'; y; t)\bigg\} \, \OmS'(\gamma_1'; y; t)\, .
\end{eqnarray}
These results are in agreement with the generalized mutation hypothesis \eqref{ereqd}.
\medskip
\noindent{\bf (b):} $\OmS(\gamma_2; y; t) = 3, \quad \OmS(\ell\gamma_2; y; t) = 0$ for $\ell>1$.
In this case $M=3$, and the relation \eqref{eqqp} takes the form
\begin{equation} \label{ereqda}
Q(N_1, N_2, N_3) = Q'(N_1, 3 N_1 + 3 N_3 - N_2, N_3)\, .
\end{equation}
Explicit calculation gives
\begin{eqnarray}\displaystyle
&& Q(1,2,1) = -3 \, (y^{-3} + 4 y^{-1}+4 y + y^3) \, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\ &&
Q'(1,4,1) = -3 \, (y^{-3} + 4 y^{-1}+4 y + y^3) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,3,1) = (y^{-4} + 10 y^{-2} + 10 + 10 y^2 + y^4) \, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\
&&
Q'(1,3,1) = (y^{-4} + 10 y^{-2} + 10 + 10 y^2 + y^4) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,4,1) = - 3 \, (y^{-3} + 4 y^{-1}+4 y + y^3) \, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\
&&
Q'(1,2,1) = - 3 \, (y^{-3} + 4 y^{-1}+4 y + y^3) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
\end{eqnarray}
in agreement with the generalized mutation hypothesis \eqref{ereqda}.
\medskip
\noindent{\bf (c):} $\OmS(\gamma_2; y; t) = y^2+1+ y^{-2}, \quad
\OmS(\ell\gamma_2; y; t) = 0$ for $\ell>1$.
In this case $M=3$, and the relation \eqref{eqqp} takes the form
\begin{equation} \label{ereqdb}
Q(N_1, N_2, N_3) = Q'(N_1, 3 N_1 + 3 N_3 - N_2, N_3)\, .
\end{equation}
Explicit calculation gives
\begin{eqnarray}\displaystyle
&& Q(1,2,1) = -(2 y^{-5}+ 5 y^{-3} + 8 y^{-1} + 8 y + 5 y^3 + 2 y^5) \, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\ &&
Q'(1,4,1) = -(2 y^{-5}+ 5 y^{-3} + 8 y^{-1} + 8 y + 5 y^3 + 2 y^5) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,3,1) = (y+y^{-1})^4 (y^2 + y^{-2}) \, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\
&&
Q'(1,3,1) = (y+y^{-1})^4 (y^2 + y^{-2}) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,4,1) = - (2 y^{-5}+ 5 y^{-3} + 8 y^{-1} + 8 y + 5 y^3 + 2 y^5) \, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\
&&
Q'(1,2,1) = - (2 y^{-5}+ 5 y^{-3} + 8 y^{-1} + 8 y + 5 y^3 + 2 y^5) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \end{eqnarray}
in agreement with the generalized mutation hypothesis \eqref{ereqdb}.
\medskip
\noindent{\bf (d):} $\OmS(\gamma_2; y; t) = 4, \quad \OmS(\ell\gamma_2; y; t) = 0$ for $\ell>1$.
In this case $M=4$, and the relation \eqref{eqqp} takes the form
\begin{equation} \label{ereqdc}
Q(N_1, N_2, N_3) = Q'(N_1, 4 N_1 + 4 N_3 - N_2, N_3)\, .
\end{equation}
Explicit calculation gives
\begin{eqnarray}\displaystyle
&& Q(1,4,1) = -(y^{-5} + 17 y^{-3} + 53 y^{-1} + 53 y + 17 y^3 + y^5)
\, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\
&&
Q'(1,4,1) = -(y^{-5} + 17 y^{-3} + 53 y^{-1} + 53 y + 17 y^3 + y^5)
\, \OmS'(\gamma_1')\OmS'(\gamma_3')\, .
\end{eqnarray}
These results are in agreement with the generalized mutation hypothesis \eqref{ereqdc}.
\medskip
\noindent{\bf (e):} $\OmS(\gamma_2; y; t) = t+1/t, \quad \OmS(\ell\gamma_2; y; t) = 0$ for $\ell>1$.
In this case
$M=2$, and the relation \eqref{eqqp} takes the form
\begin{equation} \label{ereqde}
Q(N_1, N_2, N_3) = Q'(N_1, 2 N_1 + 2 N_3 - N_2, N_3)\, .
\end{equation}
Explicit calculation gives
\begin{eqnarray}\displaystyle
&& Q(1,2,1) = -(y^{-1}+ y) (t^{-2}+t^2 +y^{-2}+2 + y^2) \, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\ &&
Q'(1,2,1) = -(y^{-1}+ y) (t^{-2}+t^2 +y^{-2}+2 + y^2) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,3,1) = (t^{-1}+t) (y^{-2}+1 + y^2) \, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\ &&
Q'(1,1,1) = (t^{-1}+t) (y^{-2}+1 + y^2) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,4,1) = - (y^{-1}+ y) \, \OmS(\gamma_1)\OmS(\gamma_3), \quad
Q'(1,0,1) = - (y^{-1}+ y) \, \OmS'(\gamma_1')\OmS'(\gamma_3'), \nonumber\\
&& Q(1,3,2) = {1\over 2} (t + t^{-1}) (y^{-6} + y^{-4} + y^{-2} + 1 + y^2 + y^4+
y^6) \nonumber \\
&& \qquad \qquad \times
\bigg\{\OmS(\gamma_3; y; t)^2 - \OmS(\gamma_3; y^2; t^2) -
2 (y^{-1} + y)\, \OmS(2\gamma_3; y; t)\bigg\} \, \OmS(\gamma_1; y; t)
\nonumber\\
&& Q'(1,3,2) = {1\over 2} (t + t^{-1}) (y^{-6} + y^{-4} + y^{-2} + 1 + y^2 + y^4+
y^6) \nonumber \\
&& \qquad \qquad \times
\bigg\{\OmS'(\gamma_3'; y; t)^2 - \OmS'(\gamma_3'; y^2; t^2) -
2 (y^{-1} + y)\, \OmS'(2\gamma_3'; y; t)\bigg\} \, \OmS'(\gamma_1'; y; t)\, . \nonumber \\
\end{eqnarray}
These results are in agreement with the generalized mutation hypothesis \eqref{ereqde}.
\medskip
\noindent{\bf (f):} $\OmS(\gamma_2; y; t) = 0, \quad \OmS(2\gamma_2; y; t) = 1,
\quad \OmS(\ell\gamma_2; y; t) = 0$ for $\ell>2$.
In this case $M=4$, and the relation \eqref{eqqp} takes the form
\begin{equation} \label{ereqdf}
Q(N_1, N_2, N_3) = Q'(N_1, 4 N_1 + 4 N_3 - N_2, N_3)\, .
\end{equation}
Explicit calculation gives
\begin{eqnarray}\displaystyle
&& Q(1,4,1) = -(y^{-5} + 2 y^{-3} + 4 y^{-1} + 4 y + 2 y^3 + y^5)
\, \OmS(\gamma_1)\OmS(\gamma_3), \nonumber\\
&&
Q'(1,4,1) = -(y^{-5} + 2 y^{-3} + 4 y^{-1} + 4 y + 2 y^3 + y^5)
\, \OmS'(\gamma_1')\OmS'(\gamma_3')\, .
\end{eqnarray}
These results are in agreement with the generalized mutation hypothesis \eqref{ereqdf}.
\medskip
\noindent{\bf (g):} We end this series of examples with a choice of
$\OmS$ which violates condition i) on page \pageref{condi}, but which
preserves the mutation symmetry at the level of numerical
DT-invariants. We mentioned this possibility earlier in Section \ref{sgenmut}. We take $\OmS(\gamma_2;y;t)=-1$ and
$\OmS(2\gamma_2;y;t)=1$. We may expect the generalized mutation to be a symmetry for $y=t=1$
since the generating functions ${\cal F}(\vec N;q;y=1;t=1)$ are symmetric
polynomials in $q$. In particular for this choice we have $M=3$ and
hence $Q(N_1, N_2, N_3)$ would have to be equal to $Q'(N_1,3 N_1+3N_3-N_2, N_3)$.
We find that while this does not hold for general $y$, it does hold for $y=t=1$.
For example we have $Q(1,4,1)=Q'(1,2,1)=2$ at $y=1$.
\medskip
\noindent{\bf Example 3:} Now we consider a three node quiver with
a loop by choosing $\gamma_{12}=2$, $\gamma_{23}=1$, and
$\gamma_{31}=5$.
We choose
$\OmS(\gamma_2;y;t)=2$, $\OmS(\ell\gamma_2;y;t)=0$ for $\ell>1$, and
leave $\OmS(N_1\gamma_1+N_2\gamma_2+N_3\gamma_3; y; t)$ arbitrary
except for the constraints imposed due to the restrictions mentioned at the end
of \S\ref{sintro}. This in particular will require $\OmS$ to vanish when either
$N_1$ or $N_3$ vanishes while the other $N_i$'s are positive integers.
The choice of FI parameters remains the same as
in \eqref{efayet}:
\begin{equation} \label{efayetrep}
\zeta_1=(3\, N_2+.1)/N_1, \quad \zeta_2=-8, \quad \zeta_3=(5\, N_2-.1)/N_3\, .
\end{equation}
Under mutation with respect to the node 2, we get
\begin{equation}
\gamma_1' = \gamma_1 + 4 \gamma_2, \quad
\gamma_2'=-\gamma_2, \quad
\gamma_3'=\gamma_3,
\end{equation}
\begin{equation}
N_1 \gamma_1 + N_2 \gamma_2 + N_3\gamma_3=
N_1 \gamma_1' + (4 N_1 - N_2) \gamma_2' + N_3 \gamma_3'\, .
\end{equation}
The $\OmS'$'s for the mutated quiver for charge vectors proportional
to the basis vectors continue to be given by \eqref{eommut}.
For general charge vectors we get from \eqref{egench00}
\begin{equation} \label{eomreln}
\OmS(N_1\gamma_1+N_2\gamma_2+N_3\gamma_3; y; t)
=\begin{cases} \OmS'(N_1\gamma_1'+(2 N_3-N_2)\gamma_2'+N_3\gamma_3'; y; t) \quad
\text{for} \quad 2 N_1 \ge N_3\\
\OmS'(N_1\gamma_1'+(4 N_1-N_2)\gamma_2'+N_3\gamma_3'; y; t) \quad
\text{for} \quad 2 N_1 < N_3
\end{cases}\, .
\end{equation}
Finally the FI parameters of the mutated quiver are
\begin{equation}
\zeta_1' = (3\, N_2+.1)/N_1 - 32 , \quad \zeta_3' = (5\, N_2-.1)/N_3, \quad \zeta_2'
=8 \, .
\end{equation}
The mutated quiver has
\begin{equation}
\gamma'_{12} = -2, \quad \gamma'_{13} = -1, \quad \gamma'_{23} = -1\, ,
\end{equation}
and the expected relation is
\begin{equation} \label{expect}
Q(N_1, N_2, N_3) = Q'(N_1, 4 N_1-N_2, N_3)\, .
\end{equation}
Explicit calculation gives
\begin{eqnarray}
&& Q(1,2,1) = (y^{-4} + 5 y^{-2} + 6 + 5 y^2 + y^4) \, \OmS(\gamma_1;y;t)\OmS(\gamma_3;y;t) \nonumber\\
&& \quad + \, \OmS(\gamma_1+\gamma_3;y;t) + 2 \, \OmS(\gamma_1+\gamma_2+\gamma_3;y;t)
+ \, \OmS(\gamma_1+2\gamma_2+\gamma_3;y;t) \nonumber \\
&&Q'(1,2,1) = (y^{-4} + 5 y^{-2} + 6 + 5 y^2 + y^4) \, \OmS'(\gamma_1';y;t)\OmS'(\gamma_3';y;t) \nonumber\\
&& \quad +\, \OmS'(\gamma'_1+\gamma'_3;y;t) + 2 \, \OmS'(\gamma'_1+\gamma'_2+\gamma'_3;y;t)
+\, \OmS'(\gamma'_1+2\gamma'_2+\gamma'_3;y;t) \nonumber \\
&& Q(1,3,1) = 2\, (y^{-1} + y)^2 \, \OmS(\gamma_1;y;t)\OmS(\gamma_3;y;t) \nonumber\\
&& \quad +\OmS(\gamma_1+\gamma_2+\gamma_3;y;t) + 2 \, \OmS(\gamma_1+2\gamma_2+\gamma_3;y;t)
+ \, \OmS(\gamma_1+3\gamma_2+\gamma_3;y;t) \nonumber \\
&&Q'(1,1,1) = 2\, (y^{-1} + y)^2\, \OmS'(\gamma_1';y;t)\OmS'(\gamma_3';y;t)\, \nonumber\\
&& \quad + 2 \, \OmS'(\gamma'_1+\gamma'_3;y;t)
+\, \OmS'(\gamma'_1+\gamma'_2+\gamma'_3;y;t) \nonumber \\
&& Q(1,2,2) = {1\over 2}(y^{-2} + 1 + y^2) (y^{-2} + 4 +
y^2) \bigg\{(y^{-2} + 1 + y^2)\, \OmS(\gamma_3; y;
t)^2 \nonumber \\ && - (y^{-2} - 1 + y^2) \, \OmS(\gamma_3; y^2; t^2) - 2\, (y^{-2} - 1 + y^2)
(y^{-1} + y) \, \OmS(2\gamma_3; y; t)\bigg\} \, \OmS(\gamma_1;y;t) \nonumber\\
&& + (y^{-2}+1+y^2) \, \OmS(\gamma_3;y;t) \Big\{ \OmS(\gamma_1+\gamma_3;y;t) + 2 \, \OmS(\gamma_1+\gamma_2+\gamma_3;y;t) \nonumber \\&&
+ \, \OmS(\gamma_1+2\gamma_2+\gamma_3;y;t)\Big\}
+ \, \OmS(\gamma_1+2\gamma_2+2\gamma_3;y;t) \nonumber \\
&&Q'(1,2,2) = {1\over 2}(y^{-2} + 1 + y^2) (y^{-2} + 4 +
y^2) \bigg\{(y^{-2} + 1 + y^2)\, \OmS'(\gamma_3'; y;
t)^2 \nonumber \\ && - (y^{-2} - 1 + y^2) \, \OmS'(\gamma_3'; y^2; t^2) - 2\, (y^{-2} - 1 + y^2)
(y^{-1} + y) \, \OmS'(2\gamma_3'; y; t)\bigg\} \, \OmS'(\gamma_1';y;t) \nonumber\\
&& + (y^{-2}+1+y^2) \, \OmS'(\gamma_3';y;t)
\Big\{ \OmS'(\gamma'_1+\gamma'_3;y;t) +
2 \, \OmS'(\gamma'_1+\gamma'_2+\gamma'_3;y;t) \nonumber \\&&
+ \, \OmS'(\gamma'_1+2\gamma'_2+\gamma'_3;y;t)\Big\}
+ \, \OmS'(\gamma'_1+2\gamma'_2+2\gamma'_3;y;t) \nonumber
\end{eqnarray}
\begin{eqnarray}
&& Q(1,2,3) ={1\over 6} (y^{-2} + 4 +
y^2) \bigg[(y^{-2} + 1 + y^2)^3 \, \OmS(\gamma_3; y; t)^3 \nonumber\\ &&
-
3 (y^{-2} - 1 + y^2) (y^{-2} + 1 + y^2)^2 \, \OmS(\gamma_3; y;
t) \Big\{ \OmS(\gamma_3; y^2; t^2) +
2 (y^{-1} + y)\, \OmS(2\gamma_3; y;t)\Big\} \nonumber\\ && +
2 (y^{-6} +1 + y^6) \Big\{\OmS(\gamma_3; y^3; t^3) +
3 (y^{-2} + 1 + y^2) \, \OmS(3\gamma_3; y; t)\Big\}\bigg] \, \OmS(\gamma_1; y; t)\nonumber\\ &&
+ {1\over 2} \bigg\{(y^{-2} + 1 + y^2)^2 \OmS(\gamma_3; y;
t)^2 -
(y^{-4} + 1 + y^4)\OmS(\gamma_3; y^2; t^2) \nonumber \\ &&
-
2 (y^{-5} + y^{-3} + y^{-1} + y + y^3 + y^5) \OmS(2\gamma_3; y; t)
\bigg\} \nonumber \\ &&
\times \bigg\{ \OmS(\gamma_1+ \gamma_3; y;
t) + 2 \OmS(\gamma_1+\gamma_2+\gamma_3; y;
t) + \OmS(\gamma_1+2\gamma_2+\gamma_3; y;
t)
\bigg\} \nonumber \\ && +
(y^{-2} + 1 + y^2) \OmS(\gamma_3; y;
t) \OmS(\gamma_1+2\gamma_2+2\gamma_3; y;
t) \nonumber \\ &&
+ \OmS(\gamma_1+2\gamma_2+3\gamma_3; y;
t) \nonumber
\end{eqnarray}
\begin{eqnarray}
&&
Q'(1,2,3) ={1\over 6} (y^{-2} + 4 +
y^2) \bigg[(y^{-2} + 1 + y^2)^3 \, \OmS'(\gamma'_3; y; t)^3 \nonumber\\ &&
-
3 (y^{-2} - 1 + y^2) (y^{-2} + 1 + y^2)^2 \, \OmS'(\gamma'_3; y;
t) \Big\{ \OmS'(\gamma'_3; y^2; t^2) +
2 (y^{-1} + y)\, \OmS'(2\gamma'_3; y;t)\Big\} \nonumber\\ && +
2 (y^{-6} +1 + y^6) \Big\{\OmS'(\gamma'_3; y^3; t^3) +
3 (y^{-2} + 1 + y^2) \, \OmS'(3\gamma'_3; y; t)\Big\}\bigg] \, \OmS'(\gamma'_1; y; t)
\nonumber\\ &&
+ {1\over 2} \bigg\{(y^{-2} + 1 + y^2)^2 \OmS'(\gamma'_3; y;
t)^2 -
(y^{-4} + 1 + y^4)\OmS'(\gamma'_3; y^2; t^2) \nonumber \\ &&
-
2 (y^{-5} + y^{-3} + y^{-1} + y + y^3 + y^5) \OmS'(2\gamma'_3; y; t)
\bigg\} \nonumber \\ &&
\times \bigg\{ \OmS'(\gamma'_1+ \gamma'_3; y;
t) + 2 \OmS'(\gamma'_1+\gamma'_2+\gamma'_3; y;
t) + \OmS'(\gamma'_1+2\gamma'_2+\gamma'_3; y;
t)
\bigg\} \nonumber \\ && +
(y^{-2} + 1 + y^2) \OmS'(\gamma'_3; y;
t) \OmS'(\gamma'_1+2\gamma'_2+2\gamma'_3; y;
t) \nonumber \\ &&
+ \OmS'(\gamma'_1+2\gamma'_2+3\gamma'_3; y;
t) \, .
\end{eqnarray}
Using \eqref{eomreln} we see that
these results are in agreement with
\eqref{expect}.
We have checked similar agreement for many other examples.
\section*{Acknowledgement}
We would like to thank Bernhard Keller, Gregory W. Moore, Andy Neitzke
and Piljin Yi for
inspiring discussions. Part of the reported results was obtained while J.M. was a postdoc at the Bethe Center for Theoretical
Physics of Bonn University. This work was supported in part by the National Science Foundation under
Grant No. PHYS-1066293 and the hospitality of the Aspen Center for Physics.
The work of A.S. was supported in part by DAE project 12-R\&D-HRI-5.02-0303 and the J.C. Bose fellowship of DST, Govt. of India.
\section*{Introduction}
\newtheorem{heorem}{Theorem}
\newtheorem{propo}[heorem]{Proposition}
In Banach space theory the Orlicz property and its connection to
unconditional
and absolute convergence is well understood. For instance,
unconditional convergence and absolute convergence only coincide in
finite
dimensional spaces, but unconditional converging series are at least
square summable in the spaces $L_q$, $1\le q\le 2$. This was discovered
by Orlicz, hence the name Orlicz property. Furthermore,
this is best possible in arbitrary infinite dimensional Banach space by
Dvoretzky's theorem.
In the category of operator spaces there are several possibilities
to generalize the classical Orlicz property. We have chosen a
definition
where only sequences are involved and which is motivated
by the theory of absolutely summing operators
introduced by Grothendieck. To be more precise, we recall
that an unconditionally converging series $(x_k)_k \subset E$ in a Banach
space $E$ corresponds to an operator defined on $c_0$ with values in
$E$.
This is a consequence of the contraction principle:
\[ \left \| \sum\limits_k e_k \otimes x_k \right \|_{\ell_1 \otimes_{\varepsilon} E}
\hspace{.1cm} = \hspace{.1cm} \sup_{|\alpha_k|\le 1} \left \| \sum\limits_k \alpha_k x_k \right \| \hspace{.1cm} \le \hspace{.1cm} 4 \hspace{.1cm}
\sup_{\varepsilon_k=\pm 1} \left \| \sum\limits_k \varepsilon_k x_k \right \| \hspace{.1cm} .\]
In order to involve the operator space structure of an operator space
$E \subset B(H)$
we define an operator $T:E \rightarrow F$ to be 1-summing if there exists a
constant
$c>0$ such that
\[ \sum\limits_k \left \| Tx_k\right \| \hspace{.1cm} \le \hspace{.1cm} c \hspace{.1cm} \left \| \sum\limits_k e_k \otimes x_k
\right \|_{\ell_1\otimes_{\min}E} \hspace{.1cm} .\]
The best possible constant will be denoted by $\pi_{1,sc}(T)$.
Here $\min$ denotes the minimal or spatial tensor product and $\ell_1$
is considered
as an operator space (for example by identification of the unit vectors
with the generators of a free group).
Anyhow, in the definition of absolutely summing operators
we simply replace the \underline{norm} by the \underline{cb-norm} of
the corresponding operator.
Obviously, in this definition only the Banach space structure of $F$ is
involved.
That's why this notion lives on the interplay
of operator space and Banach space theory.
For a more complete notion which is entirely
defined in the category of operator spaces and where matrices instead
of sequences
are considered we refer to the work of Pisier about completely
p-summing operators
and factorization problems. With this background, 1-summing operators
turn out to be the
weakest possible notion. The classical notion of absolutely
summing operators is included by defining an operator space structure
on a Banach space via the embedding of $E$ in the commutative
$C^*$-algebra $C(B_{E^*})$. Following Paulsen we will denote this
operator
space by $\min(E)$.
In the first chapter we collect basic properties of 1-summing operators
and study the relation between 1-summing operators and
$(1,C^*)$-summing operators defined earlier by Pisier for
$C^*$-algebras.
This is connected with Haagerup's characterization of injective von
Neumann algebras.
In this paper the framework of eigenvalue estimates
for operators factorizing completely through a commutative
$C^*$-algebra
is used to distinguish different operator spaces. The first part is
based on a
generalization of Maurey's inequality:
\begin{heorem} Let $2<q<\infty$. For an operator space $E$, a Banach
space $F$ and an operator $T:E\rightarrow F$ the following assertions are equivalent.
\begin{enumerate}
\item[i)] There exists a constant $c_1$ such that for all $n$
dimensional subspaces $G \subset E$ and $(x_k)_k \subset G$
\[ \sum\limits_k\left \| T x_k \right \| \hspace{.1cm} \le \hspace{.1cm} c_1 \hspace{.1cm} n^{1-\frac{1}{q}} \hspace{.1cm} \left \|
\sum\limits_k e_k \otimes x_k \right \|_{\ell_1 \otimes_{min} E} \hspace{.1cm}. \]
\item[ii)] There exists a constant $c_2>0$ such that for all $n \in \nz$ and
$x_1,..,x_n \in E$ one has
\[ \sum\limits_1^n\left \| T x_k \right \| \hspace{.1cm} \le \hspace{.1cm} c_2\hspace{.1cm} n^{1-\frac{1}{q}} \hspace{.1cm} \left \|
\sum\limits_1^n e_k \otimes x_k \right \|_{\ell_1^n \otimes_{min} E} \hspace{.1cm}. \]
\item[iii)] There exists a constant $c_3$ such that for all operators
$S:F\rightarrow E$ which factor completely
through a $C(K)$ space, i.e. $S\hspace{.05cm}=\hspace{.05cm} PR$, $R:F \rightarrow C(K)$ bounded and
$P: C(K)\rightarrow E$
completely bounded one has
\[ \sup_{n \in \nz} n^{\frac{1}{q}}|\lambda_n(ST)| \hspace{.1cm} \le \hspace{.1cm} c_3\hspace{.1cm} \left \| P
\right \|_{cb} \hspace{.1cm} \left \| R: F\rightarrow C(K)\right \|_{op} \hspace{.1cm}.\]
\item[iv)] There exists a constant $c_4$ such that for all operators
$S :F \rightarrow E$ which factor completely through $B(H)$, i.e.
$S\hspace{.05cm}=\hspace{.05cm} PR$, $R:F \rightarrow B(H)$ and $P: B(H)\rightarrow E$ completely bounded
one has
\[ \sup_{n \in \nz} n^{\frac{1}{q}}|\lambda_n(ST)| \hspace{.1cm} \le \hspace{.1cm} c_4 \hspace{.1cm} \left \| P
\right \|_{cb} \hspace{.1cm} \left \| R: {\rm min}(F)\rightarrow B(H)\right \|_{cb} \hspace{.1cm}.\]
\end{enumerate}
where $(\lambda_n(ST))_{n \in \nz}$ denotes the sequence of eigenvalues in
non-increasing order according to their multiplicity.
\end{heorem}
As an application for identities we see that the projection
constant of an $n$-dimensional subspace in a $(q,1)$-summing Banach
space
is at most $n^{\frac{1}{q}}$. This can already be deduced from
a corresponding theorem for identities on Banach spaces which was
proved in \cite{J}.
In the operator space setting we see that estimates for the growth
rate
of the 1-summing norm are useful to measure
the 'distance' of an operator space and its subspaces to
$\ell_{\infty}$ spaces.
Let us note that the theorem is not valid
for values $q<2$. For identities on Banach spaces this is not relevant:
all these properties are only satisfied by finite dimensional spaces.
In contrast to this, operator spaces with 1-summing identity
are interesting spaces. For example, the generators of the Clifford
algebra span such a space $CL$. This is probably not so surprising,
since this
example has proved to be relevant also for the closely connected notion
of
$(2,oh)$-summing operators, introduced by Pisier. Starting with CL we
construct
a scale of operator spaces with different growth rates for the
1-summing norm.
Further examples with small
1-summing norm are given
by randomly chosen $n$-dimensional
subspaces of the matrix algebra $M_N$, provided
$n\le N$. This was the starting point
to discover, independently of Paulsen and Pisier, the fact that there are
only
few completely bounded operators between minimal and maximal operator
spaces.
Paulsen studied all possible operator space structures
on a given Banach space $E$ and realized that there is a minimal and
a maximal one.
The minimal one is given by the commutative structure already defined
and the maximal one by
the embedding $E \hookrightarrow (\min(E^*))^*$, where $*$ denotes the
operator
space dual discovered by Effros/Ruan and Blecher/Paulsen.
called the maximal
operator space $\max(E)$. Our approach is contained in the following
proposition which is a refinement of Paulsen/Pisier's result,
unfortunately
with a worse constant.
\begin{propo} Let $E$ be a maximal and $F$ be a minimal operator
space.
For an operator $T:F \rightarrow E$ of rank at most $n$ one has
\[ \left \| T \right \|_{cb} \hspace{.1cm} \le \hspace{.1cm} \gamma_2^*(T) \hspace{.1cm} \le \hspace{.1cm} 170 \left \| T \otimes Id_{M_n}:
M_n(F) \rightarrow M_n(E) \right \| \hspace{.1cm} ,\]
where $\gamma_2^*$ is defined by trace duality with respect to the Hilbert
space factorization norm $\gamma_2$. In particular,
\[ \frac{\sqrt{n}}{170} \hspace{.1cm} \le \hspace{.1cm} \left \| Id:\min(E) \rightarrow \max(E) \right \|_{cb}\hspace{.1cm}
,\]
for every $n$-dimensional Banach space $E$.
\end{propo}
This is contained in the second part of this paper
where the study of 1-summing operators is continued.
This turns out to be quite fruitful in the context of dual operator
spaces.
For instance, maximal operator spaces are 1-summing if and only if they
are isomorphic to Hilbert spaces. Moreover, the 1-summing norm
of an $n$-dimensional subspace of $\max(\ell_r)$, $\max(\ell_{r'})$,
$\max({\cal S}_r)$ or
$\max({\cal S}_{r'})$ is less than $ 4 n^{\frac{1}{2}-\frac{1}{r}}$ for all
$2\le r\le \infty$.
Most of the techniques for maximal operator spaces carry over to duals
of exact operator
spaces by the key inequality of \cite{JP}. The lack of local
reflexivity
in operator spaces leads to the notion of exactness defined by Pisier
and motivated
by Kirchberg's work. In this context we will say that an operator space
is exact if all
its finite dimensional subspaces are uniformly cb-isomorphic to
subspaces of the matrix algebra's $M_N$. In the next theorem the
connection
between 1-summing operators and factorization properties is established
for duals of exact operator spaces.\vspace{0.5cm}
\begin{heorem} Let $1< p< 2$, $G$ an exact operator space, $E\subset
G^*$ and $F$ a minimal operator space.
For an operator $T:E \rightarrow F$ the following are equivalent.
\begin{enumerate}
\item[i)] There exists a constant $c_1>0$ such that
\[ \sum\limits_1^n \left \| Tx_k \right \| \hspace{.1cm} \le \hspace{.1cm} c_1 \hspace{.1cm} n^{1-\frac{1}{p}} \hspace{.1cm} \left \|
\sum\limits_1^n e_i \otimes x_i \right \|_{\ell_1^n \otimes_{min} E} \hspace{.1cm}. \]
\item[ii)] There is a constant $c_2$ such that for all completely
bounded operators $S:F \rightarrow E$ one has
\[ \sup_k k^{\frac{1}{p}} \hspace{.1cm} \left | \lambda_k(TS) \right |
\hspace{.1cm} \le \hspace{.1cm} c_2 \hspace{.1cm} \left \| S:{\rm min}(F) \rightarrow E \right \|_{cb} \hspace{.1cm} .\]
\end{enumerate}
In the limit case $p=1$ the eigenvalues are summable if and only if
the operator is 1-summing. In this case there exists a 1-summing
extension
$\hat{T}\hspace{.05cm}:\hspace{.05cm} G^* \rightarrow \min(F^{**})$ which factors completely boundedly
through $R\cap C$.
Furthermore, every completely bounded $S: \min(F) \rightarrow E$ is
absolutely 2-summing
and hence the eigenvalues of a composition $TS$ are in $\ell_2$.
\end{heorem}
Let us note an application for an exact space $E\subset B(H)$ with
quotient map
$q:B(H)^*\rightarrow E^*$. The Banach space $E^*$ is of cotype 2 and
$B(\ell_{\infty},E^*) \subset
CB(\ell_{\infty},E^*)$ if and only if there is a constant $c>0$ such
that for every sequence $(x_k)_1^n\subset E^*$ there is a sequence
$(\tilde{x}_k)_1^n \subset B(H)^*$
such that $q(\tilde{x}_k)=x_k$ and
\[ {\rm I\! E} \left \| \sum\limits_1^n \tilde{x}_k \varepsilon_k \right \|_{B(H)^*} \hspace{.1cm} \le \hspace{.1cm} c \hspace{.1cm} {\rm I\! E}
\left \| \sum\limits_1^n x_k \varepsilon_k \right \|_{E^*} \hspace{.1cm} .\]
In particular, a space $\max(X)$ is of operator cotype 2 if and only if
it satisfies
the conditions above if and only if it is a cotype 2 space satisfying
Grothendieck's theorem.
A non-trivial example is the dual $A(D)^*$ of the disk algebra.
{\bf Acknowledgement:} I would like to thank Gilles Pisier for
stimulating
discussions, access to his preprints and the new concepts in operator
space
theory.
\setcounter{section}{0}
\section*{Preliminaries}
In what follows $c_0, c_1, \ldots$ always denote universal constants.
We use standard Banach space notation. In particular, the classical
spaces $\ell_q$ and $\ell_q^n$, $1\le q\le \infty$, $n \in \nz$, are defined
in the usual way. We will also use the Lorentz spaces
$\ell_{p\infty}$.
This space consists of all sequences
$\sigma \in \ell_{\infty}$ such that
\[ \left \| \sigma \right \|_{p\infty} \hspace{.1cm} := \hspace{.1cm} \sup_{n \in \nz} \,
n^{\frac{1}{p}}\,\sigma_n^*
\hspace{.1cm} <\hspace{.1cm} \infty . \]
Here $\sigma^*\,=\,(\sigma_n^*)_{n \in \nz}$ denotes the
non-increasing rearrangement of $\sigma$.
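For instance, the sequence $\sigma$ with $\sigma_n \hspace{.05cm}=\hspace{.05cm} n^{-\frac{1}{p}}$
satisfies $\left \| \sigma \right \|_{p\infty} \hspace{.05cm}=\hspace{.05cm} 1$, although it does not
belong to $\ell_p$.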
The standard reference on operator ideals is the monograph of Pietsch
\cite{PIE}. The ideals of bounded linear operators, finite rank
operators
and integral operators are denoted by ${\cal B}$, ${\cal F}$, ${\cal I}$. Given an
operator ideal
$(A,\alpha)$ the adjoint operator ideal $(A^*,\alpha^*)$ is defined by
the set of bounded
operators $T:Y\rightarrow X$ such that
\[ \alpha^*(T) \hspace{.1cm}:=\hspace{.1cm} \sup\left \{ \left | tr(ST)\right | \left | { \atop } \right. S\in
{\cal F}(X,Y),\hspace{.05cm} \alpha(S) \hspace{.05cm} \le \hspace{.05cm} 1\right \} \]
is finite. In particular, the ideal of integral operators is adjoint to
the ideal of bounded operators with
\[ \iota_1(T) \hspace{.1cm} := \hspace{.1cm} \left \| \cdot \right \|^*(T) \hspace{.1cm} .\]
We recall that an operator $T \in B(X,Y)$ factors through a Hilbert
space
($T\in \Gamma_2(X,Y)$)
if there are a Hilbert space $H$ and operators $S:X\rightarrow H$, $R:H\rightarrow
Y^{**}$
such that $\iota_{Y^*}T\hspace{.1cm} = \hspace{.1cm} RS$, where $\iota_{Y^*}:Y \rightarrow Y^{**}$ is
the canonical
embedding of $Y$ into its bidual. The corresponding norm $\gamma_2(T)$
is defined as $\inf \{\left \| S \right \| \left \| R\right \|\}$, where the infimum is
taken over such
factorizations.
\vspace{0.5cm}
Let $1 \le q \le p \le \infty$ and $n \in \nz$. For an operator $T \in
{\cal B}(X,Y)$
the pq-summing norm of $T$ with respect to $n$ vectors is defined by
\[ \pi_{pq}^n(T) \hspace{.1cm} := \hspace{.1cm}
\sup\left\{\hspace{.05cm} \left ( \sum\limits_1^n \left \| Tx_k \right \|^p \right ) ^{1/p} \hspace{.05cm} \left | \hspace{.1cm}
\sup_{\left \| x* \right \|_{X^*}\le1} \left ( \sum\limits_1^n \left |\langle
x_k,x^*\rangle \right |^q \right ) ^{1/q}
\hspace{.1cm} \le \hspace{.1cm} 1\right.\hspace{.05cm} \right\} \hspace{.1cm} .\]
An operator is said to be absolutely pq-summing
$(T \in \Pi_{pq}(X,Y))$ if
\[ \pi_{pq}(T) \hspace{.1cm} := \hspace{.1cm} \sup_n \pi_{pq}^n(T) \hspace{.1cm} < \hspace{.1cm} \infty \hspace{.1cm} . \]
Then $(\Pi_{pq},\pi_{pq})$ is a maximal and injective Banach ideal (in the
sense
of Pietsch). As usual we abbreviate $(\Pi_q,\pi_q) :=
(\Pi_{qq},\pi_{qq})$.
For further information about absolutely pq-summing operators we refer
to
the monograph of Tomczak-Jaegermann \cite{TOJ}.
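For instance, for the identity of $\ell_2^n$ one has
$\pi_2(id:\ell_2^n \rightarrow \ell_2^n) \hspace{.1cm} = \hspace{.1cm} \sqrt{n}$.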
We need the definition of some $s$-numbers of an operator $T\in\B (E,F)$.
The $n$-th $approximation$ $number$ is defined by
\[ a_n(T) \hspace{.1cm} :=\hspace{.1cm} \inf\{\, \left \| T-S \right \| \, | \,rank(S)\,< \, n \,\}
\hspace{1.5cm} ,\]
whereas the $n$-th $Weyl\,number$ is given by
\[ x_n(T) \hspace{.1cm} :=\hspace{.1cm} \sup\{\, a_n(Tu)\, |\, u \in {\cal B}(\ell_2,E) \,
\mbox{with}
\, \left \| u \right \| \,\le \, 1\,\} \hspace{1.5cm} .\]
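For instance, for a diagonal operator $D_{\sigma}$ on $\ell_2$ with non-increasing entries
$\sigma_1\ge \sigma_2 \ge \ldots \ge 0$ one has
$a_n(D_{\sigma}) \hspace{.1cm} = \hspace{.1cm} x_n(D_{\sigma}) \hspace{.1cm} = \hspace{.1cm} \sigma_n$.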
Let $s \in \{a,x\}$. By ${\cal L}_{pq}^{(s)}$ we denote the ideal of
operators $T$
such that $(s_n(T))_{n \in \nz}\in \ell_{pq}$ with the associated quasi-norm
$\ell_{pq}^{(s)}(T) \hspace{.05cm}:=\hspace{.05cm} \left \| (s_n(T))_{n \in \nz} \right \|_{\ell_{pq}}$.
If $H$ is a Hilbert space the spaces ${\cal S}_{pq}(H) \hspace{.05cm}=\hspace{.05cm} {\cal
L}_{pq}^{(a)}$
are normable. Indeed all $s$-numbers coincide for operators
on Hilbert spaces. If $p\hspace{.05cm}=\hspace{.05cm} q$ we will briefly write ${\cal
S}_p(H)$.
This includes ${\cal S}_2(H)$, the set of Hilbert-Schmidt operators.
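On a Hilbert space the coincidence of the $s$-numbers with the singular values can also
be illustrated numerically. The following toy sketch (an illustrative check only, not
part of the argument) verifies that the approximation numbers of a matrix are its
singular values, the best approximation of rank $<n$ being the truncated singular
value decomposition:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 6))
U, S, Vt = np.linalg.svd(T)   # singular values S, non-increasing

for n in range(1, 7):
    # best approximation of rank < n: truncated SVD of rank n-1
    T_approx = (U[:, :n-1] * S[:n-1]) @ Vt[:n-1, :]
    a_n = np.linalg.norm(T - T_approx, ord=2)   # operator norm of error
    assert np.isclose(a_n, S[n-1])
\end{verbatim}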
\vspace{0.5cm}
By Ruan's characterization theorem there are two possibilities to
introduce operator spaces: either as subspaces of ${\cal B}(H)$, where $H$ is
a
Hilbert space, or as a Banach space $E$ together with a sequence of
norms
on the spaces $M_n(E)$ of $n\times n$ matrices with entries in $E$.
To guarantee that such a sequence of norms is induced by
an embedding into some $B(H)$ the following axioms are required.
\begin{enumerate}
\item[i)] If $O=(O_{ij})$, $P=(P_{ij})$ are scalar $n\times n$ matrices
and $x =(x_{ij})$ is in $M_n(E)$
one has
\[ \left \| (\sum\limits_{kl} O_{ik}x_{kl}P_{lj})_{ij} \right \|_{M_n(E)} \hspace{.1cm} \le \hspace{.1cm} \left \|
O\right \| \hspace{.1cm} \left \| x \right \|_{M_n(E)}\hspace{.1cm} \left \| P\right \|\hspace{.1cm} .\]
\item[ii)] If a matrix $B \hspace{.05cm}=\hspace{.05cm} \left ( {x \atop 0}{0\atop y}\right ) $
consists of two
disjoint blocks one has
\[ \left \| B \right \| \hspace{.1cm} = \hspace{.1cm} \max\{ \left \| x \right \|, \hspace{.05cm} \left \| y \right \| \} \hspace{.1cm} .\]
\end{enumerate}
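For instance, for a concrete operator space $E \subset {\cal B}(H)$ both axioms are
satisfied by the norms inherited from the inclusions
$M_n(E) \subset M_n({\cal B}(H)) \cong {\cal B}(\ell_2^n(H))$.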
A major step for the development of operator space theory
is the right definition of an operator space dual. Indeed, the norm
of a matrix $(x^*_{ij})\subset E^*$ is given by
\[ \left \| x^*_{ij} \right \|_{M_n(E^*)} \hspace{.1cm} = \hspace{.1cm} \left \| (x^*_{ij}) :E\rightarrow M_n
\right \|_{cb}
\hspace{.1cm} = \hspace{.1cm} \sup \left\{ \left \| \langle x^*_{ij}, x_{kl}\rangle \right \|_{M_{n^2}}
\left | { \atop } \right. \left \| x_{ij} \right \|_{M_n(E)} \hspace{.05cm} \le \hspace{.05cm} 1 \right\} \hspace{.1cm} .\]
For further information on this and operator space theory we
refer to the paper of Blecher and Paulsen, \cite{BPT}.
\section{The notion of 1-summing operators on operator spaces}
Given two Banach spaces $X$ and $Y$ a matrix structure corresponding
to operator spaces is defined on ${\cal B}(X,Y)$ in the following way.
The norm of a matrix $(T_{ij})\subset {\cal B}(X,Y)$ is induced
by considering this matrix as an element of ${\cal B}(\ell_2^n(X),\ell_2^n(Y))$
\[ \left \| T_{ij} \right \|_n \hspace{.1cm} := \hspace{.1cm} \sup\left\{ \left ( \sum\limits_{i=1}^n
\left \| \sum\limits_{j=1}^n T_{ij} (x_j)\right \|^2 \right ) ^{\frac{1}{2}} \hspace{.1cm}\left |\hspace{.1cm}
\sum\limits_1^n \left \| x_j \right \|^2 \hspace{.1cm}
\le \hspace{.1cm} 1 \hspace{.1cm} \right. \right\}\hspace{.1cm} . \]
Following \cite{PCB}
an operator $u \in {\cal B}(E,F)$, where $E \subset {\cal B}(X_1,Y_1)$ and
$F \subset {\cal B}(X_2,Y_2)$, is said to be completely bounded if there is a
constant
$c>0$ such that for $(T_{ij}) \subset E$
\[ \left \| u(T_{ij}) \right \|_n \hspace{.1cm} \le \hspace{.1cm} c \hspace{.1cm} \left \| T_{ij} \right \|_n\hspace{.1cm} . \]
The infimum over all such constants is denoted by $\left \| u \right \|_{cb}$.
As usual $\ell_{\infty}^n$ will be considered as a subspace
of ${\cal B}(\ell_2^n)$. The matrix norm induced by this embedding corresponds to
the $\varepsilon$ tensor product. In analogy to the classical theory of
absolutely r1-summing operators we
define the r1-summing norm (with $n$ vectors) for an
operator $T \in B(E,F)$, where $F$ is a Banach space and
$E \subset B(X,Y)$, as follows
\begin{eqnarray*}
\pi_{r1,sc}^n(T) &:=& \sup\left \{ \left ( \sum\limits_1^n \left \| Tu(e_k) \right \|^r
\right ) ^{\frac{1}{r}}
\hspace{.1cm} \left | { \atop } \left \| u\hspace{.05cm}:\hspace{.05cm}\ell_{\infty}^n \rightarrow E \right \|_{cb} \hspace{.1cm} \le 1
\hspace{.1cm}\right.\right\} \hspace{.3cm} \mbox{and}\\
\pi_{r1,sc}(T) &:=& \sup_{n \in \nz} \pi_{r1,sc}^n(T) \hspace{.1cm} .
\end{eqnarray*}
An operator $T$ is said to be $r1$-summing if $\pi_{r1,sc}(T)$ is finite.
The notion of absolutely r1-summing operators is included in this
definition if we consider $E$ to be embedded into $C(B_{E^*})\subset
{\cal B}(\ell_2(B_{E^*}),\ell_2(B_{E^*}))$.
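Let us note that for a minimal operator space $E$ every bounded operator
$u:\ell_{\infty}^n \rightarrow E$ satisfies $\left \| u \right \|_{cb} \hspace{.1cm} = \hspace{.1cm} \left \| u \right \|$,
so that on $\min(E)$ the norm $\pi_{r1,sc}$ coincides with the classical absolutely
r1-summing norm $\pi_{r1}$.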
A basic tool for the notion of r1-summing operators is a description
of the cb norm for operators acting on $\ell_{\infty}$. This is well known, but
since
it is crucial for what follows we give a proof.
\begin{lemma}\label{cund} Let $E \subset {\cal B}(X,Y)$ and $u \in
{\cal B}(\ell_{\infty}^n,E)$ with $x_k=u(e_k)$.
Then we have
\[ \left \| u \right \|_{cb} \hspace{.1cm} = \hspace{.1cm} \sup\left\{ \sum\limits_1^n
\sigma_1(vx_kw)\hspace{.1cm}\left | { \atop } \right.\hspace{.1cm} v\in {\cal B}(Y,\ell_2),\hspace{.1cm} w\in {\cal B}(\ell_2,X) \hspace{.1cm}
\mbox{and} \hspace{.1cm}\pi_2(v),\pi_2(w^*)\hspace{.1cm}\le\hspace{.1cm} 1 \hspace{.1cm}\right\}\hspace{.1cm} , \]
where $\sigma_1$ denotes the trace class norm.
\end{lemma}
{\bf Proof:} Clearly, the supremum on the right hand side remains unchanged
if
we replace all operators $v \in {\cal B}(Y,\ell_2)$, $w\in {\cal B}(\ell_2,X)$
by the supremum over $m\in {\rm I\! N}$ and $v\in {\cal B}(Y,\ell_2^m)$,
$w\in {\cal B}(\ell_2^m,X)$. By a well known characterization of 2-summing
operators, see \cite{PIL}, every operator $v\in {\cal B}(Y,\ell_2^m)$ can
be written in the form $v=Oz$ with
\[ \left ( \sum\limits_1^N \left \| z^*(e_k) \right \|^2 \right ) ^{\frac{1}{2}}\left \| O\hspace{.05cm}: \hspace{.05cm}
\ell_2^N \rightarrow \ell_2^m
\right \| \hspace{.1cm} \le \hspace{.1cm} (1+\varepsilon) \hspace{.1cm} \pi_2(v)\hspace{.1cm} , \]
for $\varepsilon>0$ arbitrary. Hence we get
\begin{eqnarray*}\lefteqn{
\sup\left\{ \sum\limits_1^n \sigma_1(vx_kw)\hspace{.1cm}\left | { \atop } \right.\hspace{.1cm}
v\in {\cal B}(Y,\ell_2),\hspace{.1cm} w\in {\cal B}(\ell_2,X) \hspace{.1cm} \mbox{and}
\hspace{.1cm}\pi_2(v),\pi_2(w^*)
\hspace{.1cm}\le\hspace{.1cm} 1 \hspace{.1cm} \right\}\hspace{.1cm} = }\\
&=& \sup_{N\in {\rm I\! N}} \sup\left\{ \sum\limits_1^n tr(A^k vx_kw) \left | { \atop } \right. \hspace{.1cm}
\left \| A^k \hspace{.05cm} :\hspace{.05cm}\ell_2^N \rightarrow \ell_2^N \right \| \hspace{.1cm} \le \hspace{.1cm} 1,
\hspace{.1cm} \sum\limits_1^N \left \| w(e_i)
\right \|^2,\hspace{.1cm} \sum\limits_1^N \left \| v^*(e_j) \right \|^2 \hspace{.1cm} \le \hspace{.1cm} 1 \right\}\\
&=& \sup_{N \in {\rm I\! N}} \sup \left \{ \sum\limits_{k=1}^n \sum\limits_{i=1}^N
<v^*(e_i), \sum\limits_{j=1}^N A^k_{ji}x_k(w(e_j))> \left | { \atop } \right. \hspace{.1cm}
\left \| A^k\hspace{.05cm}:\hspace{.05cm} \ell_2^N \rightarrow \ell_2^N \right \| \hspace{.1cm} \le \hspace{.1cm} 1,\hspace{.1cm}
\sum\limits_1^N \left \| w(e_i) \right \|^2, \right.\\
& &\hspace{1.5cm} \hspace{1.5cm} \hspace{1.5cm} \hspace{1.5cm}\pla \hspace{1.5cm}\pla\hspace{.3cm}\pll \left.\sum\limits_1^N \left \|
v^*(e_j) \right \|^2 \hspace{.1cm} \le \hspace{.1cm} 1 \right\}\\
&=&\sup_{N \in {\rm I\! N}} \sup\left\{ \left \| \left ( u\left ( \sum\limits_1^n e_k \otimes
A^k_{ji}\right ) \right ) _{ij} \right \|_N \left | { \atop } \right.\hspace{.3cm} \sup_k \left \| (A^k)^t \right \| \hspace{.1cm}
\le \hspace{.1cm} 1 \right\}\\
&=& \left \| u \right \|_{cb}\hspace{.1cm} .\\[-1.3cm]
\end{eqnarray*}\hfill $\Box$\vspace{0.5cm}
\begin{rem} \label{rcoh}
{\rm If $E \subset X^* \cong B(X,{\rm I\!\!\! C})$ or $E \subset Y\cong B({\rm I\!\!\! C},Y)$
the formula above
reduces to
\[ \left \| u :\ell_{\infty}^n \rightarrow E \right \|_{cb} \hspace{.1cm} =\hspace{.1cm} \pi_2(u) \hspace{.1cm} .\]
Therefore the 1-summing norm of an operator $T \in B(E,F)$
coincides with the absolutely 2-summing norm
\[ \pi_{1,sc}(T) \hspace{.1cm} =\hspace{.1cm} \pi_2(T) \hspace{.1cm} .\]
If the space $E$ has cotype 2 (or is $(2,1)$-mixing, see \cite{PIE})
every
absolutely 2-summing operator is absolutely 1-summing and therefore all
these notions coincide. The most canonical examples are given by the
row space
$R = {\cal B}({\rm I\!\!\! C},\ell_2)$ and the column space $C = {\cal B}(\ell_2,{\rm I\!\!\! C})$.
In these cases it is a consequence of the "little Grothendieck
inequality", see \cite{TOJ},
\[ \pi_1(T) \hspace{.1cm} \le \hspace{.1cm} \hspace{.1cm} \frac{2}{\sqrt{\pi}}\hspace{.3cm} \pi_2(T) \hspace{.1cm} =\hspace{.1cm}
\frac{2}{\sqrt{\pi}}\hspace{.3cm} \pi_{1,sc}(T)\hspace{.1cm} . \]
By interpolation the same remains true for the operator Hilbert space
$OH$.}
\end{rem}
{\bf Proof:} Let $E \subset Y\cong {\cal B}({\rm I\!\!\! C},Y)$ and $u \in {\cal B}(\ell_{\infty}^n,E)$.
Trace duality for absolutely 2-summing operators implies
\begin{eqnarray*}
\left \| u \right \|_{cb} &=& \sup_{\pi_2(v),\pi_2(w^*)\le 1} \sum\limits_1^n
\sigma_1(v(e_1 \otimes y_i)w)\hspace{.1cm} \le \hspace{.1cm}
\sup_{\pi_2(v),\pi_2(w^*)\le 1} \sum\limits_1^n \left \| v(y_i) \right \| \hspace{.1cm} \left \| w
\right \|\\
&=&\sup_{\pi_2(v)\le 1} \iota_1(vu)\hspace{.1cm} \le \hspace{.1cm}
\sup_{\pi_2(v), \left \| w \right \| \le 1} \left | tr(vuw)\right |\hspace{.1cm} \le \hspace{.1cm}
\sup_{\left \| w \right \| \le 1} \pi_2(uw) \hspace{.1cm} = \hspace{.1cm} \pi_2(u) \hspace{.1cm} .
\end{eqnarray*}
The argument for $E \subset {\cal B}(X,{\rm I\!\!\! C})$ is similar. For $T \in B(E,F)$
we use Pietsch's factorization theorem, again trace duality
and the fact that absolutely 1-summing operators on $\ell_{\infty}$
are integral
\begin{eqnarray*}
\pi_{1,sc}(T) &=& \sup \left \{ \pi_1(Tu) \left | { \atop } \right. \pi_2(u: \ell_{\infty}^n
\rightarrow E) \le 1\right\}\\
&=& \sup\left \{ \iota_1(Tu) \left | { \atop } \right. \pi_2(u: F \rightarrow E) \le 1\right \}
\hspace{.1cm} = \hspace{.1cm} \pi_2(T)\hspace{.1cm} . \\[-1.5cm]
\end{eqnarray*} \hspace*{\fill}$\Box$\hz\pagebreak[1] \vspace{0.5cm}
Nowadays it can be considered as a standard application of the
Hahn-Banach
separation theorem to deduce a factorization theorem for
1-summing operators. We refer to \cite{PSP} for the required
modification
in the infinite dimensional case.
\begin{prop}\label{fac} Let $X$, $Y$, $F$ be Banach spaces,
$E \subset {\cal B}(X,Y)$ and $T \in {\cal B}(E,F)$.
\begin{enumerate}
\item Let us assume that $X$ and $Y$ are finite dimensional, of dimension
$n$ and $m \in {\rm I\! N}$, say. The operator $T$ is 1-summing if and only if
there exists
a constant $C>0$ and a probability measure $\mu$ on the compact space
$K\hspace{.05cm}:=\hspace{.05cm} {\rm B}_{\Pi_2^d(\ell_2^n,X)} \times {\rm
B}_{\Pi_2(Y,\ell_2^m)}$
such that
\[ \left \| Tx \right \| \hspace{.1cm} \le \hspace{.1cm} C \int\limits_K \sigma_1(vxu) \hspace{.1cm} d\mu(u,v) \hspace{.1cm} .
\]
\item $T$ is 1-summing if and only if there exists a constant $C>0$
and
an ultrafilter ${\cal U}$ over an index set ${\cal A}$ together with finite
sequences
$(\lambda^{\alpha}_i)_{i \in I^{\alpha}}$,
$(u_i^{\alpha}, v_i^{\alpha})_{i \in I^{\alpha}} \subset
{\cal B}_{\Pi_2^d(\ell_2,X)}\times B_{\Pi_2(Y,\ell_2)}$
such that
\[ \left \| Tx \right \| \hspace{.1cm} \le \hspace{.1cm} C \lim\limits_{\alpha \in {\cal U}} \sum\limits_{i \in
I^{\alpha}} \sigma_1(v_i^{\alpha}xu_i^{\alpha}) \hspace{.1cm} . \]
\end{enumerate}
In both cases $C$ can be chosen to be $\pi_{1,sc}(T)$. In particular, if
$E\subset B(H)$
is an operator space and $F$ carries its minimal (commutative)
operator space
structure then every 1-summing operator is completely 1-summing in the
sense of Pisier, \cite{PSP}.
\end{prop}
In the next proposition we list the relations between the notion of
r1-summing operators and $(r1,C^*)$-summing operators defined on
$C^*$-algebras by Pisier. More generally, let us recall that
an element $z\in {\cal B}(X,\overline{X^*})$, $\overline{X^*}$ the anti-dual, is said to
be
positive if $\langle z(x),x \rangle\hspace{.05cm} \ge \hspace{.05cm} 0$ for all $x \in X$. An
operator
$u:\ell_{\infty}^n \rightarrow {\cal B}(X,\overline{X^*})$ is positive, if $u$ maps positive
sequences
into positive elements.
\begin{prop} Let $X$ be a Banach space.
\begin{enumerate}
\item An operator $u:\ell_{\infty}^n \rightarrow {\cal B}(X,\overline{X^*})$ is completely bounded
if and only if $u$ is decomposable into positive operators and
\[ \left \| u \right \|_{cb} \hspace{.1cm} \le \hspace{.1cm} \inf\left\{ \sum\limits_j \left | \lambda_j\right | \left \| u_j
\right \|\left | { \atop } \right. u \hspace{.05cm}=\hspace{.05cm} \sum\limits_j \lambda_j u_j\hspace{.1cm} ,\hspace{.1cm} u_j \hspace{.05cm} positive\right\}
\hspace{.1cm} \le \hspace{.1cm} 4 \hspace{.1cm} \left \| u \right \|_{cb} \hspace{.1cm} .\]
Therefore an operator $T :{\cal B}(X,\overline{X^*}) \rightarrow F$ is r1-summing if and
only if
\[\left ( \sum\limits_1^n \left \| Tz_k \right \|^r \right ) ^{ \frac{1}{r} } \hspace{.1cm} \le \hspace{.1cm} C \hspace{.1cm} \left \|
\sum\limits_1^n z_k \right \|\]
for all finite sequences of positive elements $(z_k)_1^n \subset
{\cal B}(X,\overline{X}^*)$.
The corresponding constants are equivalent up to a factor 4. Given an
operator
$v:X^* \rightarrow G$, the operator $T:=v \otimes \bar{v}:{\cal B}(X ,\overline{X^*})
\rightarrow G\otimes_{\varepsilon}\overline{G}$
is 1-summing if and only if $v$ is absolutely 2-summing.
\item If $E$ is a subspace of a $C^*$-algebra and $T \in {\cal B}(E,F)$
is an r1-summing operator then it is $(r1,C^*)$-summing, i.e.
for all $(x_k)_k \subset E$
\[ \left ( \sum\limits^n_1 \hspace{.05cm} \left \| T(x_k) \right \|^r\right ) ^{\frac{1}{r}} \hspace{.1cm} \le \hspace{.1cm}
4 \hspace{.1cm} \pi_{r1,sc}(T) \hspace{.1cm} \left \|
\sum\limits_1^n \left ( \frac{x^*_k x^{ }_k + x^{ }_k x_k^*}{2}
\right ) ^{\frac{1}{2}} \right \|_{C^*} \hspace{.1cm}. \]
Conversely, if $E$ is a von Neumann algebra, $E$ is injective if and
only if
every $(1,C^*)$-summing operator is 1-summing and satisfies
\[ \pi_{1,sc}(T) \hspace{.1cm} \le \hspace{.1cm} c \hspace{.1cm} \pi_{1,C^*}(T) \hspace{.1cm} ,\]
where $c$ is a constant depending on $E$ {\rm (}$\pi_{1,C^*}$ denotes
the best constant in the inequality above for $r=1${\rm )}.
In this case also $\pi_{r1,sc}(T) \hspace{.05cm} \le \hspace{.05cm} c \hspace{.05cm} \pi_{r1,C^*}(T)$ for all $1\le
r<\infty$.
\item If $T :E\rightarrow F$ is a 1-summing operator defined on an operator
space
$E \subset {\cal B}(H)$ it is $(2,oh)$, $(2,R)$ and $(2,C)$-summing. This
means
\begin{eqnarray*}
\left ( \sum\limits^n_1 \hspace{.05cm} \left \| T(x_k) \right \|^2 \right ) ^{\frac{1}{2}}&\le& \hspace{.1cm}
\pi_{1,sc}(T) \hspace{.1cm}
\left \| \sum\limits_1^n x_k \otimes \overline{x_k} \right \|_{E \otimes_{min}
\overline{E}}^{\frac{1}{2}}\hspace{.1cm} ,\hspace{1.5cm}\\
\left ( \sum\limits^n_1 \hspace{.05cm} \left \| T(x_k) \right \|^2 \right ) ^{\frac{1}{2}}&\le& \hspace{.1cm}
\pi_{1,sc}(T) \hspace{.1cm}
\left \| \left ( \sum\limits_1^n x^{ }_k x^*_k \right ) ^{\frac{1}{2}}\right \|_{\B(H)}\hspace{.1cm}
, \hspace{1.5cm}\\
\mbox{and}\hspace{1.5cm}
\left ( \sum\limits^n_1 \hspace{.05cm} \left \| T(x_k) \right \|^2 \right ) ^{\frac{1}{2}}&\le& \hspace{.1cm}
\pi_{1,sc}(T) \hspace{.1cm}
\left \| \left ( \sum\limits_1^n x^*_k x^{ }_k
\right ) ^{\frac{1}{2}}\right \|_{\B(H)}\hspace{.1cm}. \hspace{1.5cm} \\
\end{eqnarray*}
\end{enumerate}
\end{prop}\vspace{0.5cm}
{\bf Proof:} For the following let us denote by $\pi_{r1}^+(T)$ the
best constant $C$ satisfying
\[ \left ( \sum\limits_1^n \left \| T(z_k) \right \|_{F}^r \right ) ^{\frac{1}{r}} \hspace{.1cm} \le
\hspace{.1cm} C \hspace{.1cm} \left \| \sum\limits_1^n z_k \right \|_{{\cal B}(X,\overline{X^*})} \]
for all positive elements $(z_k)_1^n$. Then we have trivially
\[ \left ( \sum\limits_1^n \left \| T(u(e_k)) \right \|_{F}^r \right ) ^{\frac{1}{r}} \hspace{.1cm} \le
\hspace{.1cm} \pi_{r1}^+(T) \hspace{.1cm} \left \| u :\ell_{\infty}^n \rightarrow {\cal B}(X,\overline{X^*}) \right \|_{dec} \]
where
\[ \left \| u \right \|_{dec} \hspace{.1cm} :=\hspace{.1cm} \inf\left\{ \sum\limits_j \left | \lambda_j\right |
\left \| u_j \right \|_{op}\left | { \atop } \right. u \hspace{.05cm}=\hspace{.05cm} \sum\limits_j \lambda_j u_j\hspace{.1cm} ,\hspace{.1cm} u_j\hspace{.05cm}
positive\right\} \hspace{.1cm} .\]
We will first show that for a positive operator $u$
\[ \left \| u \right \|_{cb} \hspace{.1cm} = \hspace{.1cm} \left \| \sum\limits_1^n u(e_k) \right \| \hspace{.1cm} = \hspace{.1cm} \left \| u
\right \|_{op} \hspace{.1cm} .\]
For this we can assume that $z_k \hspace{.05cm}=\hspace{.05cm} u(e_k)$
are positive elements in ${\cal B}(X,\overline{X^*})$. Let us note that positive
elements
are automatically $\Gamma_2$ operators. On the tensor product
$\ell_2\otimes X$
we use the norm induced by the absolutely $2$ summing norm of the
corresponding
operator from $X^*$ with values in $\ell_2$. With this norm each
element
$x_k$ defines a positive, possibly degenerated, scalar product
\[ \phi_k \hspace{.05cm}:\left ( \ell_2\otimes X\right ) \times \left ( \ell_2\otimes X\right )
\rightarrow {\rm I\!\!\! C}\hspace{.1cm} \quad\mbox{with}\quad
\phi_k(v,w) \hspace{.05cm}:=\hspace{.05cm} tr(\overline{v^*}z_kw) \hspace{.1cm}.\]
From Lemma \ref{cund}, H\"older's and the Cauchy-Schwarz inequality we
deduce
\begin{eqnarray*}
\left \| u \right \|_{cb} &=& \sup\left \{ \sum\limits_1^n tr(\overline{A^k}\overline{v^*}z_kw)
\hspace{.1cm}\left | { \atop } \right. \pi_2(v^*), \pi_2(w^*), \left \| A^k \right \|\hspace{.1cm} \le \hspace{.1cm} 1 \right\}\\
&=& \sup\left \{ \sum\limits_1^n \phi_k(vA^k,w) \hspace{.1cm}\left | { \atop } \right.
\pi_2(v^*), \pi_2(w^*), \left \| A^k \right \|\hspace{.1cm} \le \hspace{.1cm} 1\right\}\\
&\le& \sup\left \{ \left ( \sum\limits_1^n \phi_k(vA^k,vA^k)
\right ) ^{\frac{1}{2}}\hspace{.05cm}
\left ( \sum\limits_1^n \phi_k(w,w) \right ) ^{\frac{1}{2}}
\hspace{.1cm}\left | { \atop } \right. \pi_2(v^*), \pi_2(w^*), \left \| A^k \right \|\hspace{.1cm} \le \hspace{.1cm} 1\right\}\\
&\le& \sup\left\{ \sum\limits_1^n \sigma_1(\overline{v^*}z_kv) \hspace{.1cm} \left | { \atop } \right.
\pi_2(v^*)\hspace{.1cm} \le \hspace{.1cm} 1 \right\}\hspace{.1cm} = \hspace{.1cm}
\sup\left \{ \sum\limits_1^n tr(\overline{v^*}z_kv) \hspace{.1cm} \left | { \atop } \right. \pi_2(v^*)\hspace{.1cm} \le
\hspace{.1cm} 1 \right\}\\
&\le& \gamma_2(\sum\limits_1^n z_k) \hspace{.1cm} \le \hspace{.1cm} \left \| \sum\limits_1^n z_k \right \| \hspace{.1cm} = \hspace{.1cm} \left \|
u(1,..,1) \right \| \hspace{.1cm} \le \hspace{.1cm} \left \| u \right \|_{op} \hspace{.1cm} .
\end{eqnarray*}
Here we used that for a positive element $z_k$ the composition
$\overline{v^*}z_kv$ actually
defines a positive operator on $\ell_2$ and that for the positive
element $\sum\limits z_k$
the $\gamma_2$-norm and the operator norm coincide.
(If $X \hspace{.05cm}=\hspace{.05cm} H$ the whole statement can be deduced from
\cite[theorem 2.4., proposition 3.5.]{PAU}.) In particular we obtain
\[ \left \| u \right \|_{cb} \hspace{.1cm} \le \hspace{.1cm} \left \| u \right \|_{dec} \quad \mbox{and} \quad
\pi_{r1}^+(T) \hspace{.1cm} \le \hspace{.1cm} \pi_{r1,sc}(T) \hspace{.1cm} .\]
{\bf\it 1:} Let $u: \ell_{\infty}^n \rightarrow {\cal B}(X,\overline{X^*})$ be a completely bounded
operator.
By Pisier's version \cite{PCB}
of the Haagerup/Wittstock factorization theorem, there exists
a $*$-representation $\pi: {\cal B}(\ell_2^n) \rightarrow {\cal B}(H)$ and operators
$V,W: H\rightarrow X$ such that
\[ u(\alpha) \hspace{.1cm} = \hspace{.1cm} \overline{W^*}\hspace{.05cm} \pi(D_{\alpha})\hspace{.05cm} V \quad \mbox{and} \quad
\left \| V\right \| \hspace{.1cm} = \hspace{.1cm} \left \| W \right \| \hspace{.1cm} \le \hspace{.1cm} \sqrt{\left \| u \right \|_{cb}} \hspace{.1cm} ,\]
where $D_{\alpha}$ denotes the diagonal operator with entries $\alpha$.
It is standard
to see that the operators
\[ u^k(\alpha) \hspace{.1cm} :=\hspace{.1cm} \frac{1}{4}\overline{(V+i^k W)}\hspace{.05cm} \pi(D_{\alpha})
\hspace{.05cm} (V+i^kW) \quad k=0,..,3 \]
are positive and of norm less than $\left \| u \right \|_{cb}$. But $u \hspace{.1cm} = \hspace{.1cm}
u^0-u^2 + i(u^1-u^3)$
implies $\left \| u \right \|_{dec} \hspace{.1cm} \le \hspace{.1cm} 4 \left \| u \right \|_{cb}$.
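To make the standard argument explicit: suppressing the conjugation
conventions of the bar notation, the operators $u^k$ arise from the
polarization identity
\[ \frac{1}{4} \hspace{.1cm} \sum\limits_{k=0}^3 i^k\hspace{.05cm} (V+i^kW)^*\hspace{.05cm} A \hspace{.05cm} (V+i^kW) \hspace{.1cm} = \hspace{.1cm} W^* A\hspace{.05cm} V \hspace{.1cm} ,\]
applied to $A \hspace{.05cm}=\hspace{.05cm} \pi(D_{\alpha})$; the cross terms vanish because
$\sum\limits_{k=0}^3 i^k \hspace{.05cm}=\hspace{.05cm} \sum\limits_{k=0}^3 (-1)^k \hspace{.05cm}=\hspace{.05cm} 0$.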
The second statement about operators $T$ of the form $v\otimes \overline{v}$
is a simple consequence of the observation that elementary tensors
$z_i \hspace{.05cm}=\hspace{.05cm} x^*_i \otimes \overline{x^*_i}$ are clearly positive. For the
reverse implication one simply uses the Pietsch factorization theorem
for absolutely 2-summing operators.
{\bf\it 2:} Clearly we have $\pi_{r1}^+(T) \hspace{.1cm} \le \hspace{.1cm} \pi_{r1,C^*}(T)$. For
the
converse we only have to note that every element $x$ in a $C^*$ algebra
admits a decomposition $x\hspace{.1cm} = \hspace{.1cm} x^1-x^2+i(x^3-x^4)$ into positive elements
such that
\[ x^k \hspace{.1cm} \le \hspace{.1cm} \left ( \frac{x^* x + xx^*}{2} \right ) ^{\frac{1}{2}} \hspace{.1cm} .\]
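Such a decomposition can be obtained, for instance, by taking the
positive and negative parts of the real and imaginary parts
$h \hspace{.05cm}=\hspace{.05cm} \frac{x+x^*}{2}$ and $a \hspace{.05cm}=\hspace{.05cm} \frac{x-x^*}{2i}$. Indeed,
$(x-x^*)^*(x-x^*) \hspace{.05cm} \ge \hspace{.05cm} 0$ gives
\[ h^2 \hspace{.1cm} \le \hspace{.1cm} \frac{x^*x+xx^*}{2} \hspace{.1cm} ,\]
so that $h_{\pm} \le |h| \le \left ( \frac{x^*x+xx^*}{2} \right ) ^{\frac{1}{2}}$ by
operator monotonicity of the square root; the estimate for $a_{\pm}$
follows in the same way from $(x+x^*)^*(x+x^*) \hspace{.05cm} \ge \hspace{.05cm} 0$.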
Hence we get $\pi_{r1,C^*}(T) \hspace{.1cm} \le \hspace{.1cm} 4 \pi_{r1}^+(T)$. If $E$ is a von
Neumann
algebra we see that the existence of a constant $c_1>0$ with
\[ \pi_{1,sc}(T) \hspace{.1cm} \le \hspace{.1cm} c_1 \hspace{.1cm} \pi_{1,C^*}(T) \hspace{.1cm} ,\]
for all operators $T:E \rightarrow \ell_{\infty}^n$ is equivalent with the existence of
a constant $c_2$ with
\[ \iota^o(T) \hspace{.1cm} = \hspace{.1cm} \pi_{1,sc}(T) \hspace{.1cm} \le \hspace{.1cm} c_2\hspace{.1cm} \pi_1^+(T) \hspace{.1cm} ,\]
where $\iota^o$ is the operator integral norm.
Hence
trace duality implies that the condition above is equivalent to
\[ \left \| u \right \|_{dec} \hspace{.1cm} \le \hspace{.1cm} c_2 \left \| u \right \|_{cb} \hspace{.1cm} \]
for all $u:\ell_{\infty}^n \rightarrow E$. By Haagerup's theorem, see \cite{HA}, this
holds if and only if $E$ is
injective. Together with the proof of
{\bf\it 1} we see that for an injective von Neumann algebra the notions of
r1-summing
and $(r1,C^*)$-summing coincide.
{\bf\it 3:} This is an easy variant of Kwapien's argument. By the
remark \ref{rcoh} we deduce that
for all diagonal operators $D_{\si} \hspace{.05cm}: \ell_{\infty}^n \rightarrow \ell_2^n$ and $G_n \in
\{R_n,C_n,OH_n\}$
\[ \left \| D_{\si} \hspace{.05cm} :\hspace{.05cm}\ell_{\infty}^n \rightarrow G_n \right \|_{cb} \hspace{.1cm} =\hspace{.1cm} \pi_2(D_{\si}) \hspace{.1cm} =\hspace{.1cm}
\left \| \sigma \right \|_2 \hspace{.1cm} .\]
Let us denote by $(e_k)_1^n$ the sequence of unit vectors of $G_n$.
Then we get for
all $w \in \B(G_n,E)$
\begin{eqnarray*}
\left ( \sum\limits_1^n \left \| Tw(e_k) \right \|^2 \right ) ^{\frac{1}{2}} &=&
\sup_{\left \| \sigma \right \|_2 \hspace{.05cm}\le\hspace{.05cm}1} \sum\limits_1^n \left \| TwD_{\si}(e_k) \right \| \\
&\le& \pi_{1,sc}(T) \hspace{.1cm} \sup_{\left \| \sigma \right \|_2 \hspace{.05cm}\le\hspace{.05cm}1} \left \| wD_{\si} \right \|_{cb}
\hspace{.1cm} \le \hspace{.1cm}
\pi_{1,sc}(T) \hspace{.1cm} \left \| w \right \|_{cb} \hspace{.1cm}.
\end{eqnarray*}
The assertion is proved by identifying the completely bounded norm of $w$
with
the corresponding expressions on the right hand side in 2. For
$G_n \hspace{.05cm}=\hspace{.05cm} OH_n$ this was done in \cite{PLT}. For the two other cases
we refer to
\cite{BPT}.\hfill $\Box$
\begin{rem}\label{ocot} {\rm For an operator space $E\subset \B(H)$
which is of operator
cotype 2 the a priori different notions of summability coincide.
Indeed, using the same
arguments as in the commutative theory, see {\rm \cite{PIL}}, one can
deduce that every operator
$S \in \B(\ell_{\infty}^n,E)$ factors through $OH_n$ with $\gamma_{oh}(S)\hspace{.05cm}\le
c(E)\hspace{.05cm}\left \| S\right \|$. For notation and information see {\rm \cite{PLT}}.
A use of "little Grothendieck" inequality implies
\[ \pi_1(T) \hspace{.1cm} \le \hspace{.1cm} c_0\hspace{.1cm} c(E)\hspace{.1cm} \pi_{2,oh}(T)\hspace{.1cm} . \]
For all (2,oh)-summing operator $T \in \B(E,\ell_2)$. Finally the
factorization properties of (2,oh)-summing operators imply for all
operators $T\in \B(E,F)$
\[ \frac{1}{c_0 c(E)}\hspace{.1cm} \pi_1(T) \hspace{.1cm} \le \hspace{.1cm} \pi_{2,oh}(T)
\hspace{.1cm} \le \hspace{.1cm} \pi_{1,sc}(T) \hspace{.1cm} \le \hspace{.1cm} \pi_1(T)\hspace{.1cm} . \]
}
\end{rem}
The proof of the first theorem in the introduction
is based on a similar statement for the absolutely-summing norm
of operators defined on $C(K)$ spaces.
\begin{prop} \label{connect} Let $2 < r < \infty$, $K$ a compact
Hausdorff space, $F$ a Banach space
and $T:C(K) \rightarrow F$. If there exists a constant $C>0$ such that
\[ \sum\limits_1^n \left \| Tx_k \right \| \hspace{.1cm} \le \hspace{.1cm} C \hspace{.1cm} n^{1-\frac{1}{r}} \hspace{.1cm} \sup_{t\in
K} \sum\limits_1^n \left | x_k(t) \right | \]
for all elements $(x_k)_1^n \subset C(K)$, then we have
\[ \ell_{r,\infty}^{(x)}(T) \hspace{.1cm} \le \hspace{.1cm} c_0 \hspace{.05cm} \left ( \frac{1}{2}-\frac{1}{r} \right ) ^{-1}
\hspace{.1cm} C \hspace{.1cm} , \]
where $c_0$ is an absolute constant. If $F$ and $C(K)$ are complex
Banach spaces one has for every $S:F \rightarrow C(K)$
\[ \sup_{k \in \nz} k^{1/r} \left | \lambda_k(TS) \right | \hspace{.1cm} \le \hspace{.1cm} c^2_0 \hspace{.05cm} \left (
\frac{1}{2}-\frac{1}{r} \right ) ^{-1} \hspace{.1cm} C \hspace{.1cm} \left \| S \right \| \hspace{.1cm} . \]
\end{prop}
{\bf Proof:} First we show
\[ \left \| \hspace{.1cm} (\left \| Tx_k \right \|_F) \hspace{.1cm} \right \|_{r,\infty} \hspace{.1cm} \le \hspace{.1cm} C
\hspace{.1cm} \sup_{t \in K} \sum\limits_k \left | x_k(t)\right | \hspace{.1cm} \]
for all $(x_k)_1^n \subset C(K)$. Indeed, we may assume that $(\left \| Tx_j\right \|)_j$ is
nonincreasing. For fixed $1\hspace{.05cm} \le \hspace{.05cm} k \hspace{.05cm} \le \hspace{.05cm} n$ we get
\begin{eqnarray*}
k \hspace{.05cm} \left \| Tx_k \right \| &\le&
\sum\limits_1^k \left \| Tx_l \right \| \hspace{.1cm} \le \hspace{.1cm} C \hspace{.1cm} k^{1-\frac{1}{r}}
\hspace{.1cm}
\sup_{t \in K} \sum\limits_{l=1}^k \left | x_l(t)\right |
\end{eqnarray*}
Dividing by $k^{1-\frac{1}{r}}$ and taking the supremum over all $1
\hspace{.05cm} \le \hspace{.05cm} k \hspace{.05cm} \le \hspace{.05cm} n$ yields the
estimate. Now we choose $2<q<r$ with
$\frac{1}{2}+\frac{1}{r}=\frac{2}{q}$.
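Such a choice of $q$ is always possible, since $\frac{2}{q} \hspace{.05cm}=\hspace{.05cm}
\frac{1}{2}+\frac{1}{r}$ lies strictly between $\frac{2}{r}$ and $1$
whenever $r>2$; for instance, $r=4$ yields $q \hspace{.05cm}=\hspace{.05cm} \frac{8}{3} \in (2,4)$.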
For $(x_k)_1^n \subset C(K)$ we obtain
\begin{eqnarray*}
\left ( \sum\limits_1^n \left \| Tx_k \right \|^q \right ) ^{1/q} &\le&
\left ( \sum\limits_{k=1}^n k^{-q/r} \right ) ^{1/q} \hspace{.1cm} \left \| \hspace{.1cm} (\left \| Tx_k\right \|)\hspace{.1cm}
\right \|_{r,\infty}\\
&\le& \left ( \frac{1}{q}-\frac{1}{r} \right ) ^{-1/q} \hspace{.1cm} n^{1/q-1/r} \hspace{.1cm}
C \hspace{.1cm} \sup_{t \in K} \sum\limits_1^n \left | x_k(t)\right | \hspace{.1cm} .
\end{eqnarray*}
Therefore we have
\[ \pi_{q1}^n(T) \hspace{.1cm} \le \hspace{.1cm} C
\hspace{.1cm} \left ( \frac{1}{q}-\frac{1}{r} \right ) ^{-1/q} \hspace{.1cm} n^{1/q-1/r} \hspace{.1cm} . \]
Using Maurey's theorem, see \cite[theorem 21.7]{TOJ}\hspace{.05cm}, this implies
with our
choice of $q$
\begin{eqnarray*}
\pi_{q2}^n(T) &\le& C \hspace{.1cm} c_0
\hspace{.1cm} \left ( \frac{1}{2}-\frac{1}{q} \right ) ^{1/q-1} \hspace{.1cm}
\hspace{.1cm} \left ( \frac{1}{q}-\frac{1}{r} \right ) ^{-1/q} \hspace{.1cm} n^{1/q-1/r} \\
&\le& C \hspace{.1cm} 2\hspace{.05cm} c_0 \hspace{.1cm} \left ( \frac{1}{2}-\frac{1}{r} \right ) ^{-1} \hspace{.1cm}
n^{1/q-1/r} \hspace{.1cm} .
\end{eqnarray*}
Now let $u \in {\cal B}(\ell_2,C(K))$. By a Lemma, probably due to Lewis,
see
\cite[Lemma 2.7.1]{PIE}, one can find for all $n \in \nz$ an orthonormal
family $(o_k)_1^n$ in $\ell_2$ with
\[ a_k(Tu) \hspace{.1cm} \le \hspace{.1cm} 2\hspace{.1cm} \left \| Tu(o_k) \right \| \quad \mbox{for all}
\quad k=1,..,n\hspace{.1cm} .\]
Hence we deduce
\begin{eqnarray*}
n^{1/q} a_n(Tu) &\le& 2 \hspace{.1cm} \left ( \sum\limits_1^n \left \| Tu(o_k) \right \|^q
\right ) ^{1/q}\hspace{.1cm} \le \hspace{.1cm}
2\hspace{.1cm} \pi_{q2}^n(T) \hspace{.1cm} \sup_{t \in K} \left ( \sum\limits_1^n \left | u(o_k)(t)
\right |^2 \right ) ^{1/2}\\
&\le& 4\hspace{.1cm} C \hspace{.1cm} c_0 \hspace{.1cm} \left ( \frac{1}{2}-\frac{1}{r} \right ) ^{-1} \hspace{.1cm}
n^{1/q-1/r} \hspace{.1cm}
\sup_{\left \| \alpha \right \|_2 \le 1} \left \| u\left ( \sum\limits_1^n \alpha_k\hspace{.05cm} o_k\right )
\right \|_{C(K)} \\
&\le& 4\hspace{.1cm} C \hspace{.1cm} c_0 \hspace{.1cm} \left ( \frac{1}{2}-\frac{1}{r} \right ) ^{-1}\hspace{.1cm}
n^{1/q-1/r}\hspace{.1cm} \left \| u \right \|\hspace{.1cm} .
\end{eqnarray*}
Dividing by the factor $n^{1/q-1/r}$ and taking the supremum over
$n \in \nz$
yields
\[ \sup_{n \in \nz} n^{1/r} \hspace{.05cm} a_n(Tu) \hspace{.1cm} \le \hspace{.1cm} 4\hspace{.05cm} c_0 \hspace{.1cm} \left (
\frac{1}{2}-\frac{1}{r} \right ) ^{-1} \hspace{.1cm} C \left \| u \right \| \hspace{.1cm} . \]
Now taking the supremum over all $u$ of norm at most $1$, the desired
estimate for
the Weyl numbers is proved. For the estimates of the eigenvalues
we use the fact that the ideal ${\cal L}_{r,\infty}^{(x)}$ is of eigenvalue type
$\ell_{r,\infty}$, \cite[3.6.5]{PIE}.
\hspace*{\fill}$\Box$\hz\pagebreak[1]
\begin{rem} {\rm In fact, all these conditions are equivalent as long as
$2<r<\infty$.
If $1<r<2$ let us consider the embedding $I :\ell_1\rightarrow C[0,2\pi]$
given by the
Rademacher functions $r_j(t)= {\rm sign}\hspace{.05cm} \sin(2^jt)$ and the corresponding
projection
$P: C[0,2\pi] \rightarrow \ell_2$. By Khintchine's inequality $P$ is
r1-summing
for all $r>1$. On the other hand if we compose with a continuous
diagonal operator $D_{\tau}: \ell_2 \rightarrow \ell_1$
we see that the best possible eigenvalue behaviour for r1-summing
operators
is actually $(\lambda_k(PD_{\tau}))_{k \in \nz} \in \ell_2$. For $r=2$ a
more complicated example was
constructed in \cite{KOE}. This shows that the assumption $r>2$ is
really
necessary. }
\end{rem}
\begin{rem} {\rm Since for an operator $A \in {\cal B}(\ell_{\infty}^n,\ell_{\infty}^m)$
the
operator norm coincides with the completely bounded norm, we have for
$1 \le r \le \infty$
\[ \pi_{r1,sc}^n(u) \hspace{.1cm} = \hspace{.1cm} \sup\left\{ \pi_{r1}^n(uw)\hspace{.1cm} \left | { \atop } \right. \hspace{.1cm} \left \| w\hspace{.05cm}:\hspace{.05cm}
\ell_{\infty}^m \rightarrow E\right \|_{cb} \hspace{.1cm} \le \hspace{.1cm} 1 \right\} \hspace{.1cm} . \]
Therefore the results of {\rm \cite{DJ}} can be applied to deduce for
each
operator $u$ of rank at most $n$
\[\pi_{r1,sc}(u) \hspace{.1cm} \le \hspace{.1cm} c_0^{\frac{r'}{r}} \left \{\begin{array}{l
@{\quad} l}
\left ( \frac{1}{r}-\frac{1}{2} \right ) ^{-\frac{r'}{2r}} \hspace{.1cm}
\pi_{r1,sc}^{[n^{r'/2}]}(u) & \mbox{for} \hspace{.1cm} 1 < r <2\\[+0.2cm]
\pi_{21,op}^{[n(1+\ln n)]} (u) & \mbox{for} \hspace{.1cm} r=2\\[+0.2cm]
\left ( \frac{1}{2}-\frac{1}{r} \right ) ^{\frac{1}{r}} \hspace{.1cm} \pi_{r1,sc}^n(u)
& \mbox{for} \hspace{.1cm} 2<r<\infty \hspace{.1cm} ,
\end{array}\right.\]
where $r'$ is the conjugate index to $r$.}
\end{rem}
An operator $u \in {\cal B}(F,E)$, $E \subset {\cal B}(X,Y)$, is said to be
{\it completely $\infty$-factorable} ($u \in \Gamma_{\infty}^{\it o}(F,E)$) if there is a factorization
$u\hspace{.05cm}=\hspace{.05cm}
SR$, where $R\in CB(F,B(H))$, $S \in CB(B(H),E)$ and $H$ is a Hilbert space.
The $\gamma_{\infty}^{\it o}$-norm of $u$ is defined
as $\inf\{\left \| S \right \|_{cb}\hspace{.05cm}\left \| R\right \|_{cb}\}$ where the infimum is
taken over
all such factorizations. As in the commutative case this turns out to
be a
norm. Now we can prove the first theorem of the
introduction.
\begin{theorem} \label{opeigen} Let $2<r<\infty$, $X$, $Y$, $F$ Banach
spaces
and $E \subset B(X,Y)$. For an operator $T:E \rightarrow F$
the following assertions are equivalent.
\begin{enumerate}
\item[i)] There is a constant $c_1$ such that for all $n \in \nz$
\[ \pi_{1,sc}^n(T) \hspace{.1cm} \le \hspace{.1cm} c_1 \hspace{.1cm} n^{1-\frac{1}{r}} \hspace{.1cm} . \]
\item[ii)] There is a constant $c_2$ such that for all operators $R\in
{\cal B}(F,C(K))$,
$S\in CB(C(K),E)$, $K$ a compact Hausdorff space
\[ \sup_{k \in \nz} k^{1/r} \left | \lambda_k(TSR) \right | \hspace{.1cm} \le \hspace{.1cm} c_2 \hspace{.1cm} \left \| R
\right \|\hspace{.1cm}
\left \| S\right \|_{cb} \hspace{.1cm} . \]
\item[iii)] There is a constant $c_3$ such that for all
$n$-dimensional
subspaces $E_1 \subset E$ one has
\[ \pi_{1,sc}(T\iota_{E_1}) \hspace{.1cm} \le \hspace{.1cm} c_3 \hspace{.1cm} n^{1-\frac{1}{r}} \hspace{.1cm} .\]
\end{enumerate}
Moreover the best constants satisfy
\[ c_1 \hspace{.1cm} \le \hspace{.1cm} c_3 \hspace{.1cm} \le \hspace{.1cm} c_0 \hspace{.05cm} c_2 \hspace{.1cm} \le \hspace{.1cm} c_0^2 \hspace{.1cm} \left (
\frac{1}{2}-\frac{1}{r} \right ) ^{-1} \hspace{.1cm} c_1\hspace{.1cm} . \]
If $E \subset B(H)$ is an operator space and $F = \min(F)$ carries
its minimal operator space structure, these conditions are equivalent to
\[ \sup_{k \in \nz} k^{1/r} \left | \lambda_k(TS) \right | \hspace{.1cm} \le \hspace{.1cm} c_4 \hspace{.1cm}
\gamma_{\infty}^{\it o}(S)\hspace{.1cm} \]
for all completely $\infty$-factorable operators $S$.
\end{theorem}\vspace{0.5cm}
{\bf Proof:} \boldmath$i) \Rightarrow ii)$\unboldmath\hspace{.1cm} By the remark
above we have for
all $S \in CB(C(K),E)$
\[ \pi_1^n(TS) \hspace{.1cm} \le \hspace{.1cm} c_1 \hspace{.1cm} \left \| S \right \|_{cb}\hspace{.1cm} n^{1-1/r} \hspace{.1cm} .
\]
By Proposition \ref{connect} this implies for all $R\in {\cal B}(F,C(K))$
\begin{eqnarray*}
\sup_{k \in \nz} k^{1/r} \left | \lambda_k(TSR) \right | &\le& c_0 \hspace{.1cm} \ell_{r,\infty}^{(x)}(TSR)\hspace{.1cm} \le \hspace{.1cm}
c_0^2 \hspace{.1cm} \left ( \frac{1}{2}-\frac{1}{r}\right ) ^{-1} \hspace{.1cm} c_1 \left \| S \right \|_{cb}
\hspace{.1cm}
\left \| R \right \|\hspace{.1cm} .
\end{eqnarray*}
For the implication \boldmath$ii) \Rightarrow iii)$\unboldmath\hspace{.1cm}
let $u: \ell_{\infty}^m \rightarrow E_1$ be a completely bounded map and
$(y^*_k)_1^m \subset B_{F^*}$ such that
\[ \left \| Tu(e_k) \right \| \hspace{.1cm} = \hspace{.1cm} \langle Tu(e_k) , y^*_k \rangle \hspace{.1cm} .\]
We define the operator $S: F \rightarrow \ell_{\infty}^m$, $S(y) \hspace{.05cm}=\hspace{.05cm} (\langle
y, y_k^*\rangle)_1^m$, which is of norm at most
$1$, and get
\begin{eqnarray*}
\sum\limits_1^m \left \| Tu(e_k) \right \| &=& tr(STu) \hspace{.1cm} \le \hspace{.1cm} 2 \hspace{.1cm} n^{1-\frac{1}{r}}\hspace{.1cm}
\sup_k k^{\frac{1}{r}} \left | \lambda_k(STu) \right | \hspace{.1cm} \le \hspace{.1cm}
2 \hspace{.1cm} n^{1-\frac{1}{r}} c_2 \left \| S \right \| \hspace{.1cm} \left \| u \right \|_{cb} \hspace{.1cm} .
\end{eqnarray*}
The implication \boldmath$iii) \Rightarrow i)$\unboldmath\hspace{.1cm} is
obvious.
Since $\ell_{\infty}^n$ is a completely complemented subspace of $M_n$ we only
have to show
the eigenvalue estimate. In fact, let $S =PR$, $R:\min(F)\rightarrow B(H)$,
$P:B(H) \rightarrow E$ be completely bounded.
Since $F$ is considered as a subspace of $C(K)$ for some compact
Hausdorff
space $K$, there is a completely bounded extension $\hat{R}:C(K) \rightarrow
B(H)$
of the same cb-norm by Wittstock's extension theorem, see \cite{PAU}.
If we apply $ii)$ to $S = (P\hat{R})\iota_F$, $\iota_F$ the inclusion
map, we obtain the assertion.\hspace*{\fill}$\Box$\hz\pagebreak[1]
\section{1-summing operators in connection with
minimal and exact operator spaces}
\setcounter{lemma}{0}
In contrast to Banach space theory there are infinite dimensional
operator spaces such that the identity is 1-summing. This is possible
because this notion does not respect the whole operator space
structure. In fact we will see that these examples appear
in different contexts. We will start with a probabilistic approach.
\begin{lemma} \label{prob} Let $n,N \in {\rm I\! N}$. Then there exists a
biorthogonal
sequence $(x_j)_1^n \subset M_N$, i.e. $tr(x_j^*x_i^{ }) \hspace{.05cm}=\hspace{.05cm}
\delta_{ij}$ with
\[ \left \| \sum\limits_1^n e_j \otimes x_j:\ell_2^n \rightarrow M_N\right \|_{op} \hspace{.1cm} \le \hspace{.1cm}
\pi(1+\sqrt{2}) \hspace{.1cm} \left ( \frac{1}{\sqrt{N}} +
\frac{\sqrt{n}}{\sqrt{2}N} \right ) \hspace{.1cm} .\]
In fact a random frame for $n$-dimensional subspaces of $M_N$ satisfies
this inequality
up to a constant.
\end{lemma}
{\bf Proof:} Let $J$ be a subset of cardinality $n$ in $I \hspace{.1cm} = \hspace{.1cm} \{ (i,j)
\left | { \atop } \right. i,j=1,..,N\}$.
For $s=(i,j) \in I$ we set $y_s\hspace{.05cm} := \hspace{.05cm} e_i \otimes e_j \in M_N$, and $x_s \hspace{.05cm}:=\hspace{.05cm} e_i
\otimes e_j$
for $s \in J$, $x_s \hspace{.05cm}:=\hspace{.05cm} 0$ else. For $(s,t)\in I\times I$ let $h_{s,t}
\hspace{.05cm}=\hspace{.05cm} \frac{1}{\sqrt{2}}(g_{st}+ig_{st}^{'})$ be
a family of independent, normalized, complex gaussian variables.
(Clearly, $(g_{st})$ and $(g^{'}_{st})$ are assumed to be independent.)
Applying Chevet's
inequality twice we obtain
\begin{eqnarray*}
\lefteqn{ {\rm I\! E} \left \| \sum\limits_{s\in J, t \in I} h_{s,t} x_s \otimes y_t
\right \|_{op}}\\
&=& {\rm I\! E} \left \| \sum\limits_{s \in J,t\in I} g_{s,t} x_s \otimes
\frac{y_t}{\sqrt{2}} +
\sum\limits_{s \in J,t\in I} g^{'}_{s,t} (ix_s) \otimes
\frac{y_t}{\sqrt{2}}\right \|\\
&\le& \left ( \omega_2\{x_s,ix_s\} \hspace{.1cm} {\rm I\! E} \left \| \sum\limits_{t\in I} \frac{g_t
+g_t^{'}}{\sqrt{2}} y_t \right \|_{M_N} + \frac{1}{\sqrt{2}}\hspace{.05cm}
\omega_2\{y_t,y_t\} \hspace{.1cm} {\rm I\! E} \left \| \sum\limits_{s\in J} g_s x_s + g^{'}_s i x_s
\right \|_{(S_2^N)^*} \right ) \\
&\le& \hspace{.1cm} \left ( 2\sqrt{N} + \sqrt{2n}\right ) \hspace{.1cm} ,\\
\end{eqnarray*}
where $\omega_2\{y_t,y_t\}$ corresponds to the operator norm of the
corresponding real linear operator.
Using the comparison principle between random unitary matrices in
$U_{N^2}$
and gaussian $N\times N$ matrices, see \cite{MAP}, we get
\begin{eqnarray*}
{\rm I\! E} \left \| \sum\limits_{s \in J} x_s \otimes U(x_s) \right \|_{op}
&=& {\rm I\! E} \left \| \sum\limits_{s,t \in I} \langle y_t,U(x_s)\rangle x_s \otimes
y_t \right \|\\
&\le& \frac{\pi(1+\sqrt{2})}{2\sqrt{N^2}} \hspace{.1cm}
{\rm I\! E} \left \| \sum\limits_{s,t} h_{s,t} x_s \otimes y_t \right \|\\
&\le& \pi(1+\sqrt{2})
\hspace{.1cm} \left ( \frac{1}{\sqrt{N}}\hspace{.05cm} + \hspace{.05cm} \frac{\sqrt{n}}{\sqrt{2}N} \right ) \hspace{.1cm}
.
\end{eqnarray*}
For every $\varepsilon >0$ we can find a unitary $U$ such that the norm
estimate
is satisfied up to $(1+\varepsilon)$ by Chebyshev's inequality. By passing to
a limit
we can even find a unitary $U$ satisfying the norm estimate for
$\varepsilon=0$.
Since $U$ is a unitary in $\ell_2^{N^2}$ we use the usual
identification
between trace and scalar product to see that the elements $U(x_s)$ are
biorthogonal. An application of the concentration phenomenon
\cite{MIS} gives the assertion for random frames of $n$-dimensional
subspaces of $M_N$. \hspace*{\fill}$\Box$\hz\pagebreak[1]
The notion of random subspaces of a given $N$-dimensional Banach space
$F$
is always defined by a ``natural'' scalar product and the group of
unitaries of
the associated Hilbert space. A property of random subspaces means that
this
property is satisfied with ``high probability'' for subspaces of a fixed
dimension $n$.
In this case the probability measure is induced by the surjection $U
\mapsto {\rm span}\{U(e_1),..,U(e_n)\}$
with respect to the normalized Haar measure on the group of unitaries.
Implicitly,
it is understood that the constant may depend on how close to 1 the
probability is
chosen. However, if the expected value can be estimated the
concentration phenomenon
on the group of unitaries yields reasonable estimates. For further and
more
precise information on this concept see the book of Milman/Schechtman
\cite{MIS}.
In this sense we formulate the following\vspace{0.5cm}
\begin{samepage}
\begin{cor} Let $n \hspace{.05cm} \le \hspace{.05cm} N$ and let $E$ be a random $n$-dimensional subspace of $M_N$. Then $E$
is 1-summing
with
\[ \pi_{1,sc}(id_E) \hspace{.1cm} \le \hspace{.1cm} C \hspace{.1cm} ,\]
where $C$ depends on the probability, but not on the dimension.
\end{cor}
\end{samepage}
{\bf Proof:} We keep the notation from the proof above. A random
$n$-dimensional
subspace of $M_N$ is of the form $E \hspace{.05cm}=\hspace{.05cm} {\rm span}\{U(x_s) \left | { \atop } \right. s \in
J\}$. By lemma \ref{prob}
we can assume that with high probability the operator
\[ v \hspace{.1cm} :=\hspace{.1cm} \sum\limits_{s\in J} x_s \otimes U(x_s)\]
is of norm less than $\frac{C}{\sqrt{N}}$. The operator $vv^*$ acts as
a projection onto
$E$ and therefore we have the following factorization
\[ Id_E \hspace{.1cm} = \hspace{.1cm} (\sqrt{N}v)(\sqrt{N}v)^* \hspace{.1cm} (\frac{1}{N} Id:M_N \rightarrow
S_1^N) \hspace{.1cm} \iota_E \hspace{.1cm} ,\]
where $\iota_E$ is the canonical embedding and
$(\sqrt{N}v)(\sqrt{N}v)^*$ should
be considered as an operator from $S_1^N$ to $M_N$. As such it is of
norm at most
$C^2$. By the trivial part of the factorization theorem \ref{fac} for 1-summing
operators we get the assertion.
\hspace*{\fill}$\Box$\hz\pagebreak[1]
Paulsen, \cite{PAU}, proved that a unique operator space structure
for a given Banach space is only possible for spaces of small dimension.
This is based on the study of cb maps between minimal and maximal
operator spaces.
In this setting the author discovered lemma \ref{prob} above
in a preliminary version of this paper, noticing that this implies
an estimate for the operator integral norm for the identity
$\max(\ell_2^n) \rightarrow \min(\ell_2^n)$.
Indeed, such a factorization has just been constructed with the help of the
random spaces $E$ above.
However, the constant which can be deduced from this approach is worse
than that obtained by Paulsen/Pisier. Before we indicate
our proof of Paulsen/Pisier
result let us recall an easy lemma which is merely the definition of
the dual space, see also \cite{JP}.
For this we will use the following notation
$\left \| T \right \|_n \hspace{.05cm}:=\hspace{.05cm} \left \| Id_{M_n} \otimes T: M_n(E)\rightarrow M_n(F)\right \|$
for an operator $T$
between two operator spaces $E$ and $F$.
\begin{lemma} \label{ele} Let $E$, $F$ be operator spaces and let $T:E \rightarrow
F$. Then we have
\[\left \| T \right \|_n \hspace{.1cm} = \hspace{.1cm} \sup \left \{ \sum\limits_{ijkl=1}^n \langle y_{ij},
a_{ik}T(x_{kl})b_{lj} \rangle \left | { \atop } \right. hs(a),\hspace{.05cm} hs(b) \hspace{.05cm} \le \hspace{.05cm} 1, \hspace{.1cm}
\left \| x_{ij} \right \|_{M_n(E)},\hspace{.1cm} \left \| y_{ij} \right \|_{M_n(F^*)}\hspace{.05cm} \le \hspace{.05cm} 1\right
\} \hspace{.1cm} .\]
\end{lemma}
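As a simple consistency check, for $n=1$ the Hilbert--Schmidt conditions
reduce to $|a|, |b| \hspace{.05cm} \le \hspace{.05cm} 1$, and the formula gives back the norm of $T$:
\[ \left \| T \right \|_1 \hspace{.1cm} = \hspace{.1cm} \sup\left\{ \langle y, aT(x)b \rangle \hspace{.1cm}\left | { \atop } \right. |a|,\hspace{.05cm} |b| \hspace{.05cm} \le \hspace{.05cm} 1,\hspace{.1cm}
\left \| x \right \|_{E},\hspace{.05cm} \left \| y \right \|_{F^*} \hspace{.05cm} \le \hspace{.05cm} 1 \right\} \hspace{.1cm} = \hspace{.1cm} \left \| T \right \| \hspace{.1cm} .\]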
With the probabilistic approach we can prove that it
suffices to consider $n\times n$ matrices for rank $n$ operators
between minimal and maximal spaces, improving Paulsen/Pisier's result.
\begin{prop} Let $E$ be a minimal, $F$ a maximal operator space and
$T :E \rightarrow F$ an operator of rank at most $n$. Then we have
\[ \gamma_2^*(T) \hspace{.1cm} \le \hspace{.1cm} 170 \hspace{.1cm} \left \| Id_{M_n} \otimes T: M_n(E) \rightarrow
M_n(F) \right \| \hspace{.1cm} .\]
Furthermore, for every $n$-dimensional Banach space $E$ we have
\[ \sqrt{n} \hspace{.1cm} \le \hspace{.1cm} (\pi (1+\sqrt{2}))^2 \hspace{.1cm} \left \| Id: \min(E) \rightarrow
\max(E) \right \|_{cb} \hspace{.1cm} ,\]
where $\min(E)$, $\max(E)$ means $E$ equipped with its minimal, maximal
operator space
structure, respectively.
\end{prop}
{\bf Proof:} First we will prove an estimate for operators $T:\ell_2^n
\rightarrow \ell_2^n$
\[ \left | tr(T) \right | \hspace{.1cm} \le \hspace{.1cm} 170 \left \| T: \min(\ell_2^n) \rightarrow \max(\ell_2^n)
\right \|_n \hspace{.1cm} .\]
Indeed, we use $N\hspace{.05cm}=\hspace{.05cm} n$ in lemma \ref{prob} and consider the elements
\[ z_{kl} \hspace{.1cm} = \hspace{.1cm} \sum\limits_i \langle x_i(e_k), e_l \rangle \otimes e_i \in
M_n(\min(\ell_2^n))\]
which are of norm at most $\frac{\pi
(2+\frac{3}{\sqrt{2}})}{\sqrt{n}}$. In lemma \ref{ele}
we use $a=b= \frac{1}{\sqrt{n}}Id_{\ell_2^n}$ to deduce
\begin{eqnarray*}
\left | tr(T) \right | &=& \left | \sum\limits_1^n \langle T(e_i), e_i\rangle \right | \hspace{.1cm} = \hspace{.1cm}
\left | \sum\limits_{i,j} tr(x^{ }_ix_j^{*}) \langle T(e_i), e_j\rangle \right |\hspace{.1cm} \le \hspace{.1cm}
\left | \sum\limits_{kl} \langle z^{*}_{kl}, T(z_{kl}) \rangle \right | \\
&\le& \left \| T \right \|_n \hspace{.1cm} hs(id)^2 \hspace{.1cm} \left \| z \right \|_{M_n(\min(\ell_2^n))}
\left \| z^* \right \|_{M_n(\min(\ell_2^n))}\\
&\le&
\left \| T\right \|_n \hspace{.1cm} n \hspace{.1cm} \frac{ \pi^2 (2+\frac{3}{\sqrt{2}})^2 }{n}\hspace{.1cm} \le \hspace{.1cm}
170 \hspace{.1cm} \left \| T \right \|_n \hspace{.1cm} .
\end{eqnarray*}
For an arbitrary operator $T: \min(E) \rightarrow \max(F)$ we use trace
duality.
Indeed, let $S:F \rightarrow E$ be an operator which factors through a Hilbert
space, i.e.
$S \hspace{.1cm} = \hspace{.1cm} uv$, $v: F \rightarrow H$, $u:H \rightarrow E$. In order to estimate the
trace
we can modify $S$ by inserting the orthogonal projection on
$v(Im(T))$.
Therefore there is no loss of generality to assume $H\hspace{.1cm} = \hspace{.1cm} \ell_2^n$.
Hence we get
\begin{eqnarray*}
\left | tr(TS) \right | &=& \left | tr(vTu) \right |\hspace{.1cm} \le \hspace{.1cm}
170 \hspace{.1cm} \left \| vTu \right \|_n \hspace{.1cm} \le \hspace{.1cm}
170 \hspace{.1cm} \left \| v \right \|_n \hspace{.1cm} \left \| T \right \|_n \hspace{.1cm} \left \| u \right \|_n
\hspace{.1cm} \le \hspace{.1cm} 170 \hspace{.1cm} \left \| T \right \|_n \hspace{.1cm} \left \| v \right \| \left \| u \right \| \hspace{.1cm} .
\end{eqnarray*}
We used that by the definition of the minimal operator space every
operator
with values in $\min(E)$ is automatically completely bounded. Taking
the infimum
over all factorizations we get the first assertion. The second one
follows
from duality by applying the estimate for the identity operator and
recalling John's theorem: $\gamma_2(Id_E) \hspace{.05cm} \le \hspace{.05cm} \sqrt{n}$.
The better constant is obtained
by letting $N$ tend to infinity in lemma \ref{prob} and the
corresponding modification in the proof above.
\hspace*{\fill}$\Box$\hz\pagebreak[1]
As a consequence one obtains that the identity on $\max(\ell_2)$ is
indeed a
1-summing operator. More general results
hold in the context of duals
of exact operator spaces using the key inequality of \cite{JP}. We will
need some notation.
Given a Hilbert space $H$ there are at least two natural ways to
associate an operator space
with $H$: the column space
\[ C_H \hspace{.1cm}:= \hspace{.1cm} \{ x \otimes y \in B(H)\left | { \atop } \right. x \in H \}\quad
\mbox{and the row space}\quad R_H \hspace{.1cm}:= \hspace{.1cm} \{ y \otimes x \in B(H)
\left | { \atop } \right. x \in H \}\hspace{.1cm} ,\]
where $y$ is a fixed, normalized element in $H$. It is quite easy to
check that
the corresponding norm of a matrix $(x_{ij}) \subset H$ is given by
\[\left \| x_{ij} \right \|_{M_n(C_H)} \hspace{.1cm} = \hspace{.1cm} \left \| \left ( \sum\limits_k \langle
x_{ki},x_{kj}\rangle\right ) _{ij} \right \|_{M_n}^{\frac{1}{2}}
\hspace{.1cm} \mbox{and}\hspace{.1cm}
\left \| x_{ij} \right \|_{M_n(R_H)} \hspace{.1cm} = \hspace{.1cm} \left \| \left ( \sum\limits_k \langle
x_{jk},x_{ik}\rangle\right ) _{ij} \right \|_{M_n}^{\frac{1}{2}} \hspace{.1cm} ,\]
where we assume the scalar product $\langle \cdot,\cdot\rangle$ to be
antilinear in the first
component. It turns out that $R_H^*\hspace{.1cm} = \hspace{.1cm} C_H$ in the category of
operator spaces. The space
$R_H \cap C_H$ is $H$ equipped with matrix norm given by the supremum
of $R_H$ and $C_H$. The dual space $(R_H \cap C_H)^* \hspace{.05cm}=\hspace{.05cm} C_H +R_H$
carries
a natural operator space structure and was intensively studied by
Lust-Piquard, Haagerup and Pisier, see \cite{LPP,HP}.
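As a simple illustration of the asymmetry between these two norms,
consider the matrix whose first row consists of orthonormal vectors,
$x_{ij} \hspace{.05cm}=\hspace{.05cm} \delta_{i1}\hspace{.05cm} e_j$. The formulas above give
\[ \left \| (x_{ij}) \right \|_{M_n(C_H)} \hspace{.1cm} = \hspace{.1cm} \left \| (\delta_{ij})_{ij} \right \|_{M_n}^{\frac{1}{2}} \hspace{.1cm} = \hspace{.1cm} 1
\quad \mbox{and} \quad
\left \| (x_{ij}) \right \|_{M_n(R_H)} \hspace{.1cm} = \hspace{.1cm} \left \| (n\hspace{.05cm}\delta_{i1}\delta_{j1})_{ij} \right \|_{M_n}^{\frac{1}{2}} \hspace{.1cm} = \hspace{.1cm} \sqrt{n} \hspace{.1cm} ,\]
so the identity map between $R_H$ and $C_H$ is not completely bounded
for infinite dimensional $H$.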
In connection with these row and column spaces it is very useful to
consider the
following notion. Let $E\subset B(K)$ be an operator space and $F$ a Banach
space.
An operator $T :E\rightarrow F$ is $(2,RC)$-summing if there exists a
constant $c>0$
such that
\[ \left ( \sum\limits_k \left \| T(x_k) \right \|^2 \right ) ^{\frac{1}{2}} \hspace{.1cm} \le \hspace{.1cm} c \hspace{.1cm} \max\left \{ \left \| \sum\limits_k
x_k^*x_k^{ } \right \|_{B(K)}, \left \| \sum\limits_k x_k^{ }x_k^{*} \right \|_{B(K)}
\right \}^{\frac{1}{2}}\hspace{.1cm} . \]
The best possible constant is denoted by $\pi_{2,RC}(T)$. Let us note
that the right hand side is a weight in the sense of \cite{PSI}. We
start with a
description of $(2,RC)$ summing operators with values in a Hilbert
space,
which was suggested by C. le Merdy.
\begin{prop} \label{ext} Let $E\subset B(K)$ be an operator space, $H$ a
Hilbert space
and $T:E\rightarrow H$ a bounded linear operator. $T$ is $(2,RC)$ summing if
and only if there is a bounded extension $\hat{T}: B(K) \rightarrow H$ of $T$
if and only if there is a completely bounded extension $\hat{T}: B(K)
\rightarrow R_H+C_H$ of
$T$.
\end{prop}
{\bf Proof:} Let us observe that by the non-commutative Grothendieck
inequality
every bounded $S:B(K) \rightarrow H$ is $(2,RC)$ summing, see e.g.
\cite{PIL}. Therefore,
it remains to prove the existence of a cb extension for
$(2,RC)$-summing operators
$T:E \rightarrow H$. Using a variant of Pietsch's factorization theorem (for
more
precise information see \cite{PSI}), there are states $\phi$, $\psi$ on
$B(K)$ and $0\le \theta \le 1$ such that
\[ \left \| T(x) \right \| \hspace{.1cm} \le \hspace{.1cm} \pi_{2,RC}(T) \hspace{.1cm} \left ( \theta \phi(xx^*) +
(1-\theta) \psi(x^*x) \right ) ^{\frac{1}{2}} \quad \mbox{for all} \quad x \in E \hspace{.1cm} .\]
We define the sesquilinear forms $\langle x,y \rangle_{\phi} \hspace{.05cm}:=\hspace{.05cm}
\phi(yx^*)$ and
$\langle x,y \rangle_{\psi} \hspace{.1cm} :=\hspace{.1cm} \psi(x^*y)$.
Furthermore, we denote by $C_{\phi}$, $R_{\psi}$ the column and row
Hilbert spaces induced
by the corresponding scalar products. It is easy to check that the
identities
$I_{\phi}: B(K) \rightarrow C_{\phi}$, $I_{\psi} :B(K) \rightarrow R_{\psi}$ are in
fact
completely bounded of norm $1$. We denote by
\[ M\hspace{.1cm}:=\hspace{.1cm} cl\{ (\sqrt{\theta}x,\sqrt{1-\theta}x) \left | { \atop } \right. x \in E \}\hspace{.1cm}
\subset \hspace{.1cm} C_{\phi} \oplus_2 R_{\psi}\]
the closure of the image of $J \hspace{.05cm}:=\hspace{.05cm}\sqrt{\theta}I_{\phi} \oplus
\sqrt{1-\theta}I_{\psi}$ restricted
to $E$. $P_M$ denotes the orthogonal projection of $C_{\phi}\oplus_2
R_{\psi}$ onto $M$.
Then we get an extension $\hat{T} \hspace{.1cm} = \hspace{.1cm} \tilde{v}P_MJ$ of $T$, where
$\tilde{v}: M \rightarrow H$ is given by $\tilde{v}(Jx) \hspace{.05cm}:=\hspace{.05cm} T(x)$ for $x \in E$.
By the inequality above $\tilde{v}$ is well-defined and of norm at most $\pi_{2,RC}(T)$,
and by definition
of $R_H+C_H$ we get $\left \| \tilde{v}:R_M+C_M \rightarrow R_H+C_H\right \|_{cb}
\hspace{.05cm} \le \hspace{.05cm} \pi_{2,RC}(T)$.
By duality it is easy to see that $P_M: C_{\phi}\oplus_1 R_{\psi} \rightarrow
R_M+C_M$ is completely
bounded of norm 1. On the other hand the cb norm of
\[ \sqrt{\theta}Id_{C_{\phi}} \oplus \sqrt{1-\theta}Id_{R_{\psi}}:
C_{\phi}\oplus_{\infty} R_{\psi} \rightarrow C_{\phi}\oplus_1 R_{\psi} \]
is at most $\sqrt{2}$. \hspace*{\fill}$\Box$\hz\pagebreak[1]
Now we will give a description of completely bounded operators between
the class of exact operator spaces and maximal operator spaces.
Pisier's notion of exact operator spaces, \cite{PSE}, is motivated
by Kirchberg's work.
One possible definition says that an operator space
is exact if its finite dimensional subspaces are uniformly cb
isomorphic
to subspaces of the space of compact operators.
\begin{samepage}
\begin{prop} \label{cb}
Let $E \subset B(K)$ be either an exact operator space and $F$ a
maximal operator space, i.e., a quotient
of $\ell_1(I)$ for some index set $I$,
or $E$ a $C^*$ algebra and $F \hspace{.05cm}=\hspace{.05cm} \ell_1(I)$.
For an operator $T:E \rightarrow F$ the following are equivalent.
\begin{enumerate}
\item[i)] $T$ is completely bounded.
\item[ii)] There is a Hilbert space $H$ and operators $v:E \rightarrow H$,
$u: H \rightarrow F$ such that
$v$ is $(2,RC)$ summing and $u^*$ is absolutely $2$ summing.
\item[iii)] There is a completely bounded extension $\hat{T}: B(K)
\rightarrow \ell_1(I)$
of $T$.
\item[iv)] $T$ factors completely through $R_H + C_H$ for some Hilbert
space $H$.
\end{enumerate}
Moreover, the corresponding constants are equivalent.
\end{prop}
\end{samepage}
{\bf Proof:} The implication $i) \Rightarrow ii)$ follows either from the
non-commutative
Grothendieck inequality, see \cite{PIL}, or from the key inequality in
\cite{JP}.
The implications $ii) \Rightarrow iii), iv)$ are direct consequences of
proposition \ref{ext} and
the extension properties of absolutely 2 summing operators. We only
have to note
that an absolutely $2$ summing operator $u^*:\ell_{\infty}(I)\rightarrow R_H
\cap C_H$ is
completely bounded. The rest is trivial. \hspace*{\fill}$\Box$\hz\pagebreak[1]
For the proof of theorem 3 we will need some more notation. Let $1 < p
< \infty$, $E$ be an operator space
and $F$ a Banach space. An operator $T:E \rightarrow F$ belongs to
$\Gamma_{p,RC}$ if
\[ \gamma_{p,RC}(T) \hspace{.1cm} :=\hspace{.1cm} \sup\{ \sigma_{p,\infty}(vTu) \left | { \atop } \right. v\in
\Pi_2(F,\ell_2), \hspace{.1cm} u \in CB(R+C,E),\hspace{.1cm}
\pi_2(v), \hspace{.1cm} \left \| u\right \|_{cb}\hspace{.1cm} \le \hspace{.1cm} 1\} \hspace{.1cm} < \hspace{.1cm} \infty\hspace{.1cm} .\]
For $p=1$ we will use $\Gamma_{1,RC}$, $\gamma_{1,RC}$ for the
corresponding expressions
with $\sigma_{p,\infty}$ replaced by $\sigma_1$.
This notion is modeled closely on the notion of Hilbert space factoring
operators
and forms a ``graduation'' of $\Gamma_{1,RC}$ in the case $p>1$.
This has already been proved to be useful for
eigenvalue estimates.
\begin{theorem} \label{exact} Let $1< p< 2$, $G$ an exact operator
space, $E\subset G^*$ and $F$ a minimal operator space.
For an operator $T:E \rightarrow F$ the following are equivalent.
\begin{enumerate}
\item[i)] There exists a constant $c_1>0$ such that
\[ \sum\limits_1^n \left \| Tx_k \right \| \hspace{.1cm} \le \hspace{.1cm} c_1 \hspace{.1cm} n^{1-\frac{1}{p}} \hspace{.1cm} \left \|
\sum\limits_1^n e_k \otimes x_k \right \|_{\ell_1^n \otimes_{min} E} \hspace{.1cm}. \]
\item[ii)] $T$ is in $\Gamma_{p,RC}$.
\item[iii)] There is a constant $c_3$ such that for all completely
bounded operator $S:F \rightarrow E$ one has
\[ \sup_k k^{\frac{1}{p}} \hspace{.1cm} \left | \lambda_k(TS) \right |
\hspace{.1cm} \le \hspace{.1cm} c_3 \hspace{.1cm} \left \| S:{\rm min}(F) \rightarrow E \right \|_{cb} \hspace{.1cm} .\]
\end{enumerate}
In the limit case $p=1$ the same remains true if we replace the
$\ell_{p,\infty}$ norm of the eigenvalues by the $\ell_1$ norm.
Furthermore, every completely bounded $S: \min(F) \rightarrow E$ is
absolutely 2-summing
and hence the eigenvalues of a composition $TS$ are in $\ell_2$.
\end{theorem}
{\bf Proof:} The implication $iii) \Rightarrow i)$ follows along the
same lines as $ii)\Rightarrow iii)$ in \ref{opeigen}.
For the implication \boldmath $ii) \Rightarrow iii)$ \unboldmath let
$S: \min(F) \rightarrow E$ be completely bounded
and consider $\hat{S} \hspace{.05cm}:=\hspace{.05cm}\iota_{E}S: \min(F) \rightarrow G^*$. By
proposition \ref{cb} we can assume that
$\hat{S} \hspace{.05cm}=\hspace{.05cm} uv$, where $v:F \rightarrow H$ is absolutely 2-summing and
$u:R_H \cap C_H \rightarrow G^*$
is completely bounded. Using an orthogonal projection $P$ onto
$u^{-1}(E)$ together
with the homogeneity of the spaces $R_H$ and $C_H$, see \cite{BPT}, we
can assume that
$u(H) \subset E$ and therefore $S\hspace{.05cm}=\hspace{.05cm} uv$. Using the well-known
eigenvalue estimate of the class
${\cal S}_{p,\infty}$ and the principle of related operators, \cite{PIE},
we get
\begin{eqnarray*}
\sup_k k^{\frac{1}{p}} \hspace{.1cm} \left | \lambda_k(TS) \right | &=&
\sup_k k^{\frac{1}{p}} \hspace{.1cm} \left | \lambda_k(vTu) \right | \hspace{.1cm} \le \hspace{.1cm}
c_0 \hspace{.1cm}\sup_k k^{\frac{1}{p}} \hspace{.1cm} a_k(vTu) \\
&\le& c_0 \hspace{.1cm} \gamma_{p,RC}(T) \hspace{.1cm} \left \| u \right \|_{cb} \hspace{.1cm} \pi_2(v) \hspace{.1cm} \le \hspace{.1cm}
c_0 \hspace{.1cm} b_0 \hspace{.1cm} \left \| S \right \|_{cb} \hspace{.1cm} \gamma_{p,RC}(T) \hspace{.1cm} ,
\end{eqnarray*}
where $b_0\hspace{.05cm} \le \hspace{.05cm} 4 \sqrt{2}$ is the constant from proposition \ref{cb}.
In order to prove \boldmath $i) \Rightarrow ii)$ \unboldmath we will
use
the notion of Grothendieck numbers for an operator $R: X \rightarrow Y$
introduced by S. Geiss.
\[ \Gamma_n(R) \hspace{.1cm} :=\hspace{.1cm} \sup \left\{ \left | {\rm det}(\langle R(x_i),
y_j \rangle)_{ij} \right |^{\frac{1}{n}} \left | { \atop } \right.
(x_i)_1^n \subset B_X, \hspace{.05cm} (y_j)_1^n\subset B_{Y^*} \right \} \hspace{.1cm}. \]
Using an inequality of \cite{DJ1} we have to show that
\[\sup_n n^{\frac{1}{p}-\frac{1}{2}}\hspace{.1cm} \Gamma_n(Tu) \hspace{.1cm} \le \hspace{.1cm} c_2 \hspace{.1cm} \left \| u
\right \|_{cb} \]
for all operators $u : R+C \rightarrow E$. By the definition of the
Grothendieck numbers
we have to consider elements $(y_k^*)_1^n \subset B_{F^*}$ and $v:=
\sum\limits_1^n y_k^{*}\otimes e_k:F \rightarrow \ell_{\infty}^n$
which is of norm at most $1$. If $\iota_{2,\infty}^n :\ell_{\infty}^n \rightarrow
\ell_2^n$
denotes the canonical inclusion map we have to show
\[ \Gamma_n(\iota_{2,\infty}^n vTu) \hspace{.1cm} \le \hspace{.1cm} c_2 \hspace{.1cm}
n^{\frac{1}{2}-\frac{1}{p}} \hspace{.1cm} \left \| u \right \|_{cb} \hspace{.1cm} .\]
Now let $w: \ell_2^n \rightarrow H$ be such that
\[ \sum\limits_1^n a_j(\iota_{2,\infty}^n vTu) \hspace{.1cm} = \hspace{.1cm} tr(\iota_{2,\infty}^n vTuw) \hspace{.1cm} .\]
Using basic properties of Grothendieck numbers, see \cite{GEI} and the
geometric/arithmetic mean
inequality we get for $S:= uw\iota_{2,\infty}^n : \ell_{\infty}^n \rightarrow E$
\begin{eqnarray*}
\Gamma_n(\iota_{2,\infty}^n vTu) &\le&
\left ( \prod_1^n a_j(\iota_{2,\infty}^n vTu) \right ) ^{\frac{1}{n}}
\hspace{.1cm} \le \hspace{.1cm} \frac{1}{n} \hspace{.1cm} \sum\limits_1^n a_j(\iota_{2,\infty}^n vTu) \hspace{.1cm} = \hspace{.1cm}
\frac{1}{n} \hspace{.1cm} \left | tr(vTS) \right | \\
&\le& \frac{1}{n} \hspace{.1cm} \sum\limits_1^n \left \| TS(e_k) \right \| \hspace{.1cm} \sup_{k} \left \|
y_k^* \right \| \hspace{.1cm} \le \hspace{.1cm}
c_1 \hspace{.1cm} n^{-\frac{1}{p}} \hspace{.1cm} \left \| S \right \|_{cb} \\
&\le& c_1 \hspace{.1cm} n^{-\frac{1}{p}} \hspace{.1cm} \pi_2(\iota_{2,\infty}^n) \hspace{.1cm} \left \| uw :R_n\cap C_n
\rightarrow E\right \|_{cb}\hspace{.1cm} \le \hspace{.1cm}
c_1 \hspace{.1cm} n^{\frac{1}{2}-\frac{1}{p}}\hspace{.1cm} \left \| u \right \|_{cb}\hspace{.1cm} ,
\end{eqnarray*}
where we have used the homogeneity of the space $R_H\cap C_H$ and
remark \ref{rcoh} to estimate
the cb norm of $\iota_{2,\infty}^n: \ell_{\infty}^n \rightarrow R_n\cap C_n$.
In the case $p=1$ we have to estimate $\sigma_1(vTu)$ for a 1-summing $T$,
an absolutely
2-summing $v$ and a completely bounded $u:R_n\cap C_n \rightarrow E$.
By Pietsch's factorization theorem, see \cite{PIE}, there is
a factorization of $v \hspace{.05cm}=\hspace{.05cm} SR$, $S:\ell_{\infty}^M \rightarrow \ell_2$, with
absolutely 2 summing $S$. Since $uwS$ is completely bounded
for all bounded $w$ we can use the definition to see that $TuwS$ is
integral
in the Banach space sense, and hence the trace of $vTuw \hspace{.05cm}=\hspace{.05cm} SRTuw$ can
be estimated by
the 1-summing norm. This gives the right estimate for the trace class norm,
and hence for the eigenvalues of $vTu$, provided
$w$ is chosen by polar decomposition as above. \hspace*{\fill}$\Box$\hz\pagebreak[1]
\begin{rem} \label{fact}{\rm A variant of Kwapien's theorem for Hilbert
space factorizing operators
shows that an operator $T: G^* \rightarrow \min(F^{**})$
factors completely through $R_H\cap C_H$ if and only if $|tr(TS)|\hspace{.1cm} \le \hspace{.1cm} C$
for all operators $S:F^{**}\rightarrow G^*$ which admit a factorization
$S=vu$, $\pi_2(v)\le 1$ and $\pi_{2,RC}(u^*)\le 1$. Indeed, this
duality
concept was studied in the more general framework of $\gamma$-norms by
Pisier \cite{PSI}. We want to indicate the connection to 1-summing
operators in this
context. Given a 1-summing operator $T:E\subset G^* \rightarrow F$ we observe
that
$T$ corresponds by trace duality to a linear functional
on $F^*\otimes_{\alpha} E$ where $\alpha(S) \hspace{.05cm}:=\hspace{.05cm} \inf\{\pi_2(v) \hspace{.05cm}
\left \| u :R \cap C \rightarrow E \right \|_{cb}\}$
and the infimum is taken over all factorizations $S=vu$. Since
$F^*\otimes_{\alpha}E$ embeds isometrically into $F^* \otimes_{\alpha}
G^*$
an application of the Hahn--Banach theorem yields a norm preserving functional on
the whole tensor product,
i.e. an extension $\hat{T}:G^* \rightarrow F^{**}$ of $T$, which is also
1-summing by theorem \ref{exact}.
As a consequence of the key inequality in \cite{JP} and
proposition \ref{ext} we deduce that for all $u:R_H\cap C_H \rightarrow G^*$
the cb-norm
is equivalent to $\pi_{2,RC}(u^*)$. Therefore, we can apply the
modification of Kwapien's argument, see also \cite{PSI}, to obtain a
completely bounded
factorization of $\hat{T}: G^* \rightarrow \min(F^{**})$
through $R_H\cap C_H$ for some Hilbert space $H$. Clearly, if $\hat{T}$
admits such a factorization it must be 1-summing and all these
properties coincide
due to the fact that $G$ is exact.}
\end{rem}
\begin{cor} Let $G$ be an exact operator space, $q:B(H)^* \rightarrow G^*$
the quotient map
and $E \subset G^*$.
The following conditions are equivalent.
\begin{enumerate}
\item[i)] The Banach space $E$ is of cotype 2 and every bounded operator
$u:c_0 \rightarrow E$ is completely bounded.
\item[ii)] The Banach space $E$ is of cotype 2 and every operator $v: E
\rightarrow R\cap C$
which admits a completely bounded extension $\hat{v}:G^* \rightarrow R\cap C$
is absolutely 1-summing.
\item[iii)] There exists a constant $c>0$ such that for every sequence
$(x_k)_1^n\subset E$ there is a sequence $(\tilde{x}_k)_1^n \subset
B(H)^*$
such that $q(\tilde{x}_k)=x_k$ and
\[ {\rm I\! E} \left \| \sum\limits_1^n \tilde{x}_k \varepsilon_k \right \|_{B(H)^*} \hspace{.1cm} \le \hspace{.1cm} c \hspace{.1cm} {\rm I\! E}
\left \| \sum\limits_1^n x_k \varepsilon_k \right \|_{E} \hspace{.1cm} .\]
\end{enumerate}
In particular, a maximal operator space satisfies one of the conditions
above
if and only if it is of operator cotype 2 if and only if it is a G.T.
space of
cotype 2, see \cite{PIL}.
\end{cor}
{\bf Proof:} Let $X$ be a Banach space. We define $Rad(X) \subset
L_2( {\rm I\!D},X)$
to be the span of $\{ \varepsilon_i \otimes x_i \}$, where
$ {\rm I\!D} =\{-1,1\}^{{\rm I\! N}}$ is the group of signs with its Haar measure $\mu$
and $\varepsilon_i$ the $i$-th coordinate.
For a sequence
$(x_i)_i$ the norm in $Rad(X)$ is given by
\[ \left \| (x_i)_i \right \| \hspace{.1cm} :=\hspace{.1cm} \left ( \int_{ {\rm I\!D}} \left \| \sum\limits_i \varepsilon_i x_i
\right \|_{X}^2 d \mu \right ) ^{\frac{1}{2}} \hspace{.1cm} .\]
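For orientation: if $X$ is a Hilbert space, the orthonormality of the
coordinates $(\varepsilon_i)$ in $L_2( {\rm I\!D})$ yields
\[ \left \| (x_i)_i \right \| \hspace{.1cm} = \hspace{.1cm} \left ( \sum\limits_i \left \| x_i \right \|_X^2 \right ) ^{\frac{1}{2}} \hspace{.1cm} ,\]
so that $Rad(X)$ is isometric to $\ell_2(X)$ in this case.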
It was shown by Lust-Piquard and Pisier \cite{LPP} that
$Rad(B(H)^*)$ and $(R+C)(B(H)^*)$ have equivalent norms. Since the map
$Id_{R+C}\otimes q$ is a complete quotient map, condition $iii)$ is
equivalent to
\[ \left \| Id\otimes \iota_E: Rad(E) \rightarrow (R+C)(G^*) \right \| \hspace{.1cm} < \hspace{.1cm}
\infty \hspace{.1cm} ,\]
where $\iota_E: E \rightarrow G^*$ is the inclusion map. We deduce
from theorem \ref{exact} and remark \ref{fact} that conditions $i)$ and
$ii)$ are
equivalent by trace duality. Moreover, all conditions imply that $E$ is
of cotype 2,
since $B(H)^*$ is of cotype 2, \cite{TOJ}. Now let $v : = \sum\limits_i x_i^*
\otimes e_i$
be an operator from $E$ to $R\cap C$. We deduce from \cite[5.16]{PIL}
\[ \frac{1}{C_1(E)}\hspace{.1cm} \pi_1(v) \hspace{.1cm} \le \hspace{.1cm} \pi_2(v) \hspace{.1cm} \le \hspace{.1cm} \left \| (x^*_i)^{ }_i
\right \|_{(Rad(E))^*}
\hspace{.1cm} \le \hspace{.1cm} C_2(E) \hspace{.1cm} \pi_2(v) \hspace{.1cm} \le \hspace{.1cm} C_2(E) \hspace{.1cm} \pi_1(v) \hspace{.1cm} ,\]
where $C_2(E)$ is the cotype 2 constant of $E$ and $C_1(E)$ only
depends
on $C_2(E)$. Finally we note that $CB(G^*,R\cap C) \hspace{.1cm} = \hspace{.1cm} (R\cap
C)(G^{**})$.
But this means that the set of operators admitting a cb extension can
be identified
with the dual space of $(R+C)^{inj}(E) := (Id \otimes
\iota_E)^{-1}\left ( (R+C)(G^*)\right ) $.
Therefore condition $ii)$ is equivalent to
\[ \left \| Id \otimes Id_G:(R+C)^{inj}(E) \rightarrow (Rad(E))^* \right \| <
\infty\hspace{.1cm}.\]
Duality implies the assertion. In the situation of maximal operator
spaces
we deduce from remark \ref{ocot} that a maximal operator space
$X=\ell_1(I)/S$
with operator cotype 2 satisfies condition $i)$ whereas $iii)$ implies
operator
cotype 2 since $\ell_1(I)$ has operator cotype 2. (This seems not to be
the case for
${\cal S}_1(H)$.)\hspace*{\fill}$\Box$\hz\pagebreak[1]
In the last part we will study the operator spaces associated to
the Clifford algebra. Recalling that the generators of the Clifford
algebra
have already been useful to find an example of a $(2,oh)$-summing
space, see \cite{PLT},
it is probably not surprising that this space is also 1-summing. More
precisely,
let $(u_i)_{i\in {\rm I\! N}} \subset {\otimes \atop n \in \nz}M_2$ be the generators
of the
Clifford algebra, i.e.
\begin{eqnarray*}
u_i \hspace{.1cm} = \hspace{.1cm} u_i^* &\hspace{.1cm} \mbox{and}\hspace{.1cm}& u_i^2 \hspace{.1cm} = \hspace{.1cm} Id \quad \quad
\mbox{for} \hspace{.1cm} i\in {\rm I\! N}\hspace{.1cm} ,\\
u_iu_j\hspace{.05cm} +\hspace{.05cm} u_ju_i &=& 0 \quad \quad \mbox{if}\hspace{1.5cm} i \neq j \hspace{.1cm}.
\end{eqnarray*}
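A concrete realization of these relations, commonly attributed to Jordan
and Wigner, is given by
\[ u_{2k-1} \hspace{.1cm} = \hspace{.1cm} \sigma_3^{\otimes (k-1)} \otimes \sigma_1 \otimes 1 \otimes 1 \otimes \cdots
\quad \mbox{and} \quad
u_{2k} \hspace{.1cm} = \hspace{.1cm} \sigma_3^{\otimes (k-1)} \otimes \sigma_2 \otimes 1 \otimes 1 \otimes \cdots \hspace{.1cm} ,\]
where $\sigma_1$, $\sigma_2$, $\sigma_3 \in M_2$ are the Pauli matrices; the
relations follow from $\sigma_i^2 \hspace{.05cm}=\hspace{.05cm} 1$ and
$\sigma_i\sigma_j+\sigma_j\sigma_i \hspace{.05cm}=\hspace{.05cm} 0$ for $i \neq j$.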
By $CL$ we denote the span of these generators. The next proposition
collects some
facts about this space. ($OH$ is the operator Hilbert space introduced
and studied by Pisier \cite{PLT}).
\begin{prop}\label{Sn}
\begin{enumerate}
\item $CL$ is $\sqrt{2}$-isomorphic to a Hilbert space.
\item The identity $id_{CL}$ is 1-summing with $\pi_{1,sc}(id_{CL}) \hspace{.05cm} \le \hspace{.05cm} 2$ and for
every operator $T:CL\rightarrow CL$ we have
\[ \sum\limits_k \left | \lambda_k(T) \right | \hspace{.1cm} \le \hspace{.1cm} c_0 \hspace{.1cm} \gamma_{\infty}^{\it o}(T) \hspace{.1cm} . \]
\item Let $G\in \{ OH,C,R,C+R,R\cap C\}$ and $u:G \rightarrow CL$ then one
has
\[ \left \| u \right \|_{cb} \sim_c \pi_2(u) \hspace{.1cm} .\]
\end{enumerate}
\end{prop}
{\bf Proof:} By approximation it is sufficient to consider the finite
dimensional case. Therefore we fix
$n \in \nz$ and $u_1,..,u_n \in \otimes_{k=1}^n M_2 \cong M_{2^n}$. For an
element
$x \hspace{.05cm}=\hspace{.05cm}\sum\limits_j \alpha_j u_j$ we have
\begin{eqnarray*}
x^*x+xx^* &=& \sum\limits_{kj} \overline{\alpha_k} \hspace{.05cm} \alpha_j\hspace{.05cm} u_k u_j\hspace{.1cm}
+\hspace{.1cm}
\sum\limits_{jk} \alpha_j\hspace{.05cm} \overline{\alpha_k} \hspace{.05cm} u_j u_k\\
&=& 2 \hspace{.1cm}\sum\limits_1^n \left | \alpha_k \right |^2 u_k^2 \hspace{.1cm} +\hspace{.1cm} \sum\limits_{k<j}
\overline{\alpha_k}\hspace{.05cm} \alpha_j\hspace{.05cm} (u_ku_j+u_ju_k)
\hspace{.1cm}+\hspace{.1cm} \sum\limits_{k>j} \overline{\alpha_k}\hspace{.05cm} \alpha_j\hspace{.05cm}
(u_ku_j+u_ju_k)\\
&=& 2 \hspace{.1cm} \left \| \alpha \right \|_2^2 \hspace{.1cm} Id \hspace{.1cm} .
\end{eqnarray*}
In particular, we get
\[\left \| \alpha \right \|_2 \hspace{.1cm} = \hspace{.1cm} \left \| \frac{x^*x+xx^*}{2}
\right \|_{M_{2^n}}^{\frac{1}{2}}
\hspace{.1cm} \le \hspace{.1cm} \hspace{.1cm} \left \| x \right \|_{M_{2^n}} \hspace{.1cm} \le \hspace{.1cm} \sqrt{2} \left \| \frac{x^*x+xx^*}{2}
\right \|^{\frac{1}{2}}
\hspace{.1cm} = \hspace{.1cm} \sqrt{2} \left \| \alpha \right \|_2 \hspace{.1cm}. \]
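Both constants are attained: for real coefficients $x \hspace{.05cm}=\hspace{.05cm} \sum\limits_j \alpha_j u_j$
is selfadjoint with $x^2 \hspace{.05cm}=\hspace{.05cm} \left \| \alpha \right \|_2^2\hspace{.05cm} Id$, hence
$\left \| x \right \| \hspace{.05cm}=\hspace{.05cm} \left \| \alpha \right \|_2$, whereas in the Pauli matrix realization above
\[ u_1+i\hspace{.05cm} u_2 \hspace{.1cm} = \hspace{.1cm} \sigma_1 + i\sigma_2 \hspace{.1cm} = \hspace{.1cm} 2\hspace{.05cm} e_{12} \]
is of norm $2 \hspace{.1cm} = \hspace{.1cm} \sqrt{2}\hspace{.05cm} \left \| \alpha \right \|_2$.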
This is the first assertion. In order to estimate the 1-summing norm
we define $\hat{x} \hspace{.05cm}=\hspace{.05cm} \left ( {x \atop x^*} {0 \atop 0} \right ) $
in ${\cal S}_1^{2^{n+1}}$. With the triangle inequality in ${\cal S}_1^{2^{n+1}}$
we get
\[2^n \hspace{.1cm} \left \| \alpha \right \|_2 \hspace{.1cm} = \hspace{.1cm} \left \| \left ( \frac{x^*x+xx^*}{2}
\right ) ^{\frac{1}{2}} \right \|_1
\hspace{.1cm} = \hspace{.1cm} \frac{1}{\sqrt{2}} \hspace{.1cm} \left \| \hat{x} \right \|_{{\cal S}_1^{2^{n+1}}} \hspace{.1cm} \le \hspace{.1cm}
\sqrt{2} \left \| x \right \|_{{\cal S}_1^{2^n}} \hspace{.1cm} .\]
Combining these estimates we have found a factorization of the identity
on $CL^n$ through
the restriction of $2^{-n} Id:M_{2^n} \rightarrow {\cal S}_1^{2^n}$ to $CL^n$.
By proposition \ref{fac} the 1-summing norm of the identity on $CL^n$ is
at most $2$. As a consequence every operator $T: CL \rightarrow CL$ which
factors
completely through a $C(K)$ space is integral and since $CL$ is
isomorphic to a Hilbert space
the eigenvalues are absolutely summing. To prove $3.$ let $u:R\cap
C\rightarrow CL$. In order
to show that this operator is absolutely 2-summing we use trace
duality.
For this let $v:CL \rightarrow R\cap C$ be absolutely 2-summing. By the
Pietsch factorization
theorem $v$ factors through a 2-summing operator $S:C(K)\rightarrow R\cap C$,
which is completely bounded,
see \ref{rcoh}.
Since $CL$ is 1-summing, the composition $Su$ is integral and we get the
right estimate for the
trace. Vice versa, we consider an absolutely 2 summing operator $u:R+C
\rightarrow CL$.
All the underlying Banach spaces are isomorphic to Hilbert spaces and
therefore
$u$ admits a factorization $u\hspace{.05cm}=\hspace{.05cm} wv$, $v^*$ absolutely 2-summing and
$w: \ell_1 \rightarrow CL$.
This operator $w$ is automatically completely bounded, whereas $v$ is
completely bounded
in view of \ref{rcoh} and duality. \hspace*{\fill}$\Box$\hz\pagebreak[1]
Now we will construct operator spaces $E_r$, $E_r^n$ which are
isomorphic to $\ell_2$, $\ell_2^n$, respectively,
but whose 1-summing norm has a prescribed growth rate.
For $1<r<2$ we define a matrix structure on $\ell_2$, ($\ell_2^n$) as
follows
\[ \left \| (x_{ij}) \right \|_r \hspace{.1cm} := \hspace{.1cm} \sup_{k \in \nz} k^{\frac{1}{r}-1}\hspace{.05cm}
\sup \left\{ \left \| (P_H\hspace{.05cm} x_{ij}) \right \|_{CL} \left | { \atop } \right. H \subset \ell_2,\hspace{.05cm}
(H\subset \ell_2^n),\hspace{.05cm} \dim H
\hspace{.05cm}\le\hspace{.05cm} k \right\}\hspace{.1cm},\]
where we identify $CL$ and $\ell_2$ via the isomorphism from
proposition
\ref{Sn} and $P_H \hspace{.05cm}:\hspace{.05cm} \ell_2 \rightarrow H$ denotes the orthogonal
projection on $H$.
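At the first matrix level this norm is easy to estimate: for a single
vector $x \in \ell_2$, the choice $k=1$, $H \hspace{.05cm}=\hspace{.05cm} {\rm span}\{x\}$,
together with $k^{\frac{1}{r}-1}\le 1$ and the $\sqrt{2}$-isomorphism of
proposition \ref{Sn}, gives
\[ \left \| x \right \|_2 \hspace{.1cm} \le \hspace{.1cm} \left \| x \right \|_r \hspace{.1cm} \le \hspace{.1cm} \sqrt{2} \hspace{.1cm} \left \| x \right \|_2 \hspace{.1cm} ,\]
which is behind part i) of the proposition below.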
The next proposition states the properties of these operator spaces.
\begin{prop} Let $1<r<2<p<\infty$ with
$\frac{1}{r}=\frac{1}{2}+\frac{1}{p}$.
\begin{enumerate}
\item[i)] $E_r$ is an operator space which is $2$ isomorphic to
$\ell_2$.
\item[ii)] For all $n \in \nz$ one has $\pi_{1,sc}^n(id_{E_r}) \hspace{.1cm} \sim_2 \hspace{.1cm}
n^{\frac{1}{r}}\hspace{.1cm}. $
\item[iii)] For all completely $\infty$-factorable operators $T \in
\Gamma_{\infty}^{\it o}(E_r,E_r)$
one has
\[ \sup_{n \in \nz} n^{\frac{1}{r}} \hspace{.05cm}\left |\lambda_n(T)\right | \hspace{.1cm}\le \hspace{.1cm} c_r\hspace{.1cm}
\gamma_{\infty}^{\it o}(T:\min(E_r) \rightarrow E_r)\hspace{.1cm}.\]
\item[iv)] For the completely bounded operators with values
in $E_r$ and defined on $\ell_{\infty}$ or $G \in \{R,C,R+C,R\cap C,
OH\}$
one has
\[ CB(\ell_{\infty},E_r) \hspace{.1cm}=\hspace{.1cm} {\cal L}_{p,\infty}^{(a)}(\ell_{\infty},E_r)
\hspace{.3cm} \mbox{and} \hspace{.3cm} CB(G,E_r) \hspace{.1cm}=\hspace{.1cm} {\cal L}_{p,\infty}^{(a)}(G,E_r)\hspace{.1cm}.\]
\end{enumerate}
A similar statement holds uniformly in $n$ for the spaces $E_r^n$.
\end{prop}
\pagebreak
{\bf Proof:} i) is clear by definition and proposition \ref{Sn}.
ii) and iii) follow from iv) and standard estimates of $\ell_{r,\infty}^{(x)}(id:
\ell_{\infty}^n\rightarrow \ell_2^n) \sim_{c_r} n^{\frac{1}{r}}$.
For $iv)$ we note that by definition and the fact that $CL$ is
1-summing we have
\[ \left \| T: \ell_{\infty} \rightarrow E_r \right \|_{cb} \sim_{2} \sup_{k \in \nz,\hspace{.05cm} \dim H
\le k} k^{\frac{1}{r}-1} \hspace{.1cm} \iota(P_HT)\hspace{.1cm} .\]
For an operator $u: \ell_2 \rightarrow \ell_{\infty}$ we deduce by Schmidt
decomposition
\[ \sup_k k^{\frac{1}{r}} a_k(Tu) \hspace{.1cm} \le \hspace{.1cm} \sup_{k \in \nz,\hspace{.05cm} \dim H\le k}
k^{\frac{1}{r}-1} \sigma_1(P_HTu)
\hspace{.1cm} \le \hspace{.1cm} 2 \left \| T\right \|_{cb} \hspace{.1cm} \left \| u\right \| \hspace{.1cm} .\]
For the converse implication we use an interpolation argument. Indeed,
by
standard relations between different s-numbers, \cite{PIE}, one has
\[{\cal L}_{r,\infty}^{(x)} \hspace{.1cm} \subset \hspace{.1cm} {\cal L}_{p,\infty}^{(a)} \hspace{.1cm} \subset \hspace{.1cm} ({\cal L}_{2,1}^{(a)},
{\cal L}_{\infty}^{(a)})_{\theta,\infty} \hspace{.1cm} ,\]
with $\frac{1}{p}=\frac{1-\theta}{2}+\frac{\theta}{\infty}$ and
$\ell_{2,1}^{(a)}(T)$
is the norm of the approximation numbers in the Lorentz spaces
$\ell_{2,1}$.
By definition of the $K_t$ functional for $t=\sqrt{k}$ we can find a
decomposition
$T \hspace{.05cm}=\hspace{.05cm} T_1 +T_2$ such that $\ell_{2,1}^{(a)}(T_1) + \sqrt{k} \hspace{.05cm}\left \| T_2 \right \| \hspace{.05cm} \le \hspace{.05cm}
c_p \hspace{.1cm} k^{\frac{1}{2}-\frac{1}{p}}\hspace{.05cm}\ell_{r,\infty}^{(x)}(T)$.
An application of "little Grothendiek's inequality", \cite{PIL}, gives
$\iota(T_1) \hspace{.1cm} \le \hspace{.1cm} c_1 \hspace{.05cm} \ell_{2,1}^{(a)}(T_1)$. Hence we get
for every $k$ dimensional subspace $H$
\begin{eqnarray*}
\iota(P_HT) &\le& \iota(P_HT_1) + \iota(P_HT_2) \hspace{.1cm} \le \hspace{.1cm}
\iota(T_1) +\sqrt{k}\hspace{.05cm} \pi_2(P_HT_2)\\
&\le&
(c_1 + \frac{2}{\sqrt{\pi}}) \hspace{.1cm} \left ( \ell_{2,1}^{(a)} (T_1) +\sqrt{k} \hspace{.05cm} \left \| P_H
T_2 \right \| \right )
\hspace{.1cm} \le \hspace{.1cm} c_p \hspace{.05cm} (c_1 + \frac{2}{\sqrt{\pi}}) \hspace{.1cm} k^{1-\frac{1}{r}}\hspace{.1cm}
\ell_{r,\infty}^{(x)}(T) \hspace{.1cm} .
\end{eqnarray*}
The second formula is proved along the same lines, although
Grothendieck's inequality is not
used in this argument. The key point here is the following formula
which we deduce from
proposition \ref{Sn}
\begin{eqnarray*}
\left \| T\hspace{.05cm}: \hspace{.05cm} G\rightarrow E_r \right \|_{cb}&=&
\sup_{k \in \nz} k^{\frac{1}{r}-1} \hspace{.1cm} \sup_{H,\hspace{.05cm} dim(H)\le k} \left \| P_HT \hspace{.05cm}:
\hspace{.05cm} G
\rightarrow CL\right \|_{cb}\\
&\sim_c& \sup_{k \in \nz} k^{\frac{1}{r}-1} \hspace{.1cm} \sup_{H,\hspace{.05cm} dim(H)\le k}
\pi_2 (P_HT \hspace{.05cm}: \hspace{.05cm} G
\rightarrow CL)\hspace{.1cm} .\\[-1.3cm]
\end{eqnarray*}
\hspace*{\fill}$\Box$\hz\pagebreak[1]
\begin{rem} {\rm An easy modification of the spaces above allows us to
construct
an operator space $E_1$ such that the identity is not
1-summing, but
\[ \sup_{n \in \nz} n \hspace{.1cm}\left | \lambda_n(T) \right | \hspace{.1cm} \le \hspace{.1cm} c_0 \hspace{.1cm} \gamma_{\infty}^{\it o}(T) \]
for $T \in \Gamma_{\infty}^{\it o}(E_1,E_1)$. In fact, we define the matrix norm on $E_1$
by
\[ \left \| (x_{ij}) \right \| \hspace{.1cm} := \hspace{.1cm} \sup_{k \in \nz} \hspace{.05cm}
\sup \left\{ \left \| (P_H\hspace{.05cm} x_{ij}) \right \|_{CL} \left | { \atop } \right. H \subset \ell_2, \hspace{.05cm}
dim(H)
\hspace{.05cm}\le\hspace{.05cm} k \hspace{.1cm} \mbox{and} \hspace{.1cm} H \subset H_k
\right\}\hspace{.1cm},\]
where $H_k\hspace{.05cm} :=\hspace{.05cm} {\rm span}\{e_j\hspace{.05cm} |\hspace{.05cm} k\hspace{.05cm}\le\hspace{.05cm} j\}$. Using similar
arguments as above we
can prove
\[ CB(\ell_{\infty},E_1) \hspace{.1cm}\subset\hspace{.1cm} {\cal L}_{1,
\infty}^{(x)}(\ell_{\infty},E_1) \hspace{.1cm}, \]
and the diagonal operator $D_{\si} \in \B(\ell_{\infty},E_1)$ defined by
$\sigma_k \hspace{.05cm}=\hspace{.05cm} \frac{1}{k}$ is completely bounded, but not 1-summing.}
\end{rem}
\begin{exam} {\rm At this point we want to give a review
of infinite dimensional operator spaces such that $\pi_{1,sc}^n(id_E) \hspace{.1cm} \le \hspace{.1cm}
n^{1-\frac{1}{r}}$
for some $1<r<2$.
By theorem \ref{exact} it is easy to see that this holds for $\max(X)$
if and only
if $X$ is a so-called weak $r$-Hilbertian Banach space, see
\cite{PI3}, \cite{GEI} and \cite{DJ1}.
Standard examples are obtained by interpolation $X\hspace{.05cm}=\hspace{.05cm}(H,Y)_{\theta}$,
$\frac{1}{r} \hspace{.05cm}=\hspace{.05cm} \frac{1-\theta}{1} + \frac{\theta}{2}$,
where $H$ is a Hilbert space and $Y$ an arbitrary Banach space.
Therefore, $\max(\ell_r)$ and $\max(\ell_{r'})$ are typical examples,
but
also $\max({\cal S}_r)$ and $\max({\cal S}_{r'})$. Moreover, we see that
$\pi_{1,sc}^n(Id_{\max(E)}) \hspace{.05cm} \le \hspace{.05cm} n^{1-\frac{1}{r}}$
if and only if the same holds for $\max(E^*)$. In the limit case $r=1$
the identity
of a maximal operator space is 1-summing if and only if the associated
Banach space is isomorphic
to a Hilbert space, whereas a subspace of $\max(E)$ is 1-summing if and
only
if it is a complemented Hilbert space in $E$.
It is easy to see that the operator space $CL$ spanned by the
generators of
the Clifford algebra is an exact operator space. Moreover, the
exactness constant \cite{PSE}
is bounded uniformly in $n$ for the spaces $E_r^n$.
Using theorem \ref{exact} and the last proposition it is quite
standard to deduce that the operator space dual $CL^*$ is not
1-summing,
but $\pi_{1,sc}^k(id_{(E_r^n)^*}) \sim_{c_r} k^{\frac{1}{r}-\frac{1}{2}}$
for $k \le n$. }
\end{exam}
\label{sec:intro}
In the real world, a quantum system always interacts with its external environment and cannot be treated as an isolated system, which leads to the notion of open quantum systems \cite{Breuer2007}. Recently, open quantum systems have attracted significant attention, and phenomena such as quantum decoherence \cite{Joos2003,Schlosshauer2005,Helm2009} and quantum dissipation \cite{Diehl2008,Krauter2011,Weiss2012} have been a key focus of researchers in this field. Moreover, open quantum system concepts have been extensively applied in quantum optics \cite{Carmichael2014}, quantum measurement and metrology \cite{Schlosshauer2005,Barchielli1983,Goldstein2011,Alipour2014}, quantum control \cite{Liu2011,Schmidt2011}, and quantum information science, including quantum computation \cite{Verstraete2009} and quantum simulation \cite{Barreiro2011,Houck2012}.
Typically, an open quantum system is described by a reduced density matrix, and its dynamics is mathematically governed by a master equation. In the Markov approximation, the time evolution of the system is given by the well-known Lindblad master equation \cite{Lindblad1976}. However, the Markov limit is not always a good approximation, since open systems in the Markov limit irreversibly lose information to the surroundings and exhibit dissipation and decoherence, which are major bottlenecks in quantum information science. On the contrary, in non-Markovian processes \cite{Breuer2007,Cederbaum2005,Apollaro2011}, where the back-reaction of the environment is taken into account, a system can regain information previously lost to the surroundings. This makes non-Markovian processes more physical and appropriate than the Markovian approximation under certain circumstances. One useful approach to cope with non-Markovian situations is the non-Markovian stochastic Schr\"{o}dinger equation, or quantum-state-diffusion (QSD) equation \cite{Diosi1998,Diosi1998_2}; it describes an open system by a stochastic pure state rather than a density matrix.
One successful application of QSD is a system comprising two interacting qubits coupled to a bosonic environment \cite{Zhao2011}, studied by Zhao \emph{et~al.} They derived the exact time-local QSD equation for such a system and discovered entanglement generation caused by the environmental memory. However, describing an open system with a reduced density matrix instead of a stochastic pure state can sometimes be preferable, since a stochastic state carries the uncertainty of a complex Gaussian process, whereas in a reduced density matrix this uncertainty is removed by the ensemble average. In addition, in quantum control problems such as those discussed in \cite{Tai2014}, knowledge of the analytic form of the master equation is quite important. Furthermore, since control problems usually require a large amount of computation, a good approximation that reduces the computational complexity is a necessity.
Based on the formal form of the master equation for multi-qubit dissipative systems \cite{Chen2014}, we have derived the analytic form of the master equation for this two-qubit system, where the noise is modeled by the Ornstein\textendash Uhlenbeck process. In particular, we write down an exact master equation composed of several functions whose time derivatives are analytically given. Moreover, we validate our analytic master equation by examining the entanglement generation and the purity of state, comparing them to the results in \cite{Zhao2011}. In addition, we discuss the effects of first-order noise and show that from an asymptotic perspective, the exact master equation can be reduced to an approximate form, wherein the terms related to the first-order noise are eliminated.
This study is organized as follows. In Sec.~\ref{sec:math_des}, we introduce the model and the stochastic Schr\"{o}dinger equation for the system comprising two interacting atoms coupled to a bosonic environment and derive the analytic form of the non-Markovian master equation. In Sec.~\ref{sec:sim_discuss}, we use the non-Markovian master equation to examine the entanglement generation and state purity phenomena due to environmental memory and then validate the master equation. Next, we compare the solutions of the exact and approximate master equations in the last part of this section. Finally, Sec.~\ref{sec:conclusion} presents a conclusion of this study.
\section{Mathematical Description}
\label{sec:math_des}
\subsection{Model and stochastic Schr\"{o}dinger equation}
For a system comprising two interacting two-level atoms coupled to a bosonic bath \cite{Zhao2011}, the total Hamiltonian can be divided into three parts (set $\hbar =1$):
\begin{equation}
\label{eq:hamiltonian}
\begin{aligned}
H_{sys} = &~\omega_A \sigma_z^A + \omega_B \sigma_z^B + J_{xy}(\sigma_+^A \sigma_-^B + \sigma_-^A \sigma_+^B)\\ &+ J_z \sigma_z^A \sigma_z^B, \\
H_{bath} =& \sum_i \omega_i a_i^\dag a_i, \\
H_{int} =& \sum_i (g_i a_i^\dag L + g_i^\ast a_i L^\dag),
\end{aligned}
\end{equation}
where $\omega_A$ and $\omega_B$ are the transition frequencies of the two interacting atoms, $\sigma_\pm = (\sigma_x \pm i\sigma_y)/2$ are the raising/lowering operators for an atom, $a_i (a_i^\dag)$ is the annihilation (creation) operator of the $i^{\text{th}}$ mode of the bosonic bath, $g_i$ is the coupling constant between the system and the $i^{\text{th}}$ mode of the bath, and $L = \kappa_A \sigma_-^A + \kappa_B \sigma_-^B$ is a system operator. Note that the interaction between the two atoms is modeled on the Heisenberg XXZ model, wherein the coupling constants $J_x = J_y = J_{xy}$ are not necessarily equal to $J_z$.
The non-Markovian stochastic Schr\"{o}dinger equation for two interacting atoms coupled to a common bath is given as follows:
\begin{equation}
\label{eq:SSE}
\f{\partial}{\partial t} \psi_t = -i H_{sys} \psi_t + Lz_t^\ast \psi_t - L^\dag\int_0^t \mathrm{d} s \alpha(t,s) \f{\delta \psi_t}{\delta z_s^\ast},
\end{equation}
where $\psi_t = \psi_t(z^\ast)$ is a stochastic wave function of the system, and $z_t \equiv i \sum_j g_j z_j e^{-i\omega_j t}$ is a complex Gaussian process satisfying $\mathcal{M}[z_t] = 0$, $\mathcal{M}[z_t z_s] = 0$, and $\mathcal{M}[z_t z_s^\ast] = \alpha(t,s)$, which is the correlation function of the bath and determines the environmental memory time. Here, the symbol, $\mathcal{M}[\bullet] = \int \cdots \int \f{\mathrm{d}^2 z_j}{\pi} e^{-|z_j|^2} \cdots \int \f{\mathrm{d}^2 z_1}{\pi} e^{-|z_1|^2}(\bullet)$, represents the ensemble average operation.
In general, the functional derivative in the integral of Eq.~(\ref{eq:SSE}) can be replaced with an operator $O(t,s,z^\ast)$, such that
\begin{equation}
\label{eq:diff2op}
\f{\delta \psi_t(z^\ast)}{\delta z_s^\ast} = O(t,s,z^\ast)\psi_t(z^\ast).
\end{equation}
For this two-interacting-qubit model, the $O$ operator can be written as follows:
\begin{equation}
\label{eq:O_op_form}
O(t,s,z^\ast) = O_0(t,s) + \int_0^t \mathrm{d} s_1 z_{s_1}^\ast O_1(t,s,s_1),
\end{equation}
where $O_0(t,s)$ and $O_1(t,s,s_1)$ are related to the zeroth- and first-order noise components, which may be formulated as follows:
\begin{equation}
\label{eq:O_op}
\begin{aligned}
O_0(t,s) =&~ f_1(t,s) \sigma_-^A + f_2(t,s) \sigma_-^B + f_3(t,s) \sigma_z^A \sigma_-^B \\
&+ f_4(t,s) \sigma_-^A \sigma_z^B, \\
O_1(t,s,s_1) =&~ i f_5(t,s,s_1)(2\sigma_-^A \sigma_-^B),
\end{aligned}
\end{equation}
in which the time derivatives and the initial conditions of the $f_j$ terms are listed in Appendix \ref{app:time_deri}.
\subsection{Exact non-Markovian master equation}
In general, the formal non-Markovian master equation can be derived from Eq.~(\ref{eq:SSE}), and it reads as follows:
\begin{equation}
\label{eq:formal_ME}
\begin{aligned}
\f{\partial \rho_t}{\partial t} =& -i[H_{sys},\rho_t] + \big[L,\mathcal{M}[P_t\bar{O}^\dag(t,z^\ast)]\big] \\ &+ \big[\mathcal{M}[\bar{O}(t,z^\ast)P_t],L^\dag\big],
\end{aligned}
\end{equation}
where $\bar{O}(t,z^\ast) = \int_0^t \mathrm{d} s\alpha(t,s)O(t,s,z^\ast)$, $P_t = \Ket{\psi_t(z^\ast)}\Bra{\psi_t(z)}$, and $\rho_t = \mathcal{M}[P_t]$, which is the reduced density matrix used to describe the dynamics of the system.
For a system in which the noise expansion terminates at first order, $\mathcal{M}[P_t \bar{O}^\dag]$ can be written as follows \cite{Chen2014}:
\begin{widetext}
\begin{equation}
\label{eq:pt_obar_dag}
\begin{aligned}
\mathcal{M}[P_t \bar{O}^\dag] =&~ \rho_t \bar{O}_0^\dag(t) + \int_0^t \mathrm{d} s_1 \int_0^t \mathrm{d} s_2 \alpha(s_1,s_2) O_0(t,s_2) \rho_t \bar{O}_1^\dag(t,s_1)
\\ &+ \int_0^t \mathrm{d} s_1 \int_0^t \mathrm{d} s_2 \int_0^t \mathrm{d} s_3 \int_0^t \mathrm{d} s_4 \alpha(s_1,s_2)\alpha(s_3,s_4)^\ast O_1(t,s_2,s_3) \rho_t O_0^\dag(t,s_4) \bar{O}_1^\dag(t,s_1),
\end{aligned}
\end{equation}
\end{widetext}
where $\bar{O}_0(t) = \int_0^t \mathrm{d} s \alpha(t,s) O_0(t,s)$ and $\bar{O}_1(t,s_1) = \int_0^t \mathrm{d} s \alpha(t,s) O_1(t,s,s_1)$. In our two-qubit system, the term involving the quadruple integral vanishes since $O_0^\dag\bar{O}_1 = 0$ due to $\sigma_+\sigma_+ = 0$.
To make good use of Eq.~(\ref{eq:formal_ME}), we have to convert Eq.~(\ref{eq:pt_obar_dag}) to a more explicit form, from which $\mathcal{M}[P_t\bar{O}^\dag]$ can be easily computed. For this, we first define
\begin{equation}
\bar{f}_j(t) \equiv \int_0^t \mathrm{d} s~\alpha(t,s)f_j(t,s),\qquad j = 1,\dots,4,
\end{equation}
and
\begin{equation}
\begin{aligned}
F_j(t) \equiv \int_0^t \mathrm{d} s_1 \int_0^t \mathrm{d} s_2~\alpha(s_1,s_2)f_j(t,s_2)&\bar{f}_5^\ast(t,s_1),\\& j = 1,\dots,4.
\end{aligned}
\end{equation}
When $O_0$ and $O_1$ given in Eq.~(\ref{eq:O_op}) are substituted into Eq.~(\ref{eq:pt_obar_dag}), the equation gets converted to an explicit form as shown below:
\begin{equation}
\label{eq:pt_obar_fun}
\begin{aligned}
\mathcal{M}&[P_t\bar{O}^\dag] = \rho_t(\bar{f}^\ast_1 \sigma_+^A + \bar{f}^\ast_2 \sigma_+^B + \bar{f}^\ast_3 \sigma_z^A \sigma_+^B + \bar{f}^\ast_4 \sigma_+^A \sigma_z^B) \\
&-2i(F_1 \sigma_-^A + F_2 \sigma_-^B + F_3 \sigma_z^A \sigma_-^B + F_4 \sigma_-^A \sigma_z^B)\rho_t \sigma_+^A \sigma_+^B.
\end{aligned}
\end{equation}
Here, we consider the Ornstein\textendash Uhlenbeck process, with correlation function $\alpha(t,s) = \f{\gamma}{2}e^{-\gamma|t-s|}$ corresponding to a Lorentzian power spectrum, to model the noise. In this process, with the help of three other auxiliary functions
\begin{equation}
F_5(t) = \int_0^t \mathrm{d} s_1 \int_0^t \mathrm{d} s_2~\alpha(s_1,s_2)\bar{f}_5(t,s_2)\bar{f}_5^\ast(t,s_1),
\end{equation}
\begin{equation}
\bar{f}_5(t,s_1) = \int_0^t \mathrm{d} s~\alpha(t,s)f_5(t,s,s_1),
\end{equation}
and
\begin{equation}
\tilde{f}_5(t) = \int_0^t \mathrm{d} s~\alpha(t,s) \bar{f}_5(t,s),
\end{equation}
the explicit form of $\mathcal{M}[P_t\bar{O}^\dag]$ in Eq.~(\ref{eq:pt_obar_fun}) becomes very useful, since $\big\{\f{\mathrm{d} \bar{f}_1}{\mathrm{d} t}, \f{\mathrm{d} \bar{f}_2}{\mathrm{d} t}, \f{\mathrm{d} \bar{f}_3}{\mathrm{d} t}, \f{\mathrm{d} \bar{f}_4}{\mathrm{d} t}, \f{\mathrm{d} \tilde{f}_5}{\mathrm{d} t}\big\}$ and $\big\{\f{\mathrm{d} F_1}{\mathrm{d} t}, \f{\mathrm{d} F_2}{\mathrm{d} t}, \f{\mathrm{d} F_3}{\mathrm{d} t}, \f{\mathrm{d} F_4}{\mathrm{d} t}, \f{\mathrm{d} F_5}{\mathrm{d} t}\big\}$ form two sets of coupled differential equations (see Appendix \ref{app:time_deri}), implying that the analytic $\mathcal{M}[P_t \bar{O}^\dag]$ can be easily calculated by numerically solving these differential equations. With $\mathcal{M}[P_t\bar{O}^\dag]$ in hand, we can compute $\rho_t$ from Eq.~(\ref{eq:formal_ME}), and this completes the construction of the analytic form of the exact non-Markovian master equation for this two-interacting-atom system.
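As a practical illustration, a minimal numerical sketch of this procedure is given below: the ten coefficients are stacked into one state vector and propagated with a standard ODE solver. The right-hand side shown here is a deliberate placeholder, not the physical system; it must be replaced by the coupled derivatives listed in Appendix \ref{app:time_deri}.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# State vector y stacks the ten (complex) coefficients
#   [fbar_1..fbar_4, ftilde_5, F_1..F_5].
# NOTE: placeholder dynamics for structure only -- substitute the
# coupled derivatives of the Appendix to obtain the physical solution.
def rhs(t, y, gamma=1.0):
    return -gamma * y

y0 = np.zeros(10, dtype=complex)   # all coefficients vanish at t = 0
sol = solve_ivp(rhs, (0.0, 30.0), y0, dense_output=True)

# sol.sol(t) then supplies the coefficients entering the explicit
# form of M[P_t Obar^dag], from which the master equation for rho_t
# is propagated.
\end{verbatim}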
\subsection{Markovian regime and noise effects}
If the correlation function $\alpha(t,s)$ in Eq.~(\ref{eq:SSE}) is given by $\gamma\delta(t-s)$, meaning that the quantum state at time $t$ is independent of the state at time $s$ ($s < t$), then there is no memory effect in the surroundings, and Eq.~(\ref{eq:SSE}) becomes
\begin{equation}
\label{eq:SSE_markovian}
\f{\partial}{\partial t} \psi_t = -i H_{sys} \psi_t + Lz_t^\ast \psi_t - \f{\gamma}{2}L^\dag L \psi_t,
\end{equation}
which is the stochastic Schr\"{o}dinger equation in the Markovian limit. Note that for the Ornstein\textendash Uhlenbeck process, when $\gamma \to \infty$, the correlation function behaves like a Dirac delta function, and hence the environment becomes memoryless.
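For orientation, the following minimal sketch propagates the reduced density matrix in this memoryless limit, assuming the standard Lindblad form $\dot{\rho}_t = -i[H_{sys},\rho_t] + \gamma\big(L\rho_t L^\dag - \f{1}{2}\{L^\dag L,\rho_t\}\big)$ to which Eq.~(\ref{eq:SSE_markovian}) corresponds, and evaluates the purity $\text{tr}[\rho^2]$ discussed in Sec.~\ref{sec:sim_discuss}; it is an illustration only, not a substitute for the exact non-Markovian equation.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Two-qubit operators in the basis {|11>, |10>, |01>, |00>}
sm = np.array([[0, 0], [1, 0]], dtype=complex)        # sigma_-
sp = sm.conj().T                                      # sigma_+
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
smA, smB = np.kron(sm, I2), np.kron(I2, sm)
spA, spB = np.kron(sp, I2), np.kron(I2, sp)
szA, szB = np.kron(sz, I2), np.kron(I2, sz)

# Parameters as in Fig. 1
wA = wB = 0.5; kA = kB = 1.0; Jxy = 0.7; Jz = 0.3; gamma = 1.0
H = wA*szA + wB*szB + Jxy*(spA@smB + smA@spB) + Jz*(szA@szB)
L = kA*smA + kB*smB
Ld = L.conj().T

def lindblad(t, r):
    rho = r.reshape(4, 4)
    drho = (-1j*(H@rho - rho@H)
            + gamma*(L@rho@Ld - 0.5*(Ld@L@rho + rho@Ld@L)))
    return drho.ravel()

psi0 = np.array([0, 1, 0, 0], dtype=complex)          # |10>
rho0 = np.outer(psi0, psi0.conj()).ravel()
sol = solve_ivp(lindblad, (0.0, 30.0), rho0,
                t_eval=np.linspace(0.0, 30.0, 301))
purity = [np.trace(r.reshape(4, 4) @ r.reshape(4, 4)).real
          for r in sol.y.T]
\end{verbatim}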
In fact, when the system is close to the Markovian regime, the operator $O_1(t,s,s_1)$ associated with the first-order noise in Eq.~(\ref{eq:O_op_form}) becomes less important. To find the noise effect from $O_1$, we may approximate $O(t,s,z^\ast)$ with $O_0(t,s)$, or equivalently set $f_5(t,s,s_1) = 0$, and then compare the results from the exact and approximate master equations. In the approximate master equation, $\mathcal{M}[P_t\bar{O}^\dag]$ is reduced to
\begin{equation}
\mathcal{M}[P_t\bar{O}^\dag] = \rho_t(\bar{f}^\ast_1 \sigma_+^A + \bar{f}^\ast_2 \sigma_+^B + \bar{f}^\ast_3 \sigma_z^A \sigma_+^B + \bar{f}^\ast_4 \sigma_+^A \sigma_z^B),
\end{equation}
where the time derivatives of the $\bar{f}_j$ terms form a set of coupled differential equations obtained by eliminating $\bar{f}_5$ and $\tilde{f}_5$ from Eq.~(\ref{eq:dfbar_dt}).
\section{Numerical simulations and discussions}
\label{sec:sim_discuss}
In this section, we validate the newly derived master equation and compare the time evolutions of the density matrix $\rho_t$ governed by the exact and approximate $O$ operators. Here, the states and operators are represented in the basis $\{\Ket{11}, \Ket{10}, \Ket{01}, \Ket{00} \}$, where $\sigma_z\Ket{1} = \Ket{1}$ and $\sigma_z\Ket{0} = -\Ket{0}$.
\subsection{Entanglement generation and state purity}
To validate the exact master equation, we examine whether it can reproduce the same entanglement generation and time evolution of state purity as given by the QSD equation in \cite{Zhao2011}. Here, purity is defined as $\text{tr}[\rho^2(t)]$, which indicates whether a system is in a pure or a mixed state.
From \cite{Zhao2011}, we know that if the initial state is $\psi_0 = \ket{10}$ (or $\rho_0 = \ket{10}\bra{10}$), the two-qubit system evolves into a mixed and entangled steady state, whereas if the initial state is $\psi_0 = \f{1}{\sqrt{2}}(\ket{11}+\ket{00}), \f{1}{\sqrt{2}}(\ket{10}+\ket{01})$, or $\ket{11}$, the system will first evolve to a mixed state because of the coupling interaction and finally decay to the pure state $\ket{00}$ as a result of the dissipative environment.
\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{fig1.ps}
\caption{(Color online) Time evolution of purity for four different initial states. Here $\gamma = 1$, $\omega_A = \omega_B = 0.5$, $\kappa_A = \kappa_B = 1$, $J_{xy} = 0.7$, and $J_z = 0.3$.}
\label{fig:purity_exact}
\end{figure}
In Fig.~\ref{fig:purity_exact}, we plot the time evolution of purity for these four initial states as generated by the exact master equation. We can see that, unlike the other three initial states, whose purities first drop below unity and then return to unity, the system remains in a mixed state with a purity close to $0.5$ only when $\psi_0 = \ket{10}$.
Moreover, from the time evolution of the density matrix with $\rho_0 = \ket{10}\bra{10}$, plotted in Fig.~\ref{fig:ent_purity}, we can clearly observe the system ending up in a state that is not only mixed but also entangled. Therefore, the results from the exact master equation match well with those from the QSD equation described above, and this consistency confirms the validity of our master equation.
\begin{figure}[htb]
\centering
\includegraphics[width=8.6cm]{fig2.ps}
\caption{(Color online) Dynamics of the density matrix with the initial state $\psi_0 = \ket{10}$. Parameters are chosen as $\gamma = 1$, $\omega_A = \omega_B = 0.5$, $\kappa_A = \kappa_B = 1$, $J_{xy} = 0.7$, and $J_z = 0.3$. The values of other elements are always zero except $\rho_{32} = \rho_{23}^\ast$.}
\label{fig:ent_purity}
\end{figure}
\begin{figure*}[bht]
\centering
\includegraphics[width=17.2cm]{fig3.ps}
\caption{(Color online) Dynamics of the density matrix with different combinations of $\gamma$ and $\omega$. Here, the initial state is $\psi_0 = \f{1}{2}(\ket{11}+\ket{10}+\ket{01}+\ket{00})$. Legend ($\gamma,~\omega,~$e/a) indicates values of $\gamma$ and $\omega$ for a curve, and e/a is used to indicate whether a curve is generated using the exact or approximate master equation. Other parameters are chosen as $\kappa = 1$, $J_{xy} = 0.7$, and $J_z = 0.3$. Note that the absolute value of each complex element is shown here.}
\label{fig:g_w_cmp}
\end{figure*}
\subsection{Exact master equation versus approximate master equation}
In the following, we assume that $\omega_A = \omega_B = \omega$ and $\kappa_A = \kappa_B = \kappa$ for simplicity. Under this condition, we can deduce from Eq.~(\ref{eq:dfbar_dt}) that $\bar{f}_1 = \bar{f}_2$ and $\bar{f}_3 = \bar{f}_4$, given the symmetry in their time derivatives and the identical initial conditions $\bar{f}_1(0) = \bar{f}_2(0) = \bar{f}_3(0) = \bar{f}_4(0) = 0$. Furthermore, if we substitute $\psi_t = c_1(t) \Ket{11} + c_2(t) \Ket{10} + c_3(t) \Ket{01} + c_4(t) \Ket{00}$ into Eq.~(\ref{eq:SSE}), the dynamics of the $c_j$ values are found to be
\begin{equation}
\label{eq:dcj_dt}
\begin{aligned}
\f{\mathrm{d} c_1}{\mathrm{d} t} =& -i(2\omega+J_z)c_1 - \kappa(2\bar{f}_1+2\bar{f}_3)c_1, \\
\f{\mathrm{d} c_2}{\mathrm{d} t} =&~ \kappa z_t^\ast c_1 - 2\kappa \hat{f}_5 c_1 + i J_z c_2 - \kappa(\bar{f}_1 - \bar{f}_3)c_2 -i J_{xy}c_3 \\ &- \kappa(\bar{f}_1-\bar{f}_3)c_3, \\
\f{\mathrm{d} c_3}{\mathrm{d} t} =&~ \kappa z_t^\ast c_1 - 2\kappa \hat{f}_5 c_1 + i J_z c_3 - \kappa(\bar{f}_1 - \bar{f}_3)c_2 -i J_{xy}c_2 \\ &- \kappa(\bar{f}_1-\bar{f}_3)c_3, \\
\f{\mathrm{d} c_4}{\mathrm{d} t} =&~ \kappa z_t^\ast (c_2 + c_3) - i(J_z - 2\omega)c_4,
\end{aligned}
\end{equation}
where $\hat{f}_5(t,z^\ast) = i\int_0^t \mathrm{d} s_1 \bar{f}_5(t,s_1) z_{s_1}^\ast$. Again, we can infer that $c_2(t) = c_3(t)$ whenever $c_2(0) = c_3(0)$, because of the symmetry between $\dot{c}_2$ and $\dot{c}_3$. For convenience, we shall assume $c_2(0) = c_3(0)$ without loss of generality in the comparisons.
Under these assumptions, $\rho_{j2} = \mathcal{M}[c_j c_2^\ast] = \mathcal{M}[c_j c_3^\ast] = \rho_{j3}$ ($j = 1,\dots,4$), and thus, together with the Hermiticity $\rho_t = \rho_t^\dag$ of a density matrix, there are only six independent elements in $\rho_t$: $\rho_{11}$, $\rho_{12}$, $\rho_{14}$, $\rho_{22}$, $\rho_{24}$, and $\rho_{44}$. Next, we compare the differences in these six elements for the situations with and without the first-order noise.
\begin{figure}[!htb]
\centering
\includegraphics[width=8.6cm]{fig4.ps}
\caption{(Color online) Dynamics of the density matrix with $\gamma = 0.1$, $\omega = 0.5$, $\kappa = 2$, $J_{xy} = 0.7$, and $J_z = 0.3$. The initial state is $\psi_0 = \f{1}{2}(\ket{11}+\ket{10}+\ket{01}+\ket{00})$.}
\label{fig:k_cmp}
\end{figure}
First, we consider the time evolution of a state, $\psi_t$, initialized as $\psi_0 = \f{1}{2}(\ket{11}+\ket{10}+\ket{01}+\ket{00})$ with parameters $\gamma = 0.1$, $\omega = 0.5$, and $\kappa = 1$. From Fig.~\ref{fig:g_w_cmp}, we can infer that the dynamics of $\psi_t$ governed by the exact and approximate master equations show only very small differences, even though the environment is far from the Markovian regime because of the small value of $\gamma$. Next, if we increase the value of $\gamma$ to unity, the system, as expected, reaches its final state faster owing to the limited back-reaction from the environment in the Markovian regime. In this case, the results produced by both master equations match each other perfectly because the first-order noise is less important in the Markov limit. From Fig.~\ref{fig:g_w_cmp}, we can further deduce that the approximate master equation behaves well when $\omega$ increases from $0.5$ to $2$ while the environment is still in a non-Markovian regime ($\gamma = 0.1$). In fact, variation of $J_{xy}$ and $J_z$ also does not produce any clear difference between the results from these two master equations.
Next, in Fig.~\ref{fig:k_cmp}, we show the dynamics from the same initial state governed by the exact and approximate master equations when $\gamma = 0.1$, $\omega = 0.5$, and $\kappa = 2$. Compared with the red solid line and orange dashed line in Fig.~\ref{fig:g_w_cmp} with $\kappa = 1$, the oscillations are more intense with a shorter response time for each element of the density matrix, which can be explained by the analytical solution to Eq.~(\ref{eq:dcj_dt}). For example, from the solution for $c_1(t)$
\begin{equation}
c_1(t) = c_1(0)e^{-i(2\omega+J_z)t}e^{-2\kappa\int_0^t (\bar{f}_1(\tau) + \bar{f}_3(\tau)) \mathrm{d} \tau},
\end{equation}
we can infer that $\kappa$ has a direct influence on the oscillations and the response time. Incidentally, the same effect can also be observed in the solutions to $c_2$, $c_3$, and $c_4$.
Moreover, it is also observed that the results from the exact and approximate master equations do not match each other during the transient. As for the cause, we first notice in Eq.~(\ref{eq:O_op}) that the operator associated with the first-order noise is $\sigma_-^A \sigma_-^B$; apparently, then, the difference comes from the coefficient $c_1(t)$ of state $\ket{11}$, which is acted on by $\sigma_-^A \sigma_-^B$. To verify this argument, we consider another initial state, $\psi_0 = \f{1}{\sqrt{3}}(\ket{10}+\ket{01}+\ket{00})$, in which $\ket{11}$ is excluded. With the same parameters as used in the simulation shown in Fig.~\ref{fig:k_cmp}, we plot the dynamics of the new state without $\ket{11}$ in Fig.~\ref{fig:wo_11}. It is evident from this figure that the results from both master equations are, as expected, well matched.
Moreover, in Fig.~\ref{fig:k_cmp}, although the approximate dynamics do not mimic the exact ones during the transient, they do match well in the steady-state regime. Furthermore, even if $\kappa$ is increased to a higher value, the dynamics given by both master equations still match in the final state.
\begin{figure}[thb]
\centering
\includegraphics[width=8.6cm]{fig5.ps}
\caption{(Color online) Dynamics of the density matrix with $\psi_0 = \f{1}{\sqrt{3}}(\ket{10}+\ket{01}+\ket{00})$. Parameters are chosen as $\gamma = 0.1$, $\omega = 0.5$, $\kappa = 2$, $J_{xy} = 0.7$, and $J_z = 0.3$. The other three independent elements $\rho_{11}$, $\rho_{12}$, and $\rho_{14}$ are always zero.}
\label{fig:wo_11}
\end{figure}
From the above discussions, we can summarize the findings as follows: the master equation without the terms related to the first-order noise can be quite a good approximation, especially when the state $\ket{11}$ does not form part of the initial state; at the very least, the exact master equation can be asymptotically approximated by neglecting the effect of the first-order noise. Hence, we can also anticipate that the residual entanglement and state impurity from the initial state $\ket{10}$ would still be preserved without the first-order noise.
\section{Conclusion}
\label{sec:conclusion}
In this study, we derived the analytic form of the non-Markovian master equation for a system comprising two interacting two-level qubits coupled to a bosonic environment. Following the derivation, the master equation was validated by examining entanglement generation and state purity. The exact and approximate master equations were compared in various situations, and our results reveal that the approximate master equation agrees well with the exact one in the steady state, and therefore serves as a good approximation in an asymptotic sense. One potential application of our results is to implement two-qubit gates with this two-qubit system. Furthermore, the concept developed for this two-qubit system can be extended to multi-qubit systems.
\begin{acknowledgements}
I thank Mr. Kuan-Ting Lin for introducing me to the concept of QSD, and I am also grateful for many interesting discussions with him.
\end{acknowledgements}
\section{Introduction}
Thanks to gravitational amplification, galaxy clusters are a powerful
tool to probe distant galaxies. Indeed, one of the most distant
galaxies detected is a gravitational arc at $z$=4.92 in the cluster
Cl1358+62 (\cite{franx97}). Other very distant lensed sources ($z>$3)
have been identified behind Cl0939+4713 (\cite{trager97}) and A2390
(\cite{pello98a}, \cite{frye98}) leading to important results on
the
formation history and evolution of galaxies at large redshift.
Determining the redshift of arcs and arclets is of great importance as
it fixes the angular scales of the optical configuration, hence giving
an absolute cluster mass estimate within the arc radius ({\it e.g.}
\cite{soucail88}, \cite{mellier91}). But despite the cluster
magnification, measuring arclet redshifts is a difficult
observational task due to their low surface brightness
(\cite{bezecourt97}, \cite{ebbels98b}), and the lack of strong
spectral features in the optical domain for galaxies with
$1<z<2.5$.
Accurate modeling of cluster potentials based on the analysis of
multiple images and weak shear distortions has shown that cluster
mass distributions are best described by the sum of a smooth, large-scale
component (the cluster) and the contribution of cluster galaxy
halos (\cite{kneib96}, \cite{natarajan98}, \cite{geiger98}). For a given
mass distribution, Kneib et al. (1996) demonstrated that a redshift
can be estimated if one can measure accurately the shape of an
individual arclet. In order to validate this method and study its
biases, extensive programs of gravitational arclet spectroscopy
have
been undertaken. In particular, Ebbels et al. (1998) have measured 19
redshifts of lensed objects behind Abell 2218. Most of them confirm
the lens redshift prediction, and allow the accuracy of the mass
model to be improved. Similarly, B\'ezecourt and Soucail (1997) have
started the spectroscopy of arclets in Abell 2390, which has been used
to constrain the mass distribution in this cluster with a great
accuracy (\cite{kneib98}).
Using these accurate cluster mass models and a spectrophotometric
description of galaxy evolution (Bruzual \& Charlot 1993,
\cite{pozzetti96}),
B\'ezecourt et
al. (1998a) have predicted the expected arclet
number counts and their redshift distribution. This model presents
many improvements with respect to previous work ({\sl e.g.}
\cite{nemiroff89}, \cite{wu93}, \cite{grossman94}, \cite{hattori97})
as it includes many observational limits such as magnitude ranges,
surface brightness cut-off or a choice of the optical waveband, and
this for any mass distribution, regardless of its complexity.
Abell 2218 is the first cluster where the number counts and redshift
distribution of the background arclets have been examined in
detail
(\cite{bezecourt98}, \cite{ebbels98b}). We propose in this paper to
further extend this study to another well known cluster lens, namely
Abell 370. Its mass distribution was first accurately derived by
Kneib et al. (1993) [hereafter K93] who showed that the mass model has
to be bimodal in shape in order to accommodate the gravitational pair
B2--B3. This was later confirmed by the HST/WFPC1 observation described
in Smail et al. (1996) and the X-ray map of the cluster, displaying a
bimodal shape compatible with the lens model (\cite{fort94}).
We present new observations of the cluster Abell 370 in Section 2: a
deep HST/WFPC2 image and spectroscopic data. Section 3 discusses the
new lensing mass model. In section 4, we use an improved version of
the code developed by B\'ezecourt et al. (1998a) to determine the
expected counts and redshift distribution of arclets in Abell 370.
Our analysis explores two different scenarios of galaxy evolution
to study their differences, and compute the depletion curves of
background number counts at different wavebands. Discussion,
conclusions and further prospects to constrain galaxy evolution
through
lenses are presented in Section 5. Throughout the paper, we use
a Hubble constant of H$_0 = 50 \, {\rm km \,s}^{-1} {\rm Mpc}^{-1}$,
with $\Omega_0$= 0 or 1 and $\Omega_{\Lambda} = 0$.
\section{New observational data}
\subsection{HST observations and photometry}
Abell 370 was observed with the HST/WFPC2 camera with the
F675W filter [ID: 6003, P.I.:R.P. Saglia], resulting in a reasonably
deep image with a total exposure time of $T_{exp}$ = 5.6 ksec. These
data were provided by the ESO/ST-ECF Science Archive Facility,
Garching, Germany and were reduced with standard reduction
procedures. We used the IRAF/STSDAS packages for the image
combinations after centering and cosmic ray cleaning. The
absolute photometry was obtained using magnitude zero-points given in
Holtzman et al. (1995). The photometry of the field was obtained with
the SExtractor package (\cite{bertin96}). A criterion of 12
contiguous
pixels above the given detection threshold was used to identify an
object. The 1 $\sigma$ detection limit is $R_{675W}=24.9$ mag
arcsec$^{-2}$. From the magnitude histogram obtained from the catalog,
we estimate the completeness limit to be $R_{675W}=25.5$.
We also built a sample of arclets for the purpose of our study of
their photometric and statistical properties. To define an arclet we
imposed the following selection criteria: at least 12 contiguous
pixels above 2$\sigma$ of the local sky level, an axis ratio greater
than 2, a central surface brightness lower than $\mu_{675W}=25.5$
mag. arcsec$^{-2}$ and a magnitude range $20<R_{675W}<26$. The final
sample contains 81 arclets and their magnitude histogram is discussed
in Section 4.2.
\subsection{Identification of multiple images and arclets}
\begin{figure*}
\centerline{\psfig{figure=arclets.ps,width=0.8\textwidth}}
\caption{Detailed view of the multiple image candidates detected
in the
WFPC2/F675W image. B2--B3--B4 is a triple image configuration, as well
as C1--C2--C4. R1/R2 is a radial arc.
E1--E2--E3 is also a triple configuration with a clear inversion
of parity between E2 and E1/E3 (see text for more details). }
\label{ima_arclets}
\end{figure*}
Abell 370 ($z$=0.37) is a rich optical cluster dominated by two giant
elliptical galaxies identified as \#20 and \#35, following the
numbering of Mellier et al. (1988). A set of multiple image
candidates and gravitational arcs are identified on the WFPC2/F675W
image and are displayed in Figure \ref{ima_arclets}. Their photometric
and geometrical properties are also summarized in Table
\ref{table_arclets}. We now discuss them in detail:
\noindent{\bf A0 : } Near galaxy \#35
is located the spectacular giant arc ($z$=0.724) initially detected by
Soucail et al. (1987). From the high resolution WFPC2 image more details
on its morphology are clearly visible, suggesting that it is a
gravitationally lensed image of a spiral galaxy (\cite{soucail98}).
\noindent{\bf A1 to A6 : }
A first set of arclets (labelled A1 to A6) was detected from
ground based images by Fort et al. (1988) and further discussed by
Kneib et al. (1994). Most of them are blue objects, but none have a
spectroscopic redshift yet. A5 is the most extended one and presents
very blue colors and a strong dimming in the reddest bands, suggesting
a young star-forming galaxy. Despite deep spectroscopic data
for A5 (\cite{mellier91}), no significant
emission line has been identified, suggesting that $1<z_{A5}<2.2$.
Arclets A3 to A6 are probably single images of background lensed
galaxies in view of their location in the lens plane. Arclets A1 and
A2 may be multiple images of the same source.
\noindent{\bf The multiple images B, C and D : }
K93 demonstrated that the B2--B3 objects (Fort et al. 1988) correspond
to a gravitational pair with a counter image labeled as B4. The
B2--B3 pair and the arc A0 were used to constrain the K93 model which
showed a bimodal mass distribution. This model proposed that objects
C1--C2--C3 and D1--D2, identified on high quality ground-based images,
were also multiple image systems. A redshift estimate based on the
mass model was proposed for each of the pairs: $z_B= 0.865$, $z_C=
0.81$ and $z_D= 0.95$. The reality of B4 was confirmed {\it a
posteriori} by the HST/WFPC1 data (Smail et al. 1996). C3 is likely
to be a wrong identification of the counter image of C1--C2, and we
denote by C4 the correct counter image used in the present model.
\noindent{\bf The radial arc R : }
In the HST/WFPC1 image, Smail et al. (1996) discovered a radial arc
candidate R (R1-R2) located close to galaxy \#35. They modeled it as a
5-image configuration, and predicted a redshift of $z\sim 1.3$ using
the K93 model.
This radial arc is well identified in the F675W image and is clearly
the merging of 2 images.
\noindent{\bf New multiple images : }
The detailed knowledge of the mass model and the exquisite HST
resolution allow the identification of other multiple image
candidates. We only discuss here the E1--E2--E3 multiple configuration
as it presents a characteristic inversion of parity as expected from
lensing theory (see also \cite{smail95} and
\cite{colley96} for other spectacular examples). Other multiple
image candidates will be discussed in a forthcoming paper.
\begin{table}
\caption[]{Main geometrical and photometric properties of the multiple
images candidates identified on the F675W HST image. The origin of the
coordinates is the center of galaxy \#35 and the XY orientation is the
CCD one, {\em i.e.} North is to the top with an angle of 85$^\circ$
41$'$ clockwise from the X axis and East is to the left. Coordinates
are in arcseconds. $e$ is the ellipticity of the objects ({\em i.e.}
1 -- b/a where b/a is the axis ratio of the isophotes) and $\theta$ is
the orientation of the isophote with respect to the X axis. Surface
photometry was computed on the two brightest elliptical galaxies using
the {\it ellipse} package in STSDAS. The two galaxies are well fitted
by a de Vaucouleurs law with $R_e= 32.6 h_{50}^{-1}$ kpc for \#20
and $R_e=44 h_{50}^{-1}$ kpc for \#35.
}
\label{table_arclets}
\begin{flushleft}
\begin{tabular}{crrcrc}
\hline\noalign{\smallskip}
Object & X ($''$) & Y ($''$) & $e$ & $\theta \quad$ & R$_{675W}$ \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
A1 & 16.35 & 57.30 & 0.74 & -23.8 & 24.83 \\
A2 & 5.56 & 60.43 & 0.75 & -4.6 & 24.38 \\
B2 & 8.46 & 21.14 & 0.76 & -0.2 & 23.08 \\
B3 & 13.80 & 20.58 & 0.74 & -15.7 & 23.23 \\
B4 & -19.45 & 20.98 & 0.42 & -41.6 & 23.82 \\
C1 & 6.46 & 6.50 & 0.17 & 4.6 & 24.05 \\
C2 & 10.19 & 4.96 & 0.10 & 63.2 & 24.00 \\
C3 & -21.82 & 2.78 & 0.14 & -60.9 & 23.99 \\
R1/R2 & 5.27 & 7.03 & 0.76 & 76.4 & 23.98 \\
E1 & 32.86 & 18.62 & 0.51 & 74.6 & 24.83 \\
E2 & -31.22 & 19.15 & 0.68 & 64.4 & 24.53 \\
E3 & 0.54 & 22.21 & 0.66 & -69.8 & 24.44 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\# 20 & 3.09 & 37.85 & 0.36 & -70$\pm 3$ & 16.9 \\
\# 35 & 0.00 & 0.00 & 0.45 & -77$\pm 4$ & 17.1\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{flushleft}
\end{table}
\subsection{Spectroscopic Observations}
Spectroscopic data were acquired at the 3.6m telescope of La Silla
(ESO) with EFOSC in October 1996. A long slit of 1.5\arcsec\ width was
positioned along the two objects B2 and B3 for a total integration
time of 3 hours with an average seeing of 1.9\arcsec. Following the
predictions of K93, the [O{\sc ii}] emission line was expected to lie
at $\lambda\sim 6950$\AA . Thus the R300 grism was used, providing a
useful spectral range 6000\AA --9000\AA\ and a dispersion of
7.5\AA/pixel. The data were reduced with standard procedures for
flat fielding, wavelength and flux calibration in the IRAF
environment. Sky subtraction was performed on the 2D image, and the
final spectra were extracted with an optimal extraction algorithm.
\begin{figure}
\psfig{figure=B2B3_2Dnew.ps,width=0.5\textwidth}
\caption{Two dimensional spectrum of objects B2 and B3.
The [O{\sc ii}] emission line is visible for the
spectrum of B2 (bottom) and B3 (top) in the center of the image.}
\label{fig-B2B3}
\end{figure}
\begin{figure}
\psfig{figure=B2B3_1Dnew.ps,width=0.5\textwidth}
\caption{Spectra of objects B2 and B3 co-added and flux calibrated.
The ordinate is $F_{\lambda}$ in arbitrary units with zero level
at the bottom of
the graph. The wavelength scale is the observed one at the bottom, and
the rest frame one at the top. The two emission lines detected in the
spectra
are quoted with their identification, as well
as the location of the absorption H and K lines, not clearly detected.
}
\label{fig-specB2B3}
\end{figure}
Both objects B2 and B3 show a prominent emission line at
$\lambda=6728$\AA\ and $\lambda=6727$\AA\ respectively on the two
dimensional spectra, which we identify with [O{\sc ii}] 3727\AA\
(Figure \ref{fig-B2B3} and \ref{fig-specB2B3}). Another feature
appears at $\lambda=9045$\AA\ and $\lambda=9042$\AA\ respectively,
identified with [O{\sc iii}] 5007\AA\ thus confirming the
redshifts. We find $z_{B2}=0.8058$ and $z_{B3}=0.8054$, clearly
demonstrating the similarity of the two objects within the error bars,
and in good agreement with the $z=0.865$ prediction. This definitely
confirms the nature of these objects as a multiply imaged galaxy.
Although arclets A1 and A2 were observed for 4.5 hours, neither
continuum nor emission lines could be detected.
\subsection{A closer look at the B2--B3--B4 multiple image}
\begin{figure}
\psfig{figure=a370_B3.ps,width=0.5\textwidth,angle=-90}
\caption{Multicolor photometry of object B3 in $U, B, R, I, J$ and $K'$.
The best fit of the data points corresponds to a star forming galaxy with a
constant star formation rate seen at $t=1.7$ Gyr, with $Z=Z_{\odot}$
and a null absorption. Magnitudes are:
$U = 22.43 \pm 0.1$, $B = 24.58 \pm 0.1$, $R = 23.84 \pm 0.2$,
$J = 22.08 \pm 0.2$, $K' = 21.26 \pm 0.5$.
}
\label{fig-SED-B3}
\end{figure}
This multiple image is one of the bluest arclets observed in this
field. Accurate photometry is available in many filters extending from
$U$ to $K'$ with data coming from a U HST image with the F336W filter
(ID: 5709, P.I. J.M Deharveng, \cite{paper2}, hereafter Paper II), $B$, $R$
and $I$ images from CFHT
(\cite{kneib94}) and unpublished CFHT infrared images in $J$ and $K'$ bands
(Le Borgne and Picat, private communication).
B2 lies too close to galaxy \#20 and is contaminated by its envelope at
long wavelengths, hence we only concentrate on B3.
We compute the spectral energy distribution (SED) of B3, corrected
for its redshift, in order to study the stellar content of the
galaxy. We compare and fit the B3 SED with synthetic ones from the
Bruzual
and Charlot spectrophotometric evolutionary code (GISSEL96). This
code can provide SEDs for different metallicities ranging from
Z$_{\odot}$/50 to 5 Z$_{\odot}$, for different spectroscopic galaxy types
determined by the star formation history (burst, elliptical, spiral
and irregular), and for different amounts of internal absorption by
dust. The extinction law used is the one proposed by Calzetti
(1997), which is similar to what is observed in our Galaxy without the
2175\AA\ bump.
(spectroscopic type, metallicity and absorption, E($B-V$) from 0 to
0.3 mag.) to fit the B3 photometry by the model SEDs.
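A minimal sketch of this grid search is given below; the function \texttt{model\_mag} is a toy placeholder standing in for the GISSEL96 synthetic magnitudes (redshifted and reddened), so the printed best fit is purely illustrative.

\begin{verbatim}
import numpy as np
from itertools import product

# Observed SED of B3 (Fig. 4 caption); the I band is omitted because
# its magnitude is not quoted there.
obs = {"U": (22.43, 0.10), "B": (24.58, 0.10), "R": (23.84, 0.20),
       "J": (22.08, 0.20), "K'": (21.26, 0.50)}

def model_mag(band, sptype, Z, ebv):
    # Toy placeholder -- replace with the real GISSEL96 model SEDs.
    base = {"U": 22.5, "B": 24.0, "R": 23.8, "J": 22.0, "K'": 21.5}
    return base[band] + 0.3 * ebv

types = ["burst", "elliptical", "spiral", "irregular"]
Zgrid = [0.02, 0.2, 1.0, 2.5, 5.0]      # in units of Z_sun
ebv_grid = np.arange(0.0, 0.31, 0.05)   # E(B-V) from 0 to 0.3 mag

def chi2(sptype, Z, ebv):
    return sum(((m - model_mag(b, sptype, Z, ebv)) / s) ** 2
               for b, (m, s) in obs.items())

best = min((chi2(t, Z, e), t, Z, e)
           for t, Z, e in product(types, Zgrid, ebv_grid))
print("chi2=%.2f  type=%s  Z=%.2f Zsun  E(B-V)=%.2f" % best)
\end{verbatim}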
The main constraint comes from the high UV flux which requires a
weak internal absorption ($E(B-V)<0.2$mag) and a low metallicity
($Z<Z_{\odot}$) (Figure
\ref{fig-SED-B3}). Despite this, the fit does not reproduce the UV
part very well. A possibility is that we are seeing the superposition of a
very
recent burst of star formation on an older stellar population. A
strong starburst alone would show a much fainter flux at red and
infrared wavelengths, hence the observed IR emission must come from an
already existing old population. The B multiple image is similar to
arclet A5 which is also very bright in the UV but much fainter in the
IR ($R-K'=2.6$ for B3 and $R-K'<1.9$ for A5). More precisely, assuming
a redshift of
$z$=1.3 (Mellier et al. 1991), we have similarly fitted the A5 SED.
It shows evidence for a low absorption ($E(B-V)=0.1 $ mag.) and a
very low metallicity ($Z=Z_{\odot}/50$) (Figure \ref{fig-SED-A5}).
The fact that the best-fit metallicity ($Z_{\odot}/50$) is
the lower bound allowed by the code should not be a problem, as a satisfactory
fit is also obtained for $Z_{\odot}/5$ with comparable significance.
The trend towards low $Z$ is thus stable.
The corresponding age is 0.3 Gyr, drawing the portrait of a very young
object in an active phase of star formation. An additional old
population is not required for A5 which is the major difference with
object B.
\begin{figure}
\psfig{figure=SED_A5.ps,width=0.5\textwidth,angle=-90}
\caption{Multicolor photometry of object A5 in $U, B, R, I,$ and $J$
(undetected in $K'$). The best fit of the data points corresponds to
a burst of star formation redshifted at $z=1.3$
and seen at $t=0.3$ Gyr, with $Z=Z_{\odot}/50$ and $E(B-V)=0.1$
mag. Magnitudes are: $U=21.70 \pm 0.15, B= 22.90 \pm 0.04, R=22.26\pm
0.06$, $I=21.67\pm 0.09, J= 20.07\pm 0.11$.}
\label{fig-SED-A5}
\end{figure}
These two objects B3 and A5 are typical examples of the population
revealed by UV imaging. The UV selection exhibits young objects with
low metallicity and low absorption in the redshift range
[0.5,2.0].
Such a detailed stellar population study could in principle be applied
to any other arclet in the field whose redshift has been
spectroscopically confirmed or is strongly constrained by lensing and
photometric redshift techniques (Pell\'o et al. 1998a).
\section{An improved lens model}
\begin{figure*}
\psfig{figure=a370_675.ps,width=\textwidth}
\caption{Central part of Abell 370 seen with WFPC2 in F675W.
Overlaid is the mass distribution (black thin lines) and the critical
lines at $z=0.806$ (thick white lines) and $z=4$ (thick black
lines). The most important arclets and multiple images are shown.
}
\label{fig-modele}
\end{figure*}
Using both the giant arc A0 and the B2--B3--B4 triple system, K93
showed that a bimodal mass distribution was an adequate fit to these
multiple image constraints (see also AbdelSalam et al. 1998 for a
different
approach). Here, we improve this coarse picture by
taking into account the contribution of the cluster galaxies (mainly
E/S0 galaxies in the cluster core) in a similar way as Kneib et
al. (1996) and Natarajan et al. (1998). All mass components
(galaxies+clusters) have been assumed to follow a truncated
pseudo isothermal mass distribution (PIEMD, Kassiola and Kovner 1993,
Hjorth \& Kneib 1998).
Following Kneib et al. (1996), for each galaxy halo,
the velocity dispersion $\sigma_0$,
the truncation radius $r_t$ and the core radius $r_0$
are scaled to the galaxy luminosity computed from the
observed F675W magnitude.
The scaling relations used for the galaxy halos are:
\begin{equation}
\sigma_0=\sigma_{0\ast} \left({L\over L_{\ast}}\right)^{{1\over 4}},
\end{equation}
\begin{equation}
r_t=r_{t\ast} \left({L\over L_{\ast}}\right)^{{0.8}},
\end{equation}
\begin{equation}
r_0=r_{0\ast} \left({L\over L_{\ast}}\right)^{{1\over 2}}.
\end{equation}
The scaling relations adopted are motivated by the properties of the
fundamental plane (FP)
and are similar to the ones used by Brainerd et al. (1996). In
particular, the exponent 0.8 in eq. (2) leads to a total mass-to-light
ratio that scales with $L^{0.3}$ in agreement with the observed
correlation of the FP ({\it e.g.} Jorgensen, Franx \& Kjaergaard
1996). Brighter galaxies have more extended and therefore more massive dark
halos.
The orientation and ellipticity of the galaxy halos are taken from the
observed values of the light distribution while $\sigma_{0\ast}$ and
$r_{t\ast}$ are optimized. $r_{0\ast}$ is fixed at 0.15 $kpc$.
To model the cluster
components we considered two `large scale' mass distributions
(represented by 2 truncated PIEMD mass distribution) centered on
galaxies \#20 and \#35. Their orientations, ellipticities, velocity
dispersions and core radius are left as free parameters.
The constraints imposed in the optimization procedure are the multiple
images described in section 2.2, along with the redshifts of A0 and of
the B2--B3--B4 triplet. The best fit parameters are summarized in Table
\ref{hst} ($\chi^2$=4.5).
We also derive a ``lensing'' redshift estimate for the multiple images. The
C
system is at $z_C=0.75\pm 0.1$, D is at $z_D=0.85\pm 0.1$ and E is at
$z_E=1.3\pm 0.1$. R is moved to larger redshift with $z_R=1.7\pm 0.2$
and A1/A2 could form a gravitational pair at $z_{A1/A2} =1.4 \pm 0.2$.
We computed the total projected mass in different apertures centered
at the barycentre of the \#20 and \#35 galaxies.
We find that within
75, 150 and 300 kpc the total mass is, respectively, for this modeling
and the previous one (K93):
0.5 (0.45) $\pm 0.05 \times 10^{14}$ M$_\odot$,
1.8 (1.6) $\pm 0.1 \times 10^{14}$ M$_\odot$ and
4.8 (4.3) $\pm 0.15 \times 10^{14}$ M$_\odot$.
Furthermore, 5\% of the total mass is retained in cluster galaxy
halos. The total mass-to-light ratio is $\sim$180 (M/L$_V$)$_\odot$
(out to 300kpc, 160(M/L$_V$)$_\odot$ for the K93 modeling),
and $\sim$9 (M/L$_V$)$_\odot$ for an $L_{\ast}$
galaxy halo. These results are similar to those found in other
cluster lenses like Abell 2218, 2390 and AC114 (Kneib et al. 1996, 1998;
Natarajan et al 1998).
\begin{table}
\caption{Model parameters for A370 potential}
\label{hst}
\begin{flushleft}
\begin{tabular}{ccccccc}
\hline
id&$\varepsilon$&$\theta$&$\sigma_0$&$r_0$&$r_t$\\
&${a^2-b^2\over a^2+b^2}$ & &$km\,s^{-1}$&$kpc$&$kpc$\\
\hline
35&0.23&-80&1050&75&800\\
20&0.12&-57&1100&91&800\\
galaxies& -- & -- &$\sigma_{0\ast}=125 $&$r_{0\ast}=0.15
$&$r_{t\ast}=15 $\\
\hline
\end{tabular}
\end{flushleft}
\end{table}
\section{Statistical properties of faint galaxies}
\subsection{Modeling the number counts of lensed galaxies}
The number counts of gravitational arclets in the field of a massive
cluster of galaxies result from a competition between
the magnification of the luminosity by the cluster
potential, which makes more objects visible, and the surface
dilution, which
decreases the surface density of arclets by the same factor as the
magnification.
The number of arcs brighter than magnitude $m$ with an axis ratio
greater than $q_{min}$ and a surface brightness brighter than $\mu_0$
within the field of a cluster of galaxies is:
\begin{equation}
N(m,q_{min},\mu_0) = \sum_{i} \int_{z_l}^{z_{max}} \int_{q_{min}}^{\infty}
S(q,z) \int_{L_{min}}^{L_{max}} \Phi_i(L,z) \, dL \, dq \,
{dV \over dz} \, dz
\end{equation}
The sum is over the different morphological types $i$.
$z_l$ is the lens redshift and $z_{max}(\mu_0,i)$ is the redshift cutoff
corresponding to the limit in central surface brightness $\mu_0$.
$S(q,z)$ is the angular area in the source plane
(at redshift $z$) that gives arcs with an axis ratio between $q$ and
$q+dq$. $\Phi_i(L,z)$ is the luminosity function at redshift $z$ for each
morphological type.
As a preliminary step, it was required that counts in wavebands from
$U$ to $K$ and redshift
distributions in $B$ and $I$ in empty fields be correctly reproduced.
We used the model for galaxy evolution of Bruzual and Charlot (1993)
with the prescriptions of Pozzetti et al. (1996) for the
optimisation of the parameters representing the field galaxy
distributions (see B\'ezecourt et al. 1998a for details).
The weak point in the B\'ezecourt et al. (1998a) model is the
approximation of circular sources used for the calculation of arclet
axis ratios. Indeed, this may underestimate the number of galaxies with
an axis ratio larger than a certain threshold, especially at low redshifts,
and then bias the predicted redshift distribution. To provide a more
accurate and reliable redshift distribution, we now take into account
the galaxy ellipticity distribution as given by Ebbels (1998) (this
distribution is based on the analysis of HST/MDS-like images, and is
therefore appropriate to our study):
\begin{equation}
p(\tau) \propto \tau \, \exp \left[- \left({\tau \over 0.036}\right)^{0.54}
\right]
\label{ell}
\end{equation}
where $\tau={ a^2-b^2 \over 2\, a \, b }$ for an elliptical object
with semi major axis $a$ and semi minor axis $b$.
The complex deformation $\vec \tau$ is defined by: $\vec \tau = \tau
\cos (2\theta)+ i\tau \sin (2\theta)$ where $\theta$ is the
orientation of the major axis. At each point in the image
plane (and at each redshift step for the sources), we estimate the
fraction of the background galaxies which can give a lensed image more
distorted than a fixed limit. We first consider the effect of the
lens deformation through the complex relation given by
Kneib et al. (1996) that relates the image deformation
$\vec \tau_I$ as a function of the
source deformation $\vec \tau_S$ and the strength of the potential
$\vec \tau_{pot}$:
\begin{equation}
{\rm sgn}(\det{A^{-1}}){\vec \tau_I}=
{\vec \tau_S}+{\vec \tau_{pot}}
\left(\delta_S + \Re( \vec \tau_S\vec g^*_{pot})\right)\
\label{eq:lenscomplex}
\end{equation}
where
$\delta = \sqrt{\left( 1 + \tau^2 \right)} = 1 + \vec g \vec \tau^*$
(see Kneib et al 1996 for a full description of this formalism).
The term $\Re( \vec \tau_S\vec g^*_{pot})$ is a correction for
strong lensing only.
A lower threshold in the observed axis ratio ($a/b>2$ here)
corresponds to a lower threshold in the deformation ($\tau_I > 0.75$)
whatever the position angle of the image. Within this limit, we then
scan all the allowed solutions for $\vec\tau_S$ and the source
position angle $\theta_S$. The fraction ${\cal F} (x_I,y_I,z)$ of galaxies
at the location $(x_I,y_I)$ in the image plane and at redshift $z$ which
fulfill
the condition is computed using the probability distribution of equation
(\ref{ell}) for $\vec\tau_S$ and an average over the position angle
$\theta_S$.
\begin{equation}
{\cal F} (x_I,y_I,z) =\frac{1}{2 \pi} \int_{0}^{2 \pi} \int_{\vec\tau_S\
|\ \tau_I>0.75} p(\tau_S)d\tau_S d\theta_S
\end{equation}
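A minimal numerical sketch of this fraction is given below, assuming for illustration a real $\vec \tau_{pot}$ and writing $g_{pot}=\tau_{pot}/(1+\delta_{pot})$, consistent with the relation $\delta = 1 + \vec g \vec \tau^*$ above:

\begin{verbatim}
import numpy as np

def p_tau(tau, tau0=0.036, beta=0.54):
    # unnormalised source ellipticity distribution p(tau) above
    return tau * np.exp(-(tau / tau0) ** beta)

def lensed_fraction(tau_pot, tau_cut=0.75, n_tau=2000, n_theta=180):
    # Fraction of sources whose image has |tau_I| > tau_cut, i.e. an
    # observed axis ratio a/b > 2 (tau = (q^2-1)/(2q) = 0.75 for q = 2).
    tau = np.linspace(1e-4, 3.0, n_tau)              # |tau_S| grid
    th = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    w = p_tau(tau); w /= w.sum()                     # weights
    tS = tau[:, None] * np.exp(2j * th[None, :])     # complex tau_S
    dS = np.sqrt(1.0 + np.abs(tS) ** 2)
    g = tau_pot / (1.0 + np.sqrt(1.0 + tau_pot ** 2))
    tI = tS + tau_pot * (dS + (tS * np.conj(g)).real)
    return float((w * (np.abs(tI) > tau_cut).mean(axis=1)).sum())

for tp in (0.2, 0.5, 1.0):
    print("tau_pot = %.1f : F = %.3f" % (tp, lensed_fraction(tp)))
\end{verbatim}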
\subsection{Comparison between the observed and predicted arclet
number counts}
\begin{figure}
\psfig{figure=histo_a370_F675W.ps,width=0.5\textwidth,angle=-90}
\caption{Number counts of arclets in A370 in the F675W HST image with the
following selection criteria: $a/b>2$ and $\mu_{675W}<25.5$. Observed
counts ($\circ$) are compared to two number counts models with
respectively $q_0=0.5$ (solid line) and $q_0=0$ (dashed line). Error
bars are statistical poissonian uncertainties. }
\label{histo_F675W}
\end{figure}
The magnitude histogram of the arclets observed in A370 is shown in
Figure \ref{histo_F675W}. As already stressed in B\'ezecourt et
al. (1998a), we find that the addition of galaxy scale components and
the effect of the source ellipticity distribution significantly
increase arclet counts by a factor of 1.4 ($q_0=0.5$) and 1.7
($q_0=0$) with respect to a K93 model with the assumption of circular
sources. Such an increase in the arclet counts was already observed in
B\'ezecourt et al. (1998a) when using a detailed mass distribution
but only with circular sources.
Considering an ellipticity distribution for the galaxies also
increases the arclet counts and strongly modifies the redshift distribution
as discussed below. With this refinement, the arclet counts
prediction is now more consistent with the data over the whole
magnitude range. Clearly, detailed lens models and proper assumptions
on the source ellipticity distribution are mandatory to explore
arclet counts.
\subsection{Redshift distribution of arclets}
\begin{figure}
\psfig{figure=z_370_R.ps,width=0.5\textwidth,angle=-90}
\caption{Redshift distribution of arclets in A370 for the following
selection criteria: $R_{675W}<23.5$,
$\mu_R<25.5$ and $a/b>2$. The solid line corresponds to the model $q_0=0.5$
and the dashed line is
for $q_0=0$. The dotted line corresponds to the ellipticity distribution
of equation \ref{ell_2} with $q_0=0.5$.}
\label{redshift_675}
\end{figure}
The next step is to consider the redshift distribution of the
arclets. Figure \ref{redshift_675} shows the predicted one with the
selection criteria close to the observational limits for faint
object
spectroscopy ($R_{675}<23.5$, $\mu_R<25.5$ and $a/b>2$). The
comparison with real data is not easy at present because no well defined
sample of arclets, complete in magnitude, has yet been spectroscopically
explored. The redshift distributions displayed in Figure
\ref{redshift_675} present a prominent peak at $z\simeq 0.6$ and a
secondary one at $z > 2$ due essentially to the contribution of bright,
young elliptical-type galaxies in their phase of strong star
formation.
was discussed in B\'ezecourt et al. (1998a) where there was clearly a strong
excess in the high redshift tail. Taking into account the ellipticity
distribution of the sources has fixed this problem.
In order to see how stable the redshift distribution is with respect to
the ellipticity distribution, another $p(\tau)$ is considered following
Ebbels (1998):
\begin{equation}
p(\tau) \propto \tau \, \exp \left[- \left({\tau \over 0.20}\right)^{0.85}
\right]
\label{ell_2}
\end{equation}
representative of objects with $18<I<25.5$ while equation \ref{ell} was
derived with bright objects ($18<I<22$). As this distribution corresponds
to more elongated objects than in equation \ref{ell}, more arclets appear at
low $z$ (Figure \ref{redshift_675}, dotted line). However, the total
number of arclets is increased by only 11\%.
Analysis of Figure \ref{redshift_675}
suggests several comments:
\begin{itemize}
\item Compared with the redshift distribution of field galaxies within the
same observational limits, the arclet distribution is biased towards
more distant objects. We can take advantage of this
modification of the redshift distribution to select more distant
galaxies in a redshift survey sample. This is particularly true near
the central part of the lens where very high redshift galaxies
($z>2.5$) have already been found (\cite{ebbels96},
\cite{trager97}, \cite{franx97}, \cite{frye98},
\cite{pello98a}).
\item The two galaxy evolution models presented in this paper
($q_0=0$ and $q_0=0.5$) have been validated in empty fields where the
number counts and redshift distribution of galaxies were compatible
with the observed ones. However they present significant differences
at high redshift for arclets. The $q_0=0$ model with ``Pure
Luminosity Evolution" (PLE) predicts more high redshift arclets than
the $q_0=0.5$ model where number density evolution is included. This
different behavior of the redshift distribution at $z>2$ is
encouraging as it could be a way to distinguish the two scenarios
by
analyzing the redshift distribution of arclets with $z>2$.
Although it is presently difficult to speculate on the true
fraction
of very high redshift objects in well defined samples of arclets, this
population of very high redshift arclets does not seem to dominate the
actual samples. Hence we favor
the $q_0=0.5$ model corresponding to a merging scenario of galaxy
formation and evolution.
\item Because the high redshift domain is sensitive
to many uncertainties included in the evolutionary code (dust
obscuration in the UV, uncertainties in the slope of the IMF for
massive stars or in their UV tracks, influence of the short time scale
phenomena), we must be cautious about the above conclusions.
Furthermore, the galaxy evolution models assume that all the galaxies
form instantaneously at a given redshift, which is a clear limitation
of these models (we have $z_{\rm form} = 4.5$ for $q_0=0$, which gives
for the present day galaxies an age of 16 Gyr, and $z_{\rm form} = 5$
for $q_0=0.5$,
corresponding to an age of 12.2 Gyr). A more phenomenological
model of
galaxy formation that follows in detail the mass evolution of
galaxy
sub-units (\cite{baugh98}) may improve this simple description, and
would likely be the best to be compared with the current data.
\end{itemize}
\subsection{Depletion curves in Abell 370}
\begin{figure}
\centerline{
\psfig{figure=depletion.ps,width=0.5\textwidth,angle=-90}
}
\caption{Depletion of lensed object in different filters and limiting
magnitudes: $U<23$ ({\bf-- $\cdot$ $\cdot$
$\cdot$ --}), $B<24.5$ ({\bf---}), $R<24$ ({\bf-- -- --}), $I<23$
({\bf-- $\cdot$ --})
and $K<21$ ({\bf $\cdot$ $\cdot$ $\cdot$})
for $q_0=0$. The small excess near $r=40''$ is due to the local
magnification
enhancement of an individual cluster galaxy.}
\label{depletion}
\end{figure}
For a simple description of the background population, it is easy to show
that the number density of objects brighter than a given magnitude $m$
behind a lens is:
\begin{equation}
N(<m, A) = N_0 (<m) \, A^{2.5 \alpha - 1}
\end{equation}
where $N_0 (<m)$ is the number density in blank field, $A$ is the
magnification and $\alpha$ is the logarithmic slope of field number
counts. An excess or a lack of objects is then expected for a slope
steeper or shallower than 0.4 respectively. In practice, optical
number counts of field galaxies show a slope smaller than 0.4 in
nearly all wavebands (except in U and B at relatively bright
magnitudes), so a ``depletion'' is expected in most cases, being
more pronounced at longer wavelengths (Fort et al. 1996, Taylor et
al. 1998).
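For concreteness, the magnification relation above can be evaluated
directly; the minimal sketch below (illustrative values only) prints the
expected density ratio $N/N_0 = A^{2.5 \alpha - 1}$ for a few slopes
around the critical value $\alpha=0.4$:
\begin{verbatim}
# Expected density of background objects behind a lens,
# N(<m, A) = N0(<m) * A**(2.5*alpha - 1); alpha = 0.4 is the
# critical slope (no net excess or depletion).
for alpha in (0.3, 0.4, 0.5):
    for A in (2.0, 5.0, 10.0):
        ratio = A ** (2.5 * alpha - 1.0)
        print(f"alpha={alpha:.1f}  A={A:4.1f}  N/N0={ratio:.2f}")
\end{verbatim}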
With our model, it is possible to compute radial depletion curves
instead of global number counts, for any filter or magnitude
range. A clear illustration of the wavelength dependence of the
magnification bias is given by the ratio of the number of lensed
objects expected in a given area in the field of Abell 370 over the
number of field galaxies in the same area (Figure
\ref{depletion}). One can note that the predicted intensity of the
depletion is higher at longer wavelengths, as expected, because of
the
shallower slope of the field galaxies counts. Moreover, because of the
flattening of the counts at faint magnitudes, the depletion is also
very sensitive to the magnitude threshold (see the I curve in Figure
\ref{depletion}). UV counts have a slope
larger than 0.4 at bright magnitudes ($U<23$) but it quickly decreases
at fainter levels. This fast flattening is due to the Lyman break that
goes through the red limit of the U filter at redshift $z\simeq
2.5$. Hence, the lack of objects at faint magnitudes in $U$ produces
the depletion curve shown in Figure \ref{depletion} (see Paper II for
more details of the UV modeling).
The detection of this magnification bias is highly dependent on the
poissonian noise of the background sources and the contamination by
cluster members and foreground objects. This is critical in cluster
cores where the density of objects is very low, of the order of a
few units per arcmin$^2$. Hence, these very poor statistics cannot
bring valuable information on the sources' redshift distribution, as most of
the information is expected in the dip of the depletion curve.
Only massive clusters are able
to show such a depletion curve, but the recovery of the sources'
redshift distribution seems out of reach with present datasets.
\section{Conclusions}
In this paper, we have presented a new analysis of the cluster lens
Abell 370. Thanks to the measurement of the redshift for the B2--B3
gravitational pair ($z=0.806$) and the identification of several new
multiple arclets in the WFPC2 image, a more accurate and well
constrained model of the mass distribution in Abell 370 is proposed,
including galaxy scale components. We
studied the spectral energy distribution (SED) of arclets B3 and A5
which are found to be both young, low metallicity
star forming objects without strong interstellar extinction.
The lens model has been used to study the background population of
galaxies. Taking the
galaxy ellipticity distribution into account induces significant
changes in the predicted redshift distribution of arclets.
The excess of very distant sources found
in previous analyses is now strongly attenuated and arclets at low
redshift are recovered.
For the two prescriptions used on galaxy evolution,
although they both reproduce well the number counts in
empty fields and in cluster lenses, the ``Pure Luminosity Evolution''
model in a low density universe overpredicts the number
of high redshift ($z>2$) galaxies, compared to what is currently observed.
If this effect is observationally confirmed, it may constrain the number
density evolution of galaxies needed to interpret the redshift
distribution of arclets.
To understand further the properties of the background population
through cluster lenses, deep dedicated cluster surveys are needed
in
order to enlarge the number of arclets and have significant
statistical properties. A multi-color approach is favored as it
can
help in constraining the redshift distribution. This is particularly true
for
those unresolved sources for which no shape information is available and
only the spectral energy distribution can discriminate between background
objects and cluster members. In addition, detailed study of their
spectral energy distribution will be useful to characterize properties like
the luminosity function, dust extinction and star formation histories of
distant galaxies (e.g. Pell\'o et al. 1998b).
\acknowledgements
We thank R. Pell\'o, R.S. Ellis and Y. Mellier
for many fruitful discussions and encouragements.
This research has been conducted under the auspices of a European
TMR network programme made possible via generous financial support
from the European Commission ({\tt http://www.ast.cam.ac.uk/IoA/lensnet/}).
This work was also supported by the Programme National de Cosmologie and CNRS.
\section{Introduction}
We report new observations of the debris disc around q$^{1}$\,Eri\ (HD\,10647, HR\,506) using the {\it Herschel} Space Observatory \citep{pilbratt2010}. The observations form part of a larger Key Programme (KP), viz. DUNES\footnote{DUst around NEarby Stars, {\tiny {\ttfamily http://www.mpia-hd.mpg.de/DUNES/}}}, which is described in more detail by \citet{eiroa2010}. Here, we give a brief summary to put the contents of this Letter into context. The DUNES KP is a sensitivity limited study with the goal of discovering and characterising extra-solar analogues of the Edgeworth-Kuiper Belt (EKB) in an unbiased, statistical sample of nearby F, G and K main-sequence stars. The sample is volume limited, with distances $\stackrel {<}{_{\sim}}$\,20\,pc, and spans a broad range of stellar ages, from \about\,0.1 to roughly 10\,Gyr. In addition to the object of the present study (q$^{1}$\,Eri), a number of M- and A-type stars will be observed in collaboration with the DEBRIS-KP team \citep{matthews2010}, implying that the whole sample covers a decade in stellar mass from 0.2 to 2\,$M_{\odot}$.
The PACS \citep{poglitsch2010} observations at 100\,\um\ aim at the detection of the stellar photospheres down to the confusion noise with a signal to noise ratio (S/N) of at least 5. Together with observations in the other {\it Herschel} bands, this will lead to an unprecedented characterisation of discs and will allow detailed theoretical modelling. It is foreseen that fractional luminosities $L_{\rm dust} / L_{\odot}$ of a few times $10^{-7}$ will be reached, i.e. similar to that of the EKB of the Solar System \citep{stern1996,jewitt2009} and a more than order of magnitude improvement over {\it Spitzer} data \citep{bryden2009}.
\begin{figure*}[t]
\resizebox{\hsize}{!}{
\rotatebox{00}{\includegraphics{14601fig1.eps}}
}
\caption{PACS photometric imaging of q$^{1}$\,Eri\ at, from left to right, 70\,\um\ ({\it blue}), 100\,\um\ ({\it green}) and 160\,\um\ ({\it red}). The 70\,\um\ image was taken in chop-nod mode, whereas the other two in scan-map mode. The upper panels display the reduced observations. Below, de-convolved images are shown, using observations of $\alpha$\,Boo for the definition of the PSF. Displayed are the results for ten iterations of a MEM algorithm \citep{hollis1992}. The star defines the origin of the frames, i.e. offset coordinates (0, 0). Within the positional accuracy (2$^{\prime \prime}$\ rms), the stellar position and the centre of the elliptical brightness distributions coincide (see Table\,\ref{results}) and offsets are in seconds of arc. The lowest contours are at 5\% of the maximum values and consecutive steps are also by this amount. At the distance of the star, 20$^{\prime \prime}$\ corresponds to 350\,AU.
}
\label{observed}
\end{figure*}
A main-sequence star with an infrared excess larger by more than three orders of magnitude than this limit is the late F-type star q$^{1}$\,Eri. This will potentially allow us to study the material giving rise to the excess in great detail. The star is known to be accompanied by a giant planet, q$^{1}$\,Eri\,b, orbiting at the distance of 2\,AU \citep[][and references therein]{butler2006}, which corresponds to \asecdot{0}{1} at the distance of the system \citep[17.35\,pc;][]{perryman1997,vanleeuwen2007}. Such a small angle will not be resolved by the observations described in this paper, but q$^{1}$\,Eri\ is also surrounded by a ring or belt system of scattering and thermally emitting dust particles. The extent of the system, which from optical to sub-millimetre wavelengths is up to several tens of arc seconds in size \citep[e.g.,][and references therein]{stapelfeldt2007,liseau2008}, should be readily accessible to PACS at the {\it Herschel} telescope ($\theta_{\rm diff}^{\,\prime \prime} = 7 \times \lambda_{\mu{\rm m}}/100\,\mu{\rm m}$).
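The quoted diffraction scaling can be turned into numbers for the three
PACS bands; the following sketch (illustrative only, using
$1^{\prime \prime} = 17.35$\,AU at the distance of q$^{1}$\,Eri) is a
direct evaluation of the formula above:
\begin{verbatim}
# theta[arcsec] ~ 7 * (lambda / 100 um) for the Herschel/PACS bands.
for lam_um in (70.0, 100.0, 160.0):
    theta = 7.0 * lam_um / 100.0              # FWHM in arcsec
    print(f"{lam_um:5.0f} um -> {theta:4.1f} arcsec "
          f"({theta * 17.35:5.1f} AU at 17.35 pc)")
\end{verbatim}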
This contribution presents the observed properties of the q$^{1}$\,Eri\ system by {\it Herschel}. The results of theoretical model calculations will be communicated by Augereau et al. (in prep.). The observations and data reduction are described in Sect.\,2, with the results presented in Sect.\,3. These are discussed in Sect.\,4 and, finally in Sect.\,5, our main conclusions are briefly summarised.
\begin{table*}
\begin{flushleft}
\caption{\label{results} Brightness, orientation and extent of the q$^{1}$\,Eri\ debris system}
\resizebox{\hsize}{!}{
\begin{tabular}{lcllllll}
\hline
\noalign{\smallskip}
Observation & $\Delta {\rm RA}$, $\Delta {\rm Dec}^{a}$ & Flux$^{b}$ & 2a$^{b}$ & 2b$^{b}$ & PA$^{c}$ & $i^{\,d}$ & Alternate flux \\
ID and Mode & ($^{\prime \prime}$, $^{\prime \prime}$) & (mJy) &($^{\prime \prime}$) & ($^{\prime \prime}$) & ($^{\circ}$) & ($^{\circ}$) & measurement (mJy) \\
\noalign{\smallskip}
\hline \\
1342187142 ~~ - \phantom{1}70 : chop-nod&$+0.9$, $-0.9$& $ 828 \pm 83$ & 13.9 (11.0) & \phantom{1}7.9 (4.4) & 54.4 (60.0) & 55.4 (66.4) & \phantom{1}$859 \pm \phantom{1}6$ {\it Spitzer}/MIPS$^{e}$\\
1342187141~~~ - 100 : chop-nod & $+2$, $0$ & $ 816 \pm 82$ & 15.5 (12.6) & \phantom{1}8.8 (5.1) & 54.4 (56.2) & 55.4 (66.1) & $1080\pm 36$ IRAS FSC\\
134218739/40 - 100 : scan map & $0$, $-1.4$ & $ 810 \pm 81$ & 15.1 (12.5) & \phantom{1}8.8 (4.7) & 56.1 (54.8) & 54.4 (67.9) & $1080\pm 36$ IRAS FSC\\
1342187142 ~~~- 160 : chop-nod & $-1.4$, $+0.9$ & $ 529 \pm 106$ & 18.9 (13.5) & 12.3 (7.8) & 51.3 (48.9) & 49.4 (54.7) & \phantom{1}$453 \pm 50$ {\it Spitzer}/MIPS$^{f}$\\
134218739/40 - 160 : scan map &$+1.4$, $-1.6$ & $ 537 \pm 107$ & 19.3 (13.3) & 12.5 (6.5) & 51.2 (55.7) & 49.6 (60.7) & \phantom{1}$453 \pm 50$ {\it Spitzer}/MIPS$^{f}$\\
\noalign{\smallskip}
\hline
\end{tabular}
}
\end{flushleft}
Notes to the Table: \\
$^{a}$ Offsets of ellipse centre relative to stellar coordinates: proper motion corrected to ICRS 2009.89 \citep[+0.186\,s and \asecdot{$-1$}{055}][]{perryman1997,vanleeuwen2007} \radot{01}{42}{29}{502}, \decdot{$-53$}{44}{28}{06}. The photospheric fluxes are 17\,mJy at 70\,\um, 8\,mJy at 100\,\um\ and 3\,mJy at 160\,\um\ (cf. Fig.\,\ref{SED}). \\
$^{b}$ Flux for Gaussian ellipse with fitted FWHM to major and minor axis, 2a and 2b, respectively. Values in parentheses refer to de-convolved images.\\
$^{c}$ Position angle measured from North over East.\\
$^{d}$ Lower limit to the inclination, where $i=90$$^{\circ}$\ refers to an edge-on geometry.\\
$^{e}$ \citet{trilling2008}.\\
$^{ f}$ \citet{tanner2009}.
\end{table*}
\section{Observations and data reduction}
During this initial observing run, two different modes of observing were executed for test reasons, in order to optimise the efficiency of the programme in terms of observing time and signal-to-noise ratio (S/N). These modes were, respectively, the chop-and-nod mode (70, 100 and 160\,\um), adopted for point source observing, and the scan-map option (100 and 160\,\um), for extended sources (Fig.\,\ref{observed}). These modes are described by \citet{eiroa2010} and in a future technical note.
The reduction was done within the {\it Herschel} interactive processing environment, using HIPE\_v2.0.0\_RC3, and with scripts for the pipeline developed by members of the PACS-ICC\footnote{{\it Herschel} science demonstration phase data processing workshop, 14-16 December 2009, ESAC, Spain.}. At the medium scan speed of 20$^{\prime \prime}$\,s$^{-1}$, two maps along position angles 63$^{\circ}$\ and 117$^{\circ}$, respectively, were obtained in order to minimise striping in the resultant images. At both 100\,\um\ and 160\,\um, the sky noise was lower in the scan-map data, ($2.8$ and $5.2$)\,$\times 10^{-5}$\,Jy\,arcsec$^{-2}$, respectively, as compared to ($3.9$ and $12$)\,$\times 10^{-5}$\,Jy\,arcsec$^{-2}$ for the chop-nod observations. These values are comparable to the surface brightness of the scattering disc at 0.6\,\um\ \citep[$1.3 \times 10^{-5}$\,Jy\,arcsec$^{-2}$, as deduced from data in ][]{stapelfeldt2007}.
The prime calibrator Arcturus \citep[$\alpha$ Boo; e.g.,][]{cohen2005} was observed close in time with q$^{1}$\,Eri\ and was used to provide the instrumental point spread function (PSF)\footnote{See
{\tiny {\ttfamily {http://Herschel.esac.esa.int/AOTsReleaseStatus.shtml}}} \\
{\tiny {\ttfamily {http://Herschel.esac.esa.int/Docs/AOTsReleaseStatus/PACS\_Scan\\
ChopNod\_ReleaseNote\_22Feb2010.pdf}}} \\
{\tiny {\ttfamily {http://Herschel.esac.esa.int/Docs/AOTsReleaseStatus/PACS\_Phot\\
Map\_ReleaseNote\_23Feb2010.pdf}}} }.
The filters at the reference wavelengths of 70, 100 and 160\,\um\ are referred to as {\it blue}, {\it green} and {\it red}, spanning $60-85$\,\um, $85-130$\,\um\ and $130-210$\,\um, respectively \citep{poglitsch2008}. The PSF has a tripolar shape at low intensities, i.e. a few percent of the peak value, and the half-power-width is circular for the 70\,\um\ and 100\,\um\ filters, but somewhat elongated at 160\,\um\ in the scan direction. Currently, the accuracy of the absolute flux calibration is estimated to be better than 20\% in the long-wave and better than 10\% in the short-wave bands.
\section{Results}
The Scan Map AOT release contains the following note: {\it The fluxes are too high and have to be scaled down by the following factors 1.05 in the blue band, 1.09 in the green band 1.29 in the red band}. In Table\,\ref{results}, no colour correction has been applied to the reported fluxes. The internal consistency of the PACS data - at a given wavelength but for different observing modes - is strikingly good and lends confidence to the quality of these data. From the comparison with previous measurements, it is apparent that IRAS fluxes are on the high side, whereas long-wave {\it Spitzer} data are somewhat on the low side.
At the level of the pointing accuracy of {\it Herschel} (2$^{\prime \prime}$\ rms, corresponding to 35\,AU), there is no significant offset of the centres of the elliptical isophotes with respect to the position of the star (Table\,\ref{results}). This uncertainty is much larger than the offset of 8\,AU observed for Fomalhaut \citep{kalas2008}. For the nominal wavelengths of 70\,\um, 100\,\um\ and 160\,\um, the angular resolution is limited to about 6$^{\prime \prime}$, 7$^{\prime \prime}$\ and \asecdot{11}{5}, respectively. The image of q$^{1}$\,Eri\ is therefore clearly resolved only in the long dimension.
The average of the position angle of the elliptical source is PA = 54$^{\circ}$\ and the average of the lower limit to the inclination of an intrinsically circular feature (ring or belt) is $i > 53$$^{\circ}$, with estimated uncertainties of $\sim \pm 5$$^{\circ}$\ ($i=90$$^{\circ}$\ for an edge-on geometry). For the de-convolved images (see below), the disc would be seen more edge-on and in this case, $i > 63$$^{\circ}$. This is consistent with the tilt derived from optical images of the light scattered off the disc \citep[$i=76$$^{\circ}$,][]{stapelfeldt2007}.
\section{Discussion}
\begin{figure}[t]
\resizebox{\hsize}{!}{
\rotatebox{00}{\includegraphics{14601fig2.ps}}
}
\caption{This figure is similar to the SED of q$^{1}$\,Eri\ shown in \citet{liseau2008} with the difference that the PACS data of Table\,\ref{results} are shown by the filled symbols. In addition, the 160\,\um\ flux of {\it Spitzer} \citep{tanner2009} has also been added. As before, the solid curve for the IR excess is that of a single temperature blackbody ($\beta = 0$, where $\kappa_{\nu} \propto \nu^{\,\beta}$), and the dashed line refers to the ring-belt composite model of that paper.
}
\label{SED}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{
\rotatebox{00}{\includegraphics{14601fig3.ps}}
}
\caption{One-dimensional cuts (averages of 5 pxl wide strips), along the major axis, through the 100\,\um\ scan map image, from which the stellar source has been subtracted prior to the de-convolution. The histogram depicts the observed data, whereas the smooth line shows the result of the applied de-convolution algorithm.
}
\label{ring}
\end{figure}
\subsection{The spectral energy distribution (SED) revisited}
Figure\,\ref{SED} is taken from the paper by \citet{liseau2008}, but with the PACS data included. The 70\,\um\ and 100\,\um\ fluxes fall essentially on top of the 60\,K blackbody curve, shown as a solid line. For 160\,\um, the {\it Spitzer} observations of \citet{tanner2009} are also included. These seemed to confirm the ring-belt model in our earlier paper. The new PACS datum is marginally below the blackbody, but also marginally above the composite SED and, at the moment, we leave this issue undecided.
\subsection{Image de-convolution}
From high-S/N data, possible fine-structural details in the image can be retrieved. This requires image sharpening techniques, such as image de-convolution with the known PSF.
The images were de-convolved with the rotated Arcturus-PSF using the algorithm for a maximum entropy method (MEM) developed by \citet{hollis1992}. Guided by experience, we stopped the routine after about a dozen iterations in order not to produce artefacts and/or false super-resolution. More automatic and/or quantitative methods were tried, but were eventually abandoned due to insufficient reliability. In Fig.\,\ref{observed}, the direct observations are compared with the de-convolved images and in Table\,\ref{results}, values measured on the sharpened images are reported.
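For readers wishing to experiment with similar image sharpening, the
sketch below shows a generic iterative de-convolution (Richardson-Lucy,
used here only as a stand-in for the Hollis (1992) MEM routine, which we
do not reproduce); the fixed, small iteration count mirrors the early
stopping described above:
\begin{verbatim}
# Minimal iterative deconvolution sketch (NOT the Hollis 1992 MEM
# algorithm); 2-D numpy arrays for image and PSF are assumed.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=12):
    image = np.asarray(image, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()                 # normalized PSF
    psf_m = psf[::-1, ::-1]               # mirrored kernel
    est = np.full_like(image, image.mean())
    for _ in range(n_iter):               # stop early: avoid artefacts
        conv = fftconvolve(est, psf, mode="same")
        est = est * fftconvolve(image / np.maximum(conv, 1e-12),
                                psf_m, mode="same")
    return est
\end{verbatim}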
In Figure\,\ref{ring}, one-dimensional cuts along the major axis of the observed and de-convolved 100\,\um\ image are displayed. It would have been more natural to select the observation at 70\,\um, having the highest spatial resolution, but the 100\,\um\ scan map data are of considerably higher S/N, outweighing the apparent resolution advantage of the shorter wavelength data. In the analysed sub-frame, the flux was conserved within 7\% by the MEM routine. Prior to the de-convolution, a stellar point source with photospheric flux of 8\,mJy at 100\,\um\ (Table\,\ref{results}) was subtracted from the PACS image. The resulting sharpened image reveals a central {\it broadened plateau} and a central depression with a depth of about 2\%, which is consistent with the debris residing in a ring or belt around the star.
With standard assumptions regarding the emitting grains (astronomical silicates, a blow-out size limit $a_{\rm min}= 0.6$\,\um\ for the F9\,V star and a $-3.5$ power law index for the size distribution) we find that the 100\,\um\ surface brightness profiles along the major and minor axes are well reproduced assuming a disk inclination of about 70$^{\circ}$\ and a two-parameter model for the surface density, $\Sigma(r)$. These parameters are the peak density position, $r_{\rm max}$, and the power law index of the surface density profile for $r > r_{\rm max}$. The best fit to the surface brightness profiles is consistent with a ring-like disc, having values of $r_{\rm max} \sim 85$\,AU and $\Sigma(r > r_{\rm max}) \propto r^{-3}$, respectively. A more elaborate model, with the size distribution computed self-consistently and taking into account the profiles also at other wavelengths, will be presented by Augereau et al. (in prep.).
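The two-parameter surface density used in this fit is simple enough to
state explicitly; the sketch below encodes it (the steep inner rise is
our own assumption, added only so the profile is complete inside
$r_{\rm max}$):
\begin{verbatim}
# Sigma(r) with peak at r_max ~ 85 AU and r^-3 fall-off outside,
# as in the best fit quoted above; the inner power law is assumed.
import numpy as np

def sigma(r, r_max=85.0, p_out=-3.0, p_in=5.0):
    r = np.asarray(r, dtype=float)
    return np.where(r <= r_max,
                    (r / r_max) ** p_in,     # assumed inner rise
                    (r / r_max) ** p_out)    # r^-3 (best fit)

print(sigma(np.array([40.0, 85.0, 170.0])))
\end{verbatim}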
This roughly 40\,AU wide ring or belt at about 85\,AU from the star appears similar to the EKB of the Solar System. Based on an analogy with the debris disc around Fomalhaut and on theoretical expectations, it is quite possible that another gas giant planet, q$^{1}$\,Eri\,c, could be orbiting the star inside the inner belt edge. Given the age of the system, \gapprox\,2\,Gyr, the direct detection of q$^{1}$\,Eri\,c, for instance by means of coronagraphy, can be expected to be hard \citep[see, e.g.,][]{beichman2006}.
\section{Conclusions}
Based on imaging observations with PACS in the three photometric bands at 70\,\um, 100\,\um\ and 160\,\um\ we find that
\begin{itemize}
\item[$\bullet$] the debris around the solar-type star q$^{1}$\,Eri\ has an oval-shaped brightness distribution, the size of which increases with the wavelength;
\item[$\bullet$] the integrated flux density at these wavelengths leads to an SED which is in good agreement with earlier results;
\item[$\bullet$] the very high signal-to-noise of the 100\,\um\ scan map is adequate to sharpen the image using an image de-convolution technique, revealing a ring-like structure with maximum surface density at \about\,85\,AU from the star;
\item[$\bullet$] with a width of about 35 to 45\,AU, this ring or belt around the F9\,V star q$^{1}$\,Eri\ is similar to the Edgeworth-Kuiper Belt around the Sun. This may hint at the presence of another planet, q$^{1}$\,Eri\,c.
\end{itemize}
\acknowledgement{We wish to acknowledge the support by the SNSB, the CNES, the PNP and the MICINN. In addition, we have benefited from HCSS / HSpot / HIPE which are joint developments by the {\it Herschel} Science Ground Segment Consortium, consisting of ESA, the NASA {\it Herschel} Science Center, and the HIFI, PACS and SPIRE consortia.}
\section{Introduction}
Two of the most crucial elements of autonomous driving systems are sensing and planning. Sensing deals with finding a compact representation of the present state of the environment, while planning deals with deciding on what actions to take so as to optimize future objectives. Supervised machine learning techniques are very useful for solving sensing problems. In this paper we describe a machine learning algorithmic framework for the planning part. Traditionally, machine learning approaches for planning are studied under the framework of Reinforcement Learning (RL) --- see \cite{bertsekas1995dynamic,kaelbling1996reinforcement,sutton1998reinforcement,szepesvari2010algorithms} for a general overview and \cite{kober2013reinforcement} for a comprehensive review of reinforcement learning in robotics.
Typically, RL is performed in a sequence of consecutive rounds. At round $t$, the planner (a.k.a. the agent) observes a state, $s_t \in S$, which represents the agent as well as the environment. It then should decide on an action $a_t \in A$. After performing the action, the agent receives an immediate reward, $r_t \in \mathbb{R}$, and is moved to a new state, $s_{t+1}$. As a simple example, consider an adaptive cruise control (ACC) system, in which a self driving vehicle should implement acceleration/braking so as to keep an adequate distance to a preceding vehicle while maintaining smooth driving. We can model the state as a pair, $s_t = (x_t,v_t) \in \mathbb{R}^2$, where $x_t$ is the distance to the preceding vehicle and $v_t$ is the velocity of the car relative to the velocity of the preceding vehicle. The action $a_t \in \mathbb{R}$ will be the acceleration command (where the car slows down if $a_t < 0$). The reward can be some function that depends on $|a_t|$ (reflecting the smoothness of driving) and on $s_t$ (reflecting that we keep a safe distance from the preceding vehicle). The goal of the planner is to maximize the cumulative reward (maybe up to a time horizon or a discounted sum of future rewards). To do so, the planner relies on a policy, $\pi : S \to A$, which maps a state into an action.
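The interaction protocol just described can be summarized in a few lines
of code; the sketch below is generic (the environment and policy shown
are trivial placeholders, not the ACC model):
\begin{verbatim}
# Generic RL interaction loop: observe s_t, act a_t = pi(s_t),
# receive r_t and s_{t+1}; accumulate reward over a horizon T.
import random

def rollout(policy, env_step, s0, T=100):
    s, total = s0, 0.0
    for t in range(T):
        a = policy(s)
        r, s = env_step(s, a)
        total += r
    return total

# placeholder policy/environment, for illustration only
total = rollout(lambda s: 0.0,
                lambda s, a: (-abs(a), s + random.gauss(0.0, 1.0)),
                s0=0.0)
\end{verbatim}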
Supervised Learning (SL) can be viewed as a special case of RL, in which $s_t$ is sampled i.i.d. from some distribution over $S$ and the reward function has the form $r_t=-\ell(a_t,y_t)$, where $\ell$ is some loss function, and the learner observes the value of $y_t$ which is the (possibly noisy) value of the optimal action to take when viewing the state $s_t$.
There are several key differences between the fully general RL model and the specific case of SL. These differences make the general RL problem much harder.
\begin{enumerate}
\item In SL, the actions (or predictions) taken by the learner have no effect on the environment. In particular, $s_{t+1}$ and $a_t$ are independent. This has two important implications:
\begin{itemize}
\item In SL we can collect a sample $(s_1,y_1),\ldots,(s_m,y_m)$ in advance, and only then search for a policy (or predictor) that will have good accuracy on the sample. In contrast, in RL, the state $s_{t+1}$ usually depends on the action (and also on the previous state), which in turn depends on the policy used to generate the action. This ties the data generation process to the policy learning process.
\item Because actions do not affect the environment in SL, the contribution of the choice of $a_t$ to the performance of $\pi$ is local, namely, $a_t$ only affects the value of the immediate reward. In contrast, in RL, actions that are taken at round $t$ might have a long-term effect on the reward values in future rounds.
\end{itemize}
\item In SL, the knowledge of the ``correct'' answer, $y_t$, together with the shape of the reward, $r_t = -\ell(a_t,y_t)$, gives us a full knowledge of the reward for all possible choices of $a_t$. Furthermore, this often enables us to calculate the derivative of the reward with respect to $a_t$. In contrast, in RL, we only observe a ``one-shot'' value of the reward for the specific choice of action we took. This is often called a ``bandit'' feedback. It is one of the main reasons for the need of ``exploration'', because if we only get to see a ``bandit'' feedback, we do not always know if the action we took is the best one.
\end{enumerate}
Before explaining our approach for tackling these difficulties, we briefly describe the key idea behind most common reinforcement learning algorithms. Most of the algorithms rely in some way or another on the mathematically elegant model of a Markov Decision Process (MDP), pioneered by the work of Bellman \cite{bellman1956dynamic,bellman1971introduction}. The Markovian assumption is that the distribution of $s_{t+1}$ is fully determined given $s_t$ and $a_t$. This yields a closed form expression for the cumulative reward of a given policy in terms of the stationary distribution over states of the MDP. The stationary distribution of a policy can be expressed as a solution to a linear programming problem. This yields two families of algorithms: optimizing with respect to the primal problem, which is called policy search, and optimizing with respect to the dual problem, whose variables are called the \emph{value function}, $V^\pi$. The value function determines the expected cumulative reward if we start the MDP from the initial state $s$, and from there on pick actions according to $\pi$. A related quantity is the state-action value function, $Q^\pi(s,a)$, which determines the cumulative reward if we start from state $s$, immediately pick action $a$, and from there on pick actions according to $\pi$. The $Q$ function gives rise to a crisp characterization of the optimal policy (using the so called Bellman's equation), and in particular it shows that the optimal policy is a deterministic function from $S$ to $A$ (in fact, it is the greedy policy with respect to the optimal $Q$ function).
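For concreteness, and as standard background rather than a result of this paper, the (discounted) Bellman optimality equation that underlies this characterization reads
\[
Q^*(s,a) \;=\; \E_{s'}\left[\, r(s,a) + \gamma \max_{a' \in A} Q^*(s',a') \,\right],
\qquad
\pi^*(s) \;=\; \arg\max_{a \in A} Q^*(s,a),
\]
where $\gamma \in (0,1)$ is a discount factor and the expectation is over the next state $s'$.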
In a sense, the key advantage of the MDP model is that it allows us to couple all the future into the present using the $Q$ function. That is, given that we are now in state $s$, the value of $Q^\pi(s,a)$ tells us the effect of performing action $a$ at the moment on the entire future. Therefore, the $Q$ function gives us a local measure of the quality of an action $a$, thus making the RL problem more similar to SL.
Most reinforcement learning algorithms approximate the $V$ function or the $Q$ function in one way or another. Value iteration algorithms, e.g. the $Q$ learning algorithm \cite{watkins1992q}, relies on the fact that the $V$ and $Q$ functions of the optimal policy are fixed points of some operators derived from Bellman's equation. Actor-critic policy iteration algorithms aim to learn a policy in an iterative way, where at iteration $t$, the ``critic'' estimates $Q^{\pi_t}$ and based on this, the ``actor'' improves the policy.
Despite the mathematical elegance of MDPs and the convenience of switching to the $Q$ function representation, there are several limitations of this approach. First, as noted in \cite{kober2013reinforcement}, usually in robotics, we may only be able to find some approximate notion of a Markovian behaving state. Furthermore, the transition of states depends not only on the agent's action, but also on actions of other players in the environment. For example, in the ACC example mentioned previously, while the dynamic of the autonomous vehicle is clearly Markovian, the next state depends on the behavior of the other driver, which is not necessarily Markovian. One possible solution to this problem is to use partially observed MDPs~\cite{white1991survey}, in which we still assume that there is a Markovian state, but we only get to see an observation that is distributed according to the hidden state. A more direct approach considers game theoretical generalizations of MDPs, for example the Stochastic Games framework. Indeed, some of the algorithms for MDPs were generalized to multi-agent games. For example, the minimax-Q learning \cite{littman1994markov} or the Nash-Q learning \cite{hu2003nash}. Other approaches to Stochastic Games are explicit modeling of the other players, which goes back to Brown's fictitious play~\cite{brown1951iterative}, and vanishing regret learning algorithms \cite{hart2000simple,CesaBianchiLu06}. See also \cite{uther1997adversarial, Thrun95a,kearns2002near,brafman2003r}. As noted in \cite{shoham2007if}, learning in a multi-agent setting is inherently more complex than in the single agent setting.
A second limitation of the $Q$ function representation arises when we depart from a tabular setting. The tabular setting is when the number of states and actions is small, and therefore we can express $Q$ as a table with $|S|$ rows and $|A|$ columns. However, if the natural representation of $S$ and $A$ is as Euclidean spaces, and we try to discretize the state and action spaces, we obtain that the number of states/actions is exponential in the dimension. In such cases, it is not practical to employ the tabular setting. Instead, the $Q$ function is approximated by some function from a parametric hypothesis class (e.g. neural networks of a certain architecture). For example, the deep-Q-networks (DQN) learning algorithm of \cite{mnih2015human} has been successful at playing Atari games. In DQN, the state space can be continuous but the action space is still a small discrete set. There are approaches for dealing with continuous action spaces (e.g. \cite{silver2014deterministic}), but they again rely on approximating the $Q$ function. In any case, the $Q$ function is usually very complicated and sensitive to noise, and it is therefore quite hard to learn it. Indeed, it was observed that value based methods rarely work out-of-the-box in robotic applications~\cite{kober2013reinforcement}, and that the best performing methods rely on a lot of prior knowledge and reward shaping \cite{laud2004theory,ng1999policy}. Intuitively, the difficulty in learning $Q$ is that we need to implicitly understand the dynamics of the underlying Markov process.
In the autonomous driving domain we tackle in this paper, the multi-agent adversarial environment leads to non-Markovianity of the natural state representation. Moreover, the natural state and action representations are continuous in nature. Taken together, we found out that $Q$-based learning approaches rarely work out-of-the-box, and require long training time and advanced reward shaping.
A radically different approach has been introduced by Schmidhuber~\cite{schmidhuber1991reinforcement}, who tackled the RL problem using a recurrent neural network (RNN). Following \cite{schmidhuber1991reinforcement}, there have been several additional algorithms that rely on RNNs for RL problems. For example, Bakker~\cite{bakker2001reinforcement} proposed to tackle the RL problem using recurrent networks with the LSTM architecture. His approach still relies on the value function. Sch{\"a}fer~\cite{schafer2008reinforcement} used RNNs to model the dynamics of partially observed MDPs. Again, he still relies on explicitly modeling the Markovian structure. There have been a few other approaches to tackle the RL problem without relying on value functions. Most notable is the REINFORCE framework of Williams~\cite{williams1992simple}. It has been recently successful for visual attention \cite{mnih2014recurrent,xu2015show}. As already noted by \cite{schmidhuber1991reinforcement}, the ability of REINFORCE to estimate the derivative of stochastic units can be straightforwardly combined within the RNN framework.
In this paper we combine Schmidhuber's approach, of tackling the policy learning problem directly using an RNN, with the notions of multi-agent games and robustness to adversarial environments from the game theory literature. Furthermore, we do not explicitly rely on any Markovian assumption. Our approach is described in the next section.
\section{Planning by Prediction}
Throughout, we assume that the state space, $S$, is some subset of $\mathbb{R}^d$, and the action space, $A$, is some subset of $\mathbb{R}^k$. This is the most natural representation in many applications, and in particular, the ones we describe in \secref{sec:experiments}.
Our goal is to learn a policy $\pi: S \rightarrow A$. As is standard in machine learning, we bias ourselves to pick a policy function $\pi$ from a hypothesis class $\mathcal{H}$. Namely, $\mathcal{H}$ is a predefined set of policy functions from which we should pick the best performing one.
In order to learn $\pi$ using the SL framework, one would need a training set of pairs (state,optimal-action). We of course do not have such a training set. Instead, we only have access to a ``simulator'' that can be used to assess the quality of $\pi$. Formally, fixing a horizon $T$, any policy $\pi$ induces a distribution over $\mathbb{R}^T$, such that the probability of $(r_1,\ldots,r_T) \in \mathbb{R}^T$ is the probability to apply our simulator for $T$ steps, while on step $t$ we observe $s_t$, feed the simulator with the action $a_t=\pi(s_t)$, and observe the reward $r_t$. Denoting by $B$ the random bits used by the simulator, we note that we can write the vector $r = (r_1,\ldots,r_T)$ of rewards as a deterministic function $R(B,\pi)$. We use $R_t(B,\pi)$ to denote the $t$'th element of $R(B,\pi)$. We can now formulate the problem of learning the policy $\pi$ as the following optimization problem:
\begin{equation} \label{eqn:policyOptimization}
\max_{\pi \in \mathcal{H}} ~ \E_B\left[ \sum_{t=1}^T R_t(B,\pi) \right] ~,
\end{equation}
where the expectation is over the distribution over $B$.
We assume that the hypothesis class, $\mathcal{H}$, is the set of deep neural networks (DNN) of a certain predefined architecture, and therefore every $\pi \in \mathcal{H}$ is parametrized by a vector of weights, $\theta$. We use $\pi_\theta$ to denote the policy associated with the vector $\theta$.
If we could express $R(B,\pi_\theta)$ as a differential function of $\theta$, we could have utilized the Stochastic Gradient Descent (SGD) approach for maximizing \eqref{eqn:policyOptimization}. That is, starting with an initial $\theta$, at each iteration of SGD we first sample $B$, then we calculate the gradient of $\sum_{t=1}^T R_t(B,\pi_\theta)$ with respect to $\theta$, and finally we update $\theta$ based on this gradient.
Our key observation is that by solving two SL problems, described below, we can approximate $R(B,\pi_\theta)$ by a differential function of $\theta$. Hence, we can implement SGD for learning $\pi_\theta$ directly.
The goal of the first SL problem is to learn a deep neural network (DNN) that represents the mapping from a (state,action) pair into the immediate reward value. We denote this DNN by $\mathrm{DNN}_r$ and it is formally described as a function $\mathrm{DNN}_r : S \times A \to \mathbb{R}$. We shall later explain how to learn $\mathrm{DNN}_r$ using SL, but for now let's just assume that we can do it and have the network $\mathrm{DNN}_r$ such that $\mathrm{DNN}_r(s_t,a_t) \approx r_t$.
The goal of the second SL problem is to learn a DNN that represents the mapping from (state,action) into the next state. Formally, this DNN is the function $\mathrm{DNN}_N : S \times A \rightarrow S$, and for now let's assume we managed to learn $\mathrm{DNN}_N$ in a supervised manner such that $\mathrm{DNN}_N(s_t,a_t) \approx s_{t+1}$.
Equipped with $\mathrm{DNN}_r$ and $\mathrm{DNN}_N$ we can describe the process of generating a random $B$ and calculating $R(B,\pi_\theta)$ as follows. Initially, the simulator picks a seed for its pseudo random number generator and then it determines the initial state $s_1$. At round $t$, the agent receives $s_t$ from the simulator and applies $\pi_\theta$ to generate the action $a_t = \pi_\theta(s_t)$. The simulator receives $a_t$ and generates $r_t$ and $s_{t+1}$. At the same time, the agent applies $\mathrm{DNN}_r$ to generate $\hat{r}_t = \mathrm{DNN}_r(s_t,a_t)$ and applies $\mathrm{DNN}_N$ to generate $\hat{s}_{t+1} = \mathrm{DNN}_N(s_t,a_t)$. Let us denote $\nu_{t+1}= s_{t+1} - \hat{s}_{t+1}$. Therefore, if the simulator receives $\hat{s}_{t+1}$ it can generate $\nu_{t+1}$ and send it to the agent.
A single round of this process is depicted below. The entire process is obtained by repeating the shaded part of the picture $T$ times. Solid arrows represent differentiable propagation of information while dashed arrows represent non-differentiable propagation of information.
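In code, one round of this process looks as follows (a minimal sketch in
PyTorch-style autograd; \texttt{dnn\_r}, \texttt{dnn\_n}, \texttt{pi} and
the black-box \texttt{simulator} are placeholders, and \texttt{detach()}
implements the ``killed'' dashed arrows):
\begin{verbatim}
# Differentiable rollout: gradients flow through pi, DNN_r, DNN_N;
# the simulator's correction nu_{t+1} is treated as a constant.
def rollout_loss(pi, dnn_r, dnn_n, simulator, s1, T):
    s, total = s1, 0.0
    for t in range(T):
        a = pi(s)                        # a_t = pi_theta(s_t)
        r_hat = dnn_r(s, a)              # predicted reward
        s_hat = dnn_n(s, a)              # predictable next state
        nu = simulator.correction(s_hat, a).detach()  # dashed arrow
        s = s_hat + nu                   # exact forward pass
        total = total + r_hat
    return -total                        # minimize negative reward
\end{verbatim}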
\begin{center}
\begin{tikzpicture}[var/.style={minimum size=12mm,circle,draw=blue!50,fill=blue!20,thick},
dnn/.style={minimum size=6mm,rectangle,draw=black!50,fill=black!20,thick},
simulator/.style={minimum size=20mm,rectangle,draw=black!50,fill=yellow!20,thick},
arrow/.style={->,thick},
simarrow/.style={->,thick,draw=red,dashed}]
\node [var] (st) {$s_t$};
\node [dnn] (pi) [above=of st] {$\pi_\theta$};
\draw[arrow] (st) -- (pi);
\node [var] (at) [above=of pi] {$a_t$};
\draw[arrow] (pi) -- (at);
\node [dnn] (DNNn) [right=of pi] {$\mathrm{DNN}_N$};
\draw[arrow] (st) to [bend right=35] (DNNn.south);
\draw[arrow] (at) to [bend left=35] (DNNn.north);
\node[dnn] (DNNr) [above=of at] {$\mathrm{DNN}_r$};
\draw[arrow] (at) -- (DNNr);
\node[var] (hrt) [above=of DNNr] {$\hat{r}_t$};
\draw[arrow] (DNNr) -- (hrt);
\draw[arrow] (st.west) to [bend left=35] (DNNr.west);
\node [var] (hstp) [right=of DNNn] {$\hat{s}_{t+1}$};
\draw[arrow] (DNNn) -- (hstp);
\node [dnn] (plus) [right=of hstp] {$+$};
\node [var] (nut) [below=of plus,yshift=-5mm] {$\nu_{t+1}$};
\draw[arrow] (hstp.east) to (plus.west);
\draw[arrow] (nut.north) to (plus.south);
\node [var] (stp) [right=of st,xshift=60mm] {$s_{t+1}$};
\draw[arrow] (plus.east) to ([xshift=-3mm] stp.north);
\node[simulator] (simtp) [below=of hstp,xshift=-10mm,yshift=-20mm] {Simulator$_{t+1}$};
\draw[simarrow] (hstp.south) to ([xshift=3mm,yshift=1mm] simtp.north);
\draw[simarrow] ([xshift=2mm] at.south) to ([yshift=1mm,xshift=1mm] simtp.north west);
\draw[simarrow] (simtp.east) to ([xshift=-2mm] nut.south);
\node[simulator] (simt) [left=of simtp,xshift=-35mm] {Simulator$_{t}$};
\draw[simarrow] (simt.east) to (simtp.west);
\node[simulator] (simtpp) [right=of simtp,xshift=35mm] {Simulator$_{t+2}$};
\draw[simarrow] (simtp.east) to (simtpp.west);
\begin{pgfonlayer}{background}
\node [fill=black!30,fit=(st) (hrt) (simtp) (nut)] {};
\end{pgfonlayer}
\end{tikzpicture}
\end{center}
Recall that we assume that $\hat{r}_t \approx r_t$ and $\hat{s}_{t+1} \approx s_{t+1}$. If these approximations are exact, then the dashed arrows can be eliminated from the figure above and the entire process of generating the rewards becomes a differentiable recurrent neural network. In such case, we can solve \eqref{eqn:policyOptimization} using the SGD framework, and at each iteration we calculate the gradient by backpropagation in time over the recurrent neural network.
In most situations, we expect $\hat{r}_t$ and $\hat{s}_{t+1}$ to slightly deviate from $r_t$ and $s_{t+1}$. The deviation of $\hat{r}_t$ from $r_t$ is less of an issue in practice, because it is often the case that there is nothing special about the exact reward $r_t$, and maximizing an approximation of it leads to similar performance. Therefore, for the sake of simplicity, we assume that maximizing the sum of $\hat{r}_t$ is sufficiently good.
The more problematic part is the deviation of $\hat{s}_{t+1}$ from $s_{t+1}$. There are several possible sources for this deviation.
\begin{enumerate}
\item \emph{Non-determinism}: in the traditional MDP model, $s_{t+1}$ is a random variable whose distribution is a function of $(s_t,a_t)$. But, the actual value of $s_{t+1}$ is not necessarily a deterministic function of $(s_t,a_t)$.
\item \emph{Non-Markovianity}: it may be the case that the process is not Markovian in the state representation. It will always be Markovian in another representation, that is known to the simulator, but we do not know the Markovian representation or we do not want to model it. For example, in the ACC problem given in the next section, $s_{t+1}$ depends on the acceleration commands of the driver in front of us. While the simulator models this behavior in some complicated way, we do not want to model it and prefer to stick with a simple state representation that does not allow us to predict the acceleration of the other driver, but only allows us to react to the other driver's behavior.
\item \emph{Failures of the learning process}: as we will show, we are going to learn $\mathrm{DNN}_N$ from examples, and we may suffer from the usual inaccuracies of learning algorithms (approximation error, estimation error, and optimization error). As this part is standard we ignore this issue and assume that we have managed to learn $\mathrm{DNN}_N$ sufficiently well.
\end{enumerate}
In any case, despite the fact that $\hat{s}_{t+1}$ can deviate from $s_{t+1}$, we can still apply backpropagation in time in order to calculate an approximate gradient of the cumulative reward w.r.t. $\pi$. In particular, the forward part of the backpropagation is correct, due to the correction made by defining $s_{t+1}$ as a sum of the prediction $\hat{s}_{t+1}$ and the correction term $\nu_{t+1}$ (supplied by the simulator during the forward pass). In the backward part, we propagate the error through the solid arrows given in the figure, but we kill the messages that go through dashed arrows, because we refer to the simulator as a black box. As mentioned previously, we do not impose explicit probabilistic assumptions on $\nu_t$. In particular, we do not require Markovian relation. Instead, we rely on the recurrent network to propagate ``enough'' information between past and future through the solid arrows. Intuitively, $\mathrm{DNN}_N(s_t,a_t)$ describes the predictable part of the near future, while $\nu_t$ expresses the unpredictable aspects, mainly due to the behavior of other players in the environment. The learner should learn a policy that will be robust to the behavior of other players. Naturally, if $\|\nu_t\|$ is large, the connection between our past actions and future reward values will be too noisy for learning a meaningful policy.
As noted in \cite{schmidhuber1991reinforcement}, explicitly expressing the dynamic of the system in a transparent way enables to incorporate prior knowledge more easily. For example, in \secref{sec:experiments} we demonstrate how prior knowledge greatly simplifies the problem of defining $\mathrm{DNN}_N$.
Finally, it is left to explain how we can learn $\mathrm{DNN}_r$ and $\mathrm{DNN}_N$ within the SL framework.
For this, we observe that by relying on the access to the simulator, we have the ability to generate tuples $(s,a,r,s')$ as training examples, where $s$ is the current state, $a$ is the action, $r$ is the reward, and $s'$ is the next state. We note that it is customary to use some elements of exploration in generating the training set. Since this is a standard technique, we omit the details.
Equipped with such training examples, we can learn $\mathrm{DNN}_r$ in the SL framework by extracting examples of the form $((s,a),r)$ from each tuple $(s,a,r,s')$. The key point here is that even though the action $a$ is not necessarily optimal for $s$, it does not pose any problem for the task of learning the mapping from state-action into the correct reward. Furthermore, even though the reward is given in a ``bandit'' manner for the policy learning problem, it forms a ``full information'' feedback for the problem of learning a network $\mathrm{DNN}_r$, such that $\mathrm{DNN}_r(s_t,a_t) \approx r_t$. Likewise, we can learn $\mathrm{DNN}_N$ in the SL framework by extracting examples of the form $((s,a),s')$ from each tuple $(s,a,r,s')$. Again, the fact that $a$ is not the optimal action for $s$ does not pose any problem for the task of learning the near future, $s'$, from the current state and action, $(s,a)$.
It is also possible to simultaneously learn $\mathrm{DNN}_r$, $\mathrm{DNN}_N$, and $\pi_\theta$, by defining an objective that combines the cumulative reward with supervised losses of the form $\|\hat{s}_{t+1}-s_{t+1}\|^2$ and $(\hat{r}_t-r_t)^2$. In the experimental section we report results for both separate and joint training.
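A minimal sketch of this supervised stage is given below (PyTorch is an
assumed choice; \texttt{dnn\_r} and \texttt{dnn\_n} are any modules
mapping the concatenated $(s,a)$ to $\mathbb{R}$ and to $S$, respectively):
\begin{verbatim}
# One SGD step on logged tuples (s, a, r, s'): squared losses for
# the reward network and the next-state network, as described above.
import torch

def sl_step(dnn_r, dnn_n, opt, s, a, r, s_next):
    sa = torch.cat([s, a], dim=-1)
    r_hat = dnn_r(sa).squeeze(-1)
    s_hat = dnn_n(sa)
    loss = ((r_hat - r) ** 2).mean() \
         + ((s_hat - s_next) ** 2).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
\end{verbatim}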
\subsection{Robustness to Adversarial Environment}
Since our model does not impose probabilistic assumptions on $\nu_t$, we can consider environments in which $\nu_t$ is being chosen in an adversarial manner. Of course, we must make some restrictions on $\nu_t$, otherwise the adversary can make the planning problem impossible. A natural restriction is to require that $\|\nu_t\|$ is bounded by a constant. Robustness against adversarial environment is quite useful in autonomous driving applications. We describe a real world aspect of adversarial environment in \secref{sec:experiments}.
Here, we show that choosing $\nu_t$ in an adversarial way might even speed up the learning process, as it can focus the learner toward the robust optimal policy. We consider the following simple game. The state is $s_t \in \mathbb{R}$, the action is $a_t \in \mathbb{R}$, and the immediate loss function is $0.1 |a_t| + [|s_t| - 2]_+$, where $[x]_+ = \max\{x,0\}$ is the ReLU function. The next state is $s_{t+1} = s_t + a_t + \nu_t$, where $\nu_t \in [-0.5,0.5]$ is chosen by the environment in an adversarial manner.
It is possible to see that the optimal policy can be written as a two layer network with ReLU: $a_t = -[s_t - 1.5]_+ + [-s_t - 1.5]_+$. Observe that when $|s_t| \in (1.5,2]$, the optimal action has a larger immediate loss than the action $a=0$. Therefore, the learner must plan for the future and cannot rely solely on the immediate loss.
Observe that the derivative of the loss w.r.t. $a_t$ is $0.1
\,\mathrm{sign}(a_t)$ and the derivative w.r.t. $s_t$ is $1[|s_t| > 2] \, \mathrm{sign}(s_t)$.
Suppose we are in a situation in which $s_t \in (1.5,2]$. The adversarial choice of $\nu_t$ would be to set $\nu_t = 0.5$, and therefore, we will have a non-zero loss on round $t+1$, whenever $a_t > 1.5 - s_t$. In all such cases, the derivative of the loss will back-propagate directly to $a_t$. We therefore see that the adversarial choice of $\nu_t$ helps the learner to get a non-zero back-propagation message in all cases for which the choice of $a_t$ is sub-optimal.
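This dynamic is easy to simulate; the sketch below (the horizon and the
simple state-dependent adversary are illustrative choices) compares the
robust policy above with the lazy policy $a_t \equiv 0$:
\begin{verbatim}
# Toy game: loss 0.1|a| + [|s|-2]_+, dynamics s' = s + a + nu,
# adversarial nu in [-0.5, 0.5] pushing s away from the origin.
def relu(x): return max(x, 0.0)

def opt_policy(s):                  # -[s-1.5]_+ + [-s-1.5]_+
    return -relu(s - 1.5) + relu(-s - 1.5)

def play(policy, s=1.8, T=50):
    total = 0.0
    for _ in range(T):
        a = policy(s)
        total += 0.1 * abs(a) + relu(abs(s) - 2.0)
        nu = 0.5 if s >= 0 else -0.5    # simple adversary
        s = s + a + nu
    return total

print(play(opt_policy), play(lambda s: 0.0))
\end{verbatim}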
\section{Example Applications} \label{sec:experiments}
The goal of this section is to demonstrate some aspects of our approach on two toy examples: adaptive cruise control (ACC) and merging into a roundabout.
\subsection{The ACC Problem}
In the ACC problem, a host vehicle is trying to keep an adequate distance of 1.5 seconds to a target car, while driving as smooth as possible. We provide a simple model for this problem as follows.
The state space is $\mathbb{R}^3$ and the action space is $\mathbb{R}$. The first coordinate of the state is the speed of the target car, the second coordinate is the speed of the host car, and the last coordinate is the distance between host and target (namely, location of the host minus location of the target on the road curve). The action to be taken by the host is the acceleration, and is denoted by $a_t$. We denote by $\tau$ the difference in time between consecutive rounds (in the experiment we set $\tau$ to be 0.1 seconds).
Denote $s_t = (v^{\textrm{target}}_t , v^{\textrm{host}}_t, x_t)$ and denote by $a^{\textrm{target}}_t$ the (unknown) acceleration of the target. The full dynamics of the system can be described by:
\begin{align*}
v^{\textrm{target}}_t &= [v^{\textrm{target}}_{t-1} + \tau\,a^{\textrm{target}}_{t-1} ]_+\\
v^{\textrm{host}}_t &= [v^{\textrm{host}}_{t-1} + \tau\,a_{t-1} ]_+ \\
x_t &= [x_{t-1} + \tau\,(v^{\textrm{target}}_{t-1} - v^{\textrm{host}}_{t-1} ) ]_+
\end{align*}
This can be described as a sum of two vectors:
\begin{align*}
s_t &= ( [s_{t-1}[0] + \tau a^{\textrm{target}}_{t-1}]_+ , [s_{t-1}[1] + \tau a_{t-1}]_+ , [s_{t-1}[2] + \tau(s_{t-1}[0]-s_{t-1}[1])]_+ ) \\
&= \underbrace{( s_{t-1}[0] , [s_{t-1}[1] + \tau a_{t-1}]_+ , [s_{t-1}[2] + \tau(s_{t-1}[0]-s_{t-1}[1])]_+ )}_{\mathrm{DNN}_N(s_{t-1},a_{t-1})} + \underbrace{([s_{t-1}[0] + \tau a^{\textrm{target}}_{t-1}]_+ - s_{t-1}[0] , 0 , 0)}_{\nu_t}
\end{align*}
The first vector is the predictable part and the second vector is the unpredictable part.
The reward on round $t$ is defined as follows:
\[
-r_t ~=~ 0.1 \, |a_t| ~+~ [ | x_t / x^*_t - 1 | - 0.3 ]_+ ~~~~\textrm{where}~~~~ x^*_t = \max\{1, 1.5\, v^{\textrm{host}}_t\}
\]
The first term above penalizes for any non-zero acceleration, thus encourages smooth driving. The second term depends on the ratio between the distance to the target car, $x_t$, and the desired distance, $x^*_t$, which is defined as the maximum between a distance of $1$ meter and brake distance of 1.5 seconds. Ideally, we would like this ratio to be exactly $1$, but as long as this ratio is in $[0.7,1.3]$ we do not penalize the policy, thus allowing the car some slack (which is important for maintaining a smooth drive).
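Putting the dynamics and reward together, one step of the ACC
environment can be sketched as follows (a direct transcription of the
equations above; $a^{\textrm{target}}$ is whatever the other driver
does and is supplied from outside):
\begin{verbatim}
# One ACC step: s = (v_target, v_host, x), tau = 0.1 s.
def relu(x): return max(x, 0.0)

def acc_step(s, a, a_target, tau=0.1):
    v_tgt, v_host, x = s
    x_star = max(1.0, 1.5 * v_host)          # desired distance
    reward = -(0.1 * abs(a) + relu(abs(x / x_star - 1.0) - 0.3))
    s_next = (relu(v_tgt + tau * a_target),
              relu(v_host + tau * a),
              relu(x + tau * (v_tgt - v_host)))
    return s_next, reward
\end{verbatim}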
\subsection{Merging into a Roundabout}
In this experiment, the goal of the agent is to pass a roundabout.
An episode starts when the agent is approaching the bottom entrance of the roundabout. The episode ends when the agent reaches the second exit of the roundabout, or after a fixed number of steps. A successful episode is measured first by keeping a safe distance from all other vehicles in the roundabout at all times. Second, the agent should finish the route as quickly as possible. And third, it should adhere to a smooth acceleration policy. At the beginning of the episode, we randomly place $N_T$ target vehicles on the roundabout.
To model a blend of adversarial and typical behavior, with probability $p$, a target vehicle is modeled by an ``aggressive'' driving policy that accelerates when the host tries to merge in front of it. With probability $1-p$, the target vehicle is modeled by a ``defensive'' driving policy that decelerates and lets the host merge in. In our experiments we set $p=0.5$. The agent has no information about the type of the other drivers. These are chosen at random at the beginning of the episode.
We represent the state as the velocity and location of the host (the agent), and the locations, velocities, and accelerations of the target vehicles. Maintaining target accelerations is vital in order to differentiate between aggressive and defensive drivers based on the current state.
All target vehicles move on a one-dimensional curve that outlines the roundabout path. The host vehicle moves on its own one-dimensional curve, which intersects the targets' curve at the merging point, and this point is the origin of both curves. To model reasonable driving, the absolute value of all vehicles' accelerations are upper bounded by a constant. Velocities are also passed through a ReLU because driving backward is not allowed.
Note that by not allowing driving backward we make long-term planning a necessity (the agent cannot regret its past actions).
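The kinematic constraints just mentioned amount to a clip and a ReLU per
vehicle per step; a minimal sketch follows (the bound \texttt{A\_MAX} is
an assumed constant, not taken from the paper):
\begin{verbatim}
# Target-vehicle kinematics on its 1-D curve: bounded acceleration,
# ReLU velocity (no driving backward).
A_MAX = 3.0   # m/s^2, illustrative bound

def vehicle_step(pos, vel, accel, tau=0.1):
    accel = max(-A_MAX, min(A_MAX, accel))   # |a| <= A_MAX
    vel = max(0.0, vel + tau * accel)        # no reversing
    return pos + tau * vel, vel
\end{verbatim}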
\begin{figure}
\begin{center}
\includegraphics[width=0.3\textwidth]{demo1.png} ~
\includegraphics[width=0.3\textwidth]{demo2.png}
\end{center}
\caption{Screenshots from the game. The agent is the car in red. Target vehicles are in blue (aggressive cars) and in green (defensive cars). The agent doesn't observe the type of the target cars. It should infer it from their position and acceleration. On the left, the agent correctly understands that the approaching car is aggressive and therefore stops and waits. On the right we see a successful merge.} \label{fig:demo}
\end{figure}
Recall that we decompose the next state, $s_{t+1}$, into a sum of a predictable part, $\mathrm{DNN}_N(s_t,a_t)$, and a non-predictable part, $\nu_{t+1}$. In our first experiment, we let $\mathrm{DNN}_N(s_t,a_t)$ be the dynamics of locations and velocities of all vehicles (which are well defined in a differentiable manner), while $\nu_{t+1}$ is the targets' acceleration.
It is easy to verify that $\mathrm{DNN}_N(s_t,a_t)$ can be expressed as a combination of ReLU functions over an affine transformation, hence it is differentiable with respect to $s_t$ and $a_t$. The vector $\nu_{t+1}$ is defined by a simulator in a non-differentiable manner, and in particular implements aggressive behavior for some targets and defensive behavior for other targets. Two frames from the simulator are shown in Figure~\ref{fig:demo}. As can be seen in the supplementary videos\footnote{\url{http://www.mobileye.com/mobileye-research/long-term-planning-by-short-term-prediction/}}, the agent learns to slow down as it approaches the entrance of the roundabout. It also perfectly learned to give way to aggressive drivers, and to safely continue when merging in front of defensive ones.
Our second experiment is more ambitious: we do not tell the network the function $\mathrm{DNN}_N(s_t,a_t)$. Instead, we express $\mathrm{DNN}_N$ as another learnable part of our recurrent network. Besides the rewards for the policy part, we add a loss term of the form $\|\mathrm{DNN}_N(s_t,a_t) - s_{t+1}\|^2$, where $s_{t+1}$ is the actual next state as obtained by the simulator. That is, we learn the prediction of the near future, $\mathrm{DNN}_N$, and the policy that plan for the long term, $\pi_\theta$, concurrently. While this learning task is more challenging, as can be seen in the supplementary videos, the learning process still succeeds.
\section{Discussion}
We have presented an approach for learning driving policies in the presence of other adversarial cars using recurrent neural networks. Our approach relies on partitioning of the near future into a predictable part and an unpredictable part. We demonstrated the effectiveness of the learning procedure for two simple tasks: adaptive cruise control and roundabout merging. The described technique can be adapted to learning driving policies in other scenarios, such as lane change decisions, highway exit and merge, negotiation of the right of way in junctions, yielding for pedestrians, as well as complicated planning in urban scenarios.
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:intro}
RUN 1 of the LHC culminated in the discovery of the Higgs boson at 125
GeV. After the Higgs discovery, the most important question is,
naturally, where is the new physics beyond the Standard Model (SM),
and how will it manifest itself? Minimal additions to the SM could be revealed as an
unexpected excess in dileptons, $W^+W^-$, $ZZ$, diphotons, $t\bar{t}$
or $b \bar{b}$, indicating the presence of a new boson
resonance. More involved new physics could also appear as missing
energy, or any other signals indicating the presence of
supersymmetry or extra-dimensions or other exotic particles. But also, new physics could
manifest itself indirectly. In particular it could affect the production
cross section and decay widths of the Higgs boson, expected to be
measured with increased precision during the current RUN2 of the LHC. There are already
several promising signals in the RUN 1 data indicating possible
deviations from the SM expectations. In particular, both ATLAS and CMS
report a possible increase in the signal strength of the $t{\bar t}h$
associated production in the LHC data. Of particular interest
from RUN 1 at CMS and ATLAS are the same-sign dilepton (SS2$l$) and
trilepton ($3l$) signals coming from leptonic Higgs decays in the associated
$t{\bar t}h$ production events.
The best fit signal strengths are found to be
$\mu_{SS2l}=5.3^{+2.1}_{-1.8}$ and $\mu_{3l}= 3.1^{+2.4}_{-2.0}$ at
CMS \cite{Khachatryan:2014qaa}, and $\mu_{SS2l}=2.8^{+2.1}_{-1.9}$ and
$\mu_{3l}= 2.8^{+2.2}_{-1.8}$ at ATLAS \cite{Aad:2015iha}.
These leptonic excesses are associated to the channels $tt(h\to
WW^*)$, $tt(h\to ZZ^*)$ and $tt(h\to \tau \tau)$ where one of the tops
decays leptonically.
Within the preliminary results of RUN 2 in those same leptonic channels, both ATLAS and CMS still
report excesses with, for example, $\mu_{SS2l}=1.9^{+0.9}_{-0.8}$ at CMS
\cite{CMS:2016vqb} and $\mu_{SS2l}=4.0^{+2.1}_{-1.7}$ at
ATLAS \cite{ATLAS:2016awy}.
The most recent preliminary results reported by CMS in the $t{\bar t}h$
associated production searches make use of an integrated
luminosity of 35.9 fb$^{-1}$ and seem to show mixed results. In the leptonic
channels ($W^+W^-$ and $ZZ$ channels) they still show an enhancement of $1.5 \pm0.5$
times the SM prediction, with an observed (expected) significance of
3.3 $\sigma$ (2.5 $\sigma$) obtained from combining these results
with the 2015 data \cite{CMS:2017vru}. On the other hand in the
$h\to\tau\tau$ decay channel search, a slight suppression of $0.72^{+0.62}_{-0.53}$
times the SM prediction is found, with an observed (expected) significance of
$1.4\sigma$ ($1.8\sigma$) \cite{CMS:2017lgc}. Note that, unlike the $W^+W^-$ and $ZZ$ decays, this last
signal is sensitive to both the top-Higgs Yukawa and
the $\tau$-Higgs Yukawa couplings, and thus enhancements or suppressions are possible as long as there are
variations in {\it either} the top quark and the $\tau$ lepton Yukawa couplings.
All measurements are still hindered by having few events so
far, but
nevertheless, should these tantalizing signals survive more precise measurements at higher
luminosities, they will provide the much awaited signals for new
physics. We summarize relevant production and decay channels in Table
\ref{tab:HIGGS_ttH_12} with the overall combinations obtained by the ATLAS
and CMS collaborations, for the signal strengths associated to
each Higgs production and decay channels.
\begin{table}[tb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Production Mode & Channel & RUN-1 \cite{Khachatryan:2016vau} & Production Mode & Channel & ATLAS RUN-2 & CMS RUN-2 \\
\hline
\hline
$ggh$ & $\gamma \gamma$ & $1.1^{+0.23}_{-0.22}$& $ggh$ & $\gamma \gamma$ & $0.62^{+0.30}_{-0.29}$ \cite{ATLAS:2016hru} & $0.77^{+0.25}_{-0.23}$ \cite{CMS:2016ixj}
\\ \cline{2-5}
&$WW^*$ & $0.84^{+0.17}_{-0.17} $
&&$WW^*$ & - & -
\\ \cline{2-5}
& $ZZ^*$ & $1.13^{+0.34}_{-0.31} $ &
& $ZZ^*$ & $1.34^{+0.39}_{-0.33} $ \cite{ATLAS:2016hru} & $0.96^{+0.44}_{-0.33} $ \cite{CMS:2016ilx}
\\ \hline
$ t{\bar t}h$ &$\gamma \gamma$ & $2.2^{+1.6}_{-1.3}$& $ t{\bar t}h$ &$\gamma \gamma, b{\bar b}, $ leptons & $1.8^{+0.7}_{-0.7}$ \cite{ATLAS:2016axz} & - \\ \hline
& $b {\bar b}$& $1.15^{+0.99}_{-0.94}$& &$WW^\star ,ZZ^\star, \tau^+\tau^-$ & & $2.0^{+0.8}_{-0.7}$ \cite{CMS:2016vqb} \\ \hline
& $W W^\star$& $5.0^{+2.6}_{-2.2}$& & & &
\\ \hline
\end{tabular}
\end{center}
\caption{
\label{tab:HIGGS_ttH_12}
Higgs signal strengths used in our analysis, from $ggh$ and $ t{\bar t}h$
production modes measured at the LHC from RUN 1 (combined $\sqrt{s}=7$
and $8$ TeV results) and RUN 2 ($\sqrt{s}=13$ TeV).
}
\end{table}
One possibility to explain the SS2$l$ excess,
is that it could be due to a modified Higgs coupling to the SM
top quark, resulting in an enhanced $t{\bar t}$ ($h \to$ multileptons)
production. A simple explanation put forward to explain this latest
possible signal of physics beyond the SM has been to
invoke the presence of vector-like quarks
\cite{Angelescu:2015kga}. Previous studies have adopted
an effective theory approach, involving generic couplings and mixings
with the third generation quarks, by which they induce modifications
of the Yukawa couplings of the top and bottom quarks. The scenario has
been put forward to explain deviations from SM expectations in the
forward-backward asymmetry in $b$ decays $A_{\rm FB}^b$ and the
enhancement of the $pp \to t {\bar t}h$ cross section at the
LHC. Mixing with the additional states in the bottom sector allows for
a sufficiently large increase of the $Zb_R{\bar b}_R$ coupling to
explain the forward-backward anomaly, as well as imply new effects in
Higgs phenomenology \cite{Choudhury:2001hs,vectorlike}.
The mixing could provide a strong enhancement of the $t{\bar t}h$
Yukawa coupling, which would explain an increase of the cross section
at the LHC. In this scenario, rates for the loop-induced processes
stay SM-like due to either small vector-like contributions or
compensating effects between fermion mixing and loop
contributions. For this to be a viable scenario, vector-like quark
masses of order 1-2 TeV are required, still safe from the LHC lower
limits on their masses, $m_{\rm VLQ} \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 800$ GeV
\cite{Aad:2015mba}.\footnote{Alternative explanations involving
supersymmetric partners have also been put forward
\cite{Huang:2015fba}, as well as early studies on the implications
on some coefficients of operators of the effective lagrangian
\cite{Okada}.}
In Section \ref{sec:!D1S} of this article, we first revisit the SM augmented by the addition of
one vector-like quark doublet and two singlets (one top-like and one
bottom-like) and review the mixing with the third family of quarks.
In Sec. \ref{subsec:braneHiggs}, we then show that there is a specific
region of parameter space where large corrections to the top Yukawa
coupling (caused by contributions from the new vector-like quarks) do not cause large
corrections to the radiative coupling of the Higgs and gluons.
We show that in that limit, Higgs signal strengths can be simply
parametrized in terms of four variables only, related to the top and
bottom quark shifts in Yukawa couplings, which we analyze in
Sec. \ref{subsec:Yukawas}, in terms of the parameters of the model.
Based on these results, we present simple predictions between signal strength in
Higgs production (through gluon fusion and $t {\bar t}h$) and decays
into $\gamma \gamma,\, WW$ and $ZZ$, in Sec. \ref{subsec:pheno}. As seen in
that subsection, large regions of parameter space are excluded by both
theoretical considerations and experimental constraints. Finally, in
Sec \ref{sec:RS} we describe how to reproduce our scenario within the
conventional Randall Sundrum model and then summarize our findings in
Sec. \ref{sec:conclusion}.
\section{Top and Bottom Mirrors: a doublet and two singlets}
\label{sec:!D1S}
The simple scenario that we wish to consider contains the usual SM
gauge groups and matter fields, with the addition of a
vector-like quark $SU(2)$ doublet and two vector-like quarks
$SU(2)$ singlets, one with up-type gauge charge and another with
down-type gauge charge. They can be regarded
as top or bottom partners as we will consider that their Yukawa couplings
are large. As we will show in the next section, this structure can appear naturally in models of warped
extra dimensions with the Higgs localized near the TeV brane, and with
fermions in the bulk. The presence of brane kinetic terms can lower
significantly the mass of some of the heavy Kaluza Klein fermions \cite{Frank:2016vtv}. The
rest of the KK fields decouple due to their heavy masses , giving rise
to something similar to the simple setup considered here.
We denote
$ q^0_L\equiv \begin{pmatrix}
t^0_L \\
b^0_L
\end{pmatrix} $
as the SM third generation doublet,
and $t^0_R$ as the SM right handed top. Using similar notation we
define $Q_{L,R}\equiv\begin{pmatrix}
Q^t_{L,R} \\
Q^b_{L,R}
\end{pmatrix}$
as the new vector-like quark doublet, $T_{R,L}$ as the new
vector-like up-type quark singlet, and $B_{L,R}$ as the new
vector-like down-type singlet.
In principle we should also consider the mixings with the $up$ and
$charm$ quarks, and $down$ and $strange$ quarks of the SM when writing down the most general Yukawa
couplings in the up and down sectors.
Without any additional assumption or theory input, we should write down the most general Yukawa
couplings between SM quarks and the new vector-like quarks, leading to
$5\times5$ fermion mass matrices. In models of warped extra dimensions
with bulk fermions, the couplings between up or charm and heavy fermions are suppressed by factors of
order $\sqrt{m_f/m_t}$ with respect to the couplings to the top,
which are of order 1, and so we will just neglect those terms, leading to
a much simpler $3 \times 3$ fermion mass matrix (and similarly in
the down sector with the bottom quark).
The mass and interaction Lagrangian in the top sector, including
its Yukawa couplings with the SM-like Higgs doublet $\tilde{H}$, can then be written as
\bea
{\cal L}_{mass}&=& Y_t^0\ \overline{q}_L\tilde{H} t_R +Y_{qT}\ \overline{q}_L\tilde{H}T_R
+Y_{Qt}\ \overline{Q}_L\tilde{H}t_R + Y^t_{1} \overline{Q}_L\tilde{H} T_R +
Y^t_{2} \overline{Q}_R \tilde{H} T_L \non\\
&& + M_Q \overline{Q}_L Q_R + M_T \overline{T}_LT_R,
\eea
with a similar expression for the bottom sector.
After electroweak symmetry breaking, the Yukawa couplings induce off-diagonal terms
into the fermion mass matrix. In the basis defined
by the vectors $(\overline{q}_L,\overline{Q}_L,\overline{T}_L)$ and $(t_R,Q_R,T_R)$
we can write the heavy quarks mass matrix as
\bea
{\bf M_t}=\begin{pmatrix}
v Y_t^0 &0& v Y_{qT}\\
v Y_{Qt}&M_Q& vY^t_{1}\\
0&vY^t_{2} &M_T
\end{pmatrix}.\label{Mt}
\eea
where, in general, all entries are complex\footnote{We can eliminate
five phases from the mass matrix ${\bf M_t}$ through phase redefinitions. We
keep the notation general, since the phases regroup together easily
in all the expressions.}
and where $v$ is the Higgs vacuum expectation value (VEV).
We can also express the bottom sector heavy quark mass matrix as
\bea
{\bf M_b}=\begin{pmatrix}
vY_b^0 &0& vY_{qB}\\
vY_{Qb}&M_Q& vY^b_{1}\\
0&vY^b_{2}&M_B
\end{pmatrix}~,
\label{Mb}
\eea
where again all entries can be complex and the value of $M_Q$ is the
same in both ${\bf M_t}$ and ${\bf M_b}$.
The associated top and bottom Yukawa coupling matrices are
\bea
{\bf \tilde{Y}_t}=\begin{pmatrix}
Y_t^0 &0& Y_{qT} \\
Y_{Qt} &0& Y^t_{1} \\
0&Y^t_{2} &0
\end{pmatrix}\ \ \ {\rm and}\ \ \ \ {\bf \tilde{Y}_b}=\begin{pmatrix}
Y_b^0 &0& Y_{qB} \\
Y_{Qb} &0& Y^b_{1} \\
0&Y^b_{2} &0
\end{pmatrix}. \label{Yt}
\eea
The mass matrices ${\bf M_t}$ and ${\bf M_b}$ are diagonalized by bi-unitary transformations,
${\bf V_t}^\dagger_L{\bf M_t}{\bf V_t}_R={\bf M_t}^{diag}$, and
${\bf V_b}^\dagger_L{\bf M_b} {\bf V_b}_R={\bf M_b}^{diag}$.
At the same time, the Higgs Yukawa couplings are obtained after
transforming the Yukawa matrices into the physical basis,
${\bf V_t}^\dagger_L {\bf \tilde{Y}_t}{\bf V_t}_R={\bf Y_t}^{phys}$
and ${\bf V_b}^\dagger_L {\bf \tilde{Y}_b}{\bf V_b}_R={\bf Y_b}^{phys}$.
\subsection{Higgs Production in the {\it Brane Higgs Limit}}
\label{subsec:braneHiggs}
In the physical basis, the top quark mass and the top Yukawa coupling
(the first entries in the physical mass matrix and the physical Yukawa
matrix) are not related anymore by the SM relationship $m_t^{phys}= v
y_t^{SM}$ \cite{Azatov:2009na} (with $v$ normalized to $v=174$ GeV for simplicity).
The same goes for the bottom quark, and we thus define the
shifts, $\delta y_t$ and $\delta y_b$, between the SM and the physical
Yukawa couplings, due to the diagonalization, as
\bea
\label{deltayt}
y^{phys}_t = y^{SM}_t - \delta y_t
\eea
and
\bea
\label{deltayb}
y^{phys}_b = y^{SM}_b - \delta y_b.
\eea
Later we will give an approximate expression of these shifts in terms
of the model parameters of Eqs.~(\ref{Mt}) and (\ref{Mb}). But before that, we
consider the radiative coupling of the Higgs to gluons. This coupling
depends on the physical Yukawa couplings $y_{nn}$ of all
the fermions running in the loop and on their physical masses
$m_n$. The real and imaginary parts of the couplings (the scalar and
pseudoscalar parts) contribute to the cross section through different loop functions,
$A^S_{1/2}$ and $A^P_{1/2}$, as they generate the two operators $h
G_{\mu\nu}G^{\mu\nu}$ and $h G_{\mu\nu}\tilde{G}^{\mu\nu}$.
\begin{figure}[t]
\center
\begin{center}
\includegraphics[height=4.cm]{GluGluHiggs.pdf}
\end{center}
\vspace{.1cm}
\caption{Feynman diagram for the production cross section $gg \to h$
in a setup with new vector-like fermions $Q$, $T$ and $B$.
} \label{fig:decayhgg}
\vspace{.4cm}
\end{figure}
The cross section, depicted in Fig. \ref{fig:decayhgg}, is
\bea
\sigma_{gg\rightarrow h} = {\alpha_s^2 m_h^2\over 576 \pi} \left[|c^S_{ggh}|^2 + |c^P_{ggh}|^2\right]\ \delta(s-m_h^2)
\eea
where
\bea
c^S_{ggh} =\sum^3_{n=1} \re\left({\frac{y_{nn}}{m_{n}}}\right) A^S_{1/2}(\tau_n)
\hspace{.6cm} {\rm and} \hspace{.6cm}
c^P_{ggh} = \sum^3_{n=1} {\rm Im}\left(\frac{y_{nn}}{m_{n}}\right) A^P_{1/2}(\tau_n)
\eea
with $\ \tau_n = m^2_h/4m^2_n\ $ and with the loop functions
$A^S_{1/2}(\tau)$ and $A^P_{1/2}(\tau)$ as defined in \cite{Gunion:1989we}.
Note that we use a normalization of the loop functions such that for very heavy quarks
with masses $m_n$ much greater than the Higgs mass
$m_h$ (i.e. when $\tau$ is very small) they behave
asymptotically as $\displaystyle \lim_{\tau
\to 0} A^S_{1/2} =1 \hspace{.2cm} \mbox{and} \hspace{.2cm}
\lim_{\tau \to 0} A^P_{1/2} =3/2.$ On the other hand, for light quarks (all the SM quarks
except top and to some extent, bottom), the loop functions essentially vanish
since $\displaystyle \lim_{\tau \to \infty} A^S_{1/2}
= \lim_{\tau \to \infty} A^P_{1/2} = 0$, and we thus neglect
contributions from the light SM quarks and consider only the
effect of the top, the bottom and the four remaining physical heavy quarks.
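For concreteness, the loop functions can be evaluated numerically. The Python sketch below uses the standard triangle function $f(\tau)$ (as in \cite{Gunion:1989we}) in the normalization quoted above; the quark mass values are illustrative:
\begin{verbatim}
import numpy as np

def f(tau):
    """Scalar triangle function; complex branch for tau > 1."""
    if tau <= 1:
        return np.arcsin(np.sqrt(tau))**2
    r = np.sqrt(1 - 1/tau)
    return -0.25*(np.log((1 + r)/(1 - r)) - 1j*np.pi)**2

def A_S(tau):                  # normalized so A_S -> 1 as tau -> 0
    return 1.5*(tau + (tau - 1)*f(tau))/tau**2

def A_P(tau):                  # normalized so A_P -> 3/2 as tau -> 0
    return 1.5*f(tau)/tau

mh, mt, mb = 125.0, 173.0, 4.18          # GeV, illustrative values
for m in (mt, mb):
    tau = (mh/(2*m))**2
    print(m, A_S(tau), A_P(tau))
print(A_S(1e-6), A_P(1e-6))              # heavy-quark limits: 1 and 1.5
\end{verbatim}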
The amplitudes $c^S_{ggh}$ and $c^P_{ggh}$
can then be written in terms of traces involving the fermion mass and
Yukawa matrices involving top and vector-like up-quarks, and
bottom and any vector-like down quarks, ${\bf M}_i$ and ${\bf
Y}_i$ with $i=t,b$, so that we obtain
\bea
c^S_{ggh} = \sum_n \re\left({\frac{y^u_{nn}}{m^u_{n}}}\right) +
\sum_n \re\left({\frac{y^d_{nn}}{m^d_{n}}}\right ) -
\re\left({\frac{y_b}{m_b}} \right)+ \re\left({\frac{y_b}{m_b}}\right) A^S_{1/2}(\tau_b)
\label{csggh}
\eea
where we have added and subtracted the bottom quark loop contribution
in order to keep the dependence on $A^S_{1/2}(\tau_b)$, with a similar expression holding for $c^P_{ggh}$.
We evaluate exactly the sums in the top and bottom sectors and find
\bea
\sum_n \left({\frac{y^u_{nn}}{m^u_{n}}}\right)&=&
\frac{1}{v} \frac{1 + 3 \varepsilon_{Q_t} \varepsilon_T
\frac{|Y^t_2|}{|Y_t^0|}e^{i\theta^t_2}\left(1
-e^{i\theta^t_1}\frac{|Y^t_1||Y_t^0|}{|Y_{Qt}||Y_{qT}|}
\right)}{1 + \varepsilon_{Q_t} \varepsilon_T
\frac{|Y^t_2|}{|Y_t^0|}e^{i\theta^t_2}\left(1
-e^{i\theta^t_1}\frac{|Y^t_1||Y_t^0|}{|Y_{Qt}||Y_{qT}|}
\right)}, \label{hggtopcontribution}
\eea
and
\bea
\sum_n \left({\frac{y^d_{nn}}{m^d_{n}}}\right)&=&
\frac{1}{v} \frac{1 + 3 \varepsilon_{Q_b} \varepsilon_B
\frac{|Y^b_2|}{|Y_b^0|}e^{i\theta^b_2}\left(1
-e^{i\theta^b_1}\frac{|Y^b_1||Y_b^0|}{|Y_{Qb}||Y_{qB}|}
\right)}{1 + \varepsilon_{Q_b} \varepsilon_B
\frac{|Y^b_2|}{|Y_b^0|}e^{i\theta^b_2}\left(1
-e^{i\theta^b_1}\frac{|Y^b_1||Y_b^0|}{|Y_{Qb}||Y_{qB}|}
\right)}, \label{hggbottomcontribution}
\eea
where we have defined the small parameters
$\displaystyle \varepsilon_T=\frac{v| Y_{qT}|}{|M_T|}\ $,
$\displaystyle \varepsilon_B=\frac{v| Y_{qB}|}{|M_B|}\ $,
$\displaystyle\ \varepsilon_{Q_t}=\frac{v|Y_{Qt}|}{|M_Q|}\ $ and
$\displaystyle\ \varepsilon_{Q_b}=\frac{v|Y_{Qb}|}{|M_Q|}\ $, and with
the relative phases $\theta^i_1$ and $\theta^i_2$ defined as
$\theta^t_1 = Arg\left(\frac{Y_t^0 Y^t_1}{Y_{Qt}Y_{qT}} \right)$
and $\theta^t_2 = Arg\left(\frac{Y_{qT}Y_{Qt} Y^t_2}{M_TM_QY^0_t}
\right)$, with similar definitions for $\theta_1^b$ and
$\theta_2^b$. In the SM limit, the expression in Eq. (\ref{csggh}) should tend to
$\sim \displaystyle \frac{1}{v}(1+A_{1/2}^S(\tau_b))$, and so if one
wishes to limit the contribution of the top partners to
Higgs production (in gluon fusion), these corrections must be reduced
or eliminated. We note the following observations.
\bit
\item Of course for heavier and heavier vector-like fermions, the parameters $\varepsilon_i$
become more and more suppressed, and thus we can smoothly recover the SM
limit, but the new physics effect will decouple from everywhere else (and in particular
the top Yukawa quark will also tend to its SM value).
\item It might also be possible to reduce the couplings $|Y^t_2|$ or $|Y^b_2|$, but
then this will also affect the physical top or bottom Yukawa couplings shifts,
and in particular no enhancement in the top quark Yukawa coupling
will be possible (although suppression might still be possible), as we
will show later.
\item Another interesting possibility would be to set the overall phase of the
correction term in Eqs.~(\ref{hggtopcontribution}) or
(\ref{hggbottomcontribution}) to be $\pi/2$ so that the real part
vanishes (in general, we expect that the real part would dominate the overall corrections, at
least for $\varepsilon_i <1$). This possibility might limit the amount of enhancement in
the top Yukawa
coupling, since that correction also depends on the phase $\theta_2$. Again
when we compute the approximate expression of the Yukawa couplings shift, we
will see that the phase $\theta_2$ should be close to $0$ to yield an
enhancement in the top Yukawa coupling.
\item In our considerations, we will impose a seemingly contrived constraint
on the model parameters, which we call the {\it Brane Higgs Limit},
such that
\bea
\det {\bf \tilde{Y}_t} = \det {\bf \tilde{Y}_b} =0,
\eea
with the matrices ${\bf \tilde{Y}_t}$ and ${\bf \tilde{Y}_b}$ defined in Eq.~(\ref{Yt}). This
constraint implies that
\bea
Y^t_2 \left(1 -e^{i\theta^t_1}\frac{|Y^t_1||Y_t^0|}{|Y_{Qt}||Y_{qT}|}\right)=0, \label{cancellation}
\eea
and thus ensures that the top sector contribution to Higgs production,
given in Eq.~(\ref{hggtopcontribution}), gives the same result as the SM
top quark contribution to the same process. The vanishing determinant
condition could come from a specific flavor structure in the Yukawa
matrix, emerging for example from democratic textures, etc. We will
show in the next section that the flavor structure required can also be obtained in models of
extra-dimensions, so that the cancellation in
Eq.~(\ref{cancellation}) is satisfied exactly if the scenario arises out of
the usual Randall-Sundrum warped extra-dimensional scenario with
matter fields in bulk. It is necessary, though, that the Higgs be
sufficiently localized towards the brane and that the KK modes of the
top quark (and bottom quark) be much lighter than the KK partners
of the up and charm quarks (and the down and strange quarks). We will then refer to the vector-like partners of the top and bottom quarks throughout as KK partners, and we return to this scenario in Sec. \ref{sec:RS}.
\eit
Therefore we work in the {\it Brane Higgs Limit} of the general parameter
space. In the down sector, we also have
$Y^b_2
\left(1-e^{i\theta^b_1}\frac{|Y^b_1||Y_b^0|}{|Y_{Qb}||Y_{qB}|}\right)=0$,
so that
we have
\bea
\sum_n \left({\frac{y^u_{nn}}{m^u_{n}}}\right)\ =\ \sum_n
\left({\frac{y^d_{nn}}{m^d_{n}}}\right)\ =\ \frac{1}{v} \, .
\eea
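This cancellation is simple to verify numerically, since the sum is basis independent: $\sum_n y_{nn}/m_n = {\rm Tr}\left({\bf Y}^{phys}({\bf M}^{diag})^{-1}\right)= {\rm Tr}\left({\bf \tilde{Y}_t}{\bf M_t}^{-1}\right)$. A short Python sketch (with made-up couplings and vector-like masses) illustrates this:
\begin{verbatim}
# In the Brane Higgs Limit det(Yt) = 0, and Tr(Yt M^{-1}) = 1/v exactly.
import numpy as np

v = 174.0
Y0, YqT, YQt, Y2 = 1.0, 0.8 + 0.3j, 0.6 - 0.2j, 1.2
Y1 = YQt*YqT/Y0                    # enforces det(Yt) = 0
MQ, MT = 1500.0, 2000.0            # vector-like masses in GeV (assumed)

M  = np.array([[v*Y0,  0,    v*YqT],
               [v*YQt, MQ,   v*Y1 ],
               [0,     v*Y2, MT   ]])
Yt = np.array([[Y0,  0,  YqT],
               [YQt, 0,  Y1 ],
               [0,   Y2, 0  ]])

print(np.trace(Yt @ np.linalg.inv(M)), "vs 1/v =", 1/v)
\end{verbatim}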
This means that now we can write the $ggh$ couplings as
\bea
c^S_{ggh} &=& \frac{1}{v}\left(1+A^S_{1/2}(\tau_b)\right) + \re\left(\frac{\delta
y_b}{m_b}\right) \left(1- A^S_{1/2}(\tau_b) \right) \, , \\
\hspace{-3cm}{\rm and}\hspace{3cm} &&\non\\
c^P_{ggh} &=&{\rm Im}\left( \frac{\delta y_b}{m_b}\right) \left(\frac{3}{2}-A^P_{1/2}(\tau_b)\right),\ \ \ \ \ \
\eea
where we have used the definitions of the Yukawa coupling shifts in Eqs.~(\ref{deltayt}) and (\ref{deltayb}).
Evaluating the values for the bottom quark loop functions
$A^{S,P}_{1/2}(\tau_b)$ we obtain
\bea
\frac{\sigma_{gg\rightarrow h}}{\sigma^{SM}_{gg\rightarrow
h}} =\frac{\Gamma_{h\rightarrow gg}}{\Gamma^{SM}_{h\rightarrow
gg}}= (1+ \Delta_{gg})~,
\eea
where the correction term is
\bea
\Delta_{gg}=2.13 v \left( \re \frac{\delta y_b}{m_b}\right)
+1.13 v^2\left(\re \frac{\delta y_b}{m_b}\right)^2 +2.51
v^2\left({\rm Im} \frac{\delta y_b}{m_b}\right)^2.
\eea
This result links in a simple and nontrivial way Higgs production through gluon fusion to the bottom quark
Yukawa coupling (or more precisely to its relative shift $v\delta
y_b/m_b$). In a similar fashion we can also obtain the correction to the Higgs
decay into $\gamma\gamma$ in the {\it Brane Higgs Limit}, since the fermion loop is the same as
the gluon fusion loop (although there is an additional $W$ loop contribution in this
case).
We obtain
\bea
\frac{\Gamma_{h\to \gamma\gamma}}{\Gamma^{SM}_{h\to
\gamma\gamma}}= (1 + \Delta_{\gamma\gamma})\, ,
\eea
with
\bea
\Delta_{\gamma\gamma} &=& -0.14 v \left( \re \frac{\delta
y_b}{m_b}\right) + 0.005 v^2 \left( \re
\frac{\delta y_b}{m_b}\right)^2+ 0.01 v^2 \left( {\rm Im} \frac{\delta y_b}{m_b}\right)^2 ,
\eea
and where we took the SM loop contributions to be $|c_{\gamma\gamma}| = |-7 A_1(\tau_W) + 16/9
A^S_{1/2}(\tau_t) +4/9 A^S_{1/2}(\tau_b)|
\simeq 6.53$, with the W-loop function $A_1(\tau_W)$
\cite{Gunion:1989we} and the fermion loop function $A^S_{1/2}(\tau_f)$ normalized so that
$\lim_{\tau\to 0} A_i(\tau)=1$.
Finally, from Eqs.(\ref{deltayt}) and (\ref{deltayb}) we can now write
\bea
\frac{\sigma_{pp\rightarrow tth}}{\sigma^{SM}_{pp\rightarrow tth}} =
\left|\frac{y_t}{y_t^{SM}}\right|^2 = (1 +\Delta_{tt}) \ \ \ \ \ \ \ {\rm
and} \ \ \ \ \ \ \ \frac{\Gamma_{h\to bb}}{\Gamma^{SM}_{h\to
bb}}=\left|\frac{y_b}{y_b^{SM}}\right|^2 = (1
+\Delta_{bb}),\ \ \ \
\eea
where
\bea
\Delta_{tt}= - 2 v \re \left(\frac{\delta
y_t}{m_t}\right) + v^2\left|\frac{\delta y_t}{m_t}\right|^2 \ \ \ \ {\rm
and} \ \ \ \ \Delta_{bb}= - 2v \re \left(\frac{\delta
y_b}{m_b}\right) +v^2\left|\frac{\delta y_b}{m_b}\right|^2 \, .
\eea
We are interested in studying the dependence on the Yukawa shifts $\delta y_t$ and
$\delta y_b$ of the signal strengths
\bea
\displaystyle \mu^{ii}_{ggh} =
\frac{\sigma(gg\to h) Br(h\to ii)}{\sigma_{SM}(gg\to h) Br^{SM}(h\to
ii)}\ = \frac{\sigma(gg\to h)}{\sigma_{SM}(gg\to h)}\ \frac{\Gamma(h\to
ii)\ }{\Gamma_{SM}(h\to ii)}\ \frac{\Gamma_{SM}^{tot}(h)}{\Gamma^{tot}(h)},
\eea
with a similar expression for $\mu^{ii}_{t\bar{t}h}$ (and using the
narrow-width approximation).
In these expressions, the ratio of total Higgs widths can be written as
\bea
\frac{\Gamma_{SM}^{tot}(h)}{\Gamma^{tot}(h)}&=& \frac{1}{\left(1+
Br^{SM}_{h\to bb}\Delta_{bb} +
Br^{SM}_{h\to gg}\Delta_{gg}+Br^{SM}_{h\to \gamma\gamma}\Delta_{\gamma\gamma}\right)}\ \ \ \, ,
\eea
where, taking into account numerical values for the SM Higgs branching ratios, gives simply
\bea
\frac{\Gamma_{SM}^{tot}(h)}{\Gamma^{tot}(h)}&\simeq& \frac{1}{1+ 0.58
\Delta_{bb} + 0.086 \Delta_{gg}}~,
\eea
and where we have dropped the dependence on $\Delta_{\gamma\gamma}$, as it
is much suppressed.
With all these ingredients, we find the $t{\bar t}h$ production-and-decay strengths
\bea
\mu^{VV}_{t{\bar t}h}&=& \frac{(1+\Delta_{tt})}{(1+ 0.58
\Delta_{bb} + 0.086 \Delta_{gg})} \, , \\
\mu^{bb}_{t{\bar t}h}&=&\mu^{VV}_{t{\bar t}h} (1+\Delta_{bb}) \, , \\
\mu^{\gamma\gamma}_{t{\bar t}h}&=& \mu^{VV}_{t{\bar t}h} (1+\Delta_{\gamma\gamma})~,
\eea
as well as the $ggh$ strengths
\bea
\mu^{VV}_{ggh}&=& \frac{(1+\Delta_{gg})}{(1+ 0.58
\Delta_{bb} + 0.086 \Delta_{gg})} \, , \\
\mu^{\gamma\gamma}_{ggh}&=& \mu^{VV}_{ggh} (1+\Delta_{\gamma\gamma})~,
\eea
with the corrections $\Delta_{ii}$ depending only on top or bottom quark Yukawa coupling shifts
\bea
\Delta_{tt}&=& - 2 v \re \left(\frac{\delta
y_t}{m_t}\right) + v^2\left|\frac{\delta y_t}{m_t}\right|^2
\ \ \label{Dtt}\\
\Delta_{bb} &=& - 2v \re \left(\frac{\delta
y_b}{m_b}\right) +v^2\left|\frac{\delta y_b}{m_b}\right|^2 \label{Dbb} \\
\Delta_{gg}&=&2.13 v \left( \re \frac{\delta y_b}{m_b}\right)
+1.13 v^2\left(\re \frac{\delta y_b}{m_b}\right)^2 +2.51
v^2\left({\rm Im} \frac{\delta y_b}{m_b}\right)^2 \label{Dgg}\\
\Delta_{\gamma\gamma}&=& -0.14 v \left( \re \frac{\delta
y_b}{m_b}\right) + 0.005 v^2 \left( \re
\frac{\delta y_b}{m_b}\right)^2+ 0.01 v^2 \left( {\rm Im} \frac{\delta y_b}{m_b}\right)^2 \label{Dgaga}~.
\eea
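All the Higgs signal strengths of the scenario can thus be generated from just the two complex shifts. As a sketch (the input values below are illustrative only), the following Python function implements Eqs.~(\ref{Dtt})--(\ref{Dgaga}) together with the total width ratio above:
\begin{verbatim}
# Inputs: the dimensionless complex shifts dyt = v*delta y_t/m_t and
# dyb = v*delta y_b/m_b; outputs: the signal strengths of the text.
def strengths(dyt, dyb):
    Dtt = -2*dyt.real + abs(dyt)**2
    Dbb = -2*dyb.real + abs(dyb)**2
    Dgg = 2.13*dyb.real + 1.13*dyb.real**2 + 2.51*dyb.imag**2
    Dga = -0.14*dyb.real + 0.005*dyb.real**2 + 0.01*dyb.imag**2
    width = 1 + 0.58*Dbb + 0.086*Dgg     # Gamma_tot / Gamma_tot^SM
    mu_tth_VV = (1 + Dtt)/width
    return {"mu_tth_VV": mu_tth_VV,
            "mu_tth_bb": mu_tth_VV*(1 + Dbb),
            "mu_tth_aa": mu_tth_VV*(1 + Dga),
            "mu_ggh_VV": (1 + Dgg)/width,
            "mu_ggh_aa": (1 + Dgg)*(1 + Dga)/width}

# Example: an enhanced top Yukawa (shift with phase near pi) and a
# small bottom shift; both values are made up for illustration.
print(strengths(dyt=-0.4 + 0.0j, dyb=0.05 + 0.02j))
\end{verbatim}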
\subsection{Yukawa Coupling Shifts}
\label{subsec:Yukawas}
As indicated before, the mass matrices ${\bf M_t}$ and ${\bf M_b}$ from Eqs.~(\ref{Mt}) and (\ref{Mb})
are diagonalized by bi-unitary transformations,
${\bf V_t}^\dagger_L{\bf M_t}{\bf V_t}_R={\bf M_t}^{diag}$, and
${\bf V_b}^\dagger_L{\bf M_b}{\bf V_b}_R={\bf M_b}^{diag}$. In order to obtain simple
analytical expressions for the Yukawa couplings emerging after the
diagonalization, we expand the unitary matrices ${\bf V_{t,b}}_L$
and ${\bf V_{t,b}}_R$ in powers of $\varepsilon \sim v/M$, where $v$
is the Higgs VEV and $M$ represents the vector-like masses $M_Q$, $M_T$
or $M_B$.
In this approximation we can obtain the lightest mass eigenvalues (the top quark and
the bottom quark masses) as well as the physical $t{\bar t}h$ Yukawa coupling and
the $b{\bar b}h$ Yukawa coupling.
This yields the relative deviation between the physical
Yukawa couplings $y_t^{phys}$ and $y_b^{phys}$, and the SM Yukawa
couplings, defined as $y_t^{SM}= m_t^{phys}/v$ and $y_b^{SM}=
m_b^{phys}/v$. In terms of the mass matrix parameters from Eqs.~(\ref{Mt}) and (\ref{Mb}), we obtain
\bea
\frac{\delta y_t}{y_t^{SM}}\
=\ \varepsilon_T^2 + \varepsilon_{Q_t}^2
- 2 \varepsilon_T \varepsilon_{Q_t} \frac{|Y^t_2|}{|Y_t^0|}
e^{i\theta^t_2} \ +\ {\cal O}(\varepsilon^4), \label{deltayt2}
\eea
and similarly for the bottom quark
\bea
\frac{\delta y_b}{y_b^{SM}}
=\ \varepsilon_B^2 +
\varepsilon_{Q_b}^2 - 2 \varepsilon_B \varepsilon_{Q_b} \frac{|Y^b_2|}{|Y_b^0|}
e^{i\theta^b_2} \ +\ {\cal O}(\varepsilon^4). \label{deltayb2}
\eea
As previously,
$\displaystyle \varepsilon_T=\frac{v| Y_{qT}|}{|M_T|}\ $,
$\displaystyle \varepsilon_B=\frac{v| Y_{qB}|}{|M_B|}\ $,
$\displaystyle\ \varepsilon_{Q_t}=\frac{v|Y_{Qt}|}{|M_Q|}\ $ and
$\displaystyle\ \varepsilon_{Q_b}=\frac{v|Y_{Qb}|}{|M_Q|}\ $ and
the relative phases $\theta_2^t$ and $\theta_2^b$ as
$\theta_2^t= Arg\left(\frac{Y_{qT}Y_{Qt} Y^t_2}{M_TM_QY^0_t}\right)$
and $\theta_2^b= Arg\left(\frac{Y_{qB}Y_{Qb}
Y^b_2}{M_BM_QY^0_b}\right)$. Note that these
perturbative expressions are only valid for $\varepsilon_i <
1$. Nevertheless they are very useful in identifying limits and
parameter behavior, and moreover the limit $\varepsilon_i <
1$ is the natural one as the top and bottom KK partners are
expected to be heavy enough to make the expansions converge.
The first two terms of both expressions always yield
a suppression of the physical Yukawa coupling strength, irrespective of the phases
within the original fermion mass matrices. However, the third term,
proportional to $Y^{t,b}_2$, can induce an
overall enhancement of the top or bottom Yukawa coupling when the
phases satisfy $-\pi/2<\theta^{t,b}_2<\pi/2$ and $|Y^{t,b}_2|$ is
sufficiently large. The enhancement is maximal when $\theta^{t,b}_2=0$.
A numerical comparison of these perturbative shifts with an exact
diagonalization is sketched below.
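The accuracy of the perturbative expressions is easy to assess by diagonalizing ${\bf M_t}$ exactly with a singular value decomposition. The Python sketch below does this in the {\it Brane Higgs Limit} (all numerical inputs are illustrative assumptions):
\begin{verbatim}
import numpy as np

v, MQ, MT = 174.0, 1500.0, 2000.0         # GeV, illustrative
Y0, YqT, YQt, absY2 = 1.0, 0.9, 0.7, 1.1
Y1 = YQt*YqT/Y0                           # Brane Higgs Limit, theta_1 = 0

def shift_exact(th2):
    """delta y_t / y_t^SM from exact diagonalization of M_t."""
    Y2 = absY2*np.exp(1j*th2)
    M  = np.array([[v*Y0,  0,    v*YqT],
                   [v*YQt, MQ,   v*Y1 ],
                   [0,     v*Y2, MT   ]], dtype=complex)
    Yt = np.array([[Y0,  0,  YqT],
                   [YQt, 0,  Y1 ],
                   [0,   Y2, 0  ]], dtype=complex)
    u, s, vh = np.linalg.svd(M)           # s sorted descending
    y_phys = (u.conj().T @ Yt @ vh.conj().T)[-1, -1]   # lightest state
    return 1 - v*y_phys/s[-1]             # 1 - y_phys/y_SM, y_SM = m_t/v

eT, eQ = v*YqT/MT, v*YQt/MQ
for th2 in (0.0, np.pi/2, np.pi):
    pert = eT**2 + eQ**2 - 2*eT*eQ*(absY2/Y0)*np.exp(1j*th2)
    print(th2, shift_exact(th2), pert)
\end{verbatim}
For these inputs the two results agree closely, since the neglected terms are ${\cal O}(\varepsilon^4)$, and $\theta^t_2=0$ indeed yields a (small) enhancement.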
Note that the top/bottom mirror sector, even though essentially decoupled
from the light quarks, should still have some impact on the CKM quark
mixing matrix. In the same perturbative limit used to obtain the Yukawa shifts, we can also
obtain an approximation to the corrections to $V_{tb}$ due to
the presence of the top/bottom vector-like mirrors. The value is shifted as
\bea
|V_{tb}| \simeq 1 - \frac{1}{2} |V_{cb}|^2 - \frac{1}{2} |V_{ub}|^2 -
\frac{1}{2} (\varepsilon_T-\varepsilon_B)^2 \, ,
\eea
where the first two terms represent the usual SM CKM unitarity
constraint, and the last term is the new contribution (where we
have eliminated the relative phases between $Y_{qT}$ and $M_T$, and
between $Y_{qB}$ and $M_B$ through a phase redefinition).
The current Tevatron and LHC average on $V_{tb}$, coming from single top
production is $V_{tb} = 1.009\pm 0.031$
\cite{Olive:2016xmw}, which gives a lower bound of about $V_{tb} \sim
0.97$. That means that the corrections from our scenario
should be limited to about
\bea\label{vtbbound}
\frac{1}{2} (\varepsilon_T-\varepsilon_B)^2 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} (0.15)^2,
\eea
requiring that $Y v/M \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 0.15$ or $M \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 1$ TeV, unless a strong
cancellation between the top and bottom terms happens.
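Using the rounded bound $Y v/M \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 0.15$ quoted above, the implied floor on the vector-like mass scale (assuming no top--bottom cancellation) is a two-line estimate:
\begin{verbatim}
# M > Y*v/0.15, with v = 174 GeV; the Yukawa values are illustrative.
for Y in (0.5, 1.0, 2.0):
    print("Y =", Y, ": M >", round(Y*174.0/0.15), "GeV")
\end{verbatim}
which reproduces the $M \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 1$ TeV estimate for order-one Yukawa couplings.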
In a similar way, the rest of third row and third column CKM mixing angles $V_{ub}$, $V_{cb}$,
$V_{td}$ and $V_{ts}$ will receive corrections producing
deviations on the usual SM unitarity relations. For example we have
\bea
(1-\varepsilon^2_{B})|V_{cb}|^2& \simeq& \left(1-|V_{cd}|^2-|V_{cs}|^2\right)
\eea
so that imposing the experimental uncertainties in $|V_{cd}|$,
$|V_{cs}|$ and $|V_{cb}|$ \cite{Olive:2016xmw}, we find that
\bea
\varepsilon^2_{B}\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} (0.44)^2.
\eea
This is a slightly less constraining bound on the vector-like sector,
compared to the one in Eq.~(\ref{vtbbound}). A thorough full fit
analysis on CKM unitarity is beyond the scope of this paper, although
should the $tth$ signal survive the higher luminosity data, with
improved constraints in the Higgs sector, such a study might become
useful.\footnote{Note that we are still assuming that first and second
quark generations have highly suppressed Yukawa couplings with the
top and bottom vector-like partners.}
Finally, flavor mixing between vector-like quarks and the third
generation can affect other flavor observables, particularly in
$B$-physics. This was extensively discussed in the literature
\cite{Bobeth:2016llm}, where a suppression of BR($B_s \to \mu {\bar \mu}$) and an enhancement in BR($B_d \to \mu {\bar \mu}$) are shown to be most likely. Here we will simply ask that the mixing in the
bottom sector remains small, i.e., we should consider parameter space
points where the shift in the bottom quark Yukawa coupling, $\delta y_b$, is
small. Again, a full flavor analysis should be performed if the enhanced $tth$ signal
is confirmed.
\subsection{Higgs Phenomenology}
\label{subsec:pheno}
\begin{figure}[t]
\center
\begin{center}
\includegraphics[height=7.2cm]{ppblarge.pdf}\hspace{.4cm}
\includegraphics[height=7.2cm]{ppbsmall.pdf}
\end{center}
\vspace{.1cm}
\caption{Contours of the bottom quark Yukawa correction
$\displaystyle\left|\frac{\delta y_b}{y_b^{SM}}\right|$ with respect to the gluon
fusion signal strengths $\mu_{ggh}^{VV}$ and
$\mu^{\gamma\gamma}_{ggh}$. The right panel zooms in on the region
marked by a dashed box in the left panel. The horizontal and
vertical gray bands represent the experimental bounds set by the LHC
RUN 1 (darker) and the preliminary data from LHC RUN 2 (lighter). The ``Theory
Excluded'' regions are points excluded by the {\it Brane Higgs
Limit} constraint. Each contour is traced by varying the phase of
$\delta y_b$ and we include two parameter space points as example limits, marked by a
$\oplus$ and a $\ominus$, representing, respectively, an
overall enhancement or suppression with respect to the SM predictions.
} \label{fig:ppb}
\vspace{.4cm}
\end{figure}
As we have seen earlier, the {\it Brane Higgs Limit} condition is
quite predictive, and easily falsifiable in the near future from LHC
Higgs data.
The first important point is that within our minimal general setup, all signal strengths associated with
the Higgs will deviate from the SM values {\it only} due to
shifts in the top and bottom quark Yukawa couplings.
This means that ratios of Higgs signal strengths involving electroweak
production processes, and decays through the same channels ``$ii$'', should be equal to one, i.e.
\bea
\frac{\mu^{ii}_{VBF}}{\mu^{ii}_{Wh}} =\frac{\mu^{ii}_{VBF}}{\mu^{ii}_{Zh}}= 1
\eea
Also signal strengths involving decays into $WW$ should be equal to signals
with decays into $ZZ$, i.e.
\bea
\frac{\mu^{WW}_{ggh}}{\mu^{ZZ}_{ggh}}=\frac{\mu^{WW}_{tth}}{\mu^{ZZ}_{tth}}=\frac{\mu^{WW}_{Vh}}{\mu^{ZZ}_{Vh}}=\frac{\mu^{WW}_{VBF}}{\mu^{ZZ}_{VBF}}=1.
\eea
These are strong, model-dependent predictions, likely testable during the current RUN 2 of the LHC.
Now, more specific to our setup, and as seen from Eqs. (\ref{Dtt})-(\ref{Dgaga}), the
corrections to all of the Higgs signal strengths depend only on four
parameters, i.e. the absolute values of the relative top
and bottom Yukawa coupling deviations $|\delta y_t|$ and $|\delta y_b|$, and their
two phases. Moreover, only the $t{\bar t} h$ signal strengths depend on all
four parameters. We thus start exploring the dominant Higgs production mechanism,
the gluon fusion process, paying particular attention to the signal strengths
$\mu^{\gamma\gamma}_{ggh}$ and $\mu^{WW,ZZ}_{ggh}$. These depend only
on the deviation of the bottom quark coupling (magnitude and phase).
It is therefore possible to study the relationship between these two
signal strengths, for different values of $\delta y_b$. This is plotted
in Fig. \ref{fig:ppb}, where we show that only a specific region in the
($\mu^{\gamma\gamma}_{ggh},\mu^{WW,ZZ}_{ggh})$ plane is allowed, due
to the {\it Brane Higgs Limit} constraint. The horizontal and vertical gray bands
correspond to limits set by LHC RUN 1 and preliminary LHC RUN 2 data, as summarized in Table \ref{tab:HIGGS_ttH_12}.
In the right panel of that figure, we zoom in on the square enclosed by dashed lines in the left panel to consider signal strengths close
to the SM model value, and we can see that the region where
$\mu^{\gamma\gamma}_{ggh} < \mu^{WW,ZZ}_{ggh}$ is
not allowed, thus providing a very simple and strong
prediction of the scenario. Corrections in the direction $\mu^{\gamma\gamma}_{ggh} >
\mu^{WW,ZZ}_{ggh}$ are possible, but require increasingly large
deviations in the bottom Yukawa coupling. For relatively small values
of $\delta y_b$, one can still obtain important deviations in the
signals if one moves along the $\mu^{\gamma\gamma}_{ggh} \simeq \mu^{WW,ZZ}_{ggh}$ diagonal line. For
future use, we choose two points along that line, close to the boundaries
set by the LHC constraints. We denote them with a $\oplus$ and
$\ominus$, and they represent either an overall enhancement in the $ggh$ signal strengths,
or an overall suppression, with respect to the SM predictions.
\begin{figure}[t]
\center
\begin{center}
\includegraphics[height=6.5cm]{pptb.pdf}\hspace{.4cm}
\end{center}
\vspace{.1cm}
\caption{Contours of the bottom quark Yukawa correction
$\displaystyle\left|\frac{\delta y_b}{y_b^{SM}}\right|$ with respect to the gluon
fusion signal strength $\mu_{ggh}^{VV}$ and the ratio of the $t{\bar t}h$ signal
strengths $\displaystyle\frac{\mu^{bb}_{tth}}{\mu_{tth}^{VV}}$. The horizontal and
vertical grey bands represent the experimental bounds set by the LHC
RUN 1 (darker) and the preliminary data from LHC RUN 2 (lighter). The ``Th. Excl.''
region comprises all points excluded by the {\it Brane Higgs Limit} constraint. Each contour is traced by
varying the phase of $\delta y_b$ and we included two same parameter space
points, marked by $\oplus$ and $\ominus$, as in the previous figure.
} \label{fig:pptb}
\vspace{.4cm}
\end{figure}
Once the gluon fusion signals have been fixed, we can study the
effect on other signal strengths which receive corrections only through the bottom
quark Yukawa coupling. In particular we can explore how the ratios
$\displaystyle \frac{\mu^{bb}_{tth}}{\mu^{VV}_{tth}}
=\frac{\mu^{bb}_{Vh}}{\mu^{VV}_{Vh}}$ behave as a function of
$\mu^{VV}_{ggh}$ (all top quark Yukawa dependence cancels out in the
ratio). This is shown in Fig. \ref{fig:pptb}, where we
consider variations of the ratio
$\displaystyle\frac{\mu^{bb}_{tth}}{\mu^{VV}_{tth}}$ (with the corresponding LHC bounds
represented by the horizontal gray bands), with respect to the gluon fusion
strength $\mu^{VV}_{ggh}$. As we can see, the current experimental
data tend to prefer values for that ratio close to 1 or less,
therefore putting some pressure on the allowed parameter space. We can
see that if the $\mu^{bb}_{tth}$ signal strength is smaller than the $\mu^{VV}_{tth}$ one (both in $t{\bar t}h$
production), then the data prefers a slight enhancement in the gluon
fusion production strength. Conversely, if the $\mu^{bb}_{tth}$ signal is enhanced, then gluon fusion
signals should be suppressed. Overall, the deviations on the bottom
quark Yukawa coupling must be kept small, unless the $\mu^{bb}_{tth}$ signal
happens to be very much larger than the $\mu^{VV}_{tth}$ signal. The chosen example points $\oplus$ and
$\ominus$ stay within a ratio of $\mu_{tth}$ production signals close to 1.
\begin{figure}[t]
\center
\begin{center}
\includegraphics[width=5.32cm,height=8cm]{pptVzero.pdf}\hspace{.1cm}
\includegraphics[width=5.32cm,height=8cm]{pptVX.pdf}\hspace{.1cm}
\includegraphics[width=5.32cm,height=8cm]{pptVplus.pdf}
\end{center}
\vspace{.1cm}
\caption{
Higgs signal strength $\mu_{tth}^{VV}$ with
respect to the top quark Yukawa coupling correction $\displaystyle \left|\frac{\delta
y_t}{y_t^{SM}}\right|$. The contours trace points with constant
value of the phase of $\delta y_t$, and the horizontal gray bands
represent the experimental bounds set by the LHC RUN 1 (darker) and the
preliminary data from LHC RUN 2 (lighter). The ``Theory Excluded'' regions are
excluded by the {\it Brane Higgs Limit} constraint.
In the left panel, we consider a parameter space point where the
bottom Yukawa coupling is SM-like. In the central panel the bottom
quark Yukawa coupling is corrected by 18\% (point $\ominus$) and
the right panel has a bottom quark Yukawa correction of 44\% (point $\oplus$). } \label{fig:pptV}
\vspace{.4cm}
\end{figure}
Having analyzed the restrictions on the deviations of the bottom quark Yukawa coupling, we can now investigate the signals that do depend on the top Yukawa
coupling deviations. In Fig. \ref{fig:pptV} we study the variation of $\mu^{VV}_{tth}$ with respect to
the top Yukawa deviation $|\delta y_t|$. The rest of the $t{\bar t}h$ signal
strengths can be obtained from ratios of other Higgs production
signals strengths, since for example $\displaystyle \mu^{\gamma\gamma}_{tth}
=\frac{\mu^{\gamma\gamma}_{ggh}}{\mu^{VV}_{ggh}}\ \mu^{VV}_{tth}$. We fix the values of the bottom
quark Yukawa coupling in three limits, i.e. when $y_b$ is SM-like
($\delta y_b =0$), when it has an 18\% correction
($\left|\delta y_b/y_b^{SM}\right| =0.18$, corresponding to the point $\ominus$), and when
it has a 44\% correction ($\left|\delta y_b/y_b^{SM}\right|
=0.44$, corresponding to the point $\oplus$).
As can be seen in Fig. \ref{fig:pptV},
for moderate values of $\delta y_b$ the signal strength depends only mildly on it, so the three
panels show very similar behavior as a function of the deviations in the top quark Yukawa coupling. The parameter space region is a diagonal
band, and we show contours of the phase of the top Yukawa shift $\delta
y_t$, tracing the band diagonally. The signal strength is very sensitive to
variations in the phase of the shift of the top Yukawa coupling. We
can clearly see that if the magnitude of the top Yukawa deviation is
less than 1 (the natural expectation for heavy KK top partners), in
order to obtain a signal enhancement (as hinted by LHC data), the
phase must be close to $\pi$. This is in agreement with the perturbative
expressions obtained earlier for the Yukawa shifts and it corresponds
to values of the mass matrix phase $\theta^t_2$ close to
0.
\section{``Brane Higgs Limit'' in Randall Sundrum models}
\label{sec:RS}
In this section we describe briefly how to reproduce the previous
phenomenological scenario within the context of the Randall Sundrum model \cite{RS1}.
Consider a sector of a 5D scenario with a 5D top quark,
i.e. a doublet fermion $Q(x,y)$ and a singlet $T(x,y)$ defined by the following action:
\bea
&&S=-i\int d^4xdy \sqrt{-g}\left[\bar{Q} D\!\!\!\!/ Q + c_q
\sigma'\bar{Q}Q + \bar{T} D\!\!\!\!/ T + c_t \sigma'\bar{T}T
\right. \non\\
&& \left.+ \delta(y-y_1) \left(\alpha_{qL} \bar{Q}_L \partial \!\!\!/ Q_L +
\alpha_{qR} \bar{Q}_R \partial\!\!\!/ Q_R + \alpha_{tL}
\bar{T}_L \partial \!\!\!/ T_L +
\alpha_{tR} \bar{T}_R \partial \!\!\!/ T_R \right)\right. \non\\
&& \left.+ \delta(y-y_1) \left( Y_1 H \bar{Q}_L T_R + Y_2 H \bar{Q}_R T_L +h.c. \right)\right]
\label{5Daction}
\eea
where $D\!\!\!\!/ = \gamma_A e^M_A D_M$ and $\partial \!\!\!/ =
\gamma_a e^\mu_a \partial_\mu$, with $\gamma_A$ the 5D
gamma matrices, $e^M_A$ the vielbein, and
$D_M=(\partial_M+\Gamma_M) $ the 5D covariant derivative involving
the spin connection $\Gamma_M$, with $\Gamma_\mu=\frac{1}{2}
\gamma_5\gamma_\mu \sigma'$ and $\Gamma_5=0$.
The fifth dimension is understood as an interval, with the
boundary terms fixing the boundary conditions of the fields.
We have added a set of fermion kinetic terms localized at the boundary
$y=y_1$. Other boundary fermion kinetic terms, involving
$y$-derivatives, are allowed but we leave them out for
simplicity. Also note that we should only consider positive brane kinetic
term coefficients $\alpha_i$, in order to avoid tachyons and/or ghosts
\cite{delAguila:2006atw,Carena:2004zn}.
We also consider Higgs localized Yukawa couplings on the same boundary. Note that the doublet $Q$
is vector-like in 5D, and we define $Q=Q_L+Q_R$ and $T=T_L+T_R$
where $Q_L, T_L$ and $Q_R,T_R$ are the left and right handed components.
The background spacetime metric is assumed to take the form
\bea
ds^2=e^{-2\sigma(y)}\eta_{\mu\nu} dx^\mu dx^\nu + dy^2\label{warpedmetric}
\eea
where $\sigma(y)=ky$ is known as the warp factor (note that the
signature is $(-,+,+,+,+)$) and with $k\sim M_{Pl}$ being the 5D curvature.
We assume that $\sigma(y_0)=0$ and $\sigma(y_1)\simeq 34$, such that
there are some 15 orders of magnitude of scale hierarchy between both boundaries.
In the absence of fermion brane kinetic terms (proportional to $\alpha_i$'s), this
setup produces a tower of Kaluza Klein (KK) modes, such that the lowest lying
modes of the doublet and singlet fields have wavefunctions
exponentially localized towards either of the boundaries \cite{Davoudiasl:1999tf}. The
localization depends on the value of the 5D fermion mass parameters
$c_q$ and $c_t$. When $c_q < 1/2$ and $c_t > -1/2$, the zero
modes of $Q$ and $T$ will be localized near the $y=y_1$ boundary, and
will be identified as the two chiral components of the SM top quark. The rest of SM
quarks will be obtained in a similar way, but the value of their bulk mass will localize
them towards the $y=y_0$ boundary. Because the Higgs boson is by
construction located at the $y=y_1$ boundary, the top quark will be
``naturally'' heavy (coupled strongly
to the Higgs) whereas the rest of the quarks are lighter, since
they couple to the Higgs weakly due to their geographical separation.
On the other hand, the excited modes of all fermions will be very
heavy and localized towards the $y=y_1$ boundary; their typical KK
masses are of order $M_{\rm Pl} e^{-k y_1} \sim TeV$ and they will also
couple strongly with the Higgs.
The scenario that we call the {\it Brane Higgs Limit}, requires the
presence of $only$ the top quark and bottom quark heavy partners, which means the rest of KK
partners should be decoupled (i.e. much heavier). For this we turn on the fermion brane kinetic
terms (the $\alpha_i$'s) of the 5D top and bottom quarks. There will still be massless fermion modes
(associated to the SM top and bottom quarks\footnote{Which acquire
their SM masses after electroweak symmetry breaking, like in the SM.}), but it now becomes possible to
lower the masses of the KK top and bottom modes. In general, it is possible to obtain
analytically the associated KK spectrum (before electroweak symmetry
breaking) in terms of Bessel functions. Nevertheless, since we are
mainly interested in the top quark, it is much simpler and transparent to
treat the special case where $c_q=c_t=0$. These simple bulk masses are
perfectly top-like, and they have the advantage of producing very
simple equations of motion.
The usual dimensional reduction procedure involves a mixed separation
of variables performed on the 5D fermions, i.e.
\bea
Q_L(x,y)= Q_L(y) t_L(x)\\
Q_R(x,y)= Q_R(y) t_R(x)\\
T_L(x,y)= T_L(y) t_L(x)\\
T_R(x,y)= T_R(y) t_R(x)
\eea
where $t_L(x)$ and $t_R(x)$ are the left and right handed components
of 4D fermions (the lightest of which is the SM top quark). From
there, one must solve for the KK profiles along the extra dimension
$Q_L(y)$, $Q_R(y)$, $T_L(y)$ and $T_R(y)$. In the simple case of
$c_q=c_t=0$, and before electroweak symmetry breaking,the equation
for the profile $\tilde{Q}_R(y)= e^{-2\sigma(y)} Q_R(y)$, for example, becomes
\bea
\left(e^{- k y} \tilde{Q}'_R\right)'+
m^2 e^{k y} \tilde{Q}_R =0
\eea
with Dirichlet boundary condition on the $y=0$ boundary (since there
are no kinetic terms there). The solution is simple,
\bea
\tilde{Q}_R(y)= N_Q \sin{\left(\frac{m(e^{k y}-1)}{k} \right)}
\eea
which obviously vanishes at $y=0$.
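As a quick cross-check (a sketch, not part of the derivation), the profile can be verified symbolically with \texttt{sympy} to solve the equation of motion and the Dirichlet condition:
\begin{verbatim}
import sympy as sp

y, k, m, N = sp.symbols('y k m N', positive=True)
Q = N*sp.sin(m*(sp.exp(k*y) - 1)/k)
residual = sp.diff(sp.exp(-k*y)*sp.diff(Q, y), y) + m**2*sp.exp(k*y)*Q
print(sp.simplify(residual))   # -> 0, so the ODE is satisfied
print(Q.subs(y, 0))            # -> 0, Dirichlet condition at y = 0
\end{verbatim}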
The brane kinetic term on the $y=y_1$ boundary enforces a matching
boundary condition at that location, and that boundary condition fixes the
spectrum of the whole tower of KK modes. In this simple case, the KK
spectrum of the 5D fermions $Q(x,y)$ and $T(x,y)$ is given by
\bea
\tan{\left(
\frac{m e^{ky_1}}{k}\right)} =-
\sqrt{\frac{\alpha_{qL}}{\alpha_{qR}}} \tan{\left(
m e^{ky_1}\sqrt{\alpha_{qR} \alpha_{qL}} \right)}
\eea
and
\bea
\tan{\left(
\frac{m e^{ky_1}}{k}\right)} =-
\sqrt{\frac{\alpha_{tL}}{\alpha_{tR}}} \tan{\left(
m e^{ky_1}\sqrt{\alpha_{tR} \alpha_{tL}} \right)},
\eea
in agreement with the flat metric limit considered in
\cite{delAguila:2006atw}.
With a further simplification, taking
$\alpha_{qR}=\alpha_{qL}=\alpha_q$ and
$\alpha_{tR}=\alpha_{tL}=\alpha_t$, the conditions become
\bea
\tan{\left(
\frac{m e^{ky_1}}{k}\right)} = - \tan{\left(
m e^{ky_1}\alpha_i \right)}
\eea
with a spectrum given by
\bea
m_n= \frac{n \pi }{1+k \alpha_i} k e^{-ky_1}
\eea
for $n=0,1,2,3,\dots$ This shows that, indeed, the spectrum of the KK
tops can be significantly reduced in the presence of brane kinetic
terms.
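To illustrate the size of the effect (the numerical scales below are assumptions, chosen so that $\sigma(y_1)\simeq 34$), a short Python estimate of the first KK mass as a function of $k\alpha_i$ reads:
\begin{verbatim}
# First KK mass m_1 = pi k exp(-k y1)/(1 + k alpha); k ~ M_Pl assumed.
import numpy as np

k, ky1 = 2.4e18, 34.0                  # GeV and warp exponent (assumed)
kk_scale = k*np.exp(-ky1)              # ~ few TeV
for a in (0.0, 1.0, 3.0, 10.0):
    m1 = np.pi*kk_scale/(1 + a)
    print("k*alpha =", a, ": m_KK1 ~", round(m1/1e3, 2), "TeV")
\end{verbatim}
For $k\alpha_i = {\cal O}(10)$ the first KK top can indeed come down to the TeV range, while the remaining KK scales stay an order of magnitude higher.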
In the scenario we have in mind, only the 5D top and bottom quarks have large brane
kinetic terms (without further justification) and therefore their
associated KK modes can be much lighter than the rest, maybe as light
as 1 TeV. At the same time, the rest of quarks and KK gauge bosons
follow the usual RS pattern with KK
masses maybe an order of magnitude larger ($\sim 10$ TeV). In this
limit, flavor and precision electroweak bounds are much safer and the
main phenomenological effects of the model may occur within the
Higgs sector of the scenario.
If we decouple the $up$, $down$, $strange$ and $charm$ heavy KK
quarks, the fermion mass matrices will involve only SM quarks along
with KK $tops$ and KK $bottoms$. Mixing between light quarks
localized near $y=0$ and heavier quarks localized near $y=y_1$ is
going to be CKM suppressed, as usual in RS, and therefore the mass
matrices to consider have the same form of those in Eqs.~(\ref{Mt})
and (\ref{Mb}), but with the phases $\theta_1^t =\theta^b_1=0$. Indeed,
the values of the off-diagonal terms in the mass
matrices are now associated to the 5D Yukawa interactions localized at
the $y=y_1$ brane and in this case are such that
\bea
Y_t^0&=& Y_{33}^{5D} f_{q_L} f_{t_R}\\
(Y_1)_{nm} &=& Y_{33}^{5D} f_{Q^n_L} f_{T^m_R}\\
(Y_{qT})_n&=&Y_{33}^{5D} f_{q_L} f_{T^n_R}\\
(Y_{Qt})_n&=&Y_{33}^{5D} f_{Q^n_L} f_{t_R}
\eea
where $Y_{33}^{5D}$ is the 5D top Yukawa coupling, and where the $f_i$'s are
the wavefunctions evaluated at the $y=y_1$ brane\footnote{Note that
another effect of the brane kinetic terms is to suppress the value of
the wavefunctions at the brane through their normalization.
Nevertheless, in order to obtain the top quark
mass, the 5D coupling $Y_{33}$ must be enhanced accordingly
and thus the wavefunction suppression is compensated by a coupling
enhancement, while remaining in a perturbative regime
\cite{Carena:2004zn}.}, with $q_L$ and
$t_R$ being zero modes and $Q^n_L$ and $T^n_R$ representing the $n^{th}$ KK modes.
We can see that all these terms share the same phase, so that we can
set $\theta_1^t =\theta^b_1=0$.
Now, consider first a mass matrix with only one KK level ($n=1$), so that the matrices are exactly
the same as before and thus the corresponding effect in Higgs production will
come from the sum
\bea
\sum_n \left({\frac{y^u_{nn}}{m^u_{n}}}\right)&=&
\frac{1}{v} \frac{1 + 3 \varepsilon_{Q_t} \varepsilon_T
\frac{|Y_2|}{|Y_t^0|}e^{i\theta^t_2}\left(1
-\frac{|Y_1||Y_t^0|}{|Y_{Qt}||Y_{qT}|}
\right)}{1 + \varepsilon_{Q_t} \varepsilon_T
\frac{|Y_2|}{|Y_t^0|}e^{i\theta^t_2}\left(1
-\frac{|Y_1||Y_t^0|}{|Y_{Qt}||Y_{qT}|}
\right)}.
\eea
It then becomes apparent that $\left(1
-\frac{|Y_1||Y_t^0|}{|Y_{Qt}||Y_{qT}|}\right)=0$ due to the structure of the
5D couplings. It is important that the zero modes (SM top and
bottom) come from the same 5D fermion as the KK modes, since the cancellation will
only happen if they all share the same 5D Yukawa coupling.
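The mechanism is transparent in a short numerical demonstration (the wavefunction values $f_i$ below are made up): the factorized structure $Y_{ij}= Y^{5D}_{33}\, f_i\, f_j$ makes the relevant $2\times 2$ Yukawa block rank one, so its determinant, and hence the combination above, vanishes identically:
\begin{verbatim}
import numpy as np

Y5 = 2.0
fL = np.array([0.9, 1.3])        # f_{q_L}, f_{Q_L} at y = y1 (assumed)
fR = np.array([0.7, 1.1])        # f_{t_R}, f_{T_R} at y = y1 (assumed)
block = Y5*np.outer(fL, fR)      # [[Y_t^0, Y_qT], [Y_Qt, Y_1]]
print(np.linalg.det(block))                                   # -> 0
print(1 - block[1, 1]*block[0, 0]/(block[1, 0]*block[0, 1]))  # -> 0
\end{verbatim}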
It turns out that it is simple to prove that if we take into
account the complete towers of KK tops we still have
\bea
\sum^\infty_n \left({\frac{y^u_{nn}}{m^u_{n}}}\right)&=&\frac{1}{v}
\eea
and similarly for the bottom quarks, and thus the Higgs phenomenology of
this scenario is indeed the same as in the {\it Brane Higgs limit} introduced
earlier in a bottom-up approach, since relative corrections due to the KK
modes of the up, down, strange and charm quarks, will scale as
$\left(m_{KK}^{tops}/m_{KK}^{rest}\right)^2 \sim 1\%$ (assuming that
the rest of KK quarks are an order of magnitude heavier than KK tops/bottoms).
Note that if the Higgs boson is not exactly localized at the boundary, then
the cancellation will not be exact and new corrections will arise.
The contributions of this RS scenario to flavor and precision electroweak
observables will be limited to effects due to the mixing of top and
bottom with the vector-like partners, since we are considering very
heavy KK gauge bosons ($\sim 10$ TeV). As pointed out earlier, Yukawa coupling mixing
effects can lead to deviations in $|V_{tb}|$ which can easily be kept under
control. Effects can also appear in the couplings $Z \to
b_R{\bar b}_R$ and in $Z \to b_L{\bar b}_L$ \cite{Agashe:2006at}, but
since we consider the contribution from heavy KK gauge
bosons to be suppressed, only Yukawa coupling mixings contribute, limiting the correction.
In the usual RS scenario, it was already possible to find points in
the $(c_b,c_{q_3})$ plane, such that $Zb{\bar b}$ couplings remain within
experimental bounds, with all the SM masses and mixings correctly
obtained \cite{Casagrande:2008hr}. In our scenario, finding
parameter points safe from precision tests will be even easier, since the source of corrections is further limited.
\section{Conclusion}
\label{sec:conclusion}
In this work, we presented a simple explanation of the possible
enhancement in the $t {\bar t} h$ associated production seen at the
LHC. We added one $SU(2)$ doublet, and two $SU(2)$ singlet vector-like
quarks to the matter content of the SM, as partners of the third
family, and allowed significant mixing between these and the third
family only. After electroweak symmetry breaking, Yukawa couplings
induce off-diagonal terms into the fermion mass matrix and, once in the
physical basis, the top and bottom Yukawa couplings and their corresponding
masses lose their SM alignment.
With the proper sign (or phase), this misalignment can induce an
enhancement of the top quark Yukawa coupling, and thus increase the
cross section for $t {\bar t}h$ production. But the mechanism should
also affect other observables in the Higgs sector, in particular, the
cross section for Higgs production through gluon fusion. This is the
main production channel for the SM Higgs and, being a radiative effect,
it receives a contribution from all the fermions in the model. Each
contribution is proportional to the ratio of the Yukawa fermion
couplings to the fermion mass, so that the main contributions come
from the top quark (with an enhanced Yukawa coupling), and from the
new vector-like quarks.
We showed that working in a particular limit of parameter space, the
corrections to gluon fusion caused by the top Yukawa coupling
enhancement are exactly offset by the contributions of the new top
partners, so that the overall top sector of our scenario (top quark
plus heavy partners) gives the same contribution as the single top quark
contribution in the SM. We call this scenario the {\it Brane
Higgs limit} and it yields extremely predictive relationships
between the production cross sections ($gg$ and $t {\bar t}h$) and
decay branching ratios of the Higgs boson (into $WW, \, ZZ, \,
b{\bar b}$ and $\gamma \gamma$),
where the only free parameters are the absolute values of the shifts
in the Yukawa couplings of the top and the bottom quarks, and their
phases. For instance, in $t {\bar t}h$ production, if the branching
ratio into $b {\bar b}$ is smaller than that into $VV$, then the gluon
fusion production cross section must also be greater than its
SM value, and conversely, an enhancement of the $b {\bar b}$
branching ratio in $t {\bar t}h$ production indicates a suppressed
gluon fusion signal. Overall, the deviations in the Yukawa coupling of
the bottom quark are constrained to be small, unless new data
indicates a significant enhancement of the $b {\bar b}$ branching
ratio. The scenario we consider predicts that any enhancement or
suppression in the $\gamma \gamma$ signal should be matched by an
identical enhancement or suppression in $VV=WW, ZZ$ decays (for gluon fusion
production), or should at most lie slightly above the $VV=WW,ZZ$ signals,
but never below them. Finally, a shift in the top quark Yukawa coupling
will affect all $t {\bar t}h$ signals through the production cross section, and
enhancement or suppression will depend on the phase of the shift.
The mixing in the top quark and bottom quark sectors should also have
consequences for the Cabibbo-Kobayashi-Maskawa (CKM) mixing
matrix $V_{CKM}$, as well as in precise electroweak measurements. We
briefly discussed how the scenario affects (and thus is constrained
by) the $V_{tb}$ entry of $V_{CKM}$ and the decay $Z \to b {\bar b}~(A_{FB}^b)$.
Finally we showed that the phenomenology we described here depends on a
specific structure of the fermion mixing matrix, mixing top quark with
its partners. In particular the Yukawa coupling matrix should have
a vanishing determinant, and thus some mechanism or flavor symmetry
should be invoked to realize the scenario. The
required structure is naturally realized in a Randall Sundrum model
without a need for flavor symmetries. A key ingredient of this scenario is the
presence of brane kinetic terms for the top and bottom, which can then
result in lighter $n=1$ KK modes for the top and bottom partners, but
heavy masses for all other KK modes. If the Higgs is localized exactly
at the boundary, the overall phenomenology of the simple model
introduced here is essentially recovered (i.e the cancellation of the terms happening in the
gluon fusion calculation occurs by construction, even if in this case
the effect comes from a complete tower of KK states).
The model presented here thus has a simple theoretical
realization, is highly predictive, and can be tested (or ruled out) by more
precise measurements of the Higgs signal strengths in RUN 2 at the LHC.
\section{Acknowledgments}
M.T. would like to thank FRQNT for partial financial support under
grant number PRCC-191578 and M.F. acknowledges NSERC support under
grant number SAP105354.
\subsection{Preliminaries}
Before developing our method, we introduce the necessary notation.
Let $\mathcal{I}$ denote the image space, $\mathcal{T}$ the text space and $\mathcal{C}=\lbrace 1,...,R\rbrace$ be the discrete label space. Further, let $x_i \in \mathcal{I}$ be the $i$-th input data point, $t_i \in \mathcal{T}$ its corresponding textual description and $y_i \in \mathcal{C}$ its label.
In the few-shot setting, we consider two disjoint subsets of the label space: $\mathcal{C}_{\text{base}}$, labels for which we have access to sufficient data samples; and novel classes $\mathcal{C}_{\text{novel}}$ which are underrepresented in the data. Note that both subsets exhaust the label space $\mathcal{C}$, i.e. $\mathcal{C} = \mathcal{C}_{\text{base}} \cup \mathcal{C}_{\text{novel}}$. We further assume that in general $|\mathcal{C}_{\text{novel}}| < |\mathcal{C}_{\text{base}}|$.
We organize the data set $\mathcal{S}$ as follows.
Training data $\mathcal{S}_{\text{train}}$ consists of tuples $\{(x_i, t_i, y_i)\}_{i=1}^{n}$ drawn from the whole data set, while test data $\mathcal{S}_{\text{test}} = \{(x_i, y_i) : y_i \in \mathcal{C}_{\text{novel}}\}_{i=1}^m$ belongs to novel classes only; we have $\mathcal{S} = \mathcal{S}_{\text{train}} \cup \mathcal{S}_{\text{test}}$ and $\mathcal{S}_{\text{train}} \cap \mathcal{S}_{\text{test}} = \emptyset$.
Naturally, we can also consider $\mathcal{S}_{\text{train}}^{\text{novel}} = \{(x_i, t_i, y_i) : (x_i, t_i, y_i) \in \mathcal{S}_{\text{train}}, y_i \in \mathcal{C}_{\text{novel}}\}_{i=1}^k \subset \mathcal{S}_{\text{train}}$,
where in accordance with a few-shot scenario $k = \left|\mathcal{S}_{\text{train}}^{\text{novel}}\right|\ll\left|\mathcal{S}_{\text{train}}\right| = n$.
Additionally, in a few-shot learning scenario, the number of samples per category of $\mathcal{C}_{\text{novel}}$ may be limited to $g$; we denote the resulting set by $\mathcal{S}_{\text{train}}^{\text{novel}}(g)$.
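For concreteness, the following minimal Python sketch (using hypothetical toy data, not our actual pipeline) illustrates how the base/novel partition and the restricted set $\mathcal{S}_{\text{train}}^{\text{novel}}(g)$ can be constructed:
\begin{verbatim}
import random
from collections import defaultdict

# Hypothetical toy data: (image_id, caption, label).
data = [(i, "caption %d" % i, i % 10)
        for i in range(500)]

base_classes = set(range(8))   # C_base
novel_classes = {8, 9}         # C_novel

train = [d for d in data if d[2] in base_classes]
novel = [d for d in data if d[2] in novel_classes]

def novel_train_subset(samples, g, seed=0):
    """S_train^novel(g): at most g samples/class."""
    rng = random.Random(seed)
    per_class = defaultdict(list)
    for s in samples:
        per_class[s[2]].append(s)
    subset = []
    for items in per_class.values():
        rng.shuffle(items)
        subset.extend(items[:g])
    return subset

one_shot_set = novel_train_subset(novel, g=1)
\end{verbatim}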
\subsection{Text-conditioned Data Generation}
The core idea of our method is to improve accuracy in few-shot learning scenarios by using an augmented dataset with additional hallucinated samples conditioned on textual descriptions.
For that purpose we employ a text-conditioned GAN (tcGAN) \citep[e.g.][]{reed16_gen, zhang_stackgan++:_2017,xu_attngan:_2017}, which can be interpreted as a variant of the cGAN; see Sec.~\ref{sec:background} for details.
The purpose of a tcGAN is to learn the mapping $G:\mathcal{T}\to \mathcal{I}$. In this regard, $G$'s objective is to generate samples $i\in\mathcal{I}$ conditioned on textual descriptions $t\in\mathcal{T}$ that cannot be distinguished from ``real'' images. In contrast, the adversarially trained discriminator $D$ aims to detect generated ``fake'' samples. To do so, $D$ outputs two probability distributions: $D_s(i) = p(s|i)$, a distribution over the state of the image (``real'' or ``fake''), and $D_t(i) = p(t|i)$, a distribution over textual representations\footnote{Note that we are implicitly using text embeddings as a textual representation.} for a given image $i$.
Slightly abusing notation, let $T = \{t_1, \ldots, t_n\}$, $I = \{i_1,\ldots,i_n\}$ and $C = \{y_1,\ldots,y_n\}$ denote the observed texts, images and class labels, respectively.
The objective of a tcGAN can then be expressed as
\begin{eqnarray}
\label{tcGANloss}
\mathcal{L}_{tcGAN}\left(G,D\right)=\mathbb{E}_{I,T}\left[\log D_t\left(I\right)\right]\\\nonumber +\mathbb{E}_{T,z}\left[\log D_s\left(G\left(T,z\right)\right)\right],
\end{eqnarray}
where $z$ denotes a random noise vector.
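To make the two outputs of $D$ concrete, the following PyTorch sketch shows a toy discriminator with a real/fake head ($D_s$) and a text-matching head ($D_t$) on top of a shared image encoder; all layer sizes are hypothetical and do not correspond to the actual StackGAN architecture:
\begin{verbatim}
import torch
import torch.nn as nn

class TwoHeadD(nn.Module):
    def __init__(self, img_dim=1024, txt_dim=1024):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(img_dim, 256), nn.ReLU())
        self.s_head = nn.Linear(256, 1)           # D_s
        self.t_head = nn.Linear(256 + txt_dim, 1) # D_t

    def forward(self, img, txt):
        h = self.enc(img)
        s = self.s_head(h)
        t = self.t_head(torch.cat([h, txt], dim=1))
        return s, t

D = TwoHeadD()
bce = nn.BCEWithLogitsLoss()
real = torch.randn(8, 1024)
fake = torch.randn(8, 1024)
txt = torch.randn(8, 1024)
_, t_real = D(real, txt)
s_fake, _ = D(fake, txt)
# D is rewarded for matching real images to their
# text and for flagging generated images as fake.
d_loss = bce(t_real, torch.ones_like(t_real)) \
       + bce(s_fake, torch.zeros_like(s_fake))
\end{verbatim}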
We use the StackGAN architecture proposed by \cite{zhang_stackgan++:_2017} as our tcGAN. Here, the idea is to use multiple GANs with different levels of granularity. In a StackGAN with $l$ stacked GANs, we consider generators $G_1, \ldots, G_l$ and discriminators $D_1, \ldots, D_l$.
Now, $G_1$ is conditioned on a text embedding $\varphi_t$ for text $t$ and generates a low-resolution image $i_1$.
Both the generated image $i_1$ and $\varphi_t$ act as input to $D_1$ which in turn predicts whether the image is real or fake given the textual description.
At the next stage, $G_2$ takes the image generated by $G_1$ in conjunction with the textual embedding as input in order to generate a more detailed image of higher resolution.
With this pipeline, the image quality is increased at every stage of the StackGAN, resulting in a high-resolution image at its final stage. See \cite{zhang_stackgan++:_2017} for further details.
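The staged data flow can be summarized in a few lines; the sketch below uses hypothetical callables \texttt{G1} and \texttt{G2} and only illustrates the pipeline, not the actual implementation:
\begin{verbatim}
# Data flow of l = 2 stacked generators.
def stacked_generate(G1, G2, phi_t, z):
    i1 = G1(phi_t, z)   # low-res image from text + noise
    i2 = G2(i1, phi_t)  # refined higher-resolution image
    return i1, i2
\end{verbatim}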
StackGANs allow for text-conditioned image synthesis optimized for realistic appearance.
However, they lack the ability to take into account that textual representations and images might be labeled with class information. This calls for an extension that utilizes class labels in a few-shot scenario augmented with hallucinated data, which we present next.
\subsection{Auxiliary Classifier GAN}
\label{subsec:auxclassifier}
Conventional tcGANs cannot consider class labels
and are therefore not adequate in the few-shot scenario.
Hence, we propose to employ the auxiliary classifier GAN architecture \cite{odena2016conditional}.
Specifically, this entails augmentation of the default tcGAN objective as in Eq.~\eqref{tcGANloss} with a classification loss $\mathcal{L}_{class}$, which is defined as
\begin{equation}
\mathcal{L}_{class}\left(D\right)=\mathbb{E}_{C,I}\left[\log p\left(C\mid I\right)\right].
\end{equation}
Further, let
\begin{equation*} \mathcal{L}_{class}\left(G\right)\triangleq\mathcal{L}_{class}\left(D\right).
\end{equation*}
Now, augmenting the objective leads to the two loss terms,
\begin{equation}\mathcal{L}\left(D\right)=\mathcal{L}_{tcGAN}\left(G,D\right)+\mathcal{L}_{class}\left(D\right)
\end{equation}
\begin{equation}\mathcal{L}\left(G\right)=\mathcal{L}_{tcGAN}\left(G\right)-\mathcal{L}_{class}\left(G\right),
\end{equation}
which are optimized in an alternating fashion.\\
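In minimization form, the log-likelihood class terms reduce to cross-entropy losses; a hedged sketch of the two objectives (the GAN terms are passed in as placeholders) reads:
\begin{verbatim}
import torch.nn as nn

ce = nn.CrossEntropyLoss()  # -log p(C|I)

def d_loss(d_gan_loss, logits_real, labels):
    # L(D) = L_tcGAN(G,D) + L_class(D): maximizing
    # the log-likelihood equals minimizing a
    # cross-entropy on real images.
    return d_gan_loss + ce(logits_real, labels)

def g_loss(g_gan_loss, logits_fake, labels):
    # L(G) = L_tcGAN(G) - L_class(G): subtracting
    # the log-likelihood adds a cross-entropy on
    # generated images, pushing G toward
    # class-specific samples.
    return g_gan_loss + ce(logits_fake, labels)
\end{verbatim}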
Conceptually, optimization of the augmented loss implies that $D$ solves the additional task of predicting the class label of given images, in addition to discriminating between ``real'' and ``fake''.
Adding the auxiliary output layer to the StackGAN architecture therefore suggests a two-fold advantage in the context of few-shot learning.
First, backpropagating the classification loss to $G$ favors the generation of samples which are both class-specific and realistic.
This will prove to be key for performing classification using a dataset extended with generated (i.e. hallucinated) samples.
From now on, we denote images generated by $G$ for $\mathcal{C}_{\text{novel}}$ as $\mathcal{S}_{\text{gen}}^{\text{novel}}$.
Second, the new output layer of $D$ can be used to perform classification.
As a consequence, $D$ can be used as a classifier both for novel classes and base classes for which meaningful latent representations are readily available.
\begin{algorithm}[ht]
\caption{Self-paced adversarial training. \textsc{RANK}() ranks generated images based on their score under $D'$, and \textsc{TOP}() returns the $K$ highest-ranked images}\label{spl}
\begin{algorithmic}[1]
\State \textbf{Input:} Pre-trained networks $G$, $D'$ and $K$
\State \textbf{Output:} Finetuned classifier $D'$
\For{$i = 1,\ldots, n$}
\State $\mathcal{S}_{\text{gen}}^{\text{novel}} = \emptyset$
\For{$c \in \mathcal{C}_{\text{novel}}$}
\State $\text{candidates}=\emptyset$
\For{$\text{caption} \in t_c$}
\State $\text{candidates} = \text{candidates}\cup G(\text{caption})$
\EndFor
\State $\text{candidates}_{\text{ranked}}$ $=$ \textsc{rank}$(\text{candidates}, D')$
\State $\text{sample} = \textsc{top}(\text{candidates}_{\text{ranked}}, K)$
\State $\mathcal{S}_{\text{gen}}^{\text{novel}} = \mathcal{S}_{\text{gen}}^{\text{novel}} \cup \text{sample}$
\EndFor
\State $\mathcal{S}_{\text{all}}^{\text{novel}} = \mathcal{S}_{\text{train}}^{\text{novel}} \cup \mathcal{S}_{\text{gen}}^{\text{novel}}$
\State update $D'$, $G$ with $\mathcal{S}_{\text{all}}^{\text{novel}}$
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Self-paced Finetuning on Novel Classes}
The representation learning phase, which consists of training the StackGAN with an auxiliary classifier, yields the discriminator $D$.
As described, the discriminator is able to distinguish real from fake samples as well as to perform classification w.r.t. the base classes.
However, to classify w.r.t. novel classes as well, $D$ has to be adapted.
Specifically, the class-aware layer with $|\mathcal{C}_{\text{base}}|$ output neurons is replaced and reduced to $|\mathcal{C}_{\text{novel}}|$ output neurons, which are randomly initialized.
We refer to this modified discriminator as $D'$.
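In a typical PyTorch implementation, obtaining $D'$ amounts to swapping the class-aware output layer for a freshly initialized one; a minimal sketch (the attribute name \texttt{class\_head} and the feature size are hypothetical):
\begin{verbatim}
import torch.nn as nn

def to_novel_classifier(D, n_novel, feat_dim=256):
    # Replace the |C_base|-way layer by a randomly
    # initialized |C_novel|-way layer; this is D'.
    D.class_head = nn.Linear(feat_dim, n_novel)
    return D
\end{verbatim}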
Now, using the notion of hallucinating additional samples, the network can be finetuned with generated images as well as training data for novel categories, i.e.\ with the data given by $\mathcal{S}_{\text{train}}^{\text{novel}} \cup \mathcal{S}_{\text{gen}}^{\text{novel}}$.
It should be noted that samples contained in $\mathcal{S}_{\text{gen}}^{\text{novel}}$ can be very noisy, since $G$ does not always output high-quality images.
In order to alleviate that problem, we propose a self-paced learning strategy ensuring that only the best generated samples within $\mathcal{S}_{\text{gen}}^{\text{novel}}$ are used.
In particular, we employ the softmax activation for class-specific confidence scoring.
To this end, optimization is performed on the loss defined as,
\begin{eqnarray}
\max_{G} \min_{D, \boldsymbol{\alpha}} \mathcal{L}\left(D,G \mid I_{novel}, T_{novel} \right)\\\nonumber + \sum_{T \in {T}_{\text{novel}}} \alpha_{T} \mathbb{E}_{I_T \sim G(T) } [ \log D( I_T ) ] \nonumber
\label{eq:2}
\end{eqnarray}
\begin{eqnarray}
\text{subject to:} & 0 \leq \alpha_T \leq 1, \quad \sum_{T \in {T}_{\text{novel}}} \alpha_T \leq K, \nonumber
\end{eqnarray}
where $\mathcal{L}\left(D,G \mid I_{novel}, T_{novel} \right) $ is the auxiliary GAN loss on the joint set of generated and training data $\mathcal{S}_{\text{train}}^{\text{novel}} \cup \mathcal{S}_{\text{gen}}^{\text{novel}}$,
$G(T)$ is the tcGAN generator, and $\alpha_T$ is a soft selector for the images $I_T$ generated from textual descriptions $T$, where the descriptions come from a set of captions for novel categories $T_{novel}$ (i.e. $T \in {T}_{\text{novel}}$).
Since each $\alpha_T$ is a selector for the generated data, $K$ specifies
the maximum number of generated
samples to be included in $\mathcal{S}_{\text{gen}}^{\text{novel}}$ for the next finetuning step. Our pseudo-code is given in Algorithm~\ref{spl}.
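A plain-Python rendering of the selection step of Algorithm~\ref{spl}, with \textsc{RANK} and \textsc{TOP} realized via softmax confidence, may look as follows (the callables $G$ and $D'$ are assumed to return image tensors and class logits, respectively):
\begin{verbatim}
import torch

def select_samples(G, D_prime, captions, K):
    # captions: dict mapping novel class c to texts.
    selected = {}
    for c, texts in captions.items():
        cands = [G(t) for t in texts]
        with torch.no_grad():
            scores = [torch.softmax(D_prime(x), -1)
                      [..., c].mean().item()
                      for x in cands]
        ranked = sorted(zip(scores, cands),
                        key=lambda p: p[0],
                        reverse=True)
        selected[c] = [img for _, img in ranked[:K]]
    return selected
\end{verbatim}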
\subsubsection{Initialization for SPL}
To obtain a meaningful ranking in the self-paced learning phase, $D'$ has to be initialized on novel classes.
Again taking into account the few-shot learning setting, we restrict the number of samples per class available for this step to $n$, i.e.\ we use $\mathcal{S}_{\text{train}}^{\text{novel}}(n)$ instead of $\mathcal{S}_{\text{train}}^{\text{novel}}$.
Since only real images are used for initialization, the quality of this data is very high compared to the noise-prone generated samples.
Due to the limited amount of samples, the initialized $D'$ will be weak on the classification task, but sufficiently powerful for performing an initial ranking of the generated images.
\subsubsection{Self-paced Adversarial Training}
The ability to rank generated images with the pre-trained $D'$ allows for data selection and guided optimization.
In our approach we specifically follow a self-paced learning strategy.
This entails iteratively choosing the generated images that have the highest probability under $D'$ for $\mathcal{C}_{\text{novel}}$, yielding a curated set of high-quality generated samples $\mathcal{S}_{\text{gen}}^{\text{novel}}$.
Finally, we aggregate original samples and generated images $\mathcal{S}_{\text{train}}^{\text{novel}} \cup \mathcal{S}_{\text{gen}}^{\text{novel}}$ for training, during which we alternately update $D'$ and $G$.
Doing so yields both a more accurate ranking as well as higher class prediction accuracy as the number of samples increases.
Ultimately, the approach summarized in algorithm~\ref{spl} learns a reliable classifier that performs well in few-shot learning scenarios.
\subsection{Datasets}
We test the applicability of our method on two fine-grained classification datasets, namely CUB \cite{WahCUB_200_2011} with bird data and Oxford-102~\cite{nilsback2008automated} containing flower data.
Specifically, the CUB bird dataset contains 11,788 images of 200 different bird species, with $\mathcal{I} \subset \mathbb{R}^{256\times256}$.
The data is split equally into training and test data.
As a consequence, samples are roughly equally distributed, with training and test each containing $\approx 30$ images per category.
Additionally, 10 short textual descriptions per image are provided by \cite{reed_learning_2016}.
Similar to \cite{zhang_stackgan++:_2017}, we use the text-encoder pre-trained by \cite{reed_learning_2016}, yielding a text embedding space $\mathcal T \subset \mathbb{R}^{1024}$.
Following \cite{zhang_stackgan++:_2017}, we split the data such that $\left|C_{base}\right|=150$ and $\left|C_{novel}\right|=50$.
To simulate few-shot learning, $n\in\{1,2,5,10,20\}$ images of $C_{novel}$ are used for training, as proposed by \cite{hariharan_low-shot_2017}.
The Oxford-102 dataset, in turn, contains images of 102 different categories of flowers. Similar to the CUB-200 dataset, Reed et al.~\cite{reed_learning_2016} provide 10 textual descriptions per image. As for the CUB dataset, we use the text-encoder pre-trained by \cite{reed_learning_2016}, yielding a text embedding space $\mathcal T \subset \mathbb{R}^{1024}$. Following Zhang et al.~\cite{zhang_stackgan++:_2017}, we split the data such that $\left|C_{base}\right|=82$ and $\left|C_{novel}\right|=20$.
To simulate few-shot learning, $n\in\{1,2,5\}$ images of $C_{novel}$ are used for training.
\begin{table}[t!]
\setlength{\tabcolsep}{5pt}
\centering{
\begin{tabular}{lccc} \toprule
& \multicolumn{3}{c}{$n$} \\
Model & 1 & 2 & 5\\ \midrule
Finetuning & 70.42 & 82.53 & 81.14 \\
Initialization & 75.26 & 89.45 & 89.97 \\
SPL-D'G & \textbf{78.37} & \textbf{91.18} & \textbf{92.21}
\\\bottomrule
\end{tabular}
}
\captionsetup{position=above}
\captionof{table}{Top-5 accuracy in percent for our model on the Oxford-102 dataset in different settings (best results are in bold)}
\label{tab:results_oxford}
\end{table}
\begin{figure*}[ht]
\centering
\captionsetup{justification=centering, margin=-1cm}
\begin{tikzpicture}[scale = 1]%
\begin{axis}[
legend columns=2,
width=0.7\textwidth,
height=0.4\textwidth,
legend pos=south east,
major grid style={line width=.1pt,draw=gray!40},
grid=both,
xtick={0,3,6,...,30},
ytick={0.45, 0.475 ,0.5,...,0.6},
ymin=0.42,
xlabel={Iteration in self-paced learning phase},
ylabel= Accuracy,
axis y discontinuity=crunch,
xmin=0,xmax=30,
ymax=0.6]
\addplot+[mark=none] table [x=iteration, y=top5, col sep=comma] {data/evolution_DG.csv};
\addplot+[mark=none] table [x=iteration, y=top5, col sep=comma] {data/evolution_D.csv};
\addplot+[domain=0:30, mark=none, dashed]{0.4916};
\legend{D' and G, D', Initialization}
\end{axis}
\end{tikzpicture}%
\caption{Top-5 accuracy at every iteration for different strategies in the 1-shot learning scenario}
\label{fig:evolution}
\end{figure*}
\subsection{Algorithmic Details}
During representation learning, we train a StackGAN for $900$ epochs. Similar to \cite{zhang_stackgan++:_2017}, we use Adam \cite{kingma2014adam} for optimization.
We set the learning rate $\tau$ to $2 \times 10^{-4}$ and the batch size to 24 for both $G$ and $D$.
In the initialization phase for self-paced learning, we construct $D'$ by replacing the last layer of $D$ by a linear softmax layer.
The resulting network is then optimized using the cross-entropy loss function and an SGD optimizer with learning rate $\tau=10^{-3}$ and momentum $0.5$.
Batch size is set to 32 and training proceeds for 20 epochs.
Self-paced learning of $D'$ continues to use the same settings (i.e. SGD with $\tau=10^{-3}$ and momentum $0.5$, minimizing a cross-entropy loss).
Additionally, Adam's learning rate for $G$ is reduced to $2 \times 10^{-5}$.
In every iteration we choose exactly one generated image per category and perform training for 10 epochs.
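For reference, these hyper-parameters translate into optimizer setups like the following sketch (the two \texttt{nn.Linear} modules merely stand in for the actual networks):
\begin{verbatim}
import torch
import torch.nn as nn

G = nn.Linear(1024, 1024)  # placeholder generator
D = nn.Linear(1024, 50)    # placeholder D'

# Representation learning: Adam, lr = 2e-4.
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# Initialization / self-paced phases: SGD for D',
# Adam with reduced lr for G, cross-entropy loss.
opt_dp = torch.optim.SGD(D.parameters(), lr=1e-3,
                         momentum=0.5)
opt_g2 = torch.optim.Adam(G.parameters(), lr=2e-5)
criterion = nn.CrossEntropyLoss()
\end{verbatim}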
\begin{table}
\setlength{\tabcolsep}{5pt}
\centering{
\begin{tabular}{lccc} \toprule
& Top-1 & Top-3 & Top-5 \\ \midrule
SGM \cite{hariharan_low-shot_2017} & 19.08 & 40.55 & 48.89 \\
SGM+Hallucination \cite{hariharan_low-shot_2017} & 20.27 & \textbf{41.06} & 50.59 \\
Our proposed method & \textbf{24.90} & 37.59 & \textbf{57.67}
\\\bottomrule
\end{tabular}
}
\captionsetup{position=above}
\captionof{table}{Top-1, top-3 and top-5 accuracy of single-modality models for the 1-shot learning task proposed by \cite{hariharan_low-shot_2017} (SGM loss and SGM loss with hallucination) compared to our model (best results are in bold)}
\label{tab:comparison}
\end{table}
\subsection{Models}
In order to assess the performance of individual components, we perform an ablation study.
A simple approach for transfer learning is to make use of a pre-trained representation and then finetune that network on novel data.
We apply this strategy in a first baseline (\textbf{Finetuning}), for which we pre-train a classifier $F$ that has exactly the same architecture as $D$ on the base classes, followed by finetuning with the few instances of novel classes in $\mathcal{S}_{\text{train}}^{\text{novel}}$.
This meta-learning strategy learns meaningful representations on the base classes $\mathcal{C}_{\text{base}}$ that can be used for novel classes $\mathcal{C}_{\text{novel}}$.
A second baseline (\textbf{Initialization}) constitutes our first contribution.
We modify the discriminator $D$ of the StackGAN, which we obtain from the representation learning phase, to obtain $D'$ by exchanging the discriminator's last layer.
Finetuning is then performed on the real samples from novel classes $\mathcal{S}_{\text{train}}^{\text{novel}}$.
Note that the \textit{initialization} baseline uses $D$, which is pre-trained adversarially during the StackGAN training, in contrast to the \textit{finetuning} baseline, which uses $F$, pre-trained as a conventional classifier.
Afterwards, we iteratively add high-quality generated samples for novel categories $\mathcal{S}_{\text{gen}}^{\text{novel}}$ as described.
In a first self-paced experiment (\textbf{SPL-D'}) we update $D'$ using selected generated samples in every iteration.
In a second experiment, additionally to updating $D'$, we update $G$ in every iteration in order to be fully self-paced (\textbf{SPL-D'G}). Following previous approaches (e.g. \citep{hariharan_low-shot_2017}), we evaluate our approach by reporting the top-5 accuracy.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\textwidth]{figures/rankedImages.jpg}
\caption{(a) Real images of birds and (b) generated bird images as ranked after different iterations}
\label{fig:lastFigure}
\end{figure*}
\subsection{Analysis of Self-paced Finetuning}
We run several additional experiments to further analyze the behavior of our method. For the following experiments we use the CUB bird dataset.
\subsection{Results of Ablation Study}
\subsubsection{CUB-200 Dataset}
We report top-1, top-3 and top-5 accuracy for our method in different settings.
The top-5 accuracy results are shown in Tab.~\ref{tab:results}.
We observe that the initialization phase alone already provides a large margin over the finetuning baseline.
Both are finetuned exclusively on real images with the difference that our initialized $D'$ is pre-trained in an adversarial fashion on text and image data during the representation learning phase.
In contrast, the finetuning baseline is pre-trained only on image data without adversarial training.
Our results indicate that including text information in representation learning already provides accuracy gains.
Further, we observe that the self-paced finetuning phase improves classification accuracy.
In particular, the margin in a 1-shot scenario is large.
For most scenarios, by updating both $G$ and $D'$ we achieve higher accuracy than by only updating $D'$.
This observation indicates that generated images add class-discriminatory power through our self-paced finetuning strategy when $G$ is involved in training.
Our full self-paced sample selection procedure, including updates of $D'$ and $G$, provides an accuracy boost of $17$, $20$ and $17$ percent in the challenging $1$-, $2$- and $5$-shot scenarios, respectively.
Also for $10$- and $20$-shot learning we outperform the baseline by more than $10$ percent.
We observe the same trends for top-1 and top-3 accuracy.
\subsubsection{Oxford-102 Dataset}
Similar to the experiments for CUB, we compare our method (1) to a \textit{Finetuning} baseline, i.e.\ pretraining a classifier on base classes and finetuning it on novel classes, and (2) to an \textit{Initialization} model, i.e.\ finetuning $D'$ obtained from representation learning on novel classes. For our method, we update both $D'$ and $G$ (SPL-D'G). Classification results in top-5 accuracy are given in Tab.~\ref{tab:results_oxford}. We observe trends similar to those on the CUB dataset. In all few-shot scenarios, we outperform the finetuning baseline by a large margin. Specifically, our method yields a performance boost of ca.\ 8\%, 10.5\% and 11\% in the challenging 1-, 2- and 5-shot scenarios compared to the finetuning baseline. The initialization for the self-paced learning phase provides performance gains as well. However, the full model yields the best results in every few-shot scenario.
\subsection{Comparison to Similar Work}
We compare our method to the single modality methods recently proposed by Hariharan and Girshick \cite{hariharan_low-shot_2017}. For the comparison in Tab.~\ref{tab:comparison} we use the CUB bird dataset. It can be observed that our approach outperforms the methods proposed by \cite{hariharan_low-shot_2017}, justifying the usage of multimodal data for training in few-shot scenarios.
\subsubsection{Evolution of classifiers during SPL}
In order to check the behavior of the self-paced learning phase, we study the classification accuracy across all iterations. In Fig.~\ref{fig:evolution} we report the top-5 accuracy for 30 iterations in the 1-shot learning scenario. The dashed line shows the accuracy after the initialization phase and can be interpreted as a lower bound for the self-paced learning phase. The blue line shows the accuracy for our method updating only $D'$, while the red line shows the accuracy for updating $D'$ and $G$. For the first iterations both models behave very similarly, but after a certain warm-up phase the model entailing updates of $G$ performs better. This indicates that updating $G$ leads to the synthesis of images of higher quality, which in turn further increases the classification accuracy.
\begin{figure*}[ht]
\centering
\captionsetup{justification=centering, margin=-1cm}
\begin{tikzpicture}[scale = 0.65]%
\begin{axis}[
legend pos=north east,
legend columns=2,
width=0.6\textwidth,
height=0.5\textwidth,
major grid style={line width=.1pt,draw=gray!40},
grid=none,
xtick={1,2,...,10},
ytick={0.2,0.3,0.4,...,1.0},
ymin=0.1,
xlabel={Ranked chunks of generated images},
ylabel= Accuracy,
bar width=0.3cm,
ymax=0.9,xmax=11]
\addplot+[ybar, mark=none, , pattern=north east lines,pattern color = red] table [x=step, y=top1, col sep=comma] {data/rankingAccuracy.csv};
\addplot+[ybar, mark=none] table [x=step, y=top5, col sep=comma] {data/rankingAccuracy.csv};
\legend{Top-1, Top-5}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale = 0.8]%
\begin{axis}[
legend columns=2,
width=0.6\textwidth,
height=0.4\textwidth,
legend pos=south east,
major grid style={line width=.1pt,draw=gray!40},
grid=both,
xtick={0,1,...,10},
ytick={1.0, 1.5 ,2.0,...,4.0},
ymin=1,
xlabel={Ranked chunks of generated images},
ylabel= Inception score,
xmin=1,xmax=10,
ymax=4.0]
\addplot+[mark=none] table [x=step, y=inception_score, col sep=comma] {data/inceptionScores.csv};
\end{axis}
\end{tikzpicture}%
\hspace{1cm}(a) \hspace{5.5cm} (b)
\caption{(a) Classification accuracy per chunk and (b) inception score per chunk}
\label{fig:chunkResults}
\end{figure*}
\subsubsection{Effect of Ranking}
To check the suitability of $D'$ for ranking the generated samples, we further analyze the behavior of the ranked samples. To this end, we split 30 ranked samples into chunks of size 3, yielding 10 ranked chunks. In a first experiment, we evaluate the class-discriminativeness of these chunks. In this regard, we use the initialized $D'$ and consider every single chunk as a test set. The classification results in top-1 and top-5 accuracy are shown in Fig.~\ref{fig:chunkResults}a. It can be observed that the classification accuracy for high-ranked chunks is higher at test time. This shows that the ranking works in the desired fashion, i.e.\ it prioritizes generated samples for which $D'$ is more confident w.r.t.\ the correct class-discriminative information.
In a second experiment, we study the quality of the generated images across the ranked chunks. Note that the ranking is based on class-discriminativeness alone and specifically does not take image quality into account. We therefore use the inception score \cite{salimans_improved_2016}
to assess the quality of the images generated by the GAN.
As can be observed from Fig.~\ref{fig:chunkResults}b, the quality of the images remains constant across the chunks, validating that we do not suffer a quality drop by ranking images based on class-discriminativeness. Hence, our proposed ranking allows us to pick the images that perform best for classification without any loss in quality of the generated images. Fig.~\ref{fig:lastFigure} shows generated images for one category as ranked in different iterations. We can observe that the ranking improves over the iterations.
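The chunk analysis itself is straightforward; a sketch of how ranked samples can be split into chunks of three and each chunk scored as a test set (the classifier is passed in, and top-1 accuracy stands in for the reported metrics):
\begin{verbatim}
import torch

def chunk_accuracies(images, labels, model, size=3):
    accs = []
    for i in range(0, len(images), size):
        x = torch.stack(images[i:i + size])
        y = labels[i:i + size]
        with torch.no_grad():
            pred = model(x).argmax(dim=1)
        accs.append((pred == y).float()
                    .mean().item())
    return accs
\end{verbatim}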
\subsection{Few-Shot Learning}
For learning deep networks using limited amounts of data, different approaches have been developed. Following Taigman et al.~\cite{taigman2014deepface}, Koch et al.~\cite{koch_siamese_2015} interpreted this task as a verification problem: given two samples, it has to be verified whether both samples belong to the same class. To this end, they employed siamese neural networks \cite{bromley1994signature} to compute the distance between the two samples and performed nearest-neighbor classification in the learned embedding space.

Some recent works approach few-shot learning by striving to avoid overfitting through modifications to the loss function or the regularization term. Yoo et al.~\cite{yoo_efficient_2017} proposed a clustering of neurons on each layer of the network and calculated a single gradient for all members of a cluster during training to prevent overfitting. The optimal number of clusters per layer is determined by a reinforcement learning algorithm.

A more intuitive strategy is to approach few-shot learning on the data level, meaning that the performance of the model can be improved by collecting additional related data. Douze et al.~\cite{douze_low-shot_2017} proposed a semi-supervised approach in which a large unlabeled dataset containing similar images was included in addition to the original training set. This large collection of images was exploited to support label propagation in the few-shot learning scenario. Hariharan and Girshick~\cite{hariharan_low-shot_2017} combined both strategies (data level and algorithm level) by defining the squared gradient magnitude loss, which forces models to generalize well from only a few samples, on the one hand, and by generating new images through hallucinating features on the other hand. For the latter, they trained a model to find common transformations between existing images that can be applied to new images to generate new training data (see also \cite{wang_low-shot_2018}).

Other recent approaches to few-shot learning have leveraged meta-learning strategies. Ravi and Larochelle~\cite{ravi_optimization_2017} trained a long short-term memory (LSTM) network as a meta-learner that learns the exact optimization algorithm to train a learner neural network that performs the classification in a few-shot learning setting. This method was motivated by the observation that the update function of standard optimization algorithms like SGD is similar to the update of the cell state of an LSTM. Bertinetto et al.~\cite{bertinetto_learning_2016} trained a meta-learner feed-forward neural network that predicts the parameters of another, discriminative feed-forward neural network in a few-shot learning scenario.

Another tool that has recently been applied successfully to few-shot learning is attention. Vinyals et al.~\cite{vinyals_matching_2016} introduced matching networks for one-shot learning tasks. This network is able to apply an attention mechanism over embeddings of labeled samples in order to classify unlabeled samples. One further outcome of this work is that it is helpful to mimic the one-shot learning setting already during training by defining mini-batches, called episodes, with subsampled classes. Snell et al.~\cite{snell_prototypical_2017} generalized this approach by proposing prototypical networks. Prototypical networks search for a non-linear embedding space (the prototype) in which each class can be represented as the mean of all corresponding samples. Classification is then performed by finding the closest prototype in the embedding space.
In the one-shot scenario, prototypical networks and matching networks are equivalent.
\subsection{Multimodal Learning}
\cite{kiros_unifying_2014} propose to align visual and semantic information in a joint embedding space using an encoder-decoder pipeline.
Building on this, \cite{faghri_vse++:_2017} improve upon this mixed representation by incorporating a triplet ranking loss.
\cite{karpathy_deep_2015} generate textual image descriptions. Their model infers latent alignments between regions of images and segments of sentences of their respective descriptions.
\cite{reed_learning_2016} focus on fine-grained visual descriptions.
They present an end-to-end trainable deep structured joint embedding trained on two datasets containing fine-grained visual descriptions.
In addition to multimodal embeddings, another related field using data from different modalities is text-to-image generation.
\cite{reed16_gen} study image synthesis based on textual information. \cite{zhang_stackgan++:_2017} greatly improve the quality of generated images to a photo-realistic high-resolution level by stacking multiple GANs (StackGANs).
Extensions of StackGANs include an end-to-end trainable version \cite{zhang_stackgan++:_2017} and the use of an attention mechanism over the textual input \citep{xu_attngan:_2017}. Sharma et al.~\cite{sharma2018chatpainter} extended the conditioning by involving dialogue data and further improved the image quality.
Besides the usage of GANs for conditioned image generation, other work employed Variational Autoencoders \cite{kingma2013auto} to generate images \cite{mishra2017generative}. However, that work conditions on attribute vectors instead of text.
\subsection{Learning from Simple to Complex}
Recently, many studies have shown the benefits of organizing the training examples
from simple to complex for model training. Bengio et al. \cite{bengio2009curriculum} first proposed a general learning strategy: curriculum learning. They show that suitably sorting the training samples, from the easiest to the most difficult, and iteratively training a classifier starting with a subset of easy samples
can be useful to find better local minima. In \cite{chen2015webly}, easy and difficult images
are provided for training a CNN in order to learn generic CNN features using webly annotated data. Note that in this and in all the other curriculum-learning-based approaches, the order of the samples is provided by an external supervisory signal, taking into account human domain-specific expertise.
Curriculum learning was extended to self-paced learning by Kumar et al. \cite{kumar2010self}. They proposed the self-paced learning framework, automatically expanding the training pool in an easy-to-hard manner by converting the curriculum mechanism into a concise regularization term. Curriculum learning uses human design to organize the examples, and self-paced learning can automatically choose training examples according to the loss. Supancic et al. \cite{supancic2013self} adopt a similar framework in a tracking scenario and train a detector using a subset of video frames, showing that this selection is important to avoid drifting.
In \cite{zhang2017bridging} saliency is used to progressively select samples in weakly supervised object detection.
Although some of these self-paced methods use pre-trained CNN-based features to represent samples (e.g., \cite{liang2015towards}), or use a CNN directly as the classifier (e.g., \cite{sangineto2016self}), none of them formulates the self-paced strategy within a GAN training protocol as we do in this paper.
\section{Introduction}
\subfile{sections/introduction}
\section{Related Work}
\subfile{sections/relatedWork}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{figures/eccv_pipeline.jpg}
\caption{Single iteration of self-paced finetuning on novel classes: $G$ generates images, $D'$ ranks the generated images based on their class-discriminatory power. Then the ``best'' images are added to the real samples and used to update $D'$ and $G$. This process is repeated multiple times.}
\label{fig:method}
\end{figure*}
\section{Background on GANs}
\subfile{sections/background}
\section{Method}
\subfile{sections/method}
\section{Experiments}
\subfile{sections/experiment}
\section{Conclusion and Future Work}
\subfile{sections/conclusion}
{\small
\bibliographystyle{ieee}
}
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors06}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Color is valuable, and will be visible to readers of the electronic copy.
However ensure that, when printed on a monochrome printer, no important
information is lost by the conversion to grayscale.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
Please direct any questions to the production editor in charge of these
proceedings at the IEEE Computer Society Press: Phone (714) 821-8380, or
Fax (714) 761-1784.
{\small
\bibliographystyle{ieee}
\section{Introduction}
\subfile{sections/introduction}
\section{Related Work}
\subfile{sections/relatedWork}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{figures/eccv_pipeline.jpg}
\caption{Single iteration of self-paced finetuning on novel classes: $G$ generates images, $D'$ ranks the generated images based on their class-discriminatory power. Then the ``best'' images are added to the real samples and used to update $D'$ and $G$. This process is repeated multiple times.}
\label{fig:method}
\end{figure*}
\section{Background on GANs}
\subfile{sections/background}
\section{Method}
\subfile{sections/method}
\section{Experiments}
\subfile{sections/experiment}
\section{Conclusion and Future Work}
\subfile{sections/conclusion}
\subsection{Preliminaries}
Before developing our method, we introduce the necessary notation.
Let $\mathcal{I}$ denote the image space, $\mathcal{T}$ the text space and $\mathcal{C}=\lbrace 1,...,R\rbrace$ be the discrete label space. Further, let $x_i \in \mathcal{I}$ be the $i$-th input data point, $t_i \in \mathcal{T}$ its corresponding textual description and $y_i \in \mathcal{C}$ its label.
In the few-shot setting, we consider two disjoint subsets of the label space: $\mathcal{C}_{\text{base}}$, labels for which we have access to sufficient data samples; and novel classes $\mathcal{C}_{\text{novel}}$, which are underrepresented in the data. Note that the two subsets together exhaust the label space $\mathcal{C}$, i.e. $\mathcal{C} = \mathcal{C}_{\text{base}} \cup \mathcal{C}_{\text{novel}}$. We further assume that in general $|\mathcal{C}_{\text{novel}}| < |\mathcal{C}_{\text{base}}|$.
We organize the data set $\mathcal{S}$ as follows.
Training data $\mathcal{S}_{\text{train}}$ consists of tuples $\{(x_i, t_i, y_i)\}_{i=1}^{n}$ taken from the whole data set, while test data $\mathcal{S}_{\text{test}} = \{(x_i, y_i) : y_i \in \mathcal{C}_{\text{novel}}\}_{i=1}^m$ belongs to novel classes; moreover, $\mathcal{S} = \mathcal{S}_{\text{train}} \cup \mathcal{S}_{\text{test}}$ and $\mathcal{S}_{\text{train}} \cap \mathcal{S}_{\text{test}} = \emptyset$.
Naturally, we can also consider $\mathcal{S}_{\text{train}}^{\text{novel}} = \{(x_i, t_i, y_i) : (x_i, t_i, y_i) \in \mathcal{S}_{\text{train}}, y_i \in \mathcal{C}_{\text{novel}}\}_{i=1}^k \subset \mathcal{S}_{\text{train}}$,
where in accordance with a few-shot scenario $k = \left|\mathcal{S}_{\text{train}}^{\text{novel}}\right|\ll\left|\mathcal{S}_{\text{train}}\right| = n$.
Additionally, in a few-shot learning scenario, the number of samples per category of $\mathcal{C}_{\text{novel}}$ available for training may be limited to $g$; we denote the resulting subset by $\mathcal{S}_{\text{train}}^{\text{novel}}(g)$.
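As a minimal illustration of this notation, the following Python sketch constructs $\mathcal{S}_{\text{train}}^{\text{novel}}(g)$ from a list of $(x_i, t_i, y_i)$ tuples; the function and variable names are hypothetical and not part of our implementation.
{\small\begin{verbatim}
# Hypothetical sketch of the few-shot data organization above;
# `samples` is a list of (x, t, y) tuples.
import random

def limit_novel_samples(samples, novel_classes, g, seed=0):
    """Return S_train^novel(g): at most g samples per novel class."""
    rng = random.Random(seed)
    by_class = {}
    for x, t, y in samples:
        if y in novel_classes:
            by_class.setdefault(y, []).append((x, t, y))
    return [s for group in by_class.values()
              for s in rng.sample(group, min(g, len(group)))]
\end{verbatim}
}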
\subsection{Text-conditioned Data Generation}
The core idea of our method is to improve accuracy in few-shot learning scenarios by using an augmented dataset with additional hallucinated samples conditioned on textual descriptions.
For that purpose we employ a text-conditioned GAN (tcGAN) \citep[e.g.][]{reed16_gen, zhang_stackgan++:_2017,xu_attngan:_2017}, which can be interpreted as a variant of the cGAN (see Sec.~\ref{sec:background} for details).
The purpose of a tcGAN is to learn the mapping $G:\mathcal{T}\to \mathcal{I}$. In this regard, $G$'s objective is to generate samples $i\in\mathcal{I}$ conditioned on textual descriptions $t\in\mathcal{T}$ that cannot be distinguished from ``real'' images. In contrast, the adversarially trained discriminator $D$ aims to detect generated ``fake'' samples. To do so, $D$ generates two probability distributions: $D_s(i) = p(s|i)$, a distribution over the state of the image (``real'' or ``fake''), and $D_t(i) = p(t|i)$, a distribution over textual representations\footnote{Note that we are implicitly using text embeddings as a textual representation.} for a given image $i$.
Slightly abusing notation, let $T = \{t_1, \ldots, t_n\}$ and $I = \{i_1,\ldots,i_n\}$ be the observed texts and images, respectively.
The objective of a tcGAN can then be expressed as
\begin{eqnarray}
\label{tcGANloss}
\mathcal{L}_{tcGAN}\left(G,D\right)=\mathbb{E}_{I,T}\left[\log D_T\left(I\right)\right]\\\nonumber +\mathbb{E}_{T,z}\left[\log D_S\left(G\left(T,z\right)\right)\right],
\end{eqnarray}
where $z$ denotes a random noise vector.
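For concreteness, a hedged numpy sketch of Eq.~\eqref{tcGANloss} is given below; \verb'D_T_real' and \verb'D_S_fake' are placeholder arrays standing for the discriminator outputs $D_T(I)$ and $D_S(G(T,z))$, not outputs of an actual network.
{\small\begin{verbatim}
# Sketch of the tcGAN objective in Eq. (1); inputs are placeholder
# probability arrays in (0, 1).
import numpy as np

def tcgan_loss(D_T_real, D_S_fake, eps=1e-12):
    """E_{I,T}[log D_T(I)] + E_{T,z}[log D_S(G(T, z))]."""
    return (np.mean(np.log(D_T_real + eps))
            + np.mean(np.log(D_S_fake + eps)))
\end{verbatim}
}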
We use the StackGAN architecture proposed by \cite{zhang_stackgan++:_2017} as our tcGAN. Here, the idea is to use multiple GANs with different levels of granularity. In a StackGAN with $l$ stacked GANs, we consider generators $G_1, \ldots, G_l$ and discriminators $D_1, \ldots, D_l$.
Now, $G_1$ is conditioned on a text embedding $\varphi_t$ for text $t$ and generates a low-resolution image $i_1$.
Both the generated image $i_1$ and $\varphi_t$ act as input to $D_1$ which in turn predicts whether the image is real or fake given the textual description.
At the next stage, $G_2$ takes the image generated by $G_1$ in conjunction with the textual embedding as input in order to generate a more detailed image of higher resolution.
With this pipeline, the image quality increases at every stage of the StackGAN, resulting in a high-resolution image at the final stage. See \cite{zhang_stackgan++:_2017} for further details.\\
StackGANs allow for text-conditioned image synthesis optimized for realistic appearance.
However, they lack the ability to take into account that textual representations and images might be labeled with class information. This calls for an extension that utilizes class labels in a few-shot scenario augmented with hallucinated data, which is presented next.
\subsection{Auxiliary Classifier GAN}
\label{subsec:auxclassifier}
Conventional tcGANs cannot consider class labels
and are therefore not adequate in the few-shot scenario.
Hence, we propose to employ the auxiliary classifier GAN architecture \cite{odena2016conditional}.
Specifically, this entails augmentation of the default tcGAN objective as in Eq.~\eqref{tcGANloss} with a classification loss $\mathcal{L}_{class}$, which is defined as
\begin{equation}
\mathcal{L}_{class}\left(D\right)=\mathbb{E}_{C,I}\left[\log p\left(C\mid I\right)\right].
\end{equation}
Further, let
\begin{equation*} \mathcal{L}_{class}\left(G\right)\triangleq\mathcal{L}_{class}\left(D\right).
\end{equation*}
Now, augmenting the objective leads to the two loss terms,
\begin{equation}\mathcal{L}\left(D\right)=\mathcal{L}_{tcGAN}\left(G,D\right)+\mathcal{L}_{class}\left(D\right)
\end{equation}
\begin{equation}\mathcal{L}\left(G\right)=\mathcal{L}_{tcGAN}\left(G\right)-\mathcal{L}_{class}\left(G\right),
\end{equation}
which are optimized in an alternating fashion.\\
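The following numpy sketch spells out the augmented losses; \verb'p_class' is a placeholder for the softmax output $p(C\mid I)$ of the auxiliary head, and the scalar inputs stand for already-computed tcGAN terms.
{\small\begin{verbatim}
# Hedged sketch of the auxiliary-classifier terms; all inputs are
# placeholders rather than outputs of the actual networks.
import numpy as np

def class_loss(p_class, labels, eps=1e-12):
    """L_class = E_{C,I}[log p(C|I)] for the ground-truth labels."""
    rows = np.arange(len(labels))
    return np.mean(np.log(p_class[rows, labels] + eps))

def loss_D(l_tcgan, l_class):
    return l_tcgan + l_class   # L(D) = L_tcGAN(G, D) + L_class(D)

def loss_G(l_tcgan, l_class):
    return l_tcgan - l_class   # L(G) = L_tcGAN(G) - L_class(G)
\end{verbatim}
}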
Conceptually, optimization of the augmented loss implies that $D$ solves the additional task of predicting the class label of given images, in addition to discriminating between ``real'' and ``fake''.
Adding the auxiliary output layer to the StackGAN architecture therefore suggests a two-fold advantage in the context of few-shot learning.
First, backpropagating the classification loss to $G$ favors the generation of samples which are both class-specific and realistic.
This will prove to be key for performing classification using a dataset extended with generated (i.e. hallucinated) samples.
From now on, we denote images generated by $G$ for $\mathcal{C}_{\text{novel}}$ as $\mathcal{S}_{\text{gen}}^{\text{novel}}$.
Second, the new output layer of $D$ can be used to perform classification.
As a consequence, $D$ can be used as a classifier both for novel classes and base classes for which meaningful latent representations are readily available.
\begin{algorithm}[ht]
\caption{Self-paced adversarial training. \textsc{RANK}() ranks generated images based on the score assigned by $D'$, and \textsc{TOP}() returns the highest-ranked images}\label{spl}
\begin{algorithmic}[1]
\State \textbf{Input:} Pre-trained networks $G$ and $D'$, selection parameter $K$
\State \textbf{Output:} Finetuned classifier $D'$
\For{$i = 1,\ldots, n$}
\State $\mathcal{S}_{\text{gen}}^{\text{novel}} = \emptyset$
\For{$c \in \mathcal{C}_{\text{novel}}$}
\State $\text{candidates}=\emptyset$
\For{$\text{caption} \in t_c$}
\State $\text{candidates} = \text{candidates}\cup G(\text{caption})$
\EndFor
\State $\text{candidates}_{\text{ranked}}$ $=$ \textsc{rank}$(\text{candidates}, D')$
\State $\text{sample} = \textsc{top}(\text{candidates}_{\text{ranked}}, K)$
\State $\mathcal{S}_{\text{gen}}^{\text{novel}} = \mathcal{S}_{\text{gen}}^{\text{novel}} \cup \text{sample}$
\EndFor
\State $\mathcal{S}_{\text{all}}^{\text{novel}} = \mathcal{S}_{\text{train}}^{\text{novel}} \cup \mathcal{S}_{\text{gen}}^{\text{novel}}$
\State update $D'$, $G$ with $\mathcal{S}_{\text{all}}^{\text{novel}}$
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Self-paced Finetuning on Novel Classes}
The representation learning phase, which consists of training the StackGAN with an auxiliary classifier, yields the discriminator $D$.
As described, the discriminator is able to distinguish real from fake samples as well as to perform classification w.r.t. the base classes.
However, to classify w.r.t. novel classes as well, $D$ has to be adapted.
Specifically, the class-aware layer with $|\mathcal{C}_{\text{base}}|$ output neurons is replaced and reduced to $|\mathcal{C}_{\text{novel}}|$ output neurons, which are randomly initialized.
We refer to this modified discriminator as $D'$.
Now, using the notion of hallucinating additional samples, the network can be finetuned with generated images as well as training data for novel categories, i.e. with the data given by $\mathcal{S}_{\text{train}}^{\text{novel}} \cup \mathcal{S}_{\text{gen}}^{\text{novel}}$.
It should be noted that samples contained in $\mathcal{S}_{\text{gen}}^{\text{novel}}$ can be very noisy which can be attributed to the fact that $G$ does not always output high-quality images.
In order to alleviate that problem, we propose a self-paced learning strategy ensuring that only the best generated samples within $\mathcal{S}_{\text{gen}}^{\text{novel}}$ are used.
In particular, we employ the softmax activation for class-specific confidence scoring.
To this end, optimization is performed on the loss defined as,
\begin{eqnarray}
\max_{G} \min_{D, \boldsymbol{\alpha}} \mathcal{L}\left(D,G \mid I_{novel}, T_{novel} \right)\\\nonumber + \sum_{T \in {T}_{\text{novel}}} \alpha_{T} \mathbb{E}_{I_T \sim G(T) } [ \log D( I_T ) ] \nonumber
\label{eq:2}
\end{eqnarray}
\begin{eqnarray}
\text{subject to:} & 0 \leq \alpha_T \leq 1, \quad \sum_{T \in {T}_{\text{novel}}} \alpha_T \leq K, \nonumber
\end{eqnarray}
where $\mathcal{L}\left(D,G \mid I_{novel}, T_{novel} \right) $ is the auxiliary GAN loss on the joint set of generated and training data $\mathcal{S}_{\text{train}}^{\text{novel}} \cup \mathcal{S}_{\text{gen}}^{\text{novel}}$,
$G(T)$ is the tcGAN generator, and $\alpha_T$ is a soft selector for the images $I_T$ generated from textual descriptions $T$, where the descriptions come from a set of captions for novel categories $T_{novel}$ (i.e. $T \in {T}_{\text{novel}}$).
Since each $\alpha_T$ is a selector for the generated data, $K$ specifies the maximum number of generated samples to be included in $\mathcal{S}_{\text{gen}}^{\text{novel}}$ for the next finetuning step. Our pseudo-code is given in Algorithm~\ref{spl}.
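A compact Python rendering of Algorithm~\ref{spl} is sketched below; \verb'update' is a stub for the alternating $D'$/$G$ updates, and the ranking uses the class-$c$ softmax confidence of $D'$ as described above. All names are illustrative, not our exact implementation.
{\small\begin{verbatim}
# Illustrative sketch of Algorithm 1, not the exact implementation.
def update(D_prime, G, data):
    """Stub for the alternating D'/G update step."""
    pass

def self_paced_finetuning(G, D_prime, captions, novel_classes,
                          S_train_novel, K, n_iters):
    for _ in range(n_iters):
        S_gen_novel = []
        for c in novel_classes:
            candidates = [G(cap) for cap in captions[c]]
            # RANK: sort by D''s softmax confidence for class c
            ranked = sorted(candidates,
                            key=lambda img: D_prime(img)[c],
                            reverse=True)
            S_gen_novel += ranked[:K]          # TOP(., K)
        update(D_prime, G, S_train_novel + S_gen_novel)
    return D_prime
\end{verbatim}
}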
\subsubsection{Initialization for SPL}
To obtain a meaningful ranking in the self-paced learning phase, $D'$ has to be initialized on novel classes.
Again taking into account the setting of few-shot learning, we restrict the number of samples per class available to $n$ for doing this, i.e. use $\mathcal{S}_{\text{train}}^{\text{novel}}(n)$ instead of $\mathcal{S}_{\text{train}}^{\text{novel}}$.
Since only real images are used for initialization, the quality of this data is very high compared to the noise-prone generated samples.
Due to the limited number of samples, the initialized $D'$ will be weak on the classification task, but sufficiently powerful for performing an initial ranking of the generated images.
\subsubsection{Self-paced Adversarial Training}
The ability to rank generated images with the pre-trained $D'$ allows for data selection and guided optimization.
In our approach we specifically follow a self-paced learning strategy.
This entails iteratively choosing generated images that have highest probability in $D'$ for $\mathcal{C}_{\text{novel}}$, yielding a curated set of high-quality generated samples $\mathcal{S}_{\text{gen}}^{\text{novel}}$.
Finally, we aggregate original samples and generated images $\mathcal{S}_{\text{train}}^{\text{novel}} \cup \mathcal{S}_{\text{gen}}^{\text{novel}}$ for training, during which we alternately update $D'$ and $G$.
Doing so yields both a more accurate ranking as well as higher class prediction accuracy as the number of samples increases.
Ultimately, the approach summarized in algorithm~\ref{spl} learns a reliable classifier that performs well in few-shot learning scenarios.
\subsection{Datasets}
We test the applicability of our method on two fine-grained classification datasets, namely CUB \cite{WahCUB_200_2011} with bird data and Oxford-102~\cite{nilsback2008automated} containing flower data.
Specifically, the CUB bird dataset contains 11,788 images of 200 different bird species, with $\mathcal{I} \subset \mathbb{R}^{256\times256}$.
The data is split equally into training and test data.
As a consequence, samples are roughly equally distributed, with training and test each containing $\approx 30$ images per category.
Additionally, 10 short textual descriptions per image are provided by \cite{reed_learning_2016}.
Similar to \cite{zhang_stackgan++:_2017}, we use the text-encoder pre-trained by \cite{reed_learning_2016}, yielding a text embedding space $\mathcal T \subset \mathbb{R}^{1024}$.
Following \cite{zhang_stackgan++:_2017}, we split the data such that $|\mathcal{C}_{\text{base}}|=150$ and $|\mathcal{C}_{\text{novel}}|=50$.
To simulate few-shot learning, $n\in\{1,2,5,10,20\}$ images of $\mathcal{C}_{\text{novel}}$ are used for training, as proposed by \cite{hariharan_low-shot_2017}.
In contrast, the Oxford-102 dataset contains images of 102 different categories of flowers. Similar to the CUB-200 dataset, Reed et al.~\cite{reed_learning_2016} provide 10 textual descriptions per image. As for the CUB dataset, we use the text-encoder pre-trained by \cite{reed_learning_2016}, yielding a text embedding space $\mathcal T \subset \mathbb{R}^{1024}$. Following Zhang et al.~\cite{zhang_stackgan++:_2017}, we split the data such that $|\mathcal{C}_{\text{base}}|=82$ and $|\mathcal{C}_{\text{novel}}|=20$.
To simulate few-shot learning, $n\in\{1,2,5\}$ images of $\mathcal{C}_{\text{novel}}$ are used for training.
\begin{table}[t!]
\setlength{\tabcolsep}{5pt}
\centering{
\begin{tabular}{lccc} \toprule
& & n & \\
Model & 1 & 2 & 5\\ \midrule
Finetuning & 70.42 & 82.53 & 81.14 \\
Initialization & 75.26 & 89.45 & 89.97 \\
SPL-D'G & \textbf{78.37} & \textbf{91.18} & \textbf{92.21}
\\\bottomrule
\end{tabular}
}
\captionsetup{position=above}
\captionof{table}{Top-5 accuracy in percent for our model on Oxford-102 dataset in different settings (best results are in bold)}
\label{tab:results_oxford}
\end{table}
\begin{figure*}[ht]
\centering
\captionsetup{justification=centering, margin=-1cm}
\begin{tikzpicture}[scale = 1]%
\begin{axis}[
legend columns=2,
width=0.7\textwidth,
height=0.4\textwidth,
legend pos=south east,
major grid style={line width=.1pt,draw=gray!40},
grid=both,
xtick={0,3,6,...,30},
ytick={0.45, 0.475 ,0.5,...,0.6},
ymin=0.42,
xlabel={Iteration in self-paced learning phase},
ylabel= Accuracy,
axis y discontinuity=crunch,
xmin=0,xmax=30,
ymax=0.6]
\addplot+[mark=none] table [x=iteration, y=top5, col sep=comma] {data/evolution_DG.csv};
\addplot+[mark=none] table [x=iteration, y=top5, col sep=comma] {data/evolution_D.csv};
\addplot+[domain=0:30, mark=none, dashed]{0.4916};
\legend{D' and G, D', Initialization}
\end{axis}
\end{tikzpicture}%
\caption{ Top-5 Accuracy in every iteration for different strategies in the 1-shot learning scenario}
\label{fig:evolution}
\end{figure*}
\subsection{Algorithmic Details}
During representation learning, we train a StackGAN for $900$ epochs. Similar to \cite{zhang_stackgan++:_2017}, we use Adam \cite{kingma2014adam} for optimization.
We set the learning rate $\tau$ to $2 \times 10^{-4}$ and the batch size to 24 for both $G$ and $D$.
In the initialization phase for self-paced learning, we construct $D'$ by replacing the last layer of $D$ by a linear softmax layer.
The resulting network is then optimized using the cross-entropy loss function and a SGD optimizer with learning rate $\tau=10^{-3}$ and momentum $0.5$.
Batch size is set to 32 and training proceeds for 20 epochs.
Self-paced learning of $D'$ continues to use the same settings (i.e. SGD with $\tau=10^{-3}$ and momentum $0.5$, minimizing a cross-entropy loss).
Additionally, Adam's learning rate for $G$ is reduced to $2 \times 10^{-5}$.
In every iteration we choose exactly one generated image per category and perform training for 10 epochs.
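For reference, the settings above translate into the following PyTorch sketch; the \verb'Linear' modules merely stand in for the actual StackGAN generator and (modified) discriminator, so the snippet illustrates only the optimizer configuration, not our training code.
{\small\begin{verbatim}
# Hedged sketch of the optimizer configuration; placeholder modules
# stand in for the actual StackGAN networks.
import torch

G, D = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)
D_prime = torch.nn.Linear(8, 8)

# representation learning: Adam with lr = 2e-4 for G and D
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
# initialization and self-paced phases: SGD for D'
opt_Dp = torch.optim.SGD(D_prime.parameters(), lr=1e-3, momentum=0.5)
criterion = torch.nn.CrossEntropyLoss()
# during self-paced learning, G's Adam learning rate is reduced
opt_G_spl = torch.optim.Adam(G.parameters(), lr=2e-5)
\end{verbatim}
}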
\begin{table}
\setlength{\tabcolsep}{5pt}
\centering{
\begin{tabular}{lccc} \toprule
 & Top 1 & Top 3 & Top 5 \\ \midrule
SGM \cite{hariharan_low-shot_2017} & 19.08 & 40.55 & 48.89 \\
SGM+Hallucination \cite{hariharan_low-shot_2017} & 20.27 & \textbf{41.06} & 50.59 \\
Our proposed method & \textbf{24.90} & 37.59 & \textbf{57.67}
\\\bottomrule
\end{tabular}
}
\captionsetup{position=above}
\captionof{table}{Top-1, top-3 and top-5 accuracy of single modality models for the 1-shot learning task proposed by \cite{hariharan_low-shot_2017} (SGM-loss and SGM-loss with Hallucination) compared to our model (best results are in bold)}
\label{tab:comparison}
\end{table}
\subsection{Models}
In order to assess the performance of individual components, we perform an ablation study.
A simple approach for transfer learning is to make use of a pre-trained representation and then finetune that network on novel data.
We apply this strategy on a first baseline (\textbf{Finetuning}), for which we pre-train a classifier $T$ that has exactly the same architecture as $D$ on the base classes, followed by finetuning with the few instances of novel classes on $\mathcal{S}_{\text{train}}^{\text{novel}}$.
This meta-learning strategy learns meaningful representations on the base classes $\mathcal{C}_{\text{base}}$ that can be used for novel classes $\mathcal{C}_{\text{novel}}$.
A second baseline (\textbf{Initialization}) constitutes our first contribution.
We modify the discriminator $D$ of the StackGAN, which we obtain from the representation learning phase, to obtain $D'$ by exchanging the discriminator's last layer.
Finetuning is then performed on the real samples from novel classes $\mathcal{S}_{\text{train}}^{\text{novel}}$.
Note that the \textit{initialization} baseline uses $D$ which is pre-trained using the adversarial principle during the StackGAN training, in contrast to the \textit{finetuning} baseline that uses $T$ as pre-trained by a conventional classifier.
Afterwards, we iteratively add high-quality generated samples for novel categories $\mathcal{S}_{\text{gen}}^{\text{novel}}$ as described.
In a first self-paced experiment (\textbf{SPL-D'}) we update $D'$ using selected generated samples in every iteration.
In a second experiment, in addition to updating $D'$, we update $G$ in every iteration in order to be fully self-paced (\textbf{SPL-D'G}). Following previous approaches (e.g. \citep{hariharan_low-shot_2017}), we evaluate our approach by reporting the top-5 accuracy.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\textwidth]{figures/rankedImages.jpg}
\caption{(a) real images of birds and (b) ranked generated birds after different iterations}
\label{fig:lastFigure}
\end{figure*}
\subsection{Analysis of Self-paced Finetuning}
We run several additional experiments to further analyze the behavior of our method. For the following experiments we use the CUB bird dataset.
\subsection{Results of Ablation Study}
\subsubsection{CUB-200 Dataset}
We report top-1-, top-3- and top-5-accuracy for our method in different settings.
The results in top-5-accuracy are shown in Tab.~\ref{tab:results}.
We observe that the initialization baseline already provides a large margin over the finetuning baseline.
Both are finetuned exclusively on real images with the difference that our initialized $D'$ is pre-trained in an adversarial fashion on text and image data during the representation learning phase.
In contrast, the finetuning baseline is pre-trained only on image data without adversarial training.
Our results indicate that including text information in representation learning already provides accuracy gains.
Further, we observe that the self-paced finetuning phase improves classification accuracy.
In particular, the margin in a 1-shot scenario is large.
For most scenarios, by updating both $G$ and $D'$ we achieve higher accuracy than by only updating $D'$.
This observation indicates that generated images add class-discriminatory power through our self-paced finetuning strategy if $G$ is involved in training.
Our full self-paced sample selection procedure, including updates of both $D'$ and $G$, provides an accuracy boost of $17$, $20$ and $17$ percent in the challenging $1$-, $2$- and $5$-shot scenarios, respectively.
Also for $10$- and $20$-shot learning we outperform the baseline by more than $10$ percent.
We observe the same trends for top-1- and top-3 accuracy.
\subsubsection{Oxford-102 Dataset}
Similar to the experiments for CUB, we compare our method (1) to a \textit{Finetuning} baseline, i.e. pretrain a classifier on base classes and finetune it on novel classes, and (2) to an \textit{Initialization} model, i.e. finetune $D'$ obtained from representation learning on novel classes. For our method, we update both $D'$ and $G$ (SPL-D'G). Classification results in top-5-accuracy are given in Tab.~\ref{tab:results_oxford}. We observe trends similar to those on the CUB dataset. In all few-shot scenarios, we outperform the finetuning baseline by a large margin. Specifically, our method yields a performance boost of ca. 8\%, 10.5\% and 11\% in the challenging 1-, 2- and 5-shot scenarios compared to the finetuning baseline. The initialization for the self-paced learning phase provides performance gains as well. However, the full model yields the best results in every few-shot scenario.
\subsection{Comparison to Similar Work}
We compare our method to the single modality methods recently proposed by Hariharan and Girshick \cite{hariharan_low-shot_2017}. For the comparison in Tab.~\ref{tab:comparison} we use the CUB bird dataset. It can be observed that our approach outperforms the methods proposed by \cite{hariharan_low-shot_2017}, justifying the usage of multimodal data for training in few-shot scenarios.
\subsubsection{Evolution of classifiers during SPL}
In order to check the behavior of the self-paced learning phase, we study the classification accuracy across all iterations. In Fig.~\ref{fig:evolution} we report the top-5-accuracy for 30 iterations in the 1-shot learning scenario. The dashed line shows the accuracy after the initialization phase and can be interpreted as a lower bound for the self-paced learning phase. The blue line shows the accuracy for our method updating only $D'$, while the red line shows the accuracy for updating $D'$ and $G$. For the first iterations both models behave very similarly, but after a certain warm-up phase the model entailing updates of $G$ performs better. This indicates that updating $G$ leads to the synthesis of images of higher quality, which in turn further increases the classification accuracy.
\begin{figure*}[ht]
\centering
\captionsetup{justification=centering, margin=-1cm}
\begin{tikzpicture}[scale = 0.65]%
\begin{axis}[
legend pos=north east,
legend columns=2,
width=0.6\textwidth,
height=0.5\textwidth,
major grid style={line width=.1pt,draw=gray!40},
grid=none,
xtick={1,2,...,10},
ytick={0.2,0.3,0.4,...,1.0},
ymin=0.1,
xlabel={Ranked chunks of generated images},
ylabel= Accuracy,
bar width=0.3cm,
ymax=0.9,xmax=11]
\addplot+[ybar, mark=none, , pattern=north east lines,pattern color = red] table [x=step, y=top1, col sep=comma] {data/rankingAccuracy.csv};
\addplot+[ybar, mark=none] table [x=step, y=top5, col sep=comma] {data/rankingAccuracy.csv};
\legend{Top-1, Top-5}
\end{axis}
\end{tikzpicture}%
\begin{tikzpicture}[scale = 0.8]%
\begin{axis}[
legend columns=2,
width=0.6\textwidth,
height=0.4\textwidth,
legend pos=south east,
major grid style={line width=.1pt,draw=gray!40},
grid=both,
xtick={0,1,...,10},
ytick={1.0, 1.5 ,2.0,...,4.0},
ymin=1,
xlabel={Ranked chunks of generated images},
ylabel= Inception score,
xmin=1,xmax=10,
ymax=4.0]
\addplot+[mark=none] table [x=step, y=inception_score, col sep=comma] {data/inceptionScores.csv};
\end{axis}
\end{tikzpicture}%
\hspace{1cm}(a) \hspace{5.5cm} (b)
\caption{ (a) Classification accuracy for chunks and (b) inception score for chunks}
\label{fig:chunkResults}
\end{figure*}
\subsubsection{Effect of Ranking}
To check the suitability of $D'$ to rank the generated samples, we further analyze the behavior of the ranked samples. To this end, we split 30 ranked samples into chunks of size 3, yielding 10 ranked chunks. In a first experiment, we evaluate the class-discriminativeness of those chunks. In this regard, we use the initialized $D'$ and consider every single chunk as a test set. The classification results in top-1-accuracy and top-5-accuracy are shown in Fig.~\ref{fig:chunkResults}a. It can be observed that the classification accuracy for high-ranked chunks is higher at test time. This shows that the ranking works in the desired fashion, i.e. it prioritizes generated samples for which $D'$ is more confident w.r.t. correct class-discriminative information.
In a second experiment, we study the quality of the generated images across the ranked chunks. Note that ranking is performed based on class-discriminativeness alone and specifically does not take into account the quality of the image. Therefore, we use the inception score \cite{salimans_improved_2016}
to assess the quality of images generated by GANs.
As can be observed from Fig.~\ref{fig:chunkResults}b, the quality of the images remains constant across the chunks, validating that we do not suffer a quality drop by ranking images based on class-discriminativeness. Hence, our proposed ranking allows us to pick images that perform best for classification without any loss in the quality of the generated images. Fig.~\ref{fig:lastFigure} shows some generated images for one category ranked in different iterations. We can observe that the ranking improves over the iterations.
\subsection{Few-Shot Learning}
For learning deep networks using limited amounts of data, different approaches have been developed. Following Taigman et al.~\cite{taigman2014deepface}, Koch et al.~\cite{koch_siamese_2015} interpreted this task as a verification problem, i.e. given two samples, it has to be verified whether both samples belong to the same class. To this end, they employed siamese neural networks \cite{bromley1994signature} to compute the distance between the two samples and perform nearest neighbor classification in the learned embedding space.

Some recent works approach few-shot learning by striving to avoid overfitting through modifications to the loss function or the regularization term. Yoo et al.~\cite{yoo_efficient_2017} proposed a clustering of neurons on each layer of the network and calculated a single gradient for all members of a cluster during training to prevent overfitting. The optimal number of clusters per layer is determined by a reinforcement learning algorithm.

A more intuitive strategy is to approach few-shot learning on the data level, meaning that the performance of the model can be improved by collecting additional related data. Douze et al.~\cite{douze_low-shot_2017} proposed a semi-supervised approach in which a large unlabeled dataset containing similar images was included in addition to the original training set. This large collection of images was exploited to support label propagation in the few-shot learning scenario. Hariharan and Girshick~\cite{hariharan_low-shot_2017} combined both strategies (data-level and algorithm-level) by defining the squared gradient magnitude loss, which forces models to generalize well from only a few samples, on the one hand, and by generating new images through hallucinating features on the other hand. For the latter, they trained a model to find common transformations between existing images that can be applied to new images to generate new training data (see also \cite{wang_low-shot_2018}).

Other recent approaches to few-shot learning have leveraged meta-learning strategies. Ravi and Larochelle~\cite{ravi_optimization_2017} trained a long short-term memory (LSTM) network as a meta-learner that learns the exact optimization algorithm to train a learner neural network that performs the classification in a few-shot learning setting. This method was proposed due to the observation that the update function of standard optimization algorithms like SGD is similar to the update of the cell state of an LSTM. Bertinetto et al.~\cite{bertinetto_learning_2016} trained a meta-learner feed-forward neural network that predicts the parameters of another, discriminative feed-forward neural network in a few-shot learning scenario.

Another tool that has recently been applied successfully to few-shot learning is attention. Vinyals et al.~\cite{vinyals_matching_2016} introduced matching networks for one-shot learning tasks. This network is able to apply an attention mechanism over embeddings of labeled samples in order to classify unlabeled samples. One further outcome of this work is that it is helpful to mimic the one-shot learning setting already during training by defining mini-batches, called episodes, with subsampled classes. Snell et al.~\cite{snell_prototypical_2017} generalize this approach by proposing prototypical networks. Prototypical networks search for a non-linear embedding space (the prototype) in which classes can be represented as the mean of all corresponding samples. Classification is then performed by finding the closest prototype in the embedding space. In the one-shot scenario, prototypical networks and matching networks are equivalent.
\subsection{Multimodal Learning}
\cite{kiros_unifying_2014} propose to align visual and semantic information in a joint embedding space using an encoder-decoder pipeline.
Building on this, \cite{faghri_vse++:_2017} improve upon this mixed representation by incorporating a triplet ranking loss.
\cite{karpathy_deep_2015} generate textual image descriptions. Their model infers latent alignments between regions of images and segments of sentences of their respective descriptions.
\cite{reed_learning_2016} focus on fine-grained visual descriptions.
They present an end-to-end trainable deep structured joint embedding trained on two datasets containing fine-grained visual descriptions.
In addition to multimodal embeddings, another related field using data from different modalities is text-to-image generation.
\cite{reed16_gen} study image synthesis based on textual information. \cite{zhang_stackgan++:_2017} greatly improve the quality of generated images to a photo-realistic high-resolution level by stacking multiple GANs (StackGANs).
Extensions of StackGANs include an end-to-end trainable version \cite{zhang_stackgan++:_2017} and a variant with an attention mechanism over the textual input \citep{xu_attngan:_2017}. Sharma et al. \cite{sharma2018chatpainter} extended the conditioning by involving dialogue data and further improved the image quality.
Besides the use of GANs for conditioned image generation, other work has employed Variational Autoencoders \cite{kingma2013auto} to generate images \cite{mishra2017generative}. However, these models are conditioned on attribute vectors instead of text.
\subsection{Learning from Simple to Complex}
Recently, many studies have shown the benefits of organizing the training examples
from simple to complex for model training. Bengio et al. \cite{bengio2009curriculum} first proposed a general learning strategy: curriculum learning. They show that suitably sorting the training samples, from the easiest to the most difficult, and iteratively training a classifier starting with a subset of easy samples
can be useful to find better local minima. In \cite{chen2015webly}, easy and difficult images
are provided for training a CNN in order to learn generic CNN features using webly annotated data. Note that in this and in all the other curriculum-learning-based approaches, the order of the samples is provided by an external supervisory signal, taking into account human domain-specific expertise.
Curriculum learning was extended to self-paced learning by Kumar et al. \cite{kumar2010self}. They proposed the self-paced learning framework, automatically expanding the training pool in an easy-to-hard manner by converting the curriculum mechanism into a concise regularization term. Curriculum learning uses human design to organize the examples, and self-paced learning can automatically choose training examples according to the loss. Supancic et al. \cite{supancic2013self} adopt a similar framework in a tracking scenario and train a detector using a subset of video frames, showing that this selection is important to avoid drifting.
In \cite{zhang2017bridging} saliency is used to progressively select samples in weakly supervised object detection.
Although some of these self-paced methods use pre-trained CNN-based features to represent samples (e.g., \cite{liang2015towards}), or use a CNN as the classifier directly (e.g., \cite{sangineto2016self}), none of them formulates the self-paced strategy in a GAN training protocol as we do in this paper.
\section{Introduction}
\label{section:introduction}
The problem of subspace clustering is to find a nonlinear model of the form $\mathcal{U}=\bigcup_{i\in I}S_i$, where $\left\{S_i\right\}_{i\in I}$ is a set of subspaces, that is nearest to a set of data $\textbf{W}=\left\{w_1,...,w_N\right\} \subset \mathbb{R}^d$. The model can then be used to classify the data $\textbf{W}$ into classes called clusters.
In many engineering and mathematics applications, data lives in a union of low dimensional subspaces \cite{Lu08,Kanatani03,Akram09,Vidalbook}. For instance, consider a moving affine camera that captures $F$ frames of a scene that contains multiple moving objects. Let $p$ be a point on one of these objects and let $x_i(p), y_i(p)$ be the coordinates of $p$ in frame $i$. Define the {\em trajectory vector} of $p$ as the vector $w(p)=(x_1(p),y_1(p),x_2(p),y_2(p),\dots,x_F(p),y_F(p))^t$ in $\mathbb R^{2F}$. It can be shown that the trajectory vectors of all points of an object in a video belong to a vector subspace of $\mathbb R^{2F}$ of dimension no larger than $4$ \cite{Kanatani01,Akram10}. Thus, trajectory vectors in videos can be modeled by a union $\mathcal{M}=\cup_{i\in I}V_i$ of $l$ subspaces, where $l$ is the number of moving objects (the background itself counts as one motion). It can also be shown that human facial motion and other non-rigid motions can be approximated by linear subspaces \cite{Bregler00,Brand01}. Another clustering problem that can be modeled as a union of subspaces is recognition of faces. Specifically, the set of all two dimensional images of a given face $i$, obtained under different illuminations and facial positions, can be modeled as a set of vectors belonging to a low dimensional subspace $S_i$ living in a higher dimensional space $\mathbb{R}^d$ \cite{Basri01,Ho03,Vidalbook}. A set of such images from different faces is then a union $\mathcal{U}=\bigcup_{i\in I}S_i$. Similar nonlinear models arise in sampling theory where $\mathbb{R}^d$ is replaced by an infinite dimensional Hilbert space $\mathcal{H}$, e.g., $L^2(\mathbb{R}^d)$ \cite{AT11,Akram08,Lu08,Maravic05}.
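As a small illustration, the following numpy sketch (names hypothetical) assembles the trajectory matrix whose columns are the vectors $w(p)$ described above.
{\small\begin{verbatim}
# Sketch: building trajectory vectors from tracked feature points;
# coords[f][p] holds the (x, y) position of point p in frame f.
import numpy as np

def trajectory_matrix(coords):
    F, N = len(coords), len(coords[0])
    W = np.zeros((2 * F, N))
    for f in range(F):
        for p in range(N):
            W[2 * f, p], W[2 * f + 1, p] = coords[f][p]
    return W   # column p is the trajectory vector w(p) in R^{2F}
\end{verbatim}
}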
\subsection{Subspace Segmentation Problem}
\label {SSP}
The goal of subspace clustering is to identify all of the subspaces that a set of data $\textbf{W}=\left\{w_1,...,w_N\right\} \in \mathbb{R}^d$ is drawn from and assign each data point $w_i$ to the subspace it belongs to. The number of subspaces, their dimensions, and a basis for each subspace are to be determined. The subspace clustering or segmentation problem can be stated as follows:
\begin{changemargin}{1cm}{1cm}
Let $\mathcal{U}=\bigcup_{i=1}^{M}S_i$ where $\left\{S_i \subset \mathcal{H}\right\}_{i=1}^{M}$ is a set of subspaces of a Hilbert space $\mathcal{H}$. Let $\textbf{W}=\left\{w_j \in \mathcal{H}\right\}_{j=1}^{N}$ be a set of data points drawn from $\mathcal{U}$. Then,
\begin{changemargin}{0.1cm}{0.1cm}
\begin{enumerate}
\item determine the number of subspaces $M$,
\item determine the set of dimensions $\left\{d_i\right\}_{i=1}^{M}$,
\item find an orthonormal basis for each subspace $S_i$,
\item collect the data points belonging to the same subspace into the same cluster. \\
\end{enumerate}
\end{changemargin}
\end{changemargin}
Note that often the data may be corrupted by noise, may have outliers or the data may not be complete, e.g., there may be missing data points. In some subspace clustering problems, the number $M$ of subspaces or the dimensions of the subspaces $\{d_i\}_{i=1}^{M}$ are known. A number of approaches have been devised to solve the problem above or some of its special cases.
\subsubsection{Sparsity Methods}
Elhamifar \textit{et al.} developed an algorithm for linear and affine subspace clustering using sparse representation of vectors \cite{Elhamifar09,Elhamifar10}. This method, combined with spectral clustering, gives good results for motion segmentation, and it is more general than Eldar's work in compressed sensing \cite{Eldar09}. Another method related to compressed sensing, by Liu \textit{et al.} \cite{Liu10, Liu10_2}, finds the lowest rank representation of the data matrix. The lowest rank representation is then used to define the similarity of an undirected graph, which is then followed by spectral clustering. Favaro \textit{et al.} \cite{Favaro11} extend \cite{Elhamifar09,Elhamifar10,Liu10,Liu10_2}.
\subsubsection{Algebraic Methods}
Algebraic methods have also been used for solving the subspace clustering problem. The
Generalized Principal Component Analysis (GPCA) is one such method \cite{Vidalbook,Vidal05,Vidal07}, and it can distinguish subspaces of different dimensions. Since it is algebraic, it is computationally inexpensive; however, its complexity increases exponentially as the number of subspaces and their dimensions increase. It is also sensitive to noise and outliers. The Robust Algebraic Segmentation is a more specialized algebraic method developed by Rao \textit{et al.} \cite{Rao10} to partition image correspondences to the motions in a 3-D dynamic scene (that contains 3-D rigid body and 2-D planar structures) under perspective camera projection.
\subsubsection{Iterative and Statistical Methods}
Iterative methods have also been employed for the subspace clustering problem. For example, the nonlinear least squares \cite{Akram08,Akram09} and K-subspaces \cite{Tseng00} start with an initial estimation of subspaces (or estimation of the bases of the subspaces). Then, a cost function reflecting the ``distance'' of a point to each subspace is computed and the point is assigned to its closest subspace. After that, each cluster of data is used to reestimate each subspace. The procedure is repeated until the segmentation of data points does not change. These methods, however, are sensitive to the initialization and require a good initial partition for convergence to a global minimum.
The statistical methods such as Multi Stage Learning (MSL) \cite{Kanatani03,Gruber04} are typically based on Expectation Maximization (EM) \cite{Candillier05}. The union of subspaces is modeled by a mixture of probability distributions. For example, each subspace is modeled by a Gaussian distribution. The model parameters are then estimated using \textit{Maximum Likelihood Estimation}. This is done by using a two-step process that optimizes the \textit{log-likelihood} of the model, which depends on some hidden (latent) variables. In the \textit{E-Step} (Expectation), the expectation of the \textit{log-likelihood} is computed using the current estimate of the latent variables. In the \textit{M-Step} (Maximization), the values of the latent variables are updated by maximizing the expectation of the \textit{log-likelihood}. As in the case of the iterative methods, statistical methods depend highly on the initialization of the model parameters or segmentation, and they assume that the number of subspaces as well as their dimensions are known.
The Random Sample Consensus (RANSAC) \cite{Bolles81}, which has been applied to numerous computer vision problems, is successful in dealing with noise and outliers. But it is a specialized algorithm and assumes that the subspaces have the same dimension and that this dimension is known.
\subsubsection{Spectral Clustering Methods}
\label{SpectralMethods}
Spectral clustering \cite{Luxburg07} is often used in conjunction with other methods as the final step in clustering. Some of the latest subspace clustering algorithms (such as \cite{Elhamifar09,Elhamifar10,Lerman09_3}) aim at defining an appropriate similarity matrix between data points, which can then be used for further processing using the spectral clustering method. An application of spectral clustering to motion segmentation can be found in \cite{Schnorr09}. Spectral curvature clustering \cite{Lerman09,Lerman09_2} is a variant of spectral clustering. \cite{Zelnik04} provides a spectral clustering algorithm that aims at reducing the computational complexity. The motion segmentation algorithm developed by Yan and Pollefeys \cite{Yan06} first estimates a local linear manifold for each trajectory and then computes an affinity matrix based on the principal subspace angles between each pair of local linear manifolds. The algorithm then uses spectral clustering for segmenting the trajectories of independent, articulated, rigid, and non-rigid body motions. \cite{Vidal10} gives a detailed treatment of various related algorithms.
\subsection{Motion Segmentation Problem}
The appendix gives a detailed treatment of motion segmentation as a special case of the subspace segmentation problem. First, a data matrix $W_{2F\times N}$ is constructed using $N$ feature points that are tracked across $F$ frames. Then, each column of $W$ (i.e., the trajectory vector of a feature point) is treated as a data point and it is shown that all of the data points that correspond to the same moving object lie in an at most 4-dimensional subspace of $\mathbb{R}^{2F}$.
\subsection{Paper Contributions}
\begin{enumerate}
\item This paper presents a clustering algorithm for high dimensional data that are drawn from a union of low dimensional subspaces of equal and known dimensions. The algorithm is applicable to the motion segmentation problem and uses some fundamental linear algebra concepts. Some of our ideas are similar to those of Yan and Pollefeys described above in Section \ref{SpectralMethods}. However, our algorithm differs from theirs fundamentally as described below:
\begin{itemize}
\item Yan and Pollefeys' method estimates a subspace $S_i$ for each point $x_i$, and then computes the principal angles between those subspaces as an affinity measure. In our work, we also estimate a subspace for each point; however, these local subspaces are used differently. They are used to compute the distance from each point $x_j$ to the local subspace $S_i$ associated with the data point $x_i$.
\item In their method, an exponential function for the affinity of two points $x_i$ and $x_j$ is used, and this exponential function depends on the principal angles between the subspaces $S_i$ and $S_j$ that are associated with $x_i$ and $x_j$, respectively. In our case, the affinity measure is different. We first find the distance between $x_j$ and $S_i$ and then apply a threshold, computed from the data, to obtain a binary similarity matrix for all data points.
\item The method of Yan and Pollefeys uses spectral clustering on the normalized graph Laplacian matrix of the similarity matrix they propose. However, our approach does not use spectral clustering on the normalized graph Laplacian of our similarity matrix. Instead, our constructed binary similarity matrix converts the original data clustering problem to a simpler clustering of data from 1-dimensional subspaces, which can be solved by any traditional data clustering algorithm.
\end{itemize}
\item Our algorithm is reliable in the presence of noise, and applied to the Hopkins 155 Dataset, it generates the best results to date for motion segmentation. The two motion, three motion, and overall segmentation rates for the video sequences are 99.43$\%$, 98.69$\%$, and 99.24$\%$, respectively.
\item Many of the subspace segmentation algorithms use SVD to represent the data matrix $W$ as $W=U\Sigma V^t$ and then replace $W$ with the first $r$ rows of $V^t$, where $r$ is the effective rank of $W$. This paper provides a formal justification for this in Proposition \ref{proposition1}.
\end{enumerate}
\subsection{Paper Organization}
The organization of the paper is as follows: Section \ref{preliminaries} gives some preliminaries. In Section \ref{S3}, we devise an algorithm for the subspace segmentation problem in the special case where the subspaces have equal and known dimensions. In Section \ref{experimentalresults}, we apply our algorithm to the motion segmentation problem, test it on the Hopkins 155 Datasets, explain the experimental procedure, and present the experimental results.
\section{Preliminaries}
\label{preliminaries}
In this section, we present Proposition \ref{proposition1}, which will be used later to justify that a data matrix $W$ whose columns represent data points can be replaced with a lower rank matrix after computing its SVD (i.e. $W=U\Sigma V^t$). It can be paraphrased by saying that for any matrices $A,B,C$ with $C=AB$, a cluster of the columns of $B$ is also a cluster of the columns of $C$. A cluster of $C$, however, is not necessarily a cluster of $B$, unless $A$ has full rank:
\begin{proposition}
\label{proposition1}
Let $A$ and $B$ be ${m\times n}$ and $n\times k$ matrices. Let $C=AB$. Assume $J \subset \left\{1,2,\dotsb,k\right\}$.
\begin {enumerate}
\item If $b_i\in \text{span}\left\{b_j : j\in J\right\}$ then $c_i\in \text{span}\left\{c_j : j\in J\right\}$.
\item If $A$ is full rank and $m\geq n$ then
$b_i\in \text{span}\left\{b_j : j\in J\right\} \Longleftrightarrow c_i\in \text{span}\left\{c_j : j\in J\right\}$
\end {enumerate}
\end{proposition}
\begin{proof} The first part can be proved by the simple matrix manipulation
\begin{align}
AB &= A\left[ \begin{matrix} b_1 & \dotsb & b_i & \dotsb& b_k \end{matrix} \right] \nonumber\\
&=\left[ \begin{matrix} Ab_1 & \dotsb & Ab_i & \dotsb& Ab_k \end{matrix} \right] \nonumber \\
&=\left[ \begin{matrix} Ab_1 & \dotsb & A\sum_{j\in J}k_jb_j & \dotsb& Ab_k \end{matrix} \right] \nonumber \\
&=\left[ \begin{matrix} Ab_1 & \dotsb & \sum_{j\in J}k_jAb_j & \dotsb& Ab_k \end{matrix} \right] \nonumber \\
&=\left[ \begin{matrix} c_1 & \dotsb & \sum_{j\in J}k_jc_j & \dotsb& c_k \end{matrix} \right]
\end{align}
For the second part, we note that $A^tA$ is invertible and $(A^tA)^{-1}A^tC=B$. We then apply part 1 of the proposition. Note that the same result clearly holds if $A$ is invertible.
\end{proof}
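A quick numerical illustration of the proposition (a sketch, not part of the proof):
{\small\begin{verbatim}
# Check Proposition 1 numerically: a linear dependency among the
# columns of B is preserved in C = AB when A has full column rank.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))          # full rank, m >= n
B = rng.standard_normal((4, 5))
B[:, 4] = B[:, 0] + 2.0 * B[:, 1]        # b_5 in span{b_1, b_2}
C = A @ B
coef, *_ = np.linalg.lstsq(C[:, :2], C[:, 4], rcond=None)
print(np.allclose(C[:, :2] @ coef, C[:, 4]))  # True: c_5 in span{c_1, c_2}
\end{verbatim}
}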
The proposition above suggests that, for the purpose of column clustering, we can replace a matrix $C$ by the matrix $B$ as long as $A$ has the stated properties. Thus, by choosing $A$ appropriately, the matrix $C$ can be replaced by a more suitable matrix $B$, e.g. one that has fewer rows, is better conditioned, or is in a format where columns can be easily clustered.
\section{Nearness to Local Subspace Approach}
\label {S3}
In this section, we develop a specialized algorithm for subspace segmentation and data clustering when the dimensions of the subspaces are equal and known. First, a local subspace is estimated for each data point. Then, the distances between the local subspaces and points are computed and a distance matrix is generated. This is followed by the construction of a binary similarity matrix obtained by applying a data-driven threshold to the distance matrix. Finally, the segmentation problem is converted to a one-dimensional data clustering problem. The precise steps are described in Algorithm \ref{algo:greatcircle} and in the explanation that follows.
\subsection{Algorithm for Subspace Segmentation for Subspaces of Equal and Known Dimensions}
The algorithm for subspace segmentation is given in Algorithm \ref{algo:greatcircle}. We assume that the subspaces have dimension $d$ (for motion segmentation, $d=4$). The details of the various steps are:
\begin{algorithm}
\caption{Subspace Segmentation}
\label{algo:greatcircle}
\begin{algorithmic}[1]
\REQUIRE The $m\times N$ data matrix $W$ whose columns are drawn from subspaces of dimension $d$
\ENSURE Clustering of the feature points.
\STATE Compute the SVD of $W$ as in Equation \eqref{eq:svd}.
\STATE Estimate the rank of $W$ (denoted by $r$) if it is not known. For example, using Equation \eqref{eq:rankestimation} or any other appropriate choice.
\STATE Compute $(V_r)^t$ consisting of the first $r$ rows of $V^t$.
\STATE Normalize the columns of $(V_r)^t$.
\STATE Replace the data matrix $W$ with $(V_r)^t$.
\STATE Find the angle between the column vectors of $W$ and represent it as a matrix. \COMMENT{i.e., $\arccos(W^t W)$.}
\STATE Sort the angles and find the closest neighbors of column vector.
\FORALL{Column vector $x_i$ of $W$}
\STATE Find the local subspace for the set consisting of $x_i$ and $k$ neighbors (see Equation \eqref{eq:hypercircle}). \COMMENT{Theoretically, $k$ is at least $d-1$. We can use the least square approximation for the subspace (see the section \textit{Local Subspace Estimation}). Let $A_i$ denote the matrix whose columns form an orthonormal bases for the local subspace associated with $x_i$.}
\ENDFOR
\FOR{$i=1$ to N}
\FOR{$j=1$ to N}
\STATE Define $H = (d_{ij}) =\left(||x_j-A_{i}A_{i}^t x_j||_p+||x_i-A_{j}A_{j}^t x_i||_p\right)/2$
\ENDFOR
\ENDFOR
\COMMENT {Build the distance matrix}
\STATE Sort the entries of the $N\times N$ matrix $H$ from smallest to largest into the vector $h$ and set the threshold $\eta$ to the value of the $T^{th}$ entry of the sorted and normalized vector $h$, where $T$ is such that $\|\chi_{[T,N^2]}-h\|_2$ is minimized, and where $\chi_{[T,N^2]}$ is the characteristic function of the discrete set $[T,N^2]$.
\STATE Construct a similarity matrix $S$ by setting all entries of $H$ less than threshold $\eta$ to 1 and by setting all other entries to 0. \COMMENT {Build the binary similarity matrix}
\STATE Normalize the rows of $S$ using $l_1$-norm.
\STATE Perform SVD $S^t = U_n \Sigma_n (V_n)^t$.
\STATE Cluster the columns of $\Sigma_n(V_n)^t$ using k-means. \COMMENT{$\Sigma_n(V_n)^t$ is the projection onto the span of $U_n$.}
\end{algorithmic}
\end{algorithm}
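For orientation, the whole pipeline can be sketched in a few lines of Python (an illustrative driver of ours; it calls the helper functions \texttt{estimate\_rank}, \texttt{local\_subspaces}, \texttt{binary\_similarity} and \texttt{segment} sketched later in this section):
\begin{verbatim}
import numpy as np

def nls(W, d, n, k=3, r=None):
    """Illustrative NLS driver: dimensionality reduction, local
    subspaces, binary similarity, segmentation into n clusters."""
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    if r is None:
        r = estimate_rank(W)               # rank estimate, Steps 1-2
    X = Vt[:r]                             # (V_r)^t, Step 3
    X = X / np.linalg.norm(X, axis=0)      # columns on the unit sphere, Step 4
    bases = local_subspaces(X, d, k)       # one local subspace per point
    S = binary_similarity(X, bases)        # distance matrix + threshold
    return segment(S, n)                   # final clustering
\end{verbatim}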
\emph{Dimensionality Reduction and Normalization:}
Let $W$ be an $m\times N$ data matrix whose columns are drawn from a union of subspaces of dimensions at most $d$, possibly perturbed by noise. In order to reduce the dimensionality of the problem, we compute the SVD of $W$
\begin {equation}
\label{eq:svd}
W=U\Sigma V^t
\end {equation}
where $U=\left[ \begin{matrix} u_1 & u_2 & \dotsb & u_m \end{matrix} \right] $ is an $m\times m$ matrix, $V=\left[ \begin{matrix} v_1&v_2 & \dotsb & v_N \end{matrix} \right]$ is an $N\times N$ matrix, and $\Sigma$ is an $m\times N$ diagonal matrix with diagonal entries $\sigma_1,\dots, \sigma_l$, where $l=\min \{m,N\}$.
If the rank $r$ of $W$ is not known, one can estimate the effective rank with the model selection algorithm of \cite{Yan06}:
\begin{equation}
r=\text{argmin}_r\left(\frac{\sigma_{r+1}^2}{\sum_{i=1}^r\sigma_i^2}+\kappa r\right)
\label{eq:rankestimation}
\end{equation}
where $\sigma_j$ is the $j^{th}$ singular value and $\kappa$ is a suitable constant. Another possible model selection algorithm can be found in \cite{Zappella11}. $U_r\Sigma_r(V_r)^t$ is the best rank-$r$ approximation of $W=U\Sigma V^t$, where $U_r$ is the matrix that has the first $r$ columns of $U$ as its columns and $V_r$ is the matrix that has the first $r$ columns of $V$ as its columns (so that $(V_r)^t$ consists of the first $r$ rows of $V^t$). In the case of motion segmentation, if there are $k$ independent motions across the frames captured by a moving camera, the rank of $W$ is between $2(k+1)$ and $4(k+1)$.
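A possible \texttt{numpy} implementation of this rank estimate (our sketch; the function name and the default value of the constant $\kappa$ are ours) is the following:
\begin{verbatim}
import numpy as np

def estimate_rank(W, kappa=1e-2):
    """Model selection rank estimate of Equation (eq:rankestimation)."""
    s = np.linalg.svd(W, compute_uv=False)
    # for r = 1, ..., l-1: sigma_{r+1}^2 / sum_{i<=r} sigma_i^2 + kappa * r
    costs = [s[r]**2 / np.sum(s[:r]**2) + kappa * r
             for r in range(1, len(s))]          # s[r] is sigma_{r+1}
    return int(np.argmin(costs)) + 1
\end{verbatim}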
We can now replace the data matrix $W$ with the matrix $(V_r)^t$ that consists of the first $r$ rows of $V^t$ (thereby reducing the dimensionality of the data). This step is justified by Proposition \ref{proposition1}. Also, \cite{Vidal05} discusses segmentation-preserving projections and states that the number of subspaces and their dimensions are preserved by random projections, except for a zero measure set of projections. It should also be noted that this step reduces additive noise as well, especially in the case of light-tailed noise, e.g., Gaussian noise. The number of subspaces corresponds to the number of moving objects. Vidal \textit{et al.} \cite{Vidal08} use an alternative method (the power method) for SVD to project incomplete motion data (trajectories) into a 5-dimensional subspace and then apply GPCA and spectral clustering for subspace segmentation. Dimensionality reduction corresponds to Steps 1, 2, and 3 in Algorithm \ref{algo:greatcircle}.
Another type of data reduction is normalization. Specifically, the columns of $(V_r)^t$ are normalized to lie on the unit sphere $\mathbb{S}^{r-1}$. By projecting the subspaces onto the unit sphere, we effectively reduce the dimensionality of the data by one. Moreover, the normalization gives the columns of the data matrix equal contribution to the description of the subspaces. Note that the normalization can be done using $l_p$ norms of the columns of $(V_r)^t$. This normalization procedure corresponds to Steps 4 and 5 in Algorithm \ref{algo:greatcircle}. \\ \\
\emph{\textbf{Local Subspace Estimation:}}
The data points (i.e., each column vector of $(V_r)^t$) that are close to each other are likely to belong to the same subspace. For this reason, we estimate a local subspace for each data point using its closest neighbors. This can be done in different ways. For example, if the $l_2$-norm is used for normalization, we can find the angles between the points, i.e., we can compute the matrix $\arccos(V_r (V_r)^t)$. Then we can sort the angles and find the closest neighbors of each point. If we use the $l_p$-norm for normalization, we can generate a distance matrix $(a_{ij})=(||x_i-x_j||_p)$ and then sort each column of the distance matrix to find the neighbors of each $x_i$, which is the $i^{th}$ column of $(V_r)^t$.
Once the distance matrix between the points is generated, we can find, for each point $x_i$, a set of $k+1\ge d$ points $\left\{x_i, x_{i_1},...,x_{i_k}\right\}$ consisting of $x_i$ and its $k$ closest neighbors. Then we generate the $d$-dimensional subspace that is nearest (in the least-squares sense) to the data $\left\{x_i, x_{i_1},...,x_{i_k}\right\}$. This is accomplished using the SVD
\begin{equation}
\label{eq:hypercircle}
X=\left[x_i \; x_{i_1} \; ... \; x_{i_k}\right] = A\Sigma B^t.
\end{equation}
Let $A_i$ denote the matrix of the first $d$ columns of $A$ associated with $x_i$. Then, the column space $\textsl{C}(A_i)$ is the $d$-dimensional subspace nearest to $\left\{x_i, x_{i_1},...,x_{i_k}\right\}$.
Local subspace estimation corresponds to Steps 6 to 10 in Algorithm \ref{algo:greatcircle}. \\ \\
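The following is a possible \texttt{numpy} sketch of this step (ours; it assumes $l_2$ normalization, so that \texttt{X} plays the role of $(V_r)^t$ with unit-norm columns):
\begin{verbatim}
import numpy as np

def local_subspaces(X, d, k):
    """For each column x_i of X, fit the nearest d-dimensional
    subspace to x_i and its k closest neighbors (requires k + 1 >= d)."""
    angles = np.arccos(np.clip(X.T @ X, -1.0, 1.0))  # pairwise angles
    bases = []
    for i in range(X.shape[1]):
        nbrs = np.argsort(angles[:, i])[:k + 1]      # x_i plus k neighbors
        A, _, _ = np.linalg.svd(X[:, nbrs], full_matrices=False)
        bases.append(A[:, :d])   # first d left singular vectors: basis of S_i
    return bases
\end{verbatim}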
\emph{\textbf{Construction of Binary Similarity Matrix:}}
So far, we have associated a local subspace $S_i$ to each point $x_i$. Ideally, the points and only those points that belong to the same subspace as $x_i$ should have zero distance from $S_i$. This suggests computing the distance of each point $x_j$ to the local subspace $S_i$ and forming a distance matrix $H$.
The distance matrix $H$ is generated as $H = (d_{ij}) = \left(||x_j-A_{i}A_{i}^t x_j||_p+||x_i-A_{j}A_{j}^t x_i||_p\right)/2$.\\
A convenient choice of $p$ is 2. Note that as $d_{ij}$ decreases, the probability that $x_j$ lies on the same subspace as $x_i$ increases. Moreover, for $p=2$, $||x_j-A_{i}A_{i}^t x_j||_2$ is the Euclidean distance of $x_j$ to the subspace associated with $x_i$.
Since we are not in the ideal case, a point $x_j$ that belongs to the same subspace as $x_i$ may have non-zero distance to $S_i$. However, this distance is likely to be small compared to the distance between $x_j$ and $S_k$ if $x_j$ and $x_k$ do not belong to the same subspace. This suggests that we compute a threshold that will distinguish between these two cases and transform the distance matrix into a binary matrix in which a one in the $(i,j)$ entry means that $x_i$ and $x_j$ are likely to belong to the same subspace, whereas a zero in the $(i,j)$ entry means that $x_i$ and $x_j$ are not likely to belong to the same subspace.
To do this, we convert the distance matrix $H=(d_{ij})_{N\times N}$ into a binary similarity matrix $S=(s_{ij})$. This is done by applying a data-driven thresholding as follows:
\begin{enumerate}
\item Create a vector $h$ that contains the sorted entries of $H_{N\times N}$ from smallest to largest. Scale $h$ so that its smallest value is zero and its largest value is one.
\item Set the threshold $\eta$ to the value of the $T^{th}$ entry of the sorted vector $h$, where $T$ is such that $\|\chi_{[T,N^2]}-h\|_2$ is minimized, and where $\chi_{[T,N^2]}$ is the characteristic function of the discrete set $[T,N^2]$.
If the number of points in each subspace is approximately equal, then we would expect about $\frac{N}{n}$ points in each subspace, and we would expect $\frac{N^2}{n^2}$ small entries (zero entries ideally). However, this may not be the case in general. For this reason, we compute the data-driven threshold $\eta$ that distinguishes the small entries from the large entries (a code sketch of the whole thresholding step follows this list).
\item Create a similarity matrix $S$ from $H$ such that all entries of $H$ less than the threshold $\eta$ are set to 1 and the others are set to 0.
\end{enumerate}
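A compact sketch of the distance matrix and of the data-driven threshold (our code, written for $p=2$; \texttt{bases} is the list of local orthonormal bases $A_i$ computed in the previous step):
\begin{verbatim}
import numpy as np

def binary_similarity(X, bases):
    N = X.shape[1]
    H = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            ri = X[:, j] - bases[i] @ (bases[i].T @ X[:, j])
            rj = X[:, i] - bases[j] @ (bases[j].T @ X[:, i])
            H[i, j] = 0.5 * (np.linalg.norm(ri) + np.linalg.norm(rj))
    h = np.sort(H.ravel())
    h = (h - h[0]) / (h[-1] - h[0])          # scale h into [0, 1]
    # cost(T) = sum_{t<T} h_t^2 + sum_{t>=T} (1-h_t)^2 = ||chi_[T,N^2] - h||^2
    left = np.concatenate(([0.0], np.cumsum(h**2)[:-1]))
    g = np.cumsum((1.0 - h)**2)
    right = g[-1] - np.concatenate(([0.0], g[:-1]))
    eta = h[np.argmin(left + right)]         # data-driven threshold
    Hn = (H - H.min()) / (H.max() - H.min())
    return (Hn < eta).astype(int)            # 1 = likely same subspace
\end{verbatim}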
The construction of the binary similarity matrix corresponds to Steps 11 to 17 in Algorithm \ref{algo:greatcircle}. In \cite{Yan06}, Yan and Pollefeys use the chordal distance (as defined in~\cite{Wong67}) between the subspaces $\mathcal{F}(x_i)$ and $\mathcal{G}(x_j)$ as a measure of the distance between points $x_i$ and $x_j$
\begin{equation}
\label{distance_pollefey}
d_{c}^2(\mathcal{F},\mathcal{G}) = \sum_{i=1}^p \sin^2(\theta_i)
\end{equation}
where $\{\theta_i\}_{i=1}^{p}$ are the principal angles between the $p$-dimensional local subspaces $\mathcal{F}$ and $\mathcal{G}$ with $\theta_1\leq\dots\leq \theta_p$. In this approach, the distance between any pair of points from $\mathcal{F}$ and $\mathcal{G}$ is the same. We instead find distances between points and local subspaces, so our approach distinguishes different points from the same subspace. To see this, let $v \in span\{Q_{\mathcal{F}}\}$, $||v||_2 = 1$, where the columns of $Q_{\mathcal{F}}$ form an orthonormal basis for $\mathcal{F}$. Thus $v=Q_{\mathcal{F}}x$ for some $x$ with $||x||_2=1$. Let the columns of $Q_{\mathcal{G}}$ form an orthonormal basis for $\mathcal{G}$; then the squared Euclidean distance from $v$ to $\mathcal{G}$ is given by
\begin{align}
\|v-P_{\mathcal{G}}(v)||^2_2 &= \|Q_{\mathcal{F}}x-Q_{\mathcal{G}}Q_{\mathcal{G}}^tQ_{\mathcal{F}}x\|^2_2 \nonumber \\
&= ||x||_2^2-x^tQ_{\mathcal{F}}^tQ_{\mathcal{G}}Q_{\mathcal{G}}^tQ_{\mathcal{F}}x \nonumber \\
&=||x||_2^2-x^tY\Sigma Z^tZ\Sigma^t Y^tx \nonumber \\
&=x^tYY^tx-x^tY\Sigma \Sigma^t Y^tx \nonumber \\
&=x^tYY^tx-x^tY\Sigma^2Y^tx \nonumber \\
&=z^t\left( I-\Sigma^2\right)z \nonumber
\end{align}
where $Y\Sigma Z^t$ is the SVD of $Q_{\mathcal{F}}^tQ_{\mathcal{G}}$ and $z:=Y^tx$. Thus, using the relation $\cos\theta_i= \sigma_i $ between principal angles and singular values \cite{Golub96}, we get
\begin{align}
d^2(v,\mathcal{G}) &= \sum_{i=1}^{p}z_i^2\sin^2(\theta_i).
\label{distance_us}
\end{align}
Hence, our approach discriminates distances from points in $\mathcal{F}$ to subspace $\mathcal{G}$. We also have $\sum_{i=1}^{p}z_i^2\sin^2(\theta_i) \leq \sum_{i=1}^{p}\sin^2(\theta_i)$ and therefore $d_c$ is more sensitive to noise.
Using Eq.~\ref{distance_us}, we get $0< \sin\theta_1 \leq d \leq \sin\theta_p$. Assuming a uniform distribution of samples from $\mathcal{F}$ and $\mathcal{G}$, $h$ can be approximated by the function depicted in Figure \ref{fig:threshold}. The goal is to find the threshold at the jump discontinuity $T$ from $0$ to $\sin\theta_1$. Our method minimizes the highlighted area. Under this model, a simple computation shows that our data-driven thresholding algorithm picks $T_d=T$ when $\sin\theta_1/\sin\theta_p\geq 1/2$, e.g., if $\theta_1 \geq 30^\circ$. In other situations, our algorithm overshoots in estimating the threshold index, depending on $\theta_1$ and $\theta_{p}$.
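The identity (\ref{distance_us}) is also easy to test numerically; the following illustrative check (ours) draws two random $3$-dimensional subspaces of ${\mathbb R}^8$ and compares the two sides:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
QF, _ = np.linalg.qr(rng.standard_normal((8, 3)))  # orthonormal basis of F
QG, _ = np.linalg.qr(rng.standard_normal((8, 3)))  # orthonormal basis of G
Y, sig, Zt = np.linalg.svd(QF.T @ QG)              # cos(theta_i) = sig[i]
x = rng.standard_normal(3)
x /= np.linalg.norm(x)
v = QF @ x                                          # unit vector in F
lhs = np.linalg.norm(v - QG @ (QG.T @ v))**2        # ||v - P_G(v)||^2
z = Y.T @ x
rhs = np.sum(z**2 * (1.0 - sig**2))                 # sum z_i^2 sin^2(theta_i)
assert np.isclose(lhs, rhs)
\end{verbatim}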
\begin{figure}[h]
\centering
\includegraphics[scale=0.30]{Threshold.jpg}
\caption{Linear modeling for $h$.}
\label{fig:threshold}
\end{figure} \\
\emph{\textbf{Segmentation:}}
The last step is to use the similarity matrix $S$ to segment the data. To do this, we first normalize the rows of $S$ using the $l_1$-norm, i.e., $\tilde S = D^{-1}S$, where $D$ is the diagonal matrix with entries $d_{ii} = \sum_{j=1}^N s_{ij}$. Note that $S$ and $\tilde S$ are not symmetric. $\tilde S$ is related to the random walk Laplacian $L_r$ ($\tilde S = I-L_r$) \cite{Petrik07}. Other $l_p$ normalizations are possible for $p\geq 1$; however, because of the geometry of the $l_1$ ball, $l_1$-normalization brings outliers closer to the cluster clouds (distances of outliers decrease monotonically as $p$ decreases to 1). Since the SVD (which will be used next) is associated with $l_2$ minimization, it is sensitive to outliers. Therefore $l_1$ normalization works best when the SVD is used.
Observe that the initial data segmentation problem has now been converted to the segmentation of $n$ $1$-dimensional subspaces from the rows of $\tilde S$. This is because, in the ideal case, from the construction of $\tilde S$, if $x_i$ and $x_j$ are in the same subspace, the $i^{th}$ and $j^{th}$ rows of $\tilde S$ are equal. Since there are $n$ subspaces, there will be $n$ $1$-dimensional subspaces.
Now, the problem is again a subspace segmentation problem, but this time the data matrix is $\tilde S$ with each row as a data point. Also, each subspace is 1-dimensional and there are $n$ subspaces. Therefore, we can apply SVD again to obtain
\begin{equation}
\tilde{S}^t = U_n \Sigma_n (V_n)^t. \nonumber
\end{equation}
Using Proposition \ref{proposition1}, it can be shown that $\Sigma_n(V_n)^t$ can replace $\tilde{S}^t$, and we cluster the columns of $\Sigma_n(V_n)^t$, which is the projection of $\tilde S$ onto the span of $U_n$. Since the problem is only segmentation of subspaces of dimension 1, we can use any traditional segmentation algorithm such as k-means to cluster the data points. The segmentation corresponds to Steps 18 to 20 in Algorithm \ref{algo:greatcircle}.
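A sketch of this last step (ours; the k-means call assumes that \texttt{scikit-learn} is available):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def segment(S, n):
    """Cluster the N data points into n groups from the binary
    similarity matrix S."""
    S_tilde = S / S.sum(axis=1, keepdims=True)      # l1 row normalization
    U, sigma, Vt = np.linalg.svd(S_tilde.T, full_matrices=False)
    Y = np.diag(sigma[:n]) @ Vt[:n]                 # Sigma_n (V_n)^t
    return KMeans(n_clusters=n, n_init=10).fit_predict(Y.T)
\end{verbatim}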
\section{Experimental Results}
\label{experimentalresults}
\subsection{The Hopkins 155 Dataset}
The Hopkins 155 Dataset \cite{Vidal07} was created as a benchmark database to evaluate motion segmentation algorithms. It contains two-motion and three-motion sequences. There are three groups of video sequences in the dataset: (1) 38 sequences of outdoor traffic scenes captured by a moving camera, (2) 104 indoor checkerboard sequences captured by a handheld camera, and (3) 13 sequences of articulated motions such as head and face motions. Corner features extracted and tracked across the frames are provided along with the dataset. The ground truth segmentations are also provided for comparison.
\subsection{Results}
Tables \ref{tab:twomotion}, \ref{tab:threemotion}, and \ref{tab:overall} display some of the experimental results for the Hopkins 155 Dataset. Our Nearness to Local Subspace (NLS) approach has been compared with six motion detection algorithms: (1) GPCA \cite{Vidal05}, (2) RANSAC \cite{Bolles81}, (3) Local Subspace Affinity (LSA) \cite{Yan06}, (4) MSL \cite{Kanatani03,Gruber04}, (5) Agglomerative Lossy Compression (ALC) \cite{Rao101}, and (6) Sparse Subspace Clustering (SSC) \cite{Elhamifar09}. An evaluation of these algorithms is presented in \cite{Elhamifar09}, with a minor error in the tabulated results for the articulated three-motion analysis of SSC-N. SSC-B and SSC-N correspond to Bernoulli and Normal random projections, respectively \cite{Elhamifar09}. The minor error in \cite{Elhamifar09} is the listing of the error as 1.42\% for articulated three motions; it is replaced with 1.60\% in Table~\ref{tab:threemotion}. In Tables~\ref{tab:twomotion}-\ref{tab:overall}, we used the number of neighbors $k=3$. Since each point is drawn from a 4-dimensional subspace, a minimum of 3 neighbors is needed to fit a local subspace for each point. Using the same assumption as the algorithms that we compare with, we take the rank of the data matrix to be 8 for two motions and 12 for three motions. Table~\ref{tab:twomotion} displays the misclassification rates for the two-motion video sequences. NLS outperforms all of the algorithms for the checkerboard sequences, which are linearly independent motions. The overall misclassification rate is 0.57\%. This is 24\% better than the next best algorithm. Table~\ref{tab:threemotion} shows the misclassification rates for the three-motion sequences. NLS has a 1.31\% misclassification rate and performs 47\% better than the next best algorithm (i.e., SSC-N). Table~\ref{tab:overall} presents the misclassification rates for all of the video sequences. Our algorithm NLS (with a 0.76\% misclassification rate) performs 39\% better than the next best algorithm (i.e., SSC-N). In general, our algorithm outperforms SSC-N, which is reported as the best algorithm for the two- and three-motion sequences together.
Table~\ref{tab:Various_T} shows the performance of the data driven threshold index $T_d$ compared to various other possible thresholds. We provide the results for $\pm 20\%$, $\pm 10\%$, and $\pm 5\%$ deviations from $T_d$.
Table~\ref{tab:various_k} displays the robustness of the algorithm with respect to the number of neighbors $k$. The second portion of the table excludes one pathological sequence from the two-motion checker sequences for $k=4$ and $k=5$. When $k$ is set to 3, which is the minimum number of neighbors required, the algorithm performs better.
Table~\ref{tab:LSA} displays the increase in the performance of the original LSA algorithm when our distance/similarity and segmentation techniques are applied separately. Both improve the performance of the algorithm; however, the new distance and similarity combination contributes more than the new segmentation technique.
Recently, the Low-Rank Representation (LRR) of \cite{Liu10, Liu10_2} was applied to the Hopkins 155 Dataset and generated an error rate of 3.16\%. The authors state that this error rate can be reduced to 0.87\% by using a variation of LRR with some additional adjustment of a certain parameter.
\begin{table*}
\centering
\scriptsize
\begin{tabular}{||c||cccccccc||}
\hline
\textbf{\em Checker (78)} &GPCA&LSA&RANSAC&MSL&ALC&SSC-B&SSC-N&NLS\\
\hline \hline
Average & 6.09\% & 2.57\%& 6.52\% &4.46\% & 1.55\% & 0.83\% & 1.12\% & 0.23\%\\
Median & 1.03\% & 0.27\%& 1.75\% &0.00\% & 0.29\% & 0.00\% & 0.00\% & 0.00\% \\
\hline \hline
\textbf{\em Traffic (31)} &GPCA&LSA&RANSAC&MSL&ALC&SSC-B&SSC-N&NLS\\
\hline \hline
Average & 1.41\% & 5.43\%& 2.55\% &2.23\% & 1.59\% & 0.23\% & 0.02\% & 1.40\%\\
Median & 0.00\% & 1.48\%& 0.21\% &0.00\% & 1.17\% & 0.00\% & 0.00\% & 0.00\%\\
\hline
\hline \hline
\textbf{\em Articulated (11)} &GPCA&LSA&RANSAC&MSL&ALC&SSC-B&SSC-N&NLS\\
\hline \hline
Average & 2.88\% & 4.10\%& 7.25\% &7.23\% & 10.70\% & 1.63\% & 0.62\% & 1.77\% \\
Median & 0.00\% & 1.22\%& 2.64\% &0.00\% & 0.95\% & 0.00\% & 0.00\%& 0.88\%\\
\hline
\hline \hline
\textbf{\em All (120 seq)} &GPCA&LSA&RANSAC&MSL&ALC&SSC-B&SSC-N&NLS\\
\hline \hline
Average & 4.59\% & 3.45\%& 5.56\% &4.14\% & 2.40\% & 0.75\% & 0.82\%& \textbf{0.57\%} \\
Median & 0.38\% & 0.59\%& 1.18\% &0.00\% & 0.43\% & 0.00\% & 0.00\%& 0.00\%\\
\hline
\end{tabular}
\caption{\% segmentation errors for sequences with two motions.}
\label{tab:twomotion}
\end{table*}
\begin{table*}
\centering
\scriptsize
\begin{tabular}{||c||cccccccc||}
\hline
\textbf{\em Checker (26)} &GPCA&LSA&RANSAC&MSL&ALC&SSC-B&SSC-N&NLS\\
\hline \hline
Average & 31.95\% & 5.80\%& 25.78\% &10.38\% & 5.20\% & 4.49\% & 2.97\%& 0.87\% \\
Median & 32.93\% & 1.77\%& 26.00\% &4.61\% & 0.67\% & 0.54\% & 0.27\%& 0.35\%\\
\hline \hline
\textbf{\em Traffic (7)} &GPCA&LSA&RANSAC&MSL&ALC&SSC-B&SSC-N&NLS\\
\hline \hline
Average & 19.83\% & 25.07\%& 12.83\% &1.80\% & 7.75\% & 0.61\% & 0.58\%& 1.86\% \\
Median & 19.55\% & 23.79\%& 11.45\% &0.00\% & 0.49\% & 0.00\% & 0.00\%& 1.53\% \\
\hline
\hline \hline
\textbf{\em Articulated (2)} &GPCA&LSA&RANSAC&MSL&ALC&SSC-B&SSC-N&NLS\\
\hline \hline
Average & 16.85\% & 7.25\%& 21.38\% &2.71\% & 21.08\% & 1.60\% & 1.60\%& 5.12\% \\
Median & 16.85\% & 7.25\%& 21.38\% &2.71\% & 21.08\% & 1.60\% & 1.60\%& 5.12\% \\
\hline
\hline \hline
\textbf{\em All (35 seq)} &GPCA&LSA&RANSAC&MSL&ALC&SSC-B&SSC-N&NLS\\
\hline \hline
Average & 28.66\% & 9.73\%& 22.94\% &8.23\% & 6.69\% & 3.55\% & 2.45\%& \textbf{1.31\%} \\
Median & 28.26\% & 2.33\%& 22.03\% &1.76\% & 0.67\% & 0.25\% & 0.20\%& 0.45\% \\
\hline
\end{tabular}
\caption{\% segmentation errors for sequences with three motions.}
\label{tab:threemotion}
\end{table*}
\begin{table*}
\centering
\scriptsize
\begin{tabular}{||c||cccccccc||}
\hline
\textbf{\em All (155 seq)} &GPCA&LSA&RANSAC&MSL&ALC&SSC-B&SSC-N&NLS\\
\hline \hline
Average & 10.34\% & 4.94\%& 9.76\% &5.03\% & 3.56\% & 1.45\% & 1.24\%& \textbf{0.76\%} \\
Median & 2.54\% & 0.90\%& 3.21\% &0.00\% & 0.50\% & 0.00\% & 0.00\%& 0.20\%\\
\hline
\end{tabular}
\caption{\% segmentation errors for all sequences.}
\label{tab:overall}
\end{table*}
\begin{table*}
\centering
\scriptsize
\begin{tabular}{||c||ccccccc||}
\hline
\textbf{\em All-2 (120 seq)} &Data Driven $T_d$&0.8$T_{d}$&0.9$T_{d}$&0.95$T_{d}$&1.05$T_{d}$&1.10$T_{d}$&1.20$T_{d}$\\
\hline \hline
Average & 0.57\% & 0.95\% & 1.17\%& 0.62\%& 0.58\%& 1.05\%& 0.77\% \\
Median & 0.00\% & 0.00\% & 0.35\%& 2.27\% & 2.27\%& 0.00\%& 0.00\% \\
\hline
\hline
\textbf{\em All-3 (35 seq)} &Data Driven $T_d$&0.8$T_{d}$&0.9$T_{d}$&0.95$T_{d}$&1.05$T_{d}$&1.10$T_{d}$&1.20$T_{d}$\\
\hline \hline
Average & 1.31\% & 4.39\% & 3.18\% & 1.42\%& 1.20\%& 1.24\%& 2.06\% \\
Median& 0.45\% & 0.60\% & 0.57\% & 0.46\%& 0.45\% & 0.42\%& 0.37\% \\
\hline \hline
\textbf{\em All (155 seq)} &Data Driven $T_d$&0.8$T_{d}$&0.9$T_{d}$&0.95$T_{d}$&1.05$T_{d}$&1.10$T_{d}$&1.20$T_{d}$\\
\hline
Average & 0.76\% & 1.84\% & 1.67\% & 0.83\%& 0.74\% & 1.10\%& 1.11\% \\
Median & 0.20\% & 0.00\% & 0.00\%& 0.20\%& 0.20\% & 0.18\%& 0.19\% \\
\hline
\end{tabular}
\caption{\% comparison of the data driven threshold index $T_d$ with other choices.}
\label{tab:Various_T}
\end{table*}
\begin{table*}
\centering
\scriptsize
\begin{tabular}{||c||cc||c||cc||}
\multicolumn{1}{c||}{} &
\multicolumn{2}{c||}{\textit{ALL SEQ INCLUDED}} &
\multicolumn{1}{c||}{} &
\multicolumn{2}{c||}{\textit{1 SEQ EXCLUDED}} \\
\hline
\textbf{\em Checker-2 (78)}&k=5&k=4&k=3&k=5&k=4\\
\hline \hline
Average & 0.65\% & 1.59\% & 0.23\%& 0.23\% & 0.97\%\\
Median & 0.00\% & 0.00\% & 0.00\% & 0.00\% & 0.00\% \\
\hline \hline
\textbf{\em Traffic-2 (31)} &k=5&k=4&k=3&k=5&k=4\\
\hline \hline
Average & 1.56\% & 1.66\% & 1.40\%& 1.56\% & 1.66\% \\
Median & 0.00\% & 0.00\% & 0.00\%& 0.00\% & 0.00\% \\
\hline
\hline
\textbf{\em Articulated-2 (11)} &k=5&k=4&k=3&k=5&k=4\\
\hline \hline
Average & 2.44\% & 2.33\% & 1.77\%& 2.44\% & 2.33\% \\
Median & 0.00\% & 0.00\%& 0.88\%& 0.00\% & 0.00\%\\
\hline
\hline
\textbf{\em All-2 (120 seq)} &k=5&k=4&k=3&k=5&k=4\\
\hline \hline
Average & 1.04\% & 1.75\%& \textbf{0.57}\%& 0.77\% & 1.35\% \\
Median & 0.00\% & 0.00\%& \textbf{0.00\%}& 0.00\% & 0.00\%\\
\hline \hline
\textbf{\em Checker-3 (26)} &k=5&k=4&k=3&k=5&k=4\\
\hline \hline
Average & 0.44\% & 0.43\%& 0.87\% & 0.44\% & 0.43\%\\
Median & 0.24\% & 0.22\%& 0.35\%& 0.24\% & 0.22\%\\
\hline \hline
\textbf{\em Traffic-3 (7)} &k=5&k=4&k=3&k=5&k=4\\
\hline \hline
Average & 6.59\% & 7.18\%& 1.86\%& 6.59\% & 7.18\% \\
Median & 1.81\% & 4.37\%& 1.53\%& 1.81\% & 4.37\% \\
\hline
\hline
\textbf{\em Articulated-3 (2)} &k=5&k=4&k=3&k=5&k=4\\
\hline \hline
Average & 20.54\% & 4.05\%& 5.12\%& 20.54\% & 4.05\% \\
Median & 20.54\% & 4.05\%& 5.12\%& 20.54\% & 4.05\%\\
\hline
\hline
\textbf{\em All-3 (35 seq)}&k=5&k=4&k=3&k=5&k=4\\
\hline \hline
Average & 2.82\% & 1.98\%& \textbf{1.31\%}& 2.82\% & 1.98\% \\
Median & 0.65\% & 0.47\%& \textbf{0.45\%}& 0.65\% & 0.47\%\\
\hline \hline
\textbf{\em All (155 seq)} &k=5&k=4&k=3&k=5&k=4\\
\hline \hline
Average & 1.50\% & 1.81\%& \textbf{0.76}\% & 1.30\% & 1.50\% \\
Median & 0.21\% & 0.00\%& \textbf{0.20\%}& 0.21\% & 0.00\%\\
\hline
\end{tabular}
\caption{\% segmentation errors - NLS algorithm for various $k$.}
\label{tab:various_k}
\end{table*}
\begin{table*}
\centering
\scriptsize
\begin{tabular}{||c||ccc||}
\hline
\textbf{\em Checker-2 (78)}&LSA(Original)&LSA(New Dist/Similarity)&LSA(New Segmentation)\\
\hline \hline
Average & 2.57\% & 0.97\% & 1.71\%\\
Median & 0.27\% & 0.00\% & 0.00\% \\
\hline \hline
\textbf{\em Traffic-2 (31)}&LSA(Original)&LSA(New Dist/Similarity)&LSA(New Segmentation)\\
\hline \hline
Average & 5.43\% & 1.59\% & 4.99\%\\
Median & 1.48\% & 1.11\% & 0.65\%\\
\hline
\hline \hline
\textbf{\em Articulated-2 (11)} &LSA(Original)&LSA(New Dist/Similarity)&LSA(New Segmentation)\\
\hline \hline
Average& 4.10\% & 2.10\% & 4.26\% \\
Median & 1.22\% & 0.43\%& 1.21\%\\
\hline
\hline \hline
\textbf{\em All-2 (120 seq)} &LSA(Original)&LSA(New Dist/Similarity)&LSA(New Segmentation)\\
\hline \hline
Average & 3.45\% & 1.22\%& 2.27\% \\
Median & 0.59\% & 0.00\%& 0.35\%\\
\hline
\textbf{\em Checker-3 (26)}&LSA(Original)&LSA(New Dist/Similarity)&LSA(New Segmentation)\\
\hline \hline
Average & 5.80\% & 2.66\%& 4.67\% \\
Median & 1.77\% & 0.30\%& 0.91\%\\
\hline \hline
\textbf{\em Traffic-3 (7)} &LSA(Original)&LSA(New Dist/Similarity)&LSA(New Segmentation)\\
\hline \hline
Average & 25.07\% & 6.38\%& 24.46\% \\
Median & 23.79\% & 1.28\%& 31.20\% \\
\hline
\hline \hline
\textbf{\em Articulated-3 (2)} &LSA(Original)&LSA(New Dist/Similarity)&LSA(New Segmentation)\\
\hline \hline
Average & 7.25\% & 6.18\%& 7.25\% \\
Median & 7.25\% & 6.18\%& 7.25\% \\
\hline
\hline \hline
\textbf{\em All-3 (35 seq)}&LSA(Original)&LSA(New Dist/Similarity)&LSA(New Segmentation)\\
\hline \hline
Average & 9.73\% & 2.45\%& 8.78\% \\
Median& 2.33\% & 0.20\%& 1.94\% \\
\hline
\textbf{\em All (155 seq)} &LSA(Original)&LSA(New Dist/Similarity)&LSA(New Segmentation)\\
\hline \hline
Average & 4.94\% & 1.84\%& 3.96\% \\
Median & 0.90\% & 0.18\%& 0.61\%\\
\hline
\end{tabular}
\caption{\% segmentation errors for LSA with various parameters.}
\label{tab:LSA}
\end{table*}
\section{Conclusions}
The NLS approach described in this paper can handle noise effectively, but it works only in special cases of the subspace segmentation problem (i.e., subspaces of equal and known dimensions). Our approach is based on the computation of a binary similarity matrix for the data points. A local subspace is first estimated for each data point. Then, a distance matrix is generated by computing the distances between the local subspaces and the points. The distance matrix is converted to the similarity matrix by applying a data-driven threshold. The problem is then transformed to segmentation of subspaces of dimension $1$ instead of subspaces of dimension $d$. The algorithm was applied to the Hopkins 155 Dataset and generated the best results to date.
\section*{Acknowledgement}
We would like to thank Professor Ren\'{e} Vidal for his invaluable comments and feedback.
\bibliographystyle{elsarticle-num}
The classical braid group was defined in 1925 by Artin (\cite{Artin25}).
In 1962 Fox and Neuwirth \cite{fox} proved that the group defined by Artin is the fundamental
group of the configuration space $C({\mathbb R}^2, n)$ of unordered $n$-tuples of distinct points in the
real plane.
A more general algebraic definition of Artin groups can be given starting from
the standard presentation of a Coxeter group $W.$
Given a Coxeter group $W$ acting on a real vector space $V$ we can consider
the collection $\mathcal{H}_W$ of all the hyperplanes $H$
which are fixed by
a reflection $\rho \in W.$ This collection is the \emph{reflection arrangement} of $W.$
In \cite{briesk2} Brieskorn proved that
the fundamental group of the regular orbit space with respect to the
action of the group $W$
on the complement of a complexified reflection arrangement is the Artin group $A$ associated to $W.$
We illustrate the case of the braid group,
that can be considered as the leading example of this construction.
We will use it for several other examples along this paper.
We consider the action, by permuting coordinates,
of the symmetric group on $n$ letters $\mathfrak{S}_n$ on
the complex vector space ${\mathbb C}^n$.
If we restrict this action of $\mathfrak{S}_n$
to the space of ordered $n$-tuples of distinct points $F({\mathbb C},n)$
we obtain a free and properly discontinuous action.
The space $F({\mathbb C},n)$ is the complement
of the union of the hyperplanes of the form $H_{ij} = \{z_i =z_j \}$ in ${\mathbb C}^n.$
The quotient $C({\mathbb C},n)=F({\mathbb C},n)/\mathfrak{S}_n$ is the regular orbit space for $\mathfrak{S}_n$
and hence its fundamental group is the braid group on $n$ strands $\mathcal{B}_n,$
that is the Artin group associated to $\mathfrak{S}_n.$
The result of Brieskorn mentioned above
shows the important relation between Artin groups and arrangements of hyperplanes,
since an Artin group is the fundamental group of a quotient of the complement of a reflection
arrangement.
Research on arrangements of hyperplanes started with the works
of E. Fadell, R. Fox, L. Neuwirth, V.I. Arnol$'$d,
E. Brieskorn, T. Zaslavsky, K. Saito, P. Deligne, A. Hattori and later P. Orlik, L. Solomon,
H. Terao, M. Goresky, R. MacPherson, C. De Concini, C. Procesi,
M. Salvetti, R. Stanley, R. Randell, G. Lehrer, A. Bj\"orner, G. Ziegler and many others.
A basic reference for the subject is \cite{ot}. A more recent reference with many recent developments
and a wide bibliography on the theory of hyperplane arrangements
is given by the book (still work in progress) \cite{cxarr}.
Given an arrangement $\mathcal{H},$ an important combinatorial invariant is the \emph{intersection lattice}
$L(\mathcal{H}),$ that is the poset of non-empty intersections of elements of $\mathcal{H}$ ordered
by reverse inclusion.
One of the main problems in the study of arrangements
is to understand the relation between the topology of the complement of the arrangement
and
its intersection lattice.
For a real arrangement we have a finer combinatorial invariant, the \emph{face poset}
(see Definition \ref{d:face_poset} and \cite{ot}).
In \cite{salv87} Salvetti
introduced a CW-complex $\mathrm{Sal}(\mathcal{H})$ associated to a real arrangement $\mathcal{H}$ and
determined by the face poset of $\mathcal{H}.$ He proved that this complex is
homotopy equivalent to the complement of the complexified arrangement.
Moreover if $\mathcal{H}$ is associated to a reflection group $W,$ the
group $W$ acts on the complex $\mathrm{Sal}(\mathcal{H})$ and the quotient complex $X_W$ is homotopy
equivalent to the regular orbit space of $W$ (see \cite{salvetti94, decsal96}). An extension of these results
for an oriented matroid can be found in \cite{gelryb}. For a general complex
arrangement, in \cite{bz} Bj\"orner and Ziegler construct a finite regular cell
complex with the homotopy type of the complement of the arrangement.
In this short survey we present some methods and useful tools for the study of Artin groups
through the Salvetti complex. A natural filtration of the complex allows one to define a
spectral sequence that can be very helpful in several homology and cohomology computations.
In particular we can use the Salvetti complex to compute the cohomology of Artin groups,
either with constant coefficients or with a local system of coefficients.
The computation of the cohomology of the Milnor fiber, which is related to a very
interesting abelian local system over a Laurent polynomial ring, plays a special role in this context.
In Section \ref{s:salvetti} we recall our main notation for arrangement of hyperplanes and the Salvetti complex.
We try to keep the notation introduced in \cite{paris}.
In Section \ref{s:filtration} we give a general introduction to computations using
a spectral sequence that arises from
a natural filtration of the Salvetti complex. Finally
in Section \ref{s:examples} we provide a few examples that show how the computations via this spectral sequence
can be applied to the study of the cohomology and homology of braid groups,
providing a simpler or shorter proof
for previously known results. A first example is given
in Section \ref{ss:classical} where we provide a shorter
proof of Fuks's result (see \cite{fuks}) on the homology of braid groups mod $2.$
Another example is in Section \ref{ss:rat}: we compute the rational
cohomology of the commutator subgroup of the braid group
giving a new proof of some results that already appeared in \cite{fren}, \cite{mar} and \cite{dps}.
In Section \ref{ss:affine} we show how the Salvetti complex can be modified in order to study affine type Artin groups recursively. In Section \ref{ss:nonab} we show how it can be used for computer investigations, providing the example of a non-abelian local system.
\subsection*{Acknowledgment}
The author would like to thank
the organizing and scientific committees of the
School ``Arrangements in Pyr\'en\'ees'' held in June 2012 in Pau,
where the idea of these notes started.
\section{Hyperplane arrangements, Artin groups and Salvetti complex} \label{s:salvetti}
\subsection{Hyperplane arrangements}
We recall some definitions and results on hyperplane arrangements and Artin groups.
We follow the notation
of \cite{paris}
and we refer to it for a more detailed introduction. We refer to \cite{ot} for a general
introduction on the subject of hyperplane arrangements.
Let $I$ be an open convex cone in a finite dimensional real vector space $V.$
\begin{defin}
A real \emph{hyperplane arrangement} in $I$ is a family $\mathcal{H}$ of real affine hyperplanes of
$V$ such that each hyperplane of $\mathcal{H}$ intersects $I$ and the family $\mathcal{H}$ is locally finite in $I.$
\end{defin}
\begin{defin} \label{d:face_poset}
A real hyperplane arrangement $\mathcal{H}$ induces a stratification on the convex cone $I$ into \emph{facets}.
Given two points $x$ and $y$ in $I$ we say that they belong to the same facet $F$ if for every
hyperplane $H \in \mathcal{H}$ either $x \in H$ and $y \in H$ or $x$ and $y$ belong to the same connected component
of $I \setminus H.$
We call the set of all facets $\mathcal{S}$ the \emph{face poset} of $\mathcal{H}$ and
we equip $\mathcal{S}$ with the partial order given by $F > F'$ if and only if
$\overline{F} \supset F'.$
\end{defin}
A \emph{face} is a codimension $1$ facet, i.~e. a facet that is contained in
exactly one hyperplane of the arrangement. A \emph{chamber} of the arrangement is a maximal facet, that
is a connected component $C$ of the complement $$I \setminus \cup_{H \in \mathcal{H}} H.$$
Let $H$ be a real affine hyperplane and let $v(H)$ be its underlying
vector space: the \emph{complexified hyperplane}
$H_{\mathbb C}$ is the complex affine hyperplane $H_{{\mathbb C}} := \{ z = x + \imath y, x \in H, y \in v(H) \}$
in the complex vector space $V_{\mathbb C} := V \otimes_{\mathbb R} {\mathbb C}.$
We recall the definition of the complement of the complexified arrangement:
$$M(\mathcal{H}) := (I \oplus \imath V) \setminus \bigcup_{H \in \mathcal{H}} H_{{\mathbb C}}.$$
Now we consider the case of a Coxeter arrangement.
Let the couple $(W,S)$ be a Coxeter system and assume that
the set of generators $S$ is given by linear reflections
in the vector space $V.$ Then $W$ is a finite subgroup
of $\GL(V).$ We define the \emph{reflection arrangement} of $W$
as the collection $\mathcal{H} =\mathcal{H}_W:= \{H \subset V \mid H
\mbox{ is the fixed hyperplane of a reflection } \rho \in W \}.$ Given any
chamber $C$ of the arrangement $\mathcal{H}$ we
define the convex cone $I$ associated to $(W,S)$ as the interior of the union
$$
\overline{I} := \bigcup_{w \in W} w\overline{C}.
$$
The complement of the reflection arrangement is given by $M(W):= M(\mathcal{H}_W).$
The group $W$ acts freely and properly discontinuously on $M(W)$ and we denote by $N(W)$
the quotient $M(W)/W.$
Let $W$ be a Coxeter group with Coxeter graph $\Gamma.$
The fundamental group of the complement $N(W)$ is $A_\Gamma,$ that is the Artin group of type $\Gamma.$
The fundamental group of the complement $M(W)$ is the pure Artin group $PA_\Gamma$ (see \cite{bri73}).
\begin{exm} \label{ex:b3}
We consider the example of the group $W=\mathfrak{S}_3$ acting on $I={\mathbb R}^3$ by permuting coordinates.
The corresponding reflection arrangement is given by the hyperplanes $H_{1,2}, H_{1,3}, H_{2,3}$,
where we define $H_{i,j}=
\{x \in {\mathbb R}^3 \mid x_i = x_j \}.$
We fix the fundamental chamber $C_0 = \{ x \in {\mathbb R}^3 \mid x_1 < x_2 < x_3 \}$ in the complement of $\mathcal{H}_W.$
The complement $M(W)$ is the ordered configuration space $F({\mathbb C},3),$ while the space $N(W)$ is the unordered
configuration space $C({\mathbb C},3).$ The following is the Coxeter graph of $\mathfrak{S}_3$
\begin{mucca}
\entrymodifiers={=<4pt>[o][F-]}
\begin{center}
\begin{tabular}{l}
\xymatrix @R=2pc @C=2pc {
\ar @{-}[r]_(-0.20){s_1} \ar @{-}[r]_(0.80){s_2} &
}
\end{tabular}
\end{center}\end{mucca}
that is the Coxeter graph of type ${\mathbb A}_2.$ The standard generators of the Coxeter group $\mathfrak{S}_3$
are the elements $s_1, s_2$ with relations
$s_1^2 = s_2^2 = e$ and $s_1s_2s_1 = s_2 s_1 s_2.$ We can
identify the generator $s_1$ (resp. $s_2$) with the transposition
$(1,2) \in \mathfrak{S}_3 \mbox{(resp. }(2,3)\mbox{)}.$
The fundamental group of $C({\mathbb C},3)$ is the classical braid group on three strands $\mathcal{B}_3$
and the fundamental group of $F({\mathbb C},3)$ is the pure braid group on three strands $\mathcal{P}\mathcal{B}_3.$
The braid group $\mathcal{B}_3$ is generated by the elements $\sigma_1, \sigma_2$ with relation
$\sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$ (see, for example, \cite{bri73}).
\end{exm}
\subsection{The Salvetti complex}
The key geometric object that we consider in this survey
is the Salvetti complex. This is a CW-complex which has the
homotopy type of the complement $M(\mathcal{H}).$ Moreover in the case of
finite arrangements the Salvetti complex has a finite number of cells.
Its explicit description and the simple structure, especially in the
case of reflection arrangements, turn out to be very important for
filtrations and recursive arguments.
In this survey we don't provide an explicit definition of the Salvetti complex.
The reader interested in the subject can find the original definition in \cite{salv87}.
An extended definition can be found in \cite{paris}. Later in this section we provide
a description of the algebraic complexes that compute the homology and cohomology of the
quotient of Salvetti complex $\mathrm{Sal}(\mathcal{H}_W)$ by the action of the group $W.$
\begin{thm}[\cite{salv87}]
The complement $M(\mathcal{H})$ has the homotopy type of a CW-complex $\mathrm{Sal}(\mathcal{H})$ that is a deformation retract of $M(\mathcal{H}).$
The $k$-cells of the complex $\mathrm{Sal}(\mathcal{H})$ are in $1$ to $1$ correspondence with the couples $(C,F)$ where
$C$ is a chamber of the arrangement and $F$ is a codimension $k$ facet adjacent to the chamber $C.$
\end{thm}
If the arrangement $\mathcal{H}$ is the reflection arrangement of a Coxeter group $W,$ the complex $\mathrm{Sal}(\mathcal{H})$
is $W$-invariant and the homotopy that gives the retraction from the space $M(\mathcal{H})$ to the complex $\mathrm{Sal}(\mathcal{H})$ can
be chosen to be $W$-equivariant. Furthermore, the action on the cells follows from the action of $W$ on the sets of
chambers and facets. Fix a fundamental chamber $C_0$ for the arrangement $\mathcal{H}_W.$
\begin{thm}[\cite{salvetti94, decsal96}]
Let $W$ be a Coxeter group. The orbit space $N(W)$ has the same
homotopy type as the CW-complex $X_W = \mathrm{Sal}(\mathcal{H}_W)/W.$
The $k$-cells of the complex $X_W$ are in $1$ to $1$
correspondence with the facets of $\mathcal{H}_W$ that are adjacent
to the fundamental chamber $C_0.$
\end{thm}
Let $(W,S)$ be the Coxeter system associated to the Coxeter group $W$ and to the fundamental chamber $C_0.$
Let $\Gamma$ be the corresponding Coxeter graph. We recall that the nodes of $\Gamma$ are in bijection with the
elements of $S.$ Since the arrangement $\mathcal{H}_W$ is locally finite, the facets of the
arrangement $\mathcal{H}_W$ that are adjacent to the fundamental chamber $C_0$ are in bijection with the finite parabolic
subgroups of $W$ generated by subsets of $S.$
\begin{cor}[\cite{salvetti94, cms08, cd95}]
Let $(W,S)$ be a Coxeter system. The $k$-cells of the complex $X_W$ are in $1$ to $1$ correspondence with the
$k$-subsets of $S$ that generate finite parabolic subgroups.
\end{cor}
\begin{exm}
In Figure \ref{fig:B3} there is a picture of the complex $X_W$ for the symmetric group $W=\mathfrak{S}_3$
with set of generators $S = \{s_1, s_2\}.$
The $6$ vertices of the hexagon are all identified to a single vertex corresponding to the empty subset of $S$.
The $6$ edges of the hexagon are identified according to the arrows and correspond to the subsets $\{s_1\}$ and $\{s_2\}.$ The $2$-cell corresponds to the set $S$ itself.
The complex $X_W$ is homotopy equivalent to the configuration space $C({\mathbb C},3).$
\begin{figure}[htb]
\centering
\includegraphics{./B3model.pdf}
\caption{The quotient complex $X_W$ for $W=\mathfrak{S}_3$.} \label{fig:B3}
\end{figure}
\end{exm}
In order to provide a complete description of the complexes $\mathrm{Sal}(W)$ and $X_W$ for a given Coxeter system $(W,S)$
we need to show how the cells glue together.
We refer the reader to \cite{salv87} and \cite{salvetti94} (see also \cite{paris}) for this.
Here we recall the description of
the boundary map for the
cochain complex of $X_W$ with coefficients in
an assigned local system.
Let $M$ be a ${\mathbb Z}$-module and let
$$
\lambda: A_\Gamma \to \mathrm{Aut}(M)
$$
be a representation of the fundamental group of $X_W.$ Such a representation determines a local system $\mathcal{L}_\lambda$
on the complex $X_W.$
Moreover let $(\mathcal{C}^*, \delta)$ be the algebraic complex associated to the CW-complex $X_W$ that computes the cohomology
$H^*(X_W;\mathcal{L}_\lambda).$
The complex $\mathcal{C}^*$ is given by a direct sum of some copies of the ${\mathbb Z}$-module $M$ indexed by elements $e_T$
\begin{equation}\label{e:complex}
\mathcal{C}^k := \bigoplus M.e_T
\end{equation}
where the sum runs over all the subsets $T \subset S$ such that $\mid \! T \! \mid = k$ and the parabolic
subgroup $W_T$ is finite. The complex $\mathcal{C}^*$ is graded with $\deg e_T = \mid \! T \! \mid.$
In order to define the differential $\delta$ we recall some well known facts about Coxeter groups and Artin groups.
The first result we need is the following one (see for example Proposition 1.10 in \cite{hump}).
\begin{prop} \label{p:decomp}
Let $(W,S)$ be a Coxeter system with length function $l.$ Any
element $w \in W$ can be written in a unique way as a product
$w = uv$ with $v \in W_T$ and $u \in \underline{w} \in W/W_T$
such that $l(w) = l(u) + l(v).$ \end{prop}
The element $u$ is the unique
element of minimal length
in the coset $\underline{w} \in W/W_T$ and it is called
the \emph{minimal coset representative} of $\underline{w}.$
Given a Coxeter system $(W_\Gamma,S),$ with Coxeter graph $\Gamma,$ and the associated Artin group $A_\Gamma,$ there
is a natural epimorphism $\pi: A_\Gamma \twoheadrightarrow W_\Gamma$ defined by mapping each standard generator
$g_s$ of $A_\Gamma$ to the corresponding element $s\in W_\Gamma$ for all $s \in S.$ Matsumoto proves the following
lemma (see also \cite{tits}):
\begin{lem}[\cite{mats}]
Let $(W_\Gamma,S)$ be a Coxeter system. Given an element $w \in W$ expressed as a positive word
$s_{i_1} \cdots s_{i_l}$ of minimal length $l$ in the generators $s_j \in S,$
the corresponding element $g=g_{s_{i_1}} \cdots g_{s_{i_l}} \in A_\Gamma$ is well defined and does not
depend on the choice of the word representing $w.$
\end{lem}
As a consequence the map $\pi$ has a natural set-theoretic section $\psi: W \to A_\Gamma.$ We remark that the
section $\psi$ defined according to the previous lemma is not a group homomorphism.
Let $<$ be a total ordering on the set $S.$ We can define the coboundary map $\delta$ as follows:
for a generator $e_T \in \mathcal{C}^*$ and an element $a \in M$ we have
\begin{equation} \label{e:delta1}
\delta (a.e_T) := \sum_
{
s \in S \setminus T, \mid \!W_{T \cup \{s\}} \! \mid < \infty
}
(-1)^{\sigma(s,T)+1} \sum_{\underline{w} \in W_{T \cup \{ s\}}/W_T}
(-1)^{l(w)}\lambda(\psi(w))(a).e_{T \cup \{s\}}
\end{equation}
where $w$ is the minimal length representative of the coset $\underline{w} \in W_{T \cup \{ s\}}/W_T$
and $\sigma(s,T)$ is the number of elements of the set $T$ that are strictly smaller than $s$ with
respect to the order $<.$
\begin{thm}[\cite{salvetti94}]
Let $\mathcal{L}_\lambda$ be the local system induced on the space $N(W)$
by a representation $\lambda$ of the group $A_\Gamma$
on the ${\mathbb Z}$-module $M.$ Let $(\mathcal{C}^*, \delta)$ be the complex defined by formulas (\ref{e:complex})
and (\ref{e:delta1}) above for the group $W = W_\Gamma.$ We have the following isomorphism:
$$
H^*(\mathcal{C}^*) = H^*(N(W); \mathcal{L}_\lambda).
$$
\end{thm}
We recall the following fundamental result.
\begin{thm}[\cite{deligne}]
If $W$ is a finite linear reflection group, then $N(W)$ is aspherical.
\end{thm}
As a consequence if $W$ is finite the space $N(W)$ is a classifying space
for $A_\Gamma$ and we have an isomorphism
$$
H^*(N(W); \mathcal{L}_\lambda) = H^*(A_\Gamma; M_\lambda)
$$
where $M_\lambda$ is the ${\mathbb Z}$-module $M$ considered as
a $A_\Gamma$-module through the representation $\lambda.$
\subsection{Abelian representations and Poincar\'e series}
We focus now on abelian representations of $A_\Gamma$ since in that case the expression
of formula (\ref{e:delta1}) becomes very simple.
\begin{rmk}
We recall how to compute the abelianization
$A_\Gamma^{\Ab}: = A_\Gamma/[A_\Gamma, A_\Gamma]$
of the group $A_\Gamma.$
For a given Coxeter graph $\Gamma$ we consider the graph
$\overline{\Gamma}$ with vertex set $S,$ the set of vertices
of $\Gamma,$ and with an edge $e_{s,t}$ for the couple $(s,t)$ if and
only if the entry $m(s,t)$ of the Coxeter matrix is odd.
The abelianization $A_\Gamma^{\Ab}$ is the free abelian group
generated by the connected components of the graph $\overline{\Gamma}.$
The abelianization map $\Ab: A_\Gamma \to A_\Gamma^{\Ab}$ maps each
standard generator $g_s \in A_\Gamma$ to the generator corresponding
to the connected component of the graph $\overline{\Gamma}$
containing the vertex $s.$
\end{rmk}
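Concretely, computing the rank of $A_\Gamma^{\Ab}$ from the Coxeter matrix amounts to counting the connected components of $\overline{\Gamma}$; a possible sketch (our code, with a union-find on the vertices) is:
\begin{verbatim}
def abelianization_rank(m):
    """Rank of the free abelian group A_Gamma^{ab}, computed from the
    Coxeter matrix m = (m(s,t)) as the number of connected components
    of the graph with an edge {s, t} whenever m(s, t) is odd."""
    n = len(m)
    parent = list(range(n))
    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for s in range(n):
        for t in range(s + 1, n):
            if m[s][t] % 2 == 1:       # odd label: g_s and g_t identified
                parent[find(s)] = find(t)
    return len({find(i) for i in range(n)})

# type A_2 (the braid group B_3): m(s_1, s_2) = 3 is odd, so the rank is 1
assert abelianization_rank([[1, 3], [3, 1]]) == 1
\end{verbatim}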
If $\lambda$ is an abelian representation, then $\lambda$ factors through
the abelianization map $\Ab$ and the elements in the image of $\lambda$ commute.
Given a subset $H \subset W$ we define the sum $$H_\lambda := \sum_{w \in H} \lambda(\psi(w)).$$
In particular, given a subset $T \subset S$ that generates the parabolic subgroup $W_T,$
we call the sum $(W_T)_\lambda$ the
\emph{Poincar\'e series of the group $W_T$ with coefficients in the representation $\lambda$}.
As a consequence of Proposition \ref{p:decomp} we obtain the following formula:
$$(W_{T})_\lambda \cdot
\sum_{\underline{h} \in W_{T \cup \{s\}}/W_T} \lambda(\psi(h))
= (W_{T \cup \{s\}})_\lambda$$
where $h$ is the minimal coset representative of $\underline{h} \in W_{T \cup \{s\}}/W_T.$
\begin{exm} \label{ex:rank_one}
We define a representation $\lambda(q): A_\Gamma \to \mathrm{Aut}(L)$,
where $L = R[q^\pmu]$ is a Laurent polynomial ring with coefficients in a ring $R$ and
$\lambda(q)(g_s)$ is multiplication by $q$ for each standard generator of $A_\Gamma.$
In this case the series $W(q):=W_{\lambda(q)}$ is called the
\emph{Poincar\'e series} for $W.$
From formula (\ref{e:delta1}) we get
\begin{equation} \label{e:delta2}
\delta (a.e_T) := \sum_
{
s \in S \setminus T, \mid \!W_{T \cup \{s\}} \! \mid < \infty
}
(-1)^{\sigma(s,T)+1}
\frac{W_{T \cup \{s\}}(-q)}{W_T(-q)}
.e_{T \cup \{s\}}
\end{equation}
If $W$ is a finite Coxeter group with exponents
$m_1, \ldots, m_n$ the Poincar\'e series is actually a polynomial
and the following product formula holds (\cite{sol66}):
$$W(q) = \prod_{i=1}^n (1+q+\cdots+q^{m_i}).$$
\end{exm}
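The product formula is easy to check on small cases; the following illustrative \texttt{sympy} verification (ours) treats $W=\mathfrak{S}_n$, whose exponents are $1,\ldots,n-1$ and whose length function counts inversions:
\begin{verbatim}
from itertools import permutations
import sympy as sp

q = sp.symbols('q')
n = 4
# direct sum of q^{l(w)} over the symmetric group, l(w) = # inversions
direct = sum(q**sum(1 for i in range(n) for j in range(i + 1, n)
                    if w[i] > w[j])
             for w in permutations(range(n)))
# Solomon's product formula with exponents m_i = 1, ..., n-1
product = sp.prod(sum(q**k for k in range(m + 1)) for m in range(1, n))
assert sp.expand(direct - product) == 0
\end{verbatim}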
\begin{exm}
An analog of Example \ref{ex:rank_one} is given by a representation
on the Laurent polynomial ring in two variables $L = R[q_1^\pmu, q_2^\pmu].$
Let $\Phi$ be a root system with two different root-lengths.
As an example consider the root systems of type $\mathbb{B}_n$ or any reducible root system.
Let $W$ be the Coxeter group associated to the root system $\Phi.$
We can define a representation of the corresponding Artin group on the ring $L$ as follows: if $\alpha$ is a short
root and $s$ is the reflection associated to $\alpha \in \Phi$ the generator $g_s$ maps to
multiplication by $q_1$ and if $t$ is the reflection associated to a long root $\beta \in \Phi$
$g_t$ maps to multiplication by $q_2.$ The Poincar\'e series for $W_{{\mathbb B}_n}$ with coefficients in such
a representation are computed in \cite{reiner}.
\end{exm}
\begin{exm} \label{ex:sal_b3} We show an explicit computation of the cochain complex $\mathcal{C}^*$
and we compute the coboundary $\delta$ in the case of the Coxeter group $W = W_{{\mathbb A}_2} = \mathfrak{S}_3,$
with coefficients in the local system $\mathcal{L}_\lambda = {\mathbb Z}[q^\pmu]$ given as in Example \ref{ex:rank_one}.
The complex that we are going to describe computes the cohomology
of the commutator subgroup of the braid group $\mathcal{B}_3,$ up to a degree shift (see Theorem \ref{t:shifting}):
$$
H^*(\mathcal{C}^*)= H^*(\mathcal{B}_3; {\mathbb Z}[q^\pmu]_\lambda) = H^{*+1}(\mathcal{B}_3'; {\mathbb Z}).
$$
We recall that the set of standard generators for the group $W$ is $S = \{s_1,s_2 \}.$ Hence the complex $\mathcal{C}^*$ is given by
$$\mathcal{C}^0= {\mathbb Z}[q^\pmu].e_{\varnothing};$$
$$\mathcal{C}^1= {\mathbb Z}[q^\pmu].e_{\{s_1\}} \oplus {\mathbb Z}[q^\pmu].e_{\{s_2\}};$$
$$\mathcal{C}^2= {\mathbb Z}[q^\pmu].e_{\{s_1, s_2\}}.$$
According to the formulas in Example \ref{ex:rank_one}, the Poincar\'e series, evaluated at $-q$, are given by
$$
W_{\varnothing}(-q) = 1;
$$
$$
W_{\{s_1\}}(-q) = W_{\{s_2\}}(-q) = 1-q;
$$
$$
W_{\{s_1, s_2\}}(-q) = (1-q)(1-q+q^2)
$$
and hence the coboundary is
$$
\delta e_{\varnothing} = (1-q) e_{\{s_1\}} + (1-q)e_{\{s_2\}}
$$
$$
\delta e_{\{s_1\}} = - \delta e_{\{s_2\}} = (1-q+q^2) e_{\{s_1,s_2\}}.
$$
\end{exm}
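As a sanity check, one can verify with \texttt{sympy} (an illustrative computation of ours) that the coboundary above squares to zero, so that $(\mathcal{C}^*,\delta)$ is indeed a complex:
\begin{verbatim}
import sympy as sp

q = sp.symbols('q')
# delta^0: C^0 -> C^1 in the basis (e_{s1}, e_{s2})
d0 = sp.Matrix([[1 - q], [1 - q]])
# delta^1: C^1 -> C^2, with delta e_{s1} = -delta e_{s2}
d1 = sp.Matrix([[1 - q + q**2, -(1 - q + q**2)]])
assert (d1 * d0).expand() == sp.zeros(1, 1)
\end{verbatim}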
\begin{rmk}
The analog construction of the algebraic complex $(\mathcal{C}^*, \delta)$ can be given for homology.
We have a complex
\begin{equation}\label{e:complex_homology}
\mathcal{C}_k:= \bigoplus_{\mid \!T \! \mid = k, \mid \!W_{T} \! \mid < \infty} M.e_T
\end{equation}
with boundary maps
\begin{equation} \label{e:delta_homology}
\partial (a.e_T) := \sum_
{s \in T}
(-1)^{\sigma(s,T)+1} \sum_{\underline{w} \in W_{T }/W_{T\setminus \{ s\}}}
(-1)^{l(w)}\lambda(\psi(w))(a).e_{T \setminus \{s\}}
\end{equation}
so that $H_*(\mathcal{C}_*) = H_*(N(W); \mathcal{L}_\lambda).$
\end{rmk}
\section[Filtrations and spectral sequences]{Filtrations and spectral sequences for the Salvetti complex}
\label{s:filtration}
\subsection{A natural filtration for the Salvetti complex} \label{ss:natural}
In this section we assume that we have a Coxeter graph
$\Gamma$ with \emph{finite} set of vertices $S$ and
a corresponding Coxeter group $W=W_\Gamma$ and a
Coxeter system $(W,S).$ We fix an ordering $<$ on $S$
and we assume $S= \{s_1, \cdots, s_N\},$ with $s_1 < \cdots < s_N.$
Moreover we set a ${\mathbb Z}$-module $M$ and a representation
$\lambda: A_\Gamma \to \mathrm{Aut}(M).$
The ordering on the set $S$ induces a natural decreasing
filtration on the complex $\mathcal{C}^*$ defined in Section \ref{s:salvetti}.
We define the submodule
$$
\mathcal{F}^k\mathcal{C}^* := <e_T \mid s_{N-k+1}, \cdots, s_N \in T>.
$$
It is clear from the description of the differential $\delta$
(see equation (\ref{e:delta1})) that the submodule
$\mathcal{F}^k\mathcal{C}^*$ is a subcomplex of the complex $(\mathcal{C}^*, \delta)$ and we have the inclusions
$$0 = \mathcal{F}^{N+1}\mathcal{C}^* \subset \cdots \subset \mathcal{F}^{k+1}\mathcal{C}^* \subset \mathcal{F}^k\mathcal{C}^* \subset \cdots \subset \mathcal{F}^0\mathcal{C}^* = \mathcal{C}^*.$$
By standard methods (see for example \cite{spa}) we have a spectral sequence associated
to the complex $(\mathcal{C}^*, \delta)$ and the filtration $\mathcal{F}$:
\begin{thm} \label{t:ss}
There is a first-quadrant spectral sequence $(E_r,d_r)$ with $E_0$-term
$$
E_0^{i,j} = \mathcal{F}^i\mathcal{C}^j/\mathcal{F}^{i+1}\mathcal{C}^j \Longrightarrow H^{i+j}(\mathcal{C}^*).
$$
The $d_0$ differential is the map naturally induced by the differential $\delta$ on the
quotient complex $\mathcal{F}^i\mathcal{C}^{i+j}/\mathcal{F}^{i+1}\mathcal{C}^{i+j}.$
The $E_1$-term of the spectral sequence is given by
$$
E_1^{i,j} = H^{i+j}(\mathcal{F}^i\mathcal{C}^*/\mathcal{F}^{i+1}\mathcal{C}^*)
$$
and the $d_1$ differential corresponds to the boundary operator of the triple
$(\mathcal{F}^{i+2}C^j, \mathcal{F}^{i+1}C^j, \mathcal{F}^{i}C^j).$
\end{thm}
\begin{exm}
In the case of the complex $(\mathcal{C}^*, \delta)$ of Example \ref{ex:sal_b3} ($W=W_{{\mathbb A}_2}$)
the filtration gives a very easy picture.
The term $\mathcal{F}^0\mathcal{C}^*$ is the complex $\mathcal{C}^*$ itself. The term $\mathcal{F}^1\mathcal{C}^*$ is the ${\mathbb Z}[q^\pmu]$-submodule
generated by $e_{\{s_2\}}$ and $e_{\{s_1, s_2\}}.$ The term $\mathcal{F}^2\mathcal{C}^*$ is the submodule generated by
$e_{\{s_1, s_2\}}.$ Finally $\mathcal{F}^3\mathcal{C}^*$ is the trivial submodule.
It is easy to see that the quotient $\mathcal{F}^0\mathcal{C}^*/\mathcal{F}^1\mathcal{C}^*$ is isomorphic to the complex
$(\mathcal{C}_{{\mathbb A}_1}^*, \delta)$ for $W=W_{{\mathbb A}_1} = \mathfrak{S}_2$ (recall that the
corresponding Artin group is the braid group $\mathcal{B}_2 = {\mathbb Z}$), with the correspondence
$$
\iota:\mathcal{F}^0\mathcal{C}^*/\mathcal{F}^1\mathcal{C}^* \to \mathcal{C}_{{\mathbb A}_1}^*
$$
given by $\iota:[e_{\{s_1\}}] \mapsto e_{\{s_1\}}$ and $\iota:[e_{\varnothing}] \mapsto e_{\varnothing}$.
It is easy to verify that the isomorphism $\iota$ is compatible with the coboundary map $\delta$. Moreover, note that
$\iota$ preserves the natural graduation. We assume that the ring of coefficients ${\mathbb Z}[q^\pmu]$
is naturally graded with degree zero.
The quotient $\mathcal{F}^1\mathcal{C}^*/\mathcal{F}^2\mathcal{C}^*$ (resp. $\mathcal{F}^2\mathcal{C}^*/\mathcal{F}^3\mathcal{C}^*$) is
isomorphic, as a ${\mathbb Z}[q^\pmu]$-module, to ${\mathbb Z}[q^\pmu]$ generated by $[e_{\{s_2\}}]$
(resp. $[e_{\{s_1, s_2\}}]$) with graduation shifted by $1$ (resp. $2$).
Let $\lambda$ be the representation defined in Example \ref{ex:rank_one}. Note that
$\lambda$ is compatible with the natural inclusion $\mathcal{B}_m \into \mathcal{B}_{m+1}$.
Hence we can write the $E_1$-term of the spectral sequence associated to $(\mathcal{C}^*, \delta)$ as follows
\begin{center}
\begin{tabular}{|l}
\xymatrix @R=1pc @C=1pc {
H^1(\mathcal{B}_2; {\mathbb Z}[q^\pmu]_\lambda) & & \\
H^0(\mathcal{B}_2; {\mathbb Z}[q^\pmu]_\lambda) & {\mathbb Z}[q^\pmu]_\lambda & {\mathbb Z}[q^\pmu]_\lambda }\\
\hline
\end{tabular}
\end{center}
\end{exm}
\subsection{The differentials} The differentials of the spectral sequence given in
Theorem \ref{t:ss} are induced by
the coboundary $\delta$ of the complex $\mathcal{C}^*.$
The differential $d_1$ is explicitly described in Theorem \ref{t:ss}. In order to compute the higher differentials
it is useful to control the representatives in $\mathcal{C}^*$ for the elements of the spectral sequence.
Following the construction in \cite{spa} we define $Z^s_r:= \{c \in \mathcal{F}^s\mathcal{C}^* \mid \delta c \in \mathcal{F}^{s+r}\mathcal{C}^* \}.$
Given an element $x \in E_r,$ it is represented by a cochain
$$c \in {Z^s_r}/{(Z^{s+1}_{r-1} + \delta Z^{s-r+1}_{r-1})},$$
hence by a class $\overline{c} \in \mathcal{F}^s\mathcal{C}^*$ such that
$\delta \overline{c} \in \mathcal{F}^{s+r}\mathcal{C}^*$
modulo the subgroup $$(\delta \mathcal{F}^{s-r+1}\mathcal{C}^* \cap \mathcal{F}^s\mathcal{C}^*) + \mathcal{F}^{s+1}\mathcal{C}^*.$$
The differential $d_r$ on the class $x$ is the map induced by the coboundary $\delta.$
Hence, given
$b \in {Z^{s+r}_r}/{(Z^{s+r+1}_{r-1} + \delta Z^{s+1}_{r-1})}$ and
$\overline{b} \in \mathcal{F}^{s+r}\mathcal{C}^*/\mathcal{F}^{s+r+1}\mathcal{C}^*$ representatives of an element $y \in E_r,$ if $d_r x = y $
we have that $\delta c - b \in (Z^{s+r+1}_{r-1} + \delta Z^{s+1}_{r-1})$ and, if $y=0,$
$c \in Z^s_{r+1} + Z^{s+1}_{r-1}.$ In an equivalent way we can say that
$d_rx = y$ if and only
if $\delta \overline{c} - \overline{b} \in (\mathcal{F}^{s+r+1}\mathcal{C}^* + \delta \mathcal{F}^{s+1}\mathcal{C}^*).$
Given an element $x \in E_r$ such that $d_r x=0$ we need to lift $x$ to an element $x' \in E_{r+1}.$ We
begin by taking a representative $c \in Z^s_{r+1} + Z^{s+1}_{r-1}$ for $x$ and we
choose
a lifting $c' \in Z^s_{r+1}$
with $c' = c + \Delta,$ where $\Delta \in Z^{s+1}_{r-1}.$
This means that we need to lift the class $\overline{c}$ to a class
$\overline{c}' \in \mathcal{F}^s\mathcal{C}^*/((\delta \mathcal{F}^{s-r+1}\mathcal{C}^* \cap \mathcal{F}^s\mathcal{C}^*) + \mathcal{F}^{s+1}\mathcal{C}^*)$
taking as a representative for $\overline{c}'$ the element $\overline{c} + \overline{\Delta}$ where
$\overline{\Delta} \in \mathcal{F}^{s+1}\mathcal{C}^*$ and $\delta (\overline{c} + \overline{\Delta}) \in \mathcal{F}^{s+r+1}\mathcal{C}^*.$
Working out the spectral sequence we can use Theorem \ref{t:ss} and start at page $E_1$ choosing
a class $x \in H^*(\mathcal{F}^s\mathcal{C}^*/\mathcal{F}^{s+1}\mathcal{C}^*)$ and a representative $c_1 \in \mathcal{F}^s\mathcal{C}^*$ for $x.$
At the $E_r$-step of the spectral sequence we have a representative $c_r$ for $x$
with $\delta c_r \in \mathcal{F}^{s+r}\mathcal{C}^*$
and if $d_rc_r =0$
we can choose in $E_{r+1}$ a new representative $c_{r+1} = c_r + \Delta_r$ with
$\Delta_r \in \mathcal{F}^{s+1}\mathcal{C}^*$ and
$\delta (c_r + \Delta_r) \in \mathcal{F}^{s+r+1}\mathcal{C}^*.$
\subsection{Recursion and order of vertices} \label{ss:recursion}
Thanks to the simple structure of the complex $\mathcal{C}^*$ and the filtration $\mathcal{F}^*,$ Theorem \ref{t:ss}
can provide a recursive description of the cohomology of the complex $\mathcal{C}^*$ and hence of the space $N(W).$
The Coxeter graph of the group $W$ as well as the choice of the ordering
on the set $S$ of vertices of $\Gamma$ play an important role in this.
Let $\Gamma_{\overline{k}}$ be the full subgraph of $\Gamma$ with vertices $s_1, \ldots, s_{N-k-1}$
and let $\Gamma_{\widetilde{k}}$ be the full subgraph of $\Gamma$ with vertices $s_{N-k+1}, \ldots, s_{N}.$
\begin{prop}\label{p:recursion}
Let $A_\Gamma$ be the Artin group associated to the Coxeter graph $\Gamma.$
Suppose that the parabolic subgroups associated to the graphs $\Gamma_{\overline{k}}$
and $\Gamma_{\widetilde{k}}$ commute, i.~e.~for every vertex
$s \in \{s_{1}, \ldots, s_{N-k-1}\}$ and $t \in \{s_{N-k+1}, \ldots, s_{N}\}$ we have $m(s,t) = 2.$ Then the quotient complex
$\mathcal{F}^k\mathcal{C}^*/\mathcal{F}^{k+1}\mathcal{C}^*$ is isomorphic to the complex $\mathcal{C}^*(\Gamma_{\overline{k}})[k],$
that is the cochain complex that computes the cohomology of the Artin group
$G_{\Gamma_{\overline{k}}}$ with a graduation shifted by $k.$
The isomorphism $$\mathcal{C}^*(\Gamma_{\overline{k}})[k] \stackrel{\rho}{\longrightarrow} \mathcal{F}^k\mathcal{C}^*/\mathcal{F}^{k+1}\mathcal{C}^*$$
is defined as follows: given a subset $T \subset \{s_1, \ldots, s_{N-k-1}\},$
the generator $e_{T}$ maps to the equivalence class of the generator $e_{T'},$ with
$T' = T \cup \{ s_{N-k+1}, \ldots, s_N \}.$
\end{prop}
\begin{rmk} \label{rm:linear}
In the special case when the Coxeter graph $\Gamma$ is a subgraph of a linear graph we can
sort the vertices of $\Gamma$ in linear order, in such a way that for every index $k$ the hypotheses of
Proposition \ref{p:recursion} hold. Choose such an ordering for the vertices of
$\Gamma.$ Hence, according to Theorem \ref{t:ss}, the construction described above determines
a spectral sequence $(E_r,d_r)$ converging to the cohomology of the Artin group $A_\Gamma.$ The
recursion given
by Proposition \ref{p:recursion} implies that for every $i$, the $i$-th column of the $E_1$-term
of the spectral sequence is isomorphic to the cohomology of the Artin group $A_{\Gamma'}$ for $\Gamma'$
a subgraph of $\Gamma.$
\end{rmk}
\begin{exm} \label{ex:A_4}
We keep working with a generic ${\mathbb Z}$-module $M$ and a representation
$\lambda: A_\Gamma \to \mathrm{Aut}(M)$ as in Section \ref{ss:natural}, but we consider the special case of
the Coxeter group $W$ of type ${\mathbb A}_4,$ with diagram
\begin{mucca}
\entrymodifiers={=<4pt>[o][F-]}
\begin{center}
\begin{tabular}{l}
\xymatrix @R=2pc @C=2pc {
\ar @{-}[r]_(-0.20){s_1} & \ar @{-}[r]_(-0.20){s_2} & \ar @{-}[r]_(-0.20){s_3} \ar @{-}[r]_(0.80){s_4} &
}
\end{tabular}
\end{center}\end{mucca}
and with the order on vertices given by the labelling. It is clear from the diagram that the given order satisfies the hypothesis of Proposition \ref{p:recursion}.
Moreover we have the following isomorphisms:
$$
\mathcal{F}^0\mathcal{C}^*/\mathcal{F}^1\mathcal{C}^* = \mathcal{C}^*({\mathbb A}_3); \; \;\;\;\; \mathcal{F}^1\mathcal{C}^*/\mathcal{F}^2\mathcal{C}^* = \mathcal{C}^*({\mathbb A}_2)[1]; \; \;\;\;\;
\mathcal{F}^2\mathcal{C}^*/\mathcal{F}^3\mathcal{C}^* = \mathcal{C}^*({\mathbb A}_1)[2]; $$
$$\mathcal{F}^3\mathcal{C}^*/\mathcal{F}^4\mathcal{C}^* = M[3]; \; \;\;\;\; \mathcal{F}^4\mathcal{C}^*/\mathcal{F}^5\mathcal{C}^* = \mathcal{F}^4\mathcal{C}^* = M[4]
$$
where the index $[k]$ in square brackets means that the graduation
of the module is shifted by $k.$
Note that in this example the Artin group $A_\Gamma$ is the braid group on $5$ strands.
According to \cite{paris} we write $\mathcal{B}_i$ for the braid group on $i$ strands.
We consider the natural identification of the groups $\mathcal{B}_i,$ $i <5$ as subgroups of $\mathcal{B}_5$
through the diagram inclusion induced by the filtration. Hence in this case we identify
$\mathcal{B}_i$ with the subgroup generated by $s_1, \ldots, s_{i-1}.$ We keep using the notation $\lambda$
for the representation of the subgroups of $\mathcal{B}_5$ induced by the inclusion.
The cohomology $H^*(N(W_{{\mathbb A}_4});\mathcal{L}_\lambda)$ - that is the cohomology $H^*(\mathcal{B}_5; M_\lambda)$
of the classical braid group
$\mathcal{B}_5$ on $5$ strands with coefficients
on the $\mathcal{B}_5$-module $M_\lambda$
- can be computed
by means of a spectral sequence with the following $E_1$-term:
\begin{center}
\begin{tabular}{|l}
\xymatrix @R=1pc @C=1pc {
H^3(\mathcal{B}_4; M_\lambda) & & & &\\
H^2(\mathcal{B}_4; M_\lambda) & H^2(\mathcal{B}_3; M_\lambda) & & &\\
H^1(\mathcal{B}_4; M_\lambda) & H^1(\mathcal{B}_3; M_\lambda) & H^1(\mathcal{B}_2; M_\lambda) & &\\
H^0(\mathcal{B}_4; M_\lambda) & H^0(\mathcal{B}_3; M_\lambda) & H^0(\mathcal{B}_2; M_\lambda) & M & M
}\\
\hline
\end{tabular}
\end{center}
The cohomology of the groups $\mathcal{B}_i$ for $i <5$ (and in fact for any $i$) can be computed recursively by means of an
analogous spectral sequence.
\end{exm}
\begin{rmk}\label{rm:homology}
In the homology complex $\mathcal{C}_*$ the dual filtration is given by
$$
\mathcal{F}_k\mathcal{C}_* := < e_T \mid \{ s_{N-k+1}, \ldots , s_N \} \varsubsetneq T>.
$$
With the hypothesis of Proposition \ref{p:recursion} we have that the quotient
$\mathcal{F}_{k+1}\mathcal{C}_* / \mathcal{F}_{k}\mathcal{C}_*$ is isomorphic to the complex $\mathcal{C}_*(\Gamma_{\overline{k}})[k].$
\end{rmk}
If $\mathcal{H}$ is a finite central arrangement we can associate to every hyperplane $H \in \mathcal{H}$ a linear functional
$l_H$ with $\ker l_H = H.$ The homogeneous polynomial $f_\mathcal{H} = \prod_{H \in \mathcal{H}} l_H,$
which is unique up to multiplication
by an invertible element, is the \emph{defining polynomial} of the arrangement
and the set $f_\mathcal{H}^{-1}(1)$ is the \emph{Milnor fiber} of the arrangement
(see \cite{miln} for a general introduction).
If $\mathcal{H} = \mathcal{H}_W$ is the reflection arrangement of a Coxeter group $W,$
the polynomial $f^2_\mathcal{H} = \prod_{H \in \mathcal{H}}l_H^2$
is $W$-invariant and hence defines a \emph{weighted homogeneous} polynomial $\phi:V/W \to {\mathbb C}$
on the affine variety $V/W$ with non-isolated singularity $\phi^{-1}(0) = (\cup_{H\in \mathcal{H}}H)/W.$
The map $\phi$ restricts to a fibration $\phi: N(W) \to {\mathbb C}^*$ with
fiber $F_W = \phi^{-1}(1)$ that is called the Milnor fiber of the singularity
associated to $W.$
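As a simple illustration of these definitions, take $W = W_{{\mathbb A}_1} = {\mathbb Z}/2{\mathbb Z}$ acting on $V = {\mathbb C}$ by $x \mapsto -x.$ Then $f_\mathcal{H} = x,$ the invariant polynomial $f_\mathcal{H}^2 = x^2$ descends to $\phi(y) = y$ on $V/W \cong {\mathbb C}$ with coordinate $y = x^2,$ the space $N(W)$ is ${\mathbb C}^*$ and the Milnor fiber $F_W = \phi^{-1}(1)$ is a single point.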
Let $\lambda(q)$ be the representation on the Laurent polynomial
ring $L=R[q^\pmu]$ considered in Example \ref{ex:rank_one}.
Let $W$ be a finite Coxeter group, with Coxeter graph $\Gamma.$
The fibration $\phi: N(W) \to {\mathbb C}^*$ induces on fundamental groups a map $\phi_\sharp:A_\Gamma \to {\mathbb Z}$
sending each standard generator of the Artin group to $1.$
Since the space $N(W)$ is aspherical, from Shapiro's Lemma (see \cite{brown}) we have that
the cohomology of the Milnor fiber $F_W$ with constant coefficients in the ring $R$
is isomorphic to the cohomology of $N(W)$ with coefficients in the $A_\Gamma$-module
of Laurent series $R[[q^\pmu]]$ where each standard generator of $A_\Gamma$ maps
to multiplication by $q:$
$$H^*(F_W; R) = H^*(A_\Gamma; R[[q^\pmu]]).$$
Using the recursive description of the spectral sequence for the Salvetti complex,
in \cite{cal05} it is shown that the cohomology of the Artin group $A_\Gamma$ with coefficients in the
representation $\lambda(q)$ is isomorphic, modulo an index shifting, to the cohomology with constant
coefficients of the Milnor fiber $F_W.$ We can state the result as follows:
\begin{thm}[\cite{cal05}] \label{t:shifting}
Let $W$ be a finite Coxeter group and let $A$ be the associated Artin group. We have:
$$
H^{*+1}(A;\mathcal{L}_q) = H^*(F_W;R).
$$
\end{thm}
\begin{rmk}
A recursive computation applies even if $\Gamma$ is not a linear graph or
if the order on the set of vertices is not linear.
For any subset $T$ of the set $S$ of vertices of $\Gamma$ we can define the following subcomplex of $\mathcal{C}^*:$
$$\mathcal{F}^{T}\mathcal{C}^*:= < e_{U} \mid T \subset U \subset S>.$$
We can consider the poset
$$\mathcal{P}:= \{(T,T') \mid T \subset T' \subset S \}$$
with the order relation given by $(T_1,T_1') < (T_2, T_2')$ if and only if $T_1 \subset T_2,$ $T_1' \subset T_2'.$
Given a couple $(T,T') \in \mathcal{P} $ the recursive method
described in this section allows one to compute the $E_1$-term of
the spectral sequence for the cohomology of the quotient complex
$\mathcal{F}^{T}\mathcal{C}^*/\mathcal{F}^{T'}\mathcal{C}^*$ by recursion on the poset $\mathcal{P}.$
\end{rmk}
In the next section we present a few examples of the application of this method and some results obtained with it.
\section{Cohomology of Artin groups: some examples} \label{s:examples}
In this section we recall some computations and examples where the methods from the previous section apply.
In some cases, like in Section \ref{ss:classical} and Section \ref{ss:rat},
the use of the spectral sequence described
in Section \ref{s:filtration} makes computations and proofs shorter.
In what follows we will sometimes use a compact notation for the generators $e_T,$ $T \subset S,$ of the $L$-module
$\mathcal{C}^*$ for the Coxeter system $(W,S).$ If $S$ is the ordered set $\{s_1, \ldots, s_n \}$ we will write a string
$\epsilon_1\epsilon_2\cdots\epsilon_n,$ $\epsilon_i \in \{0,1 \}$ for a generator $e_T$ such
that $\epsilon_i = 1$ if and only if $s_i \in T.$ We will also write $0^h$ and $1^h$ instead of
$\underbrace{0 \cdots 0 }_{h \mbox{ terms}}$ and $\underbrace{1 \cdots 1 }_{h \mbox{ terms}},$
meaning respectively $e_{\emptyset}$ and $e_S$ when $h = n.$ As an example we write $1^n$ for $e_S$ and $10^{n-1}$
for $e_{\{s_1\}};$ we can also use notations like $A01^2$ to denote the generators
$e_T$ such that $s_{n-2} \notin T,$ $s_{n-1} \in T,$ $s_n \in T.$
\subsection{Homology of the braid group $\mathcal{B}_n \! \mod 2$}\label{ss:classical}
In the case of the classical braid group $\mathcal{B}_n$ with constant coefficients it is simpler to compute
the homology rather than the cohomology. We state in this form the results obtained by Fuks in \cite{fuks}
and we give a somewhat simpler proof.
\begin{thm}
The homology $\oplus_n H_*(\mathcal{B}_n; {\mathbb Z}_2)$ is isomorphic to the ring $R = {\mathbb Z}_2[x_0, x_1, x_2, \cdots]$
considered as a ${\mathbb Z}_2$-module. The variable
$x_i$ has homological dimension $\dim x_i = 2^i-1$ and degree $\deg x_i= 2^i$ so that
the monomial $x_{i_1}^{h_1}\cdots x_{i_l}^{h_l}$ belongs to the homology group
$H_m(\mathcal{B}_n; {\mathbb Z}_2)$ with $n = \sum_j h_j 2^{i_j}$ and $m = \sum_j h_j (2^{i_j}-1).$
The multiplication map $\mathcal{B}_{n_1} \times \mathcal{B}_{n_2} \to \mathcal{B}_{n_1+n_2}$ given by juxtaposing braids
induces a multiplication on $\oplus_n H_*(\mathcal{B}_n; {\mathbb Z}_2)$ that corresponds to the standard multiplication
in the ring $R.$
\end{thm}
\begin{proof}
We consider the constant local system $\mathcal{L} = {\mathbb Z}_2$ where
each standard generator acts by multiplication by $1.$
Using the notation of Example \ref{ex:rank_one} we set $q = -1.$ The coefficients in the
boundary $\partial$ can be easily computed since $1 + q + \cdots + q^{n-1} = n \mod 2.$
In particular we have that the boundary of a simple element of the form $c= 1^{n-1}$ is
given ($\!\!\!\!\mod 2$) by
$$
\partial c = \sum_{i=1}^{n-1} \bin{n}{i} 1^{i-1}01^{n-i-1}.
$$
We recall that the binomial $\bin{n}{i}$ is odd if and only if the integers $i$ and $n-i$
have no common non-zero coefficients in their expansion in base $2.$ As a special case we have that
if $n$ is a power of $2$ then the binomial $\bin{n}{i}$ is even for all $0 < i < n.$
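For instance, for $n = 4$ all the binomials $\bin{4}{i},$ $0<i<4,$ are even, so
$$
\partial\, 1^3 = \bin{4}{1} 01^2 + \bin{4}{2} 101 + \bin{4}{3} 1^2 0 \equiv 0 \pmod 2,
$$
while for $n = 3$ we get $\partial\, 1^2 \equiv 01 + 10 \pmod 2.$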
Given a monomial $u = x_{i_1}^{h_1} \cdots x_{i_l}^{h_l}$ we assume that the indexes of $u$ are ordered
$i_1 > i_2 > \cdots > i_l$ and we associate to $u$ the following generator
in the Salvetti complex $\mathcal{C}_*$ for $\mathcal{B}_n$:
$$
\underbrace{1^{2^{i_1}-1}0\cdots01^{2^{i_1}-1}}_{h_1 \mbox{ terms}} 0 \cdots 0
\underbrace{1^{2^{i_l}-1}0\cdots01^{2^{i_l}-1}}_{h_l \mbox{ terms}}.
$$
From the description of the boundary map
$\partial$ it follows that for any generator of $\mathcal{C}_*$ in the form
$$
c = 1^{2^{a_1}-1}0\cdots01^{2^{a_l}-1}
$$
we have that $\partial c=0$ and then in particular all
the generators associated to monomials in $R$ are cycles.
Moreover given two generators
$$
c_1 = 1^{2^{a_1}-1}0\cdots01^{2^{a_l}-1}01^{2^b-1}01^{2^{b'}-1}01^{2^{a'_1}-1}0\cdots01^{2^{a'_{l'}}-1}
$$
and
$$
c_2 = 1^{2^{a_1}-1}0\cdots01^{2^{a_l}-1}01^{2^{b'}-1}01^{2^{b}-1}01^{2^{a'_1}-1}0\cdots01^{2^{a'_{l'}}-1}
$$
we can set
$$
c = 1^{2^{a_1}-1}0\cdots01^{2^{a_l}-1}01^{2^b+2^{b'}-1}01^{2^{a'_1}-1}0\cdots01^{2^{a'_{l'}}-1}
$$
and for $b \neq b'$ we have $\partial c = c_1 + c_2.$ Hence the two cycles $c_1$ and $c_2$ are homologous.
We assume the inductive hypothesis that for any $k < n$ the
cycles corresponding to the monomials
with total degree $k$ generate the homology
group $H_*(\mathcal{B}_k; {\mathbb Z}_2).$ Using the filtration given in Remark \ref{rm:homology}
we can define the spectral sequence for the homology of $\mathcal{B}_n$
analogous to the cohomology spectral sequence
constructed in Theorem \ref{t:ss}.
The $E^1$-term is given by $E^1_{s,t}= H_t(\mathcal{B}_{n-s-1}; {\mathbb Z}_2).$ By induction the $s$-th
column of the $E^1$-term of the spectral sequence is generated by the
monomials in $R$ with degree $n-s-1.$ If the string $c$ is the cycle associated to the
monomial $u \in R,$ the representative in $\mathcal{C}_*$ of $u$ in the $s$-th column
of the $E^1$-term is given by the string $c01^s.$
The differential $d^1_{s,t}: E^1_{s,t} \to E^1_{s-1,t}$ acts on $c01^s$ by mapping
$d^1: c01^s \to s \cdot c001^{s-1},$ that is the representative
of the monomial $s\cdot ux_0 \mod 2.$ This means
that $d^1_{s,t}$ is given by multiplication by $sx_0$ and
hence it is trivial if and only if $s$ is even, while it is injective for odd $s.$
It follows from the inductive hypothesis on the description of the groups
$H_*(\mathcal{B}_k; {\mathbb Z}_2)$ for $k<n$ that for $s$ even we have $E^2_{s,t} =0$ and
for $s$ odd $E^2_{s,t}$ is generated by all the monomials with degree $n-s-1$ and dimension $s$ that are not
divisible by $x_0.$
The differential $d^2_{s,t}: E^2_{s,t} \to E^2_{s-2,t+1}$ is given by multiplication
by $x_1$ if $s-1 \equiv 0 \mod 4$ and is trivial otherwise.
The $s$-th column of the $E^3$-term of the spectral sequence is trivial
if $s-1 \equiv 0 \mod 4,$ $s > 1$ and is generated by monomials
that are not divisible by $x_0$ or $x_1$ if $s-1 \equiv 2 \mod 4.$
In general the description of the differential, and as a consequence
the description of the spectral sequence, is
the following.
The differential $d^{2^i}_{s,t}: E^{2^i}_{s,t} \to E^{2^i}_{s-{2^i},t+2^i-1}$
is given by multiplication by $x_i$ if $s-1 \equiv 0 \mod 2^i$ and is trivial otherwise.
The $s$-th column of the $E^{2^i+1}$-term of the spectral sequence is trivial
if $s-1 \equiv 0 \mod 2^i,$ $s > 2^i$ and is generated by monomials
that are not divisible by $x_0, x_1, \ldots, x_i$ if $s-1 \equiv 2^{i-1} \mod 2^i.$
All the other differentials are trivial.
In the $E^\infty$-term of the spectral sequence we have, in the $0$-th column, the monomials
$u$ with degree $n-1.$ Those lift to monomials $ux_0$ in the homology of $\mathcal{B}_n.$
In general in the $(2^i-1)$-th column we have the monomials with degree $n - 2^i$ that are not divisible
by the variables $x_0, \ldots, x_{i-1}.$ A monomial $u$ in the $(2^i-1)$-th column lifts to the monomial $ux_i$
in the homology of $\mathcal{B}_n.$
The multiplication map $\mathcal{B}_{n_1} \times \mathcal{B}_{n_2} \to \mathcal{B}_{n_1+n_2}$ given by juxtaposing braids
is induced by the inclusion of the Coxeter graph $\Gamma_{A_{n_1-1}}$ for $W_{A_{n_1-1}}$ and
$\Gamma_{A_{n_2-1}}$ for $W_{A_{n_2-1}}$ in the graph $\Gamma_{A_{n_1+n_2-1}}$ for $W_{A_{n_1+n_2-1}}$
as graphs of commuting parabolic subgroups. The map sends the vertices of $\Gamma_{A_{n_1-1}}$ to
the first $n_1-1$ vertices of $\Gamma_{A_{n_1+n_2-1}}$ and the vertices of $\Gamma_{A_{n_2-1}}$ to
the last $n_2-1$ preserving the ordering.
The induced map on the Salvetti complex is given by mapping the couple of strings $(A,B)$ to the string $A0B$
and hence the induced multiplication in homology maps the couple of monomials $(u,v)$ to the product $uv.$
\end{proof}
\subsection{Rational cohomology of the Milnor fiber}\label{ss:rat}
In this example we show how to compute the rational cohomology of the classical braid group $\mathcal{B}_n$
with coefficients in the representation $\lambda(q)$ already described in Example \ref{ex:rank_one}.
The result presented here has been computed in \cite{fren} and \cite{mar} and independently in \cite{dps}.
Let $R:= \Q$ be the field of rational numbers and let $\mathcal{L}_q$ be the local system constructed in
Example \ref{ex:rank_one}. The local system is induced by the action of the braid group on
the Laurent polynomial ring $L:= \Q[q^\pmu].$ Each standard generator maps to multiplication
by $(-q).$ The choice of this action is clearly equivalent to the action given by each standard
generator mapping to multiplication by $q$, as in Example \ref{ex:rank_one}. However, we prefer the choice of $(-q)$,
in line with \cite{dps, cal05, cal06} and others, since it yields slightly simpler formulas,
as the reader will see in the following paragraphs.
As shown in Section \ref{ss:recursion}, this local system has an interesting geometric
interpretation in terms of the cohomology of the Milnor fiber of
the discriminant singularity of type ${\mathbb A}_{n-1}$ (see also \cite{cal05, cal06} for the analogous computation
for homology with integer coefficients).
From an algebraic point of view, the computation gives, modulo an index shifting, the rational cohomology of the
kernel of the abelianization map $\mathcal{B}_n \to {\mathbb Z},$ that is the commutator subgroup $\mathcal{B}_n'$ of the braid group
on $n$ strands.
In fact it is easy to see that the Milnor fiber of type ${\mathbb A}_{n-1}$ is a classifying space for $\mathcal{B}_n'$ and using Theorem \ref{t:shifting} we get:
$$
H^{*+1}(\mathcal{B}_n;\mathcal{L}_q) = H^*(\mathcal{B}_n';\Q).
$$
Let $\varphi_n(q)$ be the $n$-th cyclotomic polynomial. We introduce the notation
$\mathbf{n}:= \Q[q]/(\varphi_n(q)).$ In the following paragraphs we will also use the notation
$[n]:= 1 + q + \cdots + q^{n-1}= \frac{q^n-1}{q-1}.$
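Note that $[n]$ factors as the product of the cyclotomic polynomials $\varphi_d(q)$ over the divisors $d > 1$ of $n;$ for instance
$$
[6] = \varphi_2(q)\varphi_3(q)\varphi_6(q) = (1+q)(1+q+q^2)(1-q+q^2).
$$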
For any positive integer $n$ we linearly order the vertices of the graph $\Gamma_{n}$ of type ${\mathbb A}_n,$ that is the
graph for the Artin group $\mathcal{B}_{n+1}.$ Let $\mathcal{C}_n^*$ be the complex associated to $\Gamma_n.$
Recall that the Coxeter group $W_{{\mathbb A}_n}$ has exponents $1,\ldots, n$ and hence
$W_{{\mathbb A}_n}(q) = [n+1]! :=\prod_{i=1}^{n+1} [i].$
From Example \ref{ex:rank_one} we can describe more explicitly the coboundary $\delta$ in $\mathcal{C}^*_n.$
Let $e_T$ be a generator of $\mathcal{C}^*_n$ in the form $A01^a01^b0B$ and let $e_{T'}$ be the generator
$A01^{a+b+1}0B.$
We need the following simple remark: if $W_S$ is a Coxeter group generated by a set of generators $S$ that
is the disjoint union $S= T_1 \cup T_2$ of two commuting sets of generators, then
we can decompose $W_S = W_{T_1} \times W_{T_2}$ and we have a factorization
$W_S(q) = W_{T_1}(q) \, W_{T_2}(q)$ of the Poincar\'e series of $W_S.$
Applying this to the computation of $\delta e_T$ we have that the coefficient for $e_{T'}$ in the
coboundary is given by the sign coefficient $(-1)^{a+|A|}$ times the $q$-analog binomial
$$
\frac{W_{{\mathbb A}_{a+b+1}}(q)}{W_{{\mathbb A}_a}(q)W_{{\mathbb A}_b}(q)} = \frac{[a+b+2]!}{[a+1]![b+1]!} := \qbin{a+b+2}{a+1}.
$$
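For example
$$
\qbin{4}{2} = \frac{[4]!}{[2]!\,[2]!} = \frac{[3][4]}{[2]} = (1+q+q^2)(1+q^2) = 1+q+2q^2+q^3+q^4.
$$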
As in \cite{dps} we define the following elements:
\begin{eqnarray*}
w_h & := & 01^{h-2}0\\
z_h & := & 1^{h-1}0 + (-1)^h 01^{h-1}\\
b_h & := & 01^{h-2}\\
c_h & := & 1^{h-1}\\
z_h(i) & := & \sum_{j=0}^{j=i-1} (-1)^{hj}w_h^j z_h w_h^{i-j-1}\\
v_h(i) & := & \sum_{j=0}^{j=i-2} (-1)^{hj}w_h^j z_h w_h^{i-j-2} b_h + (-1)^{h(i-1)} w_h^{i-1} c_h.
\end{eqnarray*}
We remark that the elements $z_h(i)$ and $v_h(i)$ are cocycles.
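For instance, for $h=3$ we have $z_3 = 1^20 - 01^2$ and, writing juxtaposition of strings for concatenation,
$$
z_3(1) = z_3, \quad v_3(1) = c_3 = 1^2, \quad z_3(2) = z_3w_3 - w_3z_3, \quad v_3(2) = z_3b_3 - w_3c_3.
$$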
Our aim is to prove the following result:
\begin{thm} [\cite{dps}] \label{t:dps}
\begin{eqnarray*}
H^{n-2i+1}(\mathcal{B}_{n+1}; \mathcal{L}_q) & = &
\left\{
\begin{array}{cl}
0 & \mbox{if } h:=\frac{n}{i} \mbox{ is not an integer} \\
\q{h} & \mbox{generated by } [z_h(i)] \mbox{ if } h:= \frac{n}{i} \mbox{ is an integer}
\end{array}
\right. \\
H^{n-2(i-1)}(\mathcal{B}_{n+1}; \mathcal{L}_q) & = &
\left\{
\begin{array}{cl}
0 & \mbox{if } h:=\frac{n+1}{i} \mbox{ is not an integer} \\
\q{h} & \mbox{generated by } [v_h(i)] \mbox{ if } h:=\frac{n+1}{i} \mbox{ is an integer.}
\end{array}
\right.
\end{eqnarray*}
\end{thm}
\begin{proof}
We prove the theorem by induction on $n.$
We consider
the natural graph inclusion $\Gamma_n \into \Gamma_{n+1}$ and group inclusion $\mathcal{B}_n \into \mathcal{B}_{n+1}$
induced by the filtration $\mathcal{F}.$ As in Example \ref{ex:A_4} we recall that the $E_1$-term of the spectral
sequence for $\mathcal{B}_{n+1}$ is given by
$$ E^{s,t}_1 := H^{s+t}(\mathcal{F}^s\mathcal{C}^*_n/\mathcal{F}^{s+1}\mathcal{C}^*_n) = H^t (\mathcal{C}^*_{n-s-1})$$
where we define the complexes $\mathcal{C}_0^* = \mathcal{C}_{-1}^*:= L$ concentrated in dimension $0$ and hence
$H^*(\mathcal{C}_0^*) = H^0(\mathcal{C}_0^*) = H^*(\mathcal{C}_{-1}^*) = H^0(\mathcal{C}_{-1}^*) = L.$
The statement of the theorem is trivially true for $n=1.$
Assume $n>1$ and suppose that the theorem holds for any integer $m,$ $m<n.$
Each non-trivial entry $E_1^{s,t}$ of the $E_1$ term of the spectral sequence for $\mathcal{C}_n^*$ is
isomorphic either to an $L$-module of the form $\q{h},$ for a suitable $h,$ or to the ring $L$ itself.
The second case holds only for the entries $E^{0,n-1}_1$ and $E^{0,n}_1.$
The cyclotomic polynomials $\ph_h(q)$ are prime polynomials in the ring $L.$
As a consequence any map $d:\q{h} \rightarrow \q{k}$ induced by a differential $d_r$ of
the spectral sequence can be non-zero only if $h = k,$ and in that case a non-zero map is an isomorphism.
In a similar way any map $d:\q{h} \rightarrow L/([n+1])$ can be non-zero only if
$h \mid n+1$; if $h \nmid n+1,$ any map from $\q{h}$ to any quotient of $L/([n+1])$ is trivial. This follows since
$[n+1]$ is the product of the cyclotomic polynomials $\ph_h(q)$ for $h \mid n+1$ and
hence the $L$-module $L/([n+1])$ decomposes as a direct sum of modules $$L/([n+1]) = \bigoplus_{h|n+1}\q{h}.$$
Since the $L$-module $E^{n-1,0}_1 = L$ is generated by $01^{n-1}$ and $E^{n,0}_1 = L$
is generated by $1^n,$ from Example \ref{ex:rank_one} we can see that the differential
$d_1: E^{n-1,0}_1 \rightarrow E^{n,0}_1$ is given by multiplication by $[n+1]$ and hence
we have $E^{n-1,0}_2= 0$ and $E^{n,0}_2 = L/([n+1]).$
As a consequence if we fix a certain integer $h$ we can study the spectral
sequence considering only the
terms isomorphic to $\q{h}$ and, if $h \mid n+1,$
the summand of $L/([n+1])$ isomorphic to $\q{h},$ while we can ignore all
the other summands in the spectral sequence.
We have three different cases:
\textit{i)}
$h \mid n$ \\
By induction we know that $E^{s,t}_1 = H^t(\mathcal{C}^*_{n-s-1}) = \q{h}$ only in two cases:
\begin{itemize}
\item[\textit{i.a)}] $h \mid n-s-1$ and $ t = n-s-1-2\, \frac{n-s-1}{h}+1;$
\item[\textit{i.b)}] $h \mid n-s$
and $t=n-s-1-2\left(\frac{n-s}{h} - 1 \right).$
\end{itemize}
If we set $i := \frac{n}{h},$ in case \textit{i.a)} we have
\begin{eqnarray*}
\lambda = 1, \ldots, i-1 & E_1^{\lambda h-1,n-\lambda(h-2) -2i+1} &
\mbox{generated by } z_h(i-\lambda)01^{\lambda h-1}\\
\mbox{and in case \textit{i.b)} we have} & & \\
\lambda = 0, \ldots, i-1 & E_1^{\lambda h ,n-\lambda(h-2) -2i+1} &
\mbox{generated by } v_h(i-\lambda)01^{\lambda h}.
\end{eqnarray*}
We note that
$z_h(i)01^l = v_h(i)001^l - (-1)^{h(i-1)} w_h^{i-1} 1^{h-1}01^l,$ hence we get
that the map
$d_1:E_1^{\lambda h-1,n-\lambda(h-2) -2i+1} \rightarrow E_1^{\lambda h,n-\lambda(h-2) -2i+1}$
is given by multiplication by
$[\lambda h + 1],$ so it is an isomorphism. It follows (see diagram below) that
the $L$-module $E_1^{0,n-2i+1}$ is the only one that survives in the term
$E_{\infty}$ and hence $E_1^{0,n-2i+1}$ will give, as we will see next,
the only contribution from $E_{\infty}$ to the cohomology group $H^{n-2i + 1}(\mathcal{C}_n^*).$
\begin{center}
\begin{tabular}{|l}
\xymatrix @R=1pc @C=1pc {
\q{h} & & & & & & & & & & & &\\
& & & \q{h}\ar[r]^\sim & \q{h} & & & & & & & &\\
& & & & & & & \cdots \ar[r]^\sim & \cdots & & & & \\
& & & & & & & & & & & \q{h} \ar[r]^\sim & \q{h} \\
& & & & & & & & & & & & }\\
\hline
\end{tabular}
\end{center}
In order to consider the case \textit{ii)} we need the following lemma:
\begin{lem}[\cite{dss}] \label{l:dss} Let $1 = d_1 < \cdots < d_n$ be the divisors of $n.$ In the ring $L$ we have the following
equality of ideals:
$$
\left( \qbin{n}{d_1} ,\ldots, \qbin{n}{d_k} \right)=
(\varphi_{d_{k+1}}\cdots\varphi_{d_n}).
$$
\end{lem}
\textit{ii)}
$h \mid n+1$\\
Now we set $i := \frac{n+1}{h}.$ The two possible cases for $E^{s,t}_1 = \q{h}$ are
the following ones:
\begin{eqnarray*}
\lambda = 1, \ldots, i-1 & E_1^{\lambda h-2,n-\lambda(h-2) -2i+2} &
\mbox{generated by } z_h(i-\lambda)01^{\lambda h-2}\\
\lambda = 1, \ldots, i-1 & E_1^{\lambda h-1,n-\lambda(h-2) -2i+2} &
\mbox{generated by } v_h(i-\lambda)01^{\lambda h-1}.
\end{eqnarray*}
The differential
$d_1:E_1^{\lambda h-2,n-\lambda(h-2) -2i+2} \rightarrow E_1^{\lambda h-1,n-\lambda(h-2) -2i+2}$
is multiplication by the $q$-analog $[\lambda h]$ and hence it is the trivial map.
The next differential that we need to consider is $$d_{h-1}:E_{h-1}^{\lambda h-1,n-\lambda(h-2) -2i+2}
\rightarrow E_{h-1}^{(\lambda+1) h-2,n-(\lambda+1)(h-2) -2i+2}.$$
The equality
$v_h(i)01^l=z_h(i-1)01^{h-2}01^l+(-1)^{h(i-1)}w_h^{i-1}1^{h-1}01^l$ implies that
the map above corresponds to multiplication by
$[\lambda h+1]\ldots[\lambda h+h-1]$ and hence it is an isomorphism, since all the factors
are invertible in $\q{h}.$ Finally the map
$d_{h-1}:E_{h-1}^{n-h,h-1} \rightarrow E_{h-1}^{n,0}$ corresponds
to multiplication by
$\alpha_h = \qbin{n+1}{h}$ and hence from Lemma \ref{l:dss}
it is injective. Below we have a picture of the spectral sequence, with differentials $d_1$ and $d_{h-1}.$
We can see that there is only one nontrivial $\q{h}$-module that survives in
$E_{\infty},$ namely $E_{h-1}^{h-2,n-h+2-2(i-1)},$ which gives the only contribution
to the cohomology group $H^{n-2(i-1)}(\mathcal{C}_n^*).$
\begin{center}
\begin{tabular}{|l}
\xymatrix @R=1pc @C=1pc {
& & \q{h}\ar[r]^0 & \q{h}\ar[rrrd]^\sim& & & & & & & & & & & \\
& & & & & &\cdots \ar[r]^0 & \cdots \ar[rrrd]^\sim & & & & & & &\\
& & & & & & & & & & \q{h} \ar[r]^0 & \q{h} \ar[rrrd]^{\alpha_h}& & &\\
& & & & & & & & & & & & & & R/I}\\
\hline
\end{tabular}
\end{center}
\textit{iii)}
$h \nmid n(n+1)$\\
Let $c,$ $1 < c < h,$ be the integer such that $h\mid n+c.$ If we set $i := \frac{n+c}{h},$
we have again two cases for $E^{s,t}_1 = \q{h}$:
\begin{eqnarray*}
\lambda = 1, \ldots, i -1 & E_1^{\lambda h-c-1,n+c-\lambda(h-2) -2i+1}
& \mbox{generated by } z_h(i-\lambda)01^{\lambda h-c-1}\\
\lambda = 1, \ldots, i -1 & E_1^{\lambda h-c ,n+c-\lambda(h-2) -2i+1}
& \mbox{generated by } v_h(i-\lambda)01^{\lambda h-c}.
\end{eqnarray*}
The differential
$d_1:E_1^{\lambda h-c-1,n+c-\lambda(h-2) -2i+1}\rightarrow E_1^{\lambda h-c ,n+c-\lambda(h-2) -2i+1}$
corresponds to multiplication by $[\lambda h -c +1]$ that is co-prime with $[h]$ and hence the map is
an isomorphism. It follows that none of the modules survives in $E_2$ and
hence the contribution to $E_{\infty}$ is trivial.
From Lemma \ref{l:dss} and from the previous observations in case \textit{ii)} we get that
$E_{\infty}^{n,0} =\q{n+1},$ generated by $1^n.$
From the description of the spectral sequence it follows that the cohomology
group $H^*(\mathcal{C}_n^*)$ is the one described in the statement of the theorem.
In order to complete the proof we need to check that the generators are correct.
In case \textit{i)} the $L$-module $E_1^{0,n-2i+1}$ is generated by $v_h(i)0,$ which differs from $z_h(i)$
by a term of the form $A1,$ and hence we can lift $v_h(i)0$ to $z_h(i).$
The case \textit{ii)} is analogous: the $L$-module $E_{h-1}^{h-2,n-h+2-2(i-1)}$ is generated by
$z_h(i-1)01^{h-2},$ which differs from $v_h(i)$ by a term of the form $A1^{h-1},$ and hence
we can lift $z_h(i-1)01^{h-2}$ to $v_h(i).$
\end{proof}
\subsection{Artin group of affine type and non-linear Coxeter graphs: some remarks} \label{ss:affine}
We deal now with the case of an affine reflection group.
Let $(W, S)$ be an affine reflection group with Coxeter graph $\Gamma$ and suppose $\mid \! S \! \mid =n+1.$
Let $\lambda$ be an abelian representation of the Artin group $A_\Gamma$ over a ring $R$ that is
a unique factorization domain.
The generators of the Salvetti complex $(\mathcal{C}^*, \delta)$ are in one-to-one correspondence
with the proper subsets of $S.$ It is sometimes convenient to complete the complex
$\mathcal{C}^*$ to an \emph{augmented} Salvetti complex $\widehat{\mathcal{C}}^*$ as follows:
$$
\widehat{\mathcal{C}}^* := \mathcal{C}^* \oplus R.e_S.
$$
We define the coboundary $\widehat{\delta}$ on the complex
$\widehat{\mathcal{C}}^*$ by setting
$\widehat{\delta}(e_T) = \delta(e_T)$ if $\mid \! T \! \mid <n$ and
re-defining the coboundary on the
top-dimensional generators of $\mathcal{C}^*.$
We formally define a suitable \emph{quasi}-Poincar\'e polynomial for $W$ by
$$
(\widehat{W}_S)_\lambda := \mathrm{lcm}\{(W_{S \setminus \{s\}})_\lambda \mid s \in S\}
$$
and for every $s \in S$ we set the coboundary for $\widehat{\mathcal{C}}^*$:
$$
\widehat{\delta}(e_{S \setminus \{ s\}}):= (-1)^{\sigma(s, S\setminus \{ s\})+1}
\frac{(\widehat{W}_S)_\lambda}{(W_{S \setminus \{s\}})_\lambda} \, e_S.
$$
It is straightforward to verify that $\widehat{\mathcal{C}}^*$ is still a cochain complex. Moreover, we have
the following relations between the cohomology of $\mathcal{C}^*$ and $\widehat{\mathcal{C}}^*$:
$$ H^i(\mathcal{C}^*) = H^i (\widehat{\mathcal{C}}^*) $$
for $i \neq n, n + 1$ and we have the short exact sequence
$$ 0 \to H^n (\widehat{\mathcal{C}}^*) \to H^n (\mathcal{C}^*) \to R \to 0.$$
An example of this construction can be found in the computation of the cohomology
of the affine Artin group of type $\widetilde{\mathbb{B}}_n$ in \cite{cms10}.
\subsection{A non-abelian case: three strands braid group and a geometric representation} \label{ss:nonab}
The third braid group $\mathcal{B}_3$ and the special linear
group $SL_2({\mathbb Z})$ have a classical geometric representation
given by symmetric power of the natural symplectic representation. The cohomology of this representation
is studied in detail in \cite{ccs12}. The aim of this section is to show how the Salvetti complex can be used
for finite computations, even with non-abelian representation.
In general we can consider an orientable
surface $M_{g,n}$ of genus $g$ with $n$ boundary components and the isotopy
classes of Dehn twists around simple loops $c_1, \ldots, c_{2g}$ such that $\mid \! \! c_i \cap c_{i+1} \! \! \mid \; = 1$
and $\mid \! \! c_i \cap c_j \! \! \mid \; = 0$ if $j \neq i \pm 1.$ We obtain a representation of the braid group
in the symplectic group $\mathrm{Aut}(H_1(M_{g,n};{\mathbb Z}), \langle \, , \, \rangle)$ of all automorphisms preserving the intersection form
as follows: the $i$-th generator of the braid group
$\mathcal{B}_{2g+1}$ maps to the Dehn twist with respect to the simple loop $c_i.$
In the case $g=1,$ $n=1$ the symplectic group equals $SL_2({\mathbb Z}).$ We extend this representation
to a representation $\lambda$ on the symmetric algebra $M={\mathbb Z}[x,y].$
This representation splits into irreducible $SL_2({\mathbb Z})$-modules $M = \oplus_{n \geq 0}M_n$
according to the polynomial degree. In \cite{ccs12} the cohomology groups $H^*(\mathcal{B}_3; M)$ and $H^*(SL_2({\mathbb Z}); M)$
are computed. The main ingredients for the achievement of this result are
the computation of the spectral sequence associated
to the central extension $$1 \to {\mathbb Z} \to \mathcal{B}_3 \to SL_2({\mathbb Z}) \to 1$$
(see \cite[Th. 10.5]{milnor71}), the amalgamated free product decomposition
$$SL_2({\mathbb Z}) = {\mathbb Z}_4 \ast_{{\mathbb Z}_2} {\mathbb Z}_6$$ (see \cite{mks}) and a generalization of
a classical result of Dickson (see \cite{dickson, steinberg}) on the characterization
of $SL_2(\F_p)$-invariants polynomials.
The methods described in this survey do not seem well suited to computing the
cohomology group $H^*(\mathcal{B}_3; M)$ in closed form,
but they can be used to carry out finite computations with the help of a computer.
In particular, for a fixed degree $n$ the computation of the cohomology group $H^*(\mathcal{B}_3; M_n)$
is a very simple problem.
Let $\sigma_1$ and $\sigma_2$ be the standard generators of the braid group $\mathcal{B}_3.$ The action of the
representation $\lambda$ on degree-one polynomials is given by
$$\sigma_1:\left\{ \begin{array}{l} x\to x-y\\ y \to y\end{array}\right. ,\
\sigma_2:\left\{ \begin{array}{l} x\to x\\ y \to x+ y\end{array} \right. $$
and hence, with respect to the basis $\{x,y\}$ of $M_1,$ the representation is given by the matrices
$$
\sigma_1\stackrel{\lambda}{\mapsto} \begin{bmatrix} 1&0\\-1&1\end{bmatrix},\quad
\sigma_2\stackrel{\lambda}{\mapsto} \begin{bmatrix} 1& 1\\ 0& 1\end{bmatrix}. $$
The action extends to the $n$-th symmetric algebra of the space $<x,y>,$ with basis
$\{ x^n, x^{n-1}y, \ldots, y^n\},$ by the matrices
$$
\sigma_1\stackrel{\lambda}{\mapsto} \begin{bmatrix}
\binom{n}{0}& 0 & 0 & \cdots & 0\\
-\binom{n}{1} & \binom{n-1}{0} & 0 & \ddots & 0 \\
\binom{n}{2} & -\binom{n-1}{1} & \binom{n-2}{0} & \ddots & 0\\
\vdots & \vdots & \ddots & \ddots & 0 \\
(\!-\!1\!)^n \binom{n}{n} & (\!-\!1\!)^{n\!-\!1} \binom{n-1}{n-1} & \cdots & -\binom{1}{1} & \binom{0}{0}
\end{bmatrix}, \quad
\sigma_2\stackrel{\lambda}{\mapsto} \begin{bmatrix}
\binom{0}{0} & \binom{1}{0} & \cdots & \binom{n - 1}{\;\;\,0} & \binom{n}{0}\\
0 & \binom{1}{1} & \ddots & \binom{n-1}{1}& \binom{n}{1} \\
0 & 0 & \ddots & \ddots & \vdots\\
\vdots & \ddots & \ddots & \binom{n - 1}{n - 1} & \binom{n}{n - 1} \\
0 & 0 & \cdots & 0 & \binom{n}{n}
\end{bmatrix}
$$
that is, $(\lambda(\sigma_1))_{ij}= (-1)^{i-j}\binom{n+1-j}{i-j}$ and $(\lambda(\sigma_2))_{ij}= \binom{j-1}{i-1},$
where $\binom{h}{k} = 0$ if $k<0$ or $k>h.$
Hence we have to compute the cohomology for the complex $\mathcal{C}^*$ given by:
\begin{center}
\begin{tabular}{l}
\xymatrix @R=1pc @C=1pc {
& *+<3pt>[F]{00} & \\
*+<3pt>[F]{10} \ar[ur]^{\sigma_1\sigma_2 - \sigma_2 + \Id} & & *+<3pt>[F]{01} \ar[ul]_{-\sigma_2\sigma_1 + \sigma_1 - \Id}\\
& *+<3pt>[F]{00} \ar[ul]^{\sigma_1-\Id} \ar[ur]_{\sigma_2 -\Id}& }\\
\end{tabular}
\end{center}
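As an illustration of such a finite computation, here is a minimal Python sketch (the function names are ours and purely illustrative) that builds the matrices of $\lambda(\sigma_1)$ and $\lambda(\sigma_2)$ in degree $n$ and computes the Betti numbers of the complex above by a rank count; this is valid over $\Q,$ while detecting integral torsion would require Smith normal forms instead.
\begin{verbatim}
import numpy as np
from math import comb

def sigma_matrices(n):
    # matrices of lambda(sigma_1), lambda(sigma_2) on degree-n
    # polynomials, in the basis {x^n, x^(n-1)y, ..., y^n}
    dim = n + 1
    S1 = np.zeros((dim, dim), dtype=int)
    S2 = np.zeros((dim, dim), dtype=int)
    for i in range(1, dim + 1):
        for j in range(1, dim + 1):
            if i >= j:
                S1[i-1, j-1] = (-1)**(i-j) * comb(n+1-j, i-j)
            if i <= j:
                S2[i-1, j-1] = comb(j-1, i-1)
    return S1, S2

def betti_numbers(n):
    # cochain complex  M_n -> M_n^2 -> M_n  from the diagram above
    S1, S2 = sigma_matrices(n)
    I = np.eye(n + 1, dtype=int)
    d0 = np.vstack([S1 - I, S2 - I])                       # 00 -> (10,01)
    d1 = np.hstack([S1 @ S2 - S2 + I, -S2 @ S1 + S1 - I])  # (10,01) -> 00
    assert not (d1 @ d0).any()   # delta^2 = 0, i.e. the braid relation
    r0 = np.linalg.matrix_rank(d0)
    r1 = np.linalg.matrix_rank(d1)
    return (n + 1 - r0,              # b0 = dim ker d0
            2*(n + 1) - r1 - r0,     # b1 = dim ker d1 - rank d0
            n + 1 - r1)              # b2 = dim coker d1
\end{verbatim}
For $n=0$ the sketch returns $(1,1,0),$ matching $H^0(\mathcal{B}_3;\Q) = H^1(\mathcal{B}_3;\Q) = \Q$ and $H^2(\mathcal{B}_3;\Q) = 0.$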
Similar computations for large $n$ can provide an evidence for general results like in \cite{ccs12}.
The reader familiar with computing local system (co)homology using resolutions will see that the cochain complex
obtained here coincides with that obtained from the standard presentation of $\mathcal{B}_3$ by these other methods.
The cochain complex above can be easily generalized to the case $g>1$,
that is the computation of the co\-ho\-mo\-lo\-gy of the group $\mathcal{B}_{2g+1}$
with coefficients in the representation on the ring of polynomials ${\mathbb Z}[x_1, y_1, \cdots, x_g, y_g].$
\section{Introduction}
\label{sec:introduction}
Deep learning models, e.g., deep neural networks (DNNs), have become the standard models for solving many complex real-world problems, such as image recognition \cite{he2016deep}, speech recognition \cite{graves2013speech}, natural language processing \cite{collobert2011natural}, and autonomous driving \cite{chen2015deepdriving}. However, training large-scale DNN models is by no means trivial, which requires not only large-scale datasets but also significant computational resources. The training cost can grow rapidly with task complexity and model capacity. For instance, it can cost \$1.6 million to train a BERT model on Wikipedia and Book corpora (15 GB) \cite{sharir2020cost}.
It is thus of utmost importance to protect DNNs from unauthorized duplication or reproduction.
One concerning fact is that well-trained DNNs are often exposed to the public via remote services (APIs), cloud platforms (e.g., Amazon AWS, Google Cloud and Microsoft Azure), or open-source toolkits like OpenVINO\footnote{\href{https://github.com/openvinotoolkit/open_model_zoo}{https://github.com/openvinotoolkit/open\_model\_zoo}}. It gives rise to adversaries (e.g., a model ``thief'') who attempt to steal the model in stealthy ways, causing copyright infringement and economic losses to the model owners.
Recent studies have shown that stealing a DNN can be done very efficiently without leaving obvious traces \cite{tramer2016stealing,papernot2017practical}. Arguably, unauthorized finetuning or pruning is the most straightforward way of model stealing, if the model parameters are publicly accessible (for research purposes only) or the adversary is an insider. Even when only the API is exposed, the adversary can still exploit advanced \emph{model extraction} techniques \cite{tramer2016stealing,papernot2017practical,orekondy2019knockoff,juuti2019prada,yuan2020attack} to steal most functionalities of the hidden model. These attacks pose serious threats to the copyright of deep learning models, calling for effective protection methods.
A number of defense techniques have been proposed to protect the copyright of DNNs, where DNN watermarking \cite{uchida2017embedding,zhang2018protecting,adi2018turning,darvish2019deepsigns} is one major type of technique. DNN watermarking embeds a secret watermark (e.g., logo or signature) into the model by exploiting the over-parameterization property of DNNs \cite{adi2018turning}. The ownership can then be verified when the same or similar watermark is extracted from a suspect model. The use of watermarks has an obvious advantage, i.e., the owner identity can be embedded and verified exactly, given that the watermark can be fully extracted. However, these methods still suffer from certain weaknesses. Arguably, the most concerning one is that they are \emph{invasive}, i.e., they need to tamper with the training procedure to embed the watermark, which may compromise model utility or introduce new security threats into the model \cite{liu2018fine,wang2019neural,fan2019rethinking,guo2020hidden}.
More recently, DNN fingerprinting \cite{cao2021ipguard,DBLP:conf/iclr/LukasZK21} has been proposed as a non-invasive alternative to watermarking. Lying at the design core of fingerprinting is \emph{uniqueness} --- the unique feature of a DNN model. Specifically, fingerprinting extracts a unique identifier (or fingerprint) from the owner model to differentiate it from other models. The ownership can be claimed if the identifier of the owner model matches with that of a suspect model. However, in the context of deep learning, a single fingerprinting feature/metric can hardly be sufficient or flexible enough to handle all the randomness in DNNs or against different types of model stealing and adaptive attacks (as we will show in our experiments). In other words, there exist many scenarios where a DNN model can easily lose its unique feature or property (i.e., fingerprint).
In this work, we propose a \emph{testing} approach for DNN copyright protection.
Instead of solely relying on one metric, we propose to actively test the ``similarities'' between a victim model and a suspect model from multiple angles. The core idea is to \textbf{1)} carefully construct a set of test cases to comprehensively characterize the victim model, and \textbf{2)} measure how similarly the two models behave on the test cases. Intuitively, if a suspect model is a stolen copy of the victim model, it will behave just like the victim model in certain ways. An extreme case is that the suspect is the exact duplicate of the victim model, and in this case, the two models will behave identically on these test cases. This testing view creates a dilemma for the adversary as better stealing will inevitably lead to higher similarities to the victim model. We further identify two major challenges for testing-based copyright protection: 1) how to define comprehensive testing metrics to fully characterize the similarities between two models, and 2) how to effectively generate test cases to amplify the similarities. The set of similarity scores can be viewed as a proof obligation that provides a chain of strong evidence to judge a stolen copy.
Following the above idea, we design and implement \textsc{DeepJudge}, a novel testing framework for DNN copyright protection. As illustrated in Fig.~\ref{fig:frame}, \tool{DeepJudge} is composed of three core components. First, we propose a set of multi-level testing metrics to fully characterize a DNN model from different angles. Second, we propose efficient test case generation algorithms to magnify the similarities (or differences) measured by the testing metrics between the two models. Finally, a `yes'/`no' (stolen copy) judgment will be made for the suspect model based on all similarity scores.
The advantages of \textsc{DeepJudge} include \textbf{1) non-invasive}: it works directly on the trained models and does not tamper with the training process; \textbf{2) efficient}: it can be done very efficiently with only a few seed examples and a quick scan of the models; \textbf{3) flexible}: it can easily incorporate new testing metrics or test case generation methods to obtain more evidence and reliable judgement, and can be applied in both white-box and black-box scenarios with different testing metrics; \textbf{4) robust}: it is fairly robust to adaptive attacks such as model extraction and defense-aware attacks. The above advantages make \tool{DeepJudge} a practical, flexible, and extensible tool for copyright protection of deep learning models.
We have implemented \tool{DeepJudge} as an open-source self-contained toolkit and evaluated \textsc{DeepJudge} on four benchmark datasets (i.e., MNIST, CIFAR-10, ImageNet and Speech Commands) with different DNN architectures, including both convolutional and recurrent neural networks. The results confirm the effectiveness of \textsc{DeepJudge} in providing strong evidence for identifying the stolen copies of a victim model. \tool{DeepJudge} is also proven to be more robust to a set of adaptive attacks compared to existing defense techniques.
In summary, our main contributions are:
\begin{itemize}
\item We propose a novel testing framework \tool{DeepJudge} for copyright protection of deep learning models. \tool{DeepJudge} determines whether one model is a copy of the other depending on the similarity scores obtained from a comprehensive set of testing metrics and test case generation algorithms.
\item We identify three typical scenarios of model copying including finetuned copy, pruned copy, and extracted copy; define positive and negative suspect models for each scenario; and consider both white-box and black-box protection settings. \tool{DeepJudge} can produce reliable evidence and judgement to correctly identify the positive suspects across all scenarios and settings.
\item \tool{DeepJudge} is a self-contained open-source tool for robust copyright protection of deep learning models and a strong complement to existing techniques. \tool{DeepJudge} can be flexibly applied in different DNN copyright protection scenarios and is extensible to new testing metrics and test case generation algorithms.
\end{itemize}
\begin{figure*}[t]
\centering
\includegraphics[width=0.84\linewidth]{djframe.png}
\caption{The overview of \tool{DeepJudge} Testing Framework.}
\label{fig:frame}
\end{figure*}
\vspace{-2mm}
\section{Background}
\label{sec:background}
\subsection{Deep Neural Network}
\label{sec:background_dnn}
A DNN classifier is a decision function $f: X\rightarrow Y$ mapping an input ${\bm{x}} \in X$ to a label $y \in Y=\{1, 2, \cdots, C\}$, where $C$ is the total number of classes. It comprises of $L$ layers:
$\{f^1, f^2, \cdots, f^{L-1}, f^L\},$
where $f^1$ is the input layer, $f^L$ is the probability output layer,
and $f^2,\cdots,f^{L-1}$ are the hidden layers. Each layer $f^{l}$ can be denoted by a collection of neurons:
$\{n_{l,1}, n_{l,2}, \cdots, n_{l,N_l}\},$
where $N_l$ is the total number of neurons at that layer. Each neuron is a computing unit that computes its output by applying a linear transformation followed by a non-linear operation to its input (i.e., output from the precedent layer). We use $\phi_{l,i}({\bm{x}})$ to denote the function that returns the output of neuron $n_{l,i}$ for a given input $ {\bm{x}} \in X$. Then, we have the output vector of layer $f^{l} $ ($2 \leq l \leq L$):
$f^{l}({\bm{x}}) = \left\langle \phi_{l,1}({\bm{x}}), \phi_{l,2}({\bm{x}}), \cdots, \phi_{l,N_l}({\bm{x}})\right\rangle$.
Finally, the output label $f({\bm{x}})$ is computed as $f({\bm{x}})=\argmax f^L({\bm{x}})$.
\vspace{-2mm}
\subsection{DNN Watermarking}
\label{sec:background_watermarking}
A number of watermarking techniques have been proposed to protect the copyright of DNN models \cite{adi2018turning,darvish2019deepsigns,le2020adversarial,uchida2017embedding,zhang2018protecting,jia2021entangled}. Similar to traditional multimedia watermarking, DNN watermarking works in two steps: \emph{embedding} and \emph{verification}. In the embedding step, the owner embeds a secret watermark (e.g., a signature or a trigger set) into the model during the training process.
Depending on how much knowledge of the model is available in the verification step, existing watermarking methods can be broadly categorized into two classes: a) \textit{white-box} methods for the case when model parameters are available; and b) \textit{black-box} methods when only predictions of the model can be acquired.
White-box watermarking embeds a pre-designed signature (e.g., a string of bits) into the parameter space of the model via certain regularization terms \cite{darvish2019deepsigns,uchida2017embedding}. The ownership could be claimed when the extracted signature from a suspect model is similar to that of the owner model.
Black-box watermarking usually leverages backdoor attacks \cite{gu2017badnets} to implant a watermark pattern into the owner model by training the model with a set of backdoor examples (also known as the trigger set) relabeled to a secret class \cite{le2020adversarial,zhang2018protecting}. The ownership can then be claimed when the defender queries the suspect model for examples attached with the watermark trigger and receives the secret class as predictions.
\vspace{-2mm}
\subsection{DNN Fingerprinting}
\label{sec:background_fingerprinting}
Recently, DNN fingerprinting techniques have been proposed to verify model ownership via two steps: fingerprint \emph{extraction} and \emph{verification}. According to the categorization rule for watermarking, fingerprinting methods \cite{cao2021ipguard, DBLP:conf/iclr/LukasZK21} are all \emph{black-box} techniques. Moreover, they are \emph{non-invasive}, which is in sharp contrast with watermarking techniques. Instead of modifying the training procedure to embed identities, fingerprinting directly extracts a unique feature or property of the owner model as its fingerprint (i.e., a unique identifier). The ownership can then be verified if the fingerprint of the owner model matches that of the suspect model. For example, IPGuard \cite{cao2021ipguard} leverages data points close to the classification boundary to fingerprint the boundary property of the owner model. A suspect model is determined to be a stolen copy of the owner model if it predicts the same labels for most boundary data points. \cite{DBLP:conf/iclr/LukasZK21} proposes a Conferrable Ensemble Method (CEM) to craft conferrable (a subclass of transferable) adversarial examples to fingerprint the overlap between two models' decision boundaries or adversarial subspaces. CEM fingerprinting demonstrates robustness to removal attacks, including finetuning, pruning and extraction attacks, but remains vulnerable to several adaptive attacks such as adaptive transfer learning and adversarial training \cite{DBLP:conf/iclr/LukasZK21}. It is the closest work to our \tool{DeepJudge}. However, as a fingerprinting method, CEM targets \emph{uniqueness}, while as a testing framework, our \tool{DeepJudge} targets \emph{completeness}, i.e., comprehensive characterization of a model with multi-level testing metrics and diverse test case generation methods. Note that CEM fingerprinting can be incorporated into our framework as a black-box metric.
\section{DNN Copyright Threat Model}
\label{sec:threat_model}
We consider a typical attack-defense setting with two parties: the victim and the adversary. Here, the model owner is the victim who trains a DNN model (i.e., the victim model) using private resources. The adversary attempts to steal a copy of the victim model, which 1) mimics its functionality while 2) cannot be easily recognized as a copy.
Following this setting, we identify three common threats to DNN copyright: 1) model finetuning, 2) model pruning, and 3) model extraction. The three threats are illustrated in the top row of Fig.~\ref{fig:frame}.
\vspace{0.5mm}
\noindent\textbf{\textit{Threat 1:} Model Finetuning.} In this case, we assume the adversary has full knowledge of the victim model, including model architecture and parameters, and has a small dataset to finetune the model \cite{adi2018turning,uchida2017embedding}. This occurs, for example, when the victim open-sourced the model for academic purposes only, but the adversary attempts to finetune the model to build commercial products.
\vspace{0.5mm}
\noindent\textbf{\textit{Threat 2:} Model Pruning.} In this case, we also assume the adversary has full knowledge of the victim model's architecture and parameters. Model pruning adversaries first prune the victim model using some pruning methods, then finetune the model using a small set of data \cite{liu2018rethinking,renda2020comparing}.
\vspace{0.5mm}
\noindent\textbf{\textit{Threat 3:} Model Extraction.} In this case, we assume the adversary can only query the victim model for predictions (i.e., the probability vector). The adversary may be aware of the architecture of the victim model but has no knowledge of the training data or model parameters.
The goal of model extraction adversaries is to accurately steal the functionality of the victim model through the prediction API \cite{juuti2019prada,tramer2016stealing,papernot2017practical,orekondy2019knockoff,yuan2020attack}. To achieve this, the adversary first obtains an annotated dataset by querying the victim model for a set of auxiliary samples, then trains a copy of the victim model
on the annotated dataset. The auxiliary samples can be selected from a public dataset \cite{correia2018copycat,orekondy2019knockoff} or synthesized using some adaptive strategies \cite{papernot2017practical,juuti2019prada}.
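To illustrate the mechanics, a minimal sketch of such an extraction loop in PyTorch-style Python is shown below (names like \texttt{victim\_api}, \texttt{surrogate} and \texttt{aux\_loader} are our own placeholders, and the KL-based objective is just one common choice):
\begin{verbatim}
import torch
import torch.nn.functional as F

def extract(victim_api, surrogate, aux_loader, epochs=10, lr=1e-3):
    # victim_api: callable returning probability vectors for a batch
    # surrogate:  an nn.Module architecture chosen by the adversary
    # aux_loader: iterable over batches of unlabeled auxiliary samples
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for x in aux_loader:
            with torch.no_grad():
                soft_labels = victim_api(x)   # query the exposed API
            opt.zero_grad()
            # match the surrogate's output distribution to the victim's
            loss = F.kl_div(F.log_softmax(surrogate(x), dim=1),
                            soft_labels, reduction='batchmean')
            loss.backward()
            opt.step()
    return surrogate
\end{verbatim}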
\section{Testing for DNN Copyright Protection}
\label{sec:methodology}
In this section, we present \tool{DeepJudge}, the proposed testing framework that produces supporting evidence to determine whether a \emph{suspect} model is a \emph{copy} of a \emph{victim} model. The victim model can be copied by model finetuning, pruning, or extraction, as discussed in Section \ref{sec:threat_model}. We identify the following criteria for a reliable copyright protection method:
\begin{enumerate}
\item \textbf{Fidelity.} The protection or ownership verification process should not affect the utility of the owner model.
\item \textbf{Effectiveness.} The verification should have high precision and recall in identifying stolen model copies.
\item \textbf{Efficiency.} The verification process should be efficient, e.g., taking much less time than model training.
\item \textbf{Robustness.} The protection should be resilient to adaptive attacks.
\end{enumerate}
\tool{DeepJudge} is a testing framework designed to satisfy all the above criteria.
In the following three subsections, we will first give an overview of \tool{DeepJudge}, then introduce its multi-level testing metrics and test case generation algorithms.
\subsection{\tool{DeepJudge} Overview}
\label{subsec:frame}
As illustrated in the bottom row of Fig. \ref{fig:frame}, \tool{DeepJudge} consists of two components and a final judgement step: i) test case generation, ii) a set of multi-level distance metrics for testing, and iii) a thresholding and voting based judgement mechanism.
Alg.~\ref{alg:overall} depicts the complete procedure of \tool{DeepJudge} with pseudocode. It takes the victim model $\mathcal{O}$, a suspect model $\mathcal{S}$, and a set of data $\mathcal{D}$ associated with the victim model as inputs and returns the values of the testing metrics as evidence as well as the final judgement. The set of data $\mathcal{D}$ can be provided by the owner from either the training or testing set of the victim model.
At the test case generation step, it selects a set of seeds from the input dataset $\mathcal{D}$ (Line 1) and carefully generates a set of extreme test cases from the seeds (Line 2).
Based on the test cases generated, \tool{DeepJudge} computes the distance (dissimilarity) scores defined by the testing metrics between the suspect and victim models (Line 3).
The final judgement of whether the suspect is a copy of the victim can be made via a thresholding and voting mechanism according to the dissimilarity scores between the victim and a set of negative suspect models (Line 4).
\begin{algorithm}[t]
\small
\caption{$\tool{DeepJudge}(\mathcal{O}, \mathcal{S}, \mathcal{D})$}
\label{alg:overall}
\KwIn{owner model $\mathcal{O}$, suspect model $\mathcal{S}$, data set $\mathcal{D}$ }
\KwOut{judgement $\mathcal{J}$, evidence $\mathcal{E}$}
\SetKwFunction{SelectSeeds}{SelectSeeds}
\SetKwFunction{GenTestCase}{GenerateTestCases}
\SetKwFunction{Metrics}{ComputeMetrics}
\SetKwFunction{Judging}{Judging}
\SetKwFunction{Evidence}{Evidence}
\tcp{Test case generation (Section \ref{sec:test-gen})}
$Seeds$ $\leftarrow$ \SelectSeeds$(\mathcal{O}, \mathcal{D})$
${T}$ $\leftarrow$ \GenTestCase$(\mathcal{O}, Seeds)$
\tcp{Testing metrics (Section \ref{sec:metrics})}
$\mathcal{E}$ $\leftarrow$ \Metrics$(\mathcal{O}, \mathcal{S}, {T})$
\tcp{Final judgement (Section \ref{subsec:verification})}
$\mathcal{J}$ $\leftarrow$ \Judging$(\mathcal{E})$ \tcp{Copy, Right? Yes or No.}
\Return $\mathcal{J}, \mathcal{E}$
\end{algorithm}
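To make the final judgement step concrete, the following is a minimal sketch of one plausible thresholding-and-voting rule in Python (the calibration against negative suspect models shown here, as well as the names \texttt{evidence} and \texttt{negative\_evidence}, are our illustrative assumptions rather than the exact rule used by \tool{DeepJudge}):
\begin{verbatim}
import numpy as np

def judge(evidence, negative_evidence, alpha=2.0):
    # evidence:          dict metric -> distance(victim, suspect)
    # negative_evidence: dict metric -> list of distances between the
    #                    victim and independently trained negative models
    # alpha:             standard deviations below the negative population
    #                    that count as 'suspiciously close'
    votes = 0
    for metric, dist in evidence.items():
        neg = np.asarray(negative_evidence[metric])
        # a suspect much closer to the victim than any honest model
        # votes for 'stolen copy' on this metric
        if dist < neg.mean() - alpha * neg.std():
            votes += 1
    return votes > len(evidence) / 2   # majority vote -> 'yes'
\end{verbatim}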
\subsection{Multi-level Testing Metrics}
\label{sec:metrics}
We first introduce the testing metrics for the two defense settings: white-box and black-box. \emph{1) White-box Setting:} In this setting, \tool{DeepJudge} has full access to the internals (i.e., intermediate layer outputs) and the final probability vectors of the suspect model $\mathcal{S}$.
\emph{2) Black-box Setting:} In this setting, \tool{DeepJudge} can only query the suspect model $\mathcal{S}$ to obtain the probability vectors or the predicted labels.
In both settings, we assume the model owner is willing to provide full access to the victim model $\mathcal{O}$, including the training and test datasets, and the training details if necessary.
\begin{table}[t]
\renewcommand\arraystretch{1.2}
\centering
\caption{Proposed multi-level testing metrics.}\label{tab:metrics}
\begin{tabu}{lll}
\tabucline[1pt]{-}
\textbf{Level} & \textbf{Metric} & \textbf{Defense Setting} \\ \tabucline[1pt]{-}
\emph{Property-level} & Robustness Distance (RobD) & Black-box \\ \hline
\multirow{2}{*}{\emph{Neuron-level}} & Neuron Output Distance (NOD)
& White-box \\
& Neuron Activation Distance (NAD) & White-box \\ \hline
\multirow{3}{*}{\emph{Layer-level}}
& Layer Outputs Distance (LOD) & White-box \\
& Layer Activation Distance (LAD) & White-box \\
& Jensen-Shanon Distance (JSD) & Black-box \\\tabucline[1pt]{-}
\end{tabu}
\end{table}
The proposed testing metrics are summarized in Table~\ref{tab:metrics}, with their suitable defense settings highlighted in the last column. \tool{DeepJudge} advocates evidence-based ownership verification of DNNs via multi-level testing metrics that complement each other to produce more reliable judgement.
\subsubsection{Property-level metrics}
There is an abundant set of model properties that could be used to characterize the similarities between two models, such as the adversarial robustness property \cite{fawzi2017robustness,carlini2017towards,cao2021ipguard,DBLP:conf/iclr/LukasZK21} and the fairness property \cite{mehrabi2019survey}. Here, we consider the former and define the \emph{robustness distance} to measure the adversarial robustness discrepancy between two models on the same set of test cases.
We will test more properties in our future work.
Let $f$ denote the function represented by the victim model $\mathcal{O}$. Given an input ${\bm{x}}_i$ and its ground-truth label $y_i$, an adversarial example ${{\bm{x}}'_i}$ can be crafted by slightly perturbing ${\bm{x}}_i$ to maximize the classification error of $f$. This process is known as an adversarial attack, and $f({\bm{x}}'_i) \neq y_i$ indicates a successful attack. Adversarial examples can be generated using any existing adversarial attack method, such as FGSM \cite{goodfellow2014explaining} and PGD \cite{madry2017towards}.
Given a set of test cases, we can obtain its adversarial version $T=\{{\bm{x}}'_1, {\bm{x}}'_2, \cdots\}$, where ${\bm{x}}'_i$ denotes the adversarial example of ${\bm{x}}_i$.
The robustness property of model $f$ can then be defined as its accuracy on $T$:
$$Rob(f,T) =\frac{1}{|T|}\sum_{i=1}^{|T|} \mathbbm{1}\big(f({\bm{x}}'_{i}) = y_i\big). $$
\noindent \textbf{Robustness Distance (RobD)}.
Let $\hat{f}$ be the suspect model. We define the robustness distance between $f$ and $\hat{f}$ as the absolute difference between the two models' robustness:
$$RobD(f, \hat{f}, T) = |Rob(\hat{f}, T) - Rob(f,T)|.$$
The intuition behind \emph{RobD} is that model robustness is closely related to the decision boundary learned by the model through its unique optimization process, and should be considered as a type of fingerprint of the model. \emph{RobD} requires minimal knowledge of the model (only its output labels).
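To make the two definitions concrete, the following is a minimal NumPy sketch; the \texttt{predict} callables (mapping a batch of inputs to predicted labels) and all names are illustrative assumptions rather than the actual \tool{DeepJudge} implementation:
\begin{verbatim}
import numpy as np

def rob(predict, adv_x, y):
    # Rob(f, T): accuracy of a model on the adversarial test cases.
    return float(np.mean(predict(adv_x) == y))

def rob_d(predict_victim, predict_suspect, adv_x, y):
    # RobD: absolute difference between the two robustness values.
    return abs(rob(predict_suspect, adv_x, y)
               - rob(predict_victim, adv_x, y))
\end{verbatim}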
\subsubsection{Neuron-level metrics}
\label{subsubsec:neuron-level}
Neuron-level metrics are suitable for white-box testing scenarios where the internal layer outputs of the model are accessible.
Intuitively, the output of each neuron in a model follows its own statistical distribution, and the neuron outputs in different models should vary. Motivated by this, \tool{DeepJudge} uses the output status of neurons to capture the difference between two models and defines the following two neuron-level metrics \emph{NOD} and \emph{NAD}.
\noindent \textbf{Neuron Output Distance (NOD)}.
For a particular neuron $n_{l,i}$ with $l$ being the layer index and $i$ being the neuron index within the layer, we denote the neuron output function of the owner's victim model and the suspect copy model by $\phi_{l,i}$ and $ \hat{\phi}_{l,i}$ respectively. \emph{NOD} measures the average neuron output difference between the two models over a given set $T=\{{\bm{x}}_1, {\bm{x}}_2, \cdots\}$ of test cases:
$$NOD(\phi_{l,i}, \hat{\phi}_{l,i}, T)= \frac{1}{|T|} \sum_{{\bm{x}}\in T} |\phi_{l,i}({\bm{x}})-\hat{\phi}_{l,i}({\bm{x}})|.$$
\noindent \textbf{Neuron Activation Distance (NAD)}.
Inspired by the Neuron Coverage \cite{pei2017deepxplore} for testing deep learning models, \emph{NAD} measures the difference in activation status (`activated' vs. `not activated') between the neurons of two models. Specifically, for a given test case ${\bm{x}} \in T$, the neuron $n_{l,i}$ is determined to be `activated' if its output value $\phi_{l,i}({\bm{x}})$ is larger than a pre-specified threshold.
The \emph{NAD} between the two models with respect to neuron $n_{l,i}$ can then be calculated as:
$$NAD(\phi_{l,i},\hat{\phi}_{l,i},T) = \frac{1}{|T|}\sum_{{\bm{x}}\in T}|S({\phi_{l,i}({\bm{x}})})-S({\hat{\phi}_{l,i}({\bm{x}})})|,$$
where the step function $S(\phi_{l,i}({\bm{x}}))$ returns $1$ if $\phi_{l,i}({\bm{x}})$ is greater than the pre-specified threshold and $0$ otherwise.
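As an illustration, both neuron-level metrics reduce to a few lines of NumPy once the neuron outputs on $T$ have been collected; the array interface and the default threshold below are illustrative assumptions:
\begin{verbatim}
import numpy as np

def nod(phi, phi_hat):
    # NOD: phi, phi_hat are one neuron's outputs on T, shape (|T|,).
    return float(np.mean(np.abs(phi - phi_hat)))

def nad(phi, phi_hat, threshold=0.5):
    # NAD: compares the binary activation status S(.) of the neuron.
    s = (phi > threshold).astype(float)
    s_hat = (phi_hat > threshold).astype(float)
    return float(np.mean(np.abs(s - s_hat)))
\end{verbatim}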
\subsubsection{Layer-level metrics}
The layer-wise metrics in \tool{DeepJudge} take into account the output values of the entire layer in a DNN model.
Compared with neuron-level metrics, layer-level metrics provide a full-scale view of the intermediate layer output difference between two models.
\vspace{0.5mm}
\noindent \textbf{Layer Output Distance (LOD)}. Given a layer index $l$, let $f^l$ and $\hat{f}^l$ represent the layer output functions of the victim model and the suspect model, respectively. \emph{LOD} measures the $L^p$-norm distance between the two models' layer outputs:
$$LOD(f^l,\hat{f}^l,T) =\frac{1}{|T|} \sum_{{\bm{x}}\in T}||f^l({\bm{x}})-\hat{f}^l({\bm{x}})||_p,$$
where $||\cdot||_p$ denotes the $L^p$-norm ($p=2$ in our experiments).
\vspace{0.5mm}
\noindent \textbf{Layer Activation Distance (LAD)}. \emph{LAD} measures the average \emph{NAD} of all neurons within the same layer:
$$LAD(f^l,\hat{f}^l,T)=\frac{1}{|N_l|} \sum_{i=1}^{|N_l|} NAD(\phi_{l,i},\hat{\phi}_{l,i},T),$$
where $N_l$ is the total number of neurons at the $l$-th layer, and $\phi_{l,i}$ and $\hat{\phi}_{l,i}$ are the neuron output functions from $f^l$ and $\hat{f}^l$.
\vspace{0.5mm}
\noindent \textbf{Jensen-Shannon Distance (JSD)}. JSD \cite{fuglede2004jensen} is a metric that measures the similarity of two probability distributions; a small \emph{JSD} value implies the two distributions are very similar.
Let $f^L$ and $\hat{f}^L$ denote the output functions (output layer) of the victim model and the suspect model, respectively. Here, we apply \emph{JSD} to the output layer as follows:
$$ JSD(f^L,\hat{f}^L,T)=\frac{1}{2|T|} \sum_{{\bm{x}}\in T} \big(K(f^L({\bm{x}}),q)+K(\hat{f}^L({\bm{x}}),q)\big),$$
where $q=(f^L({\bm{x}})+\hat{f}^L({\bm{x}}))/2$ and $K(\cdot,\cdot)$ is the Kullback-Leibler divergence.
\emph{JSD} quantifies the similarity between two models' output distributions, and is particularly powerful against model extraction attacks, where the suspect model is extracted based on the probability vectors (distributions) returned by the victim model.
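The layer-level metrics admit an equally direct implementation. The sketch below assumes the layer outputs (and, for \emph{JSD}, the output-layer probability vectors) on $T$ have already been collected into matrices of shape $(|T|, N_l)$; the activation threshold and the smoothing constant \texttt{eps} are illustrative:
\begin{verbatim}
import numpy as np

def lod(layer, layer_hat, p=2):
    # LOD: average L^p distance between layer outputs over T.
    return float(np.mean(np.linalg.norm(layer - layer_hat,
                                        ord=p, axis=1)))

def lad(layer, layer_hat, threshold=0.5):
    # LAD: mean NAD over all neurons of the layer.
    s = (layer > threshold).astype(float)
    s_hat = (layer_hat > threshold).astype(float)
    return float(np.mean(np.abs(s - s_hat)))

def jsd(P, P_hat, eps=1e-12):
    # JSD on output-layer probability vectors, shape (|T|, classes).
    def kl(a, b):  # Kullback-Leibler divergence per test case
        return np.sum(a * np.log((a + eps) / (b + eps)), axis=1)
    q = (P + P_hat) / 2
    return float(np.mean(kl(P, q) + kl(P_hat, q)) / 2)
\end{verbatim}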
\subsection{Test Case Generation}
\label{sec:test-gen}
To fully exercise the testing metrics defined above, we need to magnify the similarities between a positive suspect and the victim model, while minimizing the similarities of a negative suspect to the victim model.
In \tool{DeepJudge}, this is achieved by smart test case generation methods. Meanwhile, test case generation should respect the model accessibility in different defense settings, i.e., black-box vs. white-box.
\subsubsection{Black-box setting}
\label{subsubsec:bound}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{advs.png}
\caption{\tool{DeepJudge} uses adversarial examples of the victim model to probe the difference in models' decision boundary.}
\label{fig:adv}
\end{figure}
When only the input and output of a suspect model are accessible, we populate the test set $T$ using adversarial inputs generated by existing adversarial attack methods on the victim model.
We consider three widely used adversarial attack methods: Fast Gradient Sign Method (FGSM) \cite{goodfellow2014explaining}, Projected Gradient Descent (PGD) \cite{madry2017towards}, and Carlini \& Wagner's (CW) attack \cite{carlini2017towards}, where FGSM and PGD are $L^{\infty}$-bounded methods and CW is an $L^{2}$-bounded method. This gives us a diverse test suite containing both $L^{\infty}$- and $L^{2}$-norm perturbed adversarial examples. The detailed description and exact parameters used for adversarial test case generation are provided in Appendix \ref{subsec:generation}.
Fig.~\ref{fig:adv} illustrates the rationale behind using adversarial examples as test cases.
Finetuned and pruned model copies are directly derived from the victim model, so they share similar decision boundaries (purple line) with the victim model. In contrast, the negative suspect models are trained from scratch on different data or with different initializations, thus having minimal or no overlap with the victim model's decision boundary. By subverting the model's predictions, adversarial examples cross the decision boundary from one side to the other (we use untargeted adversarial examples). Although models extracted by model extraction attacks are trained from scratch by the adversary, the training relies on the probability vectors returned by the victim model, which contain information about the decision boundary. This implies that the extracted model will gradually mimic the decision boundary of the victim model. From this perspective, decision boundary (or robustness) based testing imposes a dilemma on model extraction adversaries: the better the extraction, the more similar the extracted model is to the victim model, and the easier it is to identify with our decision boundary based testing.
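For concreteness, the following PyTorch sketch shows how such untargeted adversarial test cases can be generated with PGD on the victim model; the perturbation budget \texttt{eps}, step size \texttt{alpha} and number of steps are illustrative, and the exact parameters we use are listed in Appendix \ref{subsec:generation}:
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_test_cases(victim, x, y, eps=8/255, alpha=2/255, steps=10):
    # Untargeted L^inf PGD: perturb the seeds x (labels y) so that
    # they cross the victim model's decision boundary.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(victim(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)  # stay in valid range
    return x_adv.detach()
\end{verbatim}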
\subsubsection{White-box setting}
In this case, the internals of the suspect model are accessible, so a more fine-grained approach to test case generation becomes feasible. As shown in Fig.~\ref{fig:testing}, given a seed input and a specified layer, \tool{DeepJudge} generates one test case for each neuron, with the corner case of the neuron's activation being of particular interest.
\begin{figure}[t]
\centering
\includegraphics[width=0.83\linewidth]{testing.png}
\caption{\tool{DeepJudge} tests each neuron and generates a test case to explore the corner region of its output distribution.}
\label{fig:testing}
\end{figure}
The test generation algorithm is described in Alg. \ref{alg:neuron}. It takes the owner's victim model $\mathcal{O}$ and a set of selected seeds $Seeds$ as input, and returns the set of generated test cases $T$. $T$ is initialized to be empty (Line 1).
The core of the algorithm is a nested loop (Lines 2-13), in which,
for each neuron $n_{l,i}$ required by the metrics in Section \ref{sec:metrics}, the algorithm searches for an input that raises the neuron's output $\phi_{l,i}({\bm{x}}')$ above a threshold value. At each outer loop iteration, an input is sampled from the $Seeds$ (Lines 3-4). It is then iteratively perturbed in the inner loop following the gradient of the neuron's activation (Lines 6-7), until either an input ${\bm{x}}'$ satisfying the threshold condition is found and added to the test suite $T$ (Lines 8-11), or the maximum number of iterations is reached. The parameter $lr$ (Line 7) controls the perturbation step size so that the search stays close to the seed input. Finally, the generated test suite $T$ is returned.
We now discuss how to configure the neuron-specific threshold $k$ used in Alg.~\ref{alg:neuron}. Since the output statistics vary across neurons, we pre-compute $k$ from the training data and the owner model as the maximum value (upper bound) of the corresponding neuron's output over all training samples.
This value is then scaled by a hyper-parameter $m$ to obtain a more reasonable and adaptive threshold for Alg.~\ref{alg:neuron}.
Note that the thresholds for all neurons of interest can be computed once, in a single pass propagating layer by layer through the model.
\begin{algorithm}[t]
\small
\caption{GenerateTestCases($\mathcal{O}$, $Seeds$)}
\label{alg:neuron}
\KwIn{owner model $\mathcal{O}$, a set of seed inputs $Seeds$}
\KwOut{test suite $T$}
Initialize test suite $T\leftarrow\emptyset$
\For{each neuron $n_{l,i}$}{
Sample a seed input ${\bm{x}}\leftarrow$ $Seeds.choice()$
${\bm{x}}'\leftarrow copy({\bm{x}})$
\For{$iter = 1$ \KwTo $iters$}{
Calculate gradients $grads \leftarrow \frac{\nabla \phi_{l,i}({\bm{x}}')}{\nabla {\bm{x}}'}$
Perturb input ${\bm{x}}' \leftarrow {\bm{x}}' + lr \cdot grads$
\If{$\phi_{l,i}({\bm{x}}') > $ threshold $k$}{
Add new test case $T\leftarrow T\cup \{{\bm{x}}'\}$\\
\textbf{break}
}
}
}
\Return $T$
\end{algorithm}
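A minimal PyTorch rendering of Alg.~\ref{alg:neuron} and of the threshold pre-computation described above is sketched below; the interface (a list of per-neuron callables mapping an input to the scalar output $\phi_{l,i}$) is an illustrative assumption:
\begin{verbatim}
import random
import torch

def neuron_threshold(phi, train_inputs, m=1.0):
    # k: scaled maximum of the neuron output over training samples.
    return m * max(phi(x).item() for x in train_inputs)

def generate_test_cases(neuron_fns, thresholds, seeds,
                        iters=1000, lr=0.01):
    # For each neuron, gradient-ascend a random seed on the neuron
    # output until it exceeds the threshold k (or iters runs out).
    T = []
    for phi, k in zip(neuron_fns, thresholds):
        x = random.choice(seeds).clone().detach()
        for _ in range(iters):
            x.requires_grad_(True)
            grad = torch.autograd.grad(phi(x), x)[0]
            x = (x + lr * grad).detach()
            if phi(x).item() > k:
                T.append(x)
                break
    return T
\end{verbatim}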
\subsection{Final Judgement}
\label{subsec:verification}
The judgement mechanism of \tool{DeepJudge} has two steps: thresholding and voting. The thresholding step determines a proper threshold for each testing metric based on the statistics of a set of negative suspect models (see Section \ref{subsubsec:negative} for more details). The voting step examines a suspect model against each testing metric, and gives it a positive vote if its distance to the victim model is \emph{lower} than the threshold of that metric. The lower a measured metric value, the more likely the suspect model is a copy of the victim, according to this metric. The final judgement can then be made based on the votes: \emph{the suspect model will be identified as a positive suspect if it receives more positive votes, and a negative suspect otherwise}.
For each testing metric $\lambda$, we set the threshold adaptively using an $\varepsilon$-difference strategy. Specifically, we use a one-tailed T-test to calculate the lower bound $LB_\lambda$ based on the statistics of the negative suspect models at the 99\% confidence level. If the measured difference $\lambda(\mathcal{O},\mathcal{S},T)$ is lower than $LB_\lambda$, $\mathcal{S}$ is a copy of $\mathcal{O}$ with high probability. The threshold for each metric $\lambda$ is defined as $\tau_\lambda = \alpha_{\lambda} \cdot LB_\lambda$, where $\alpha_\lambda$ is a user-specified relaxing parameter controlling the sensitivity of the judgement: the larger $\alpha_\lambda$, the more sensitive the judgement and the higher the false positive rate (the possibility of misclassifying a negative suspect as a stolen copy). We empirically set $\alpha=0.9$ for black-box metrics and $\alpha=0.6$ for white-box metrics, depending on the negative statistics.
\tool{DeepJudge} makes the final judgement by voting below:
\begin{equation*}
p_{copy}(\mathcal{O},\mathcal{S},T) = \frac{1}{|\Lambda|}\sum_{\lambda \in \Lambda}\mathbbm{1}\big(\lambda(\mathcal{O},\mathcal{S},T) \leq \tau_\lambda\big),
\end{equation*}
where $\mathbbm{1}$ is the indicator function and $\Lambda$ denotes the set of \tool{DeepJudge} testing metrics, i.e., \{\emph{RobD}, \emph{NOD}, \emph{NAD}, \emph{LOD}, \emph{LAD}, \emph{JSD}\}. Note that, depending on the defense setting (white-box vs. black-box), only a subset of the testing metrics can be applied, and the average is taken over the available metrics. \tool{DeepJudge} identifies a positive suspect copy if $p_{copy}$ is larger than 0.5 and a negative one otherwise. Arguably, voting is the most straightforward way of making the final judgement. While this simple voting strategy works reasonably well in our experiments, we believe more advanced judgement rules can be developed for diverse real-world protection scenarios.
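For concreteness, a minimal sketch of this thresholding-and-voting procedure is given below, using SciPy's t-distribution for the one-tailed lower confidence bound; the exact statistical recipe is our reading of the $\varepsilon$-difference strategy above, and all names are illustrative:
\begin{verbatim}
import numpy as np
from scipy import stats

def metric_threshold(neg_scores, alpha=0.9, confidence=0.99):
    # tau = alpha * LB, with LB a one-tailed t-test lower bound
    # on the mean score of the negative suspect models.
    neg = np.asarray(neg_scores, dtype=float)
    lb = (neg.mean()
          - stats.t.ppf(confidence, len(neg) - 1) * stats.sem(neg))
    return alpha * lb

def judge(suspect_scores, thresholds):
    # One vote per available metric; flag a copy on majority.
    votes = [s <= t for s, t in zip(suspect_scores, thresholds)]
    p_copy = sum(votes) / len(votes)
    return p_copy > 0.5, p_copy
\end{verbatim}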
\subsection{\tool{DeepJudge} vs. Watermarking \& Fingerprinting}
\label{subsec:comp}
\begin{table*}[t]
\small
\renewcommand\arraystretch{1.2}
\centering
\footnotesize
\caption{A comparison of different copyright protection methods.} \label{tab:compare}
\scalebox{1.03}{
\begin{tabu}{cc|c|cc|ccc}
\tabucline[1pt]{-}
\multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{Type}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Non-invasive} \end{tabular}}
& \multicolumn{2}{c|}{\textbf{Evaluated Settings}} & \multicolumn{3}{c}{\textbf{Evaluated Attacks}} \\ \cline{4-8}
& & & \emph{Black-box} & \emph{White-box} & \emph{Finetuning} & \emph{Pruning} & \emph{Extraction} \\ \tabucline[1pt]{-}
Uchida et al. \cite{uchida2017embedding} & \emph{Watermarking} & \no & \no & \yes & \yes & \yes & \no \\ \hline
Merrer et al. \cite{le2020adversarial} & \emph{Watermarking} & \no & \yes & \no & \yes & \yes & \no \\ \hline
Adi et al. \cite{adi2018turning} & \emph{Watermarking} & \no & \yes & \no & \yes & \no & \no \\ \hline
Zhang et al. \cite{zhang2018protecting} & \emph{Watermarking} & \no & \yes & \no & \yes & \yes & \no \\ \hline
Darvish et al. \cite{darvish2019deepsigns} & \emph{Watermarking} & \no & \yes & \yes & \yes & \yes & \no \\ \hline
Jia et al. \cite{jia2021entangled} & \emph{Watermarking} & \no & \yes & \no & \yes & \yes & \yes \\ \hline
Cao et al. \cite{cao2021ipguard} & \emph{Fingerprinting} & \yes & \yes & \no & \yes & \yes & \no \\ \hline
Lukas et al. \cite{DBLP:conf/iclr/LukasZK21} & \emph{Fingerprinting} & \yes & \yes & \no & \yes & \yes & \yes \\ \hline
\textbf{DeepJudge (Ours)} & \emph{Testing} & \yes & \yes & \yes & \yes & \yes & \yes \\ \tabucline[1pt]{-}
\end{tabu}
}
\end{table*}
Here, we briefly discuss why our testing approach is more favorable in certain settings and how it complements existing defense techniques. Table \ref{tab:compare} summarizes the differences between \tool{DeepJudge} and existing watermarking and fingerprinting methods along three dimensions: 1) whether the method is non-invasive (i.e., independent of model training); 2) whether it is designed for or evaluated in different defense settings (i.e., white-box vs. black-box); and 3) whether the method is evaluated against different attacks (i.e., finetuning, pruning and extraction). \tool{DeepJudge} is training-independent, can be flexibly applied in either white-box or black-box settings, and is evaluated (and shown to be robust) against all three types of common copyright attacks, i.e., model finetuning, pruning and extraction, with empirical evaluations and comparisons deferred to Section~\ref{subsec:comparison}.
Watermarking is invasive (training-dependent), whereas fingerprinting and testing are non-invasive (training-independent). The effectiveness of watermarking depends on how well the owner model memorizes the watermark and how robust the memorization is to different attacks. While watermarking can be robust to finetuning or pruning attacks \cite{uchida2017embedding, zhang2018protecting}, it is particularly vulnerable to emerging model extraction attacks (see Section \ref{subsubsec:fail}). This is because model extraction attacks only extract the key functionality of the model, whereas watermarks are often task-irrelevant. Despite the above weaknesses, watermarking is the only technique that can embed the owner identity/signature into the model, which is beyond the capabilities of fingerprinting or testing.
Fingerprinting shares certain similarities with testing. However, they differ in their goals. Fingerprinting aims for ``uniqueness'', i.e., a unique fingerprint of the model, while testing aims for ``completeness'', i.e., to test as many dimensions as possible to characterize not only the unique but also the common properties of the model. Arguably, effective fingerprints are also valid black-box testing metrics.
But as a testing framework, our \tool{DeepJudge} is not restricted to a particular metric or test case generation method. Our experiments in Section~\ref{sec:adaptive_exp} show that a single metric or fingerprint is not sufficient to handle the diverse and adaptive model stealing attacks.
In Section~\ref{sec:adaptive_attack2}, we will also show that our \tool{DeepJudge} can survive those adaptive attacks that break fingerprinting by dynamically changing the test case generation strategy.
We anticipate a long-running arms race in deep learning copyright protection between model owners and adversaries, where watermarking, fingerprinting and testing methods are all important for a comprehensive defense.
\section{Experiments}
\label{sec:evaluation}
We have implemented \tool{DeepJudge} as a self-contained toolkit in Python\footnote{The tool and all the data in the experiment are publicly available via \url{https://github.com/Testing4AI/DeepJudge}}.
In the following,
we first evaluate the performance of \tool{DeepJudge} against model finetuning and model pruning (Section \ref{subsec:tuning}), which are two threat scenarios extensively studied by watermarking methods \cite{adi2018turning,darvish2019deepsigns}. We then examine \tool{DeepJudge} against more challenging model extraction attacks in Section \ref{subsec:extraction_exp}.
Finally, we test the robustness of \tool{DeepJudge} under adaptive attacks in Section \ref{sec:adaptive_exp}.
Overall, we evaluated \tool{DeepJudge} with 11 attack methods, 3 baselines, and over 300 deep learning models trained on 4 datasets.
\subsection{Experimental Setup}\label{subsec:setup}
\subsubsection{Datasets \& Victim Models}
We run the experiments on three image classification datasets (i.e., MNIST \cite{lecun2010mnist}, CIFAR-10 \cite{krizhevsky2009learning} and ImageNet \cite{russakovsky2015imagenet}) and one audio recognition dataset (i.e., SpeechCommands \cite{warden2018speech}). The models used for the four datasets are summarized in Table~\ref{tab:setup}, including three convolutional architectures and one recurrent neural network.
For each dataset, we divide the training data into two subsets. The first subset (50\% of the training examples) is used to train the victim model. More detailed experimental settings can be found in Appendix \ref{subsec:datasets}.
\subsubsection{Positive suspect models}
Positive suspect models are derived from the victim models via finetuning, pruning, or model extraction. These models are considered stolen copies of the owner's victim model, for which \tool{DeepJudge} should provide evidence so that the owner can claim ownership.
\subsubsection{Negative suspect models}
\label{subsubsec:negative}
Negative suspect models have the same architecture as the victim models but are trained independently, using either the remaining 50\% of the training data or the same data with different random initializations. The negative suspect models serve as the control group to show that \tool{DeepJudge} does not claim wrong ownership. These models are also used to compute the testing thresholds ($\tau$). The same training pipeline and settings are used to train the negative suspect models. Specifically, ``Neg-1'' models are trained with different random initializations, while ``Neg-2'' models are trained on a separate dataset (the other 50\% of training samples).
\subsubsection{Seed selection}
\label{subsubsec:seed}
Seed selection prepares the $Seeds$ examples used to generate the test cases.
Here, we apply the sampling strategy used in DeepGini \cite{feng2020deepgini} to select a set of high-confidence seeds from the test dataset (details are in Appendix \ref{subsec:gini}). The intuition is that high-confidence seeds are well learned by the victim model and thus carry more of its unique features. More adaptive seed selection strategies are explored in Section \ref{subsubsec:advtrain} on adaptive attacks.
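A sketch of this selection step is shown below, ranking inputs by the Gini impurity of the victim model's output probabilities in the style of DeepGini; the array shapes and the seed budget are illustrative assumptions:
\begin{verbatim}
import numpy as np

def select_seeds(probs, inputs, n_seeds=1000):
    # Gini impurity 1 - sum_i p_i^2: low impurity = high confidence.
    gini = 1.0 - np.sum(probs ** 2, axis=1)
    return inputs[np.argsort(gini)[:n_seeds]]
\end{verbatim}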
\subsubsection{Adversarial example generation} We use three classic attacks including FGSM~\cite{goodfellow2014explaining}, PGD~\cite{madry2017towards} and CW \cite{carlini2017towards} to generate adversarial test cases as introduced in Section \ref{subsubsec:bound}.
\begin{table}[t]\centering
\renewcommand\arraystretch{1.15}
\footnotesize
\caption{Datasets and victim models.} \label{tab:setup}
\begin{tabu}{ccccc}
\tabucline[1pt]{-}
\multicolumn{1}{c}{\textbf{Dataset}} & \textbf{Type} & \textbf{Model} & \textbf{\#Params} & \multicolumn{1}{c}{\textbf{Accuracy}} \\ \tabucline[1pt]{-}
\multicolumn{1}{c}{MNIST} & Image & LeNet-5 & 107.8\,K & \multicolumn{1}{c}{98.5\%} \\ \hline
\multicolumn{1}{c}{CIFAR-10} & Image & ResNet-20 & 274.4\,K & \multicolumn{1}{c}{84.8\%} \\ \hline
\multicolumn{1}{c}{ImageNet} & Image & VGG-16 & 33.65\,M & \multicolumn{1}{c}{74.4\%} \\ \hline
\multicolumn{1}{c}{SpeechCommands} & Audio & LSTM(128) & 132.4\,K & \multicolumn{1}{c}{94.9\%} \\
\tabucline[1pt]{-}
\multicolumn{4}{l}{\#Params: number of parameters}
\end{tabu}
\end{table}
\subsection{Defending Against Model Finetuning \& Pruning}
\label{subsec:tuning}
As model finetuning and pruning threats are similar in processing the victim model (see Section \ref{sec:threat_model}), we discuss them together here. These two are also the most extensively studied threats in prior watermarking works \cite{adi2018turning,uchida2017embedding}.
\subsubsection{Attack strategies} Given a victim model and a small set of data in the same task domain, we consider the following four commonly used model finetuning \& pruning strategies:
\textbf{a) Finetune the last layer (FT-LL).} Update the parameters of the last layer while freezing all other layers.
\textbf{b) Finetune all layers (FT-AL).} Update the parameters of the entire model.
\textbf{c) Retrain all layers (RT-AL).} Re-initialize the parameters of the last layer then update the parameters of the entire model.
\textbf{d) Parameter pruning (P-r\%).}
Prune $r$ percentage of the parameters that have the smallest absolute values, then finetune the pruned model to restore the accuracy. We test both low ($r$=$20\%$) and high ($r$=$60\%$) pruning rates.
Typical data-augmentations are also used to strengthen the attacks. More details of these attacks are in Appendix \ref{subsec:aug}.
\begin{table*}[t]
\renewcommand\arraystretch{1.2}
\setlength\tabcolsep{4pt}
\footnotesize
\centering
\caption{Performance of \tool{DeepJudge} against model finetuning and pruning attacks in the \textbf{black-box setting}. PGD \cite{madry2017towards} is used to generate the adversarial test cases. ACC is the validation accuracy. For each metric, the values below (indicating `copy') or above (indicating `not copy') the threshold $\tau_\lambda$ (the last row) are highlighted in \redbox{red} (copy alert) and \greenbox{green} (no alert), respectively. `\textbf{Yes} (2/2)': two of the metrics vote for `copy' ($p_{copy}=100\%$); `\textbf{No} (0/2)': none of the metrics vote for `copy' ($p_{copy}=0\%$).} \label{tab:black-box}
\begin{tabu}{c|c|cccc|cccc}
\tabucline[1pt]{-}
\multicolumn{2}{c|}{\multirow{2}{*}{\textbf{Model Type}}} & \multicolumn{4}{c|}{\textbf{MNIST}} & \multicolumn{4}{c}{\textbf{CIFAR-10}} \\ \cline{3-10}
\multicolumn{2}{c|}{} & ACC & \emph{RobD} & \emph{JSD} & \textbf{Copy?} & ACC & \emph{RobD} & \emph{JSD} & \textbf{Copy?}\\ \tabucline[1pt]{-}
\multicolumn{2}{c|}{Victim Model} & 98.5\% & -- & -- & -- & 84.8\% & -- & -- & -- \\ \tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Positive\\ Suspect\\ Models\end{tabular}}} & FT-LL & 98.8$\pm$0.0\% & \redbox{0.019$\pm$0.003} & \redbox{0.016$\pm$0.002} & \textbf{Yes} (2/2) & 82.1$\pm$0.1\% & \redbox{0.000$\pm$0.000} & \redbox{0.002$\pm$0.001} & \textbf{Yes} (2/2) \\ \cline{2-10}
\multicolumn{1}{c|}{} & FT-AL & 98.7$\pm$0.1\% & \redbox{0.045$\pm$0.016} & \redbox{0.033$\pm$0.010} & \textbf{Yes} (2/2) & 79.9$\pm$1.4\% & \redbox{0.192$\pm$0.028} & \redbox{0.162$\pm$0.014} & \textbf{Yes} (2/2)\\ \cline{2-10}
\multicolumn{1}{c|}{} & RT-AL & 98.4$\pm$0.2\% & \redbox{0.298$\pm$0.039} & \redbox{0.151$\pm$0.017} & \textbf{Yes} (2/2) & 79.4$\pm$0.8\% & \redbox{0.237$\pm$0.055} & \redbox{0.197$\pm$0.027} & \textbf{Yes} (2/2)\\ \cline{2-10}
\multicolumn{1}{c|}{} & P-20\% & 98.7$\pm$0.1\% & \redbox{0.058$\pm$0.014} & \redbox{0.035$\pm$0.009} & \textbf{Yes} (2/2) & 81.7$\pm$0.2\% & \redbox{0.155$\pm$0.032} & \redbox{0.128$\pm$0.018} & \textbf{Yes} (2/2)\\ \cline{2-10}
\multicolumn{1}{c|}{} & P-60\% & 98.6$\pm$0.1\% & \redbox{0.172$\pm$0.024} & \redbox{0.097$\pm$0.010} & \textbf{Yes} (2/2) & 81.1$\pm$0.6\% & \redbox{0.318$\pm$0.036} & \redbox{0.233$\pm$0.019} & \textbf{Yes} (2/2)\\ \tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Negative\\ Suspect\\ Models\end{tabular}}} & Neg-1 & 98.4$\pm$0.3\% & \greenbox{0.968$\pm$0.014} & \greenbox{0.614$\pm$0.016} & \textbf{No} (0/2) & 84.2$\pm$0.6\% & \greenbox{0.920$\pm$0.021} & \greenbox{0.603$\pm$0.016} & \textbf{No} (0/2)\\ \cline{2-10}
\multicolumn{1}{c|}{} & Neg-2 & 98.3$\pm$0.2\% & \greenbox{0.949$\pm$0.029} & \greenbox{0.600$\pm$0.020} & \textbf{No} (0/2) & 84.9$\pm$0.5\% & \greenbox{0.926$\pm$0.030} & \greenbox{0.615$\pm$0.021} & \textbf{No} (0/2)\\ \cline{2-10}
\multicolumn{1}{c|}{} & \textbf{$\tau_\lambda$} & -- & \textbf{0.852} & \textbf{0.538} & -- & -- & \textbf{0.816} & \textbf{0.537} & --\\ \tabucline[1pt]{-}
\end{tabu}
\vspace{1mm}
\begin{tabu}{c|c|cccc|cccc}
\tabucline[1pt]{-}
\multicolumn{2}{c|}{\multirow{2}{*}{\textbf{Model Type}}} & \multicolumn{4}{c|}{\textbf{ImageNet}} & \multicolumn{4}{c}{\textbf{SpeechCommands}} \\ \cline{3-10}
\multicolumn{2}{c|}{} & ACC & \emph{RobD} & \emph{JSD} & \textbf{Copy?} & ACC & \emph{RobD} & \emph{JSD} & \textbf{Copy?}\\ \tabucline[1pt]{-}
\multicolumn{2}{c|}{Victim model} & 74.4\% & -- & -- & -- & 94.9\% & -- & -- & -- \\ \tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Positive\\ Suspect\\ Models\end{tabular}}} & FT-LL & 73.2$\pm$0.4\% & \redbox{0.034$\pm$0.007} & \redbox{0.009$\pm$0.003} & \textbf{Yes} (2/2) & 95.2$\pm$0.1\% & \redbox{0.104$\pm$0.007} & \redbox{0.036$\pm$0.006} & \textbf{Yes} (2/2)\\ \cline{2-10}
\multicolumn{1}{c|}{} & FT-AL & 70.8$\pm$0.9\% & \redbox{0.073$\pm$0.011} & \redbox{0.043$\pm$0.011} & \textbf{Yes} (2/2) & 95.8$\pm$0.3\% & \redbox{0.326$\pm$0.024} & \redbox{0.155$\pm$0.014} & \textbf{Yes} (2/2)\\ \cline{2-10}
\multicolumn{1}{c|}{} & RT-AL & 53.3$\pm$0.8\% & \redbox{0.192$\pm$0.008} & \redbox{0.251$\pm$0.015} & \textbf{Yes} (2/2) & 94.3$\pm$0.3\% & \redbox{0.445$\pm$0.019} & \redbox{0.231$\pm$0.016} & \textbf{Yes} (2/2)\\ \cline{2-10}
\multicolumn{1}{c|}{} & P-20\% & 69.7$\pm$1.1\% & \redbox{0.106$\pm$0.010} & \redbox{0.064$\pm$0.003} & \textbf{Yes} (2/2) & 95.4$\pm$0.2\% & \redbox{0.310$\pm$0.026} & \redbox{0.152$\pm$0.013} & \textbf{Yes} (2/2)\\ \cline{2-10}
\multicolumn{1}{c|}{} & P-60\% & 68.8$\pm$1.0\% & \redbox{0.161$\pm$0.017} & \redbox{0.091$\pm$0.004} & \textbf{Yes} (2/2) & 95.0$\pm$0.5\% & \redbox{0.437$\pm$0.030} & \redbox{0.215$\pm$0.013} & \textbf{Yes} (2/2)\\ \tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Negative\\ Suspect\\ Models\end{tabular}}} & Neg-1 & 74.2$\pm$0.3\% & \greenbox{0.737$\pm$0.007} & \greenbox{0.395$\pm$0.006} & \textbf{No} (0/2) & 94.9$\pm$0.7\% & \greenbox{0.819$\pm$0.025} & \greenbox{0.456$\pm$0.014} & \textbf{No} (0/2) \\ \cline{2-10}
\multicolumn{1}{c|}{} & Neg-2 & 73.9$\pm$0.5\% & \greenbox{0.760$\pm$0.010} & \greenbox{0.429$\pm$0.004} & \textbf{No} (0/2) & 94.5$\pm$0.8\% & \greenbox{0.832$\pm$0.024} & \greenbox{0.472$\pm$0.012} & \textbf{No} (0/2) \\ \cline{2-10}
\multicolumn{1}{c|}{} & \textbf{$\tau_\lambda$} & -- & \textbf{0.659} & \textbf{0.356} & -- & -- & \textbf{0.727} & \textbf{0.405} & --\\ \tabucline[1pt]{-}
\end{tabu}
\end{table*}
\begin{table*}[t]
\renewcommand\arraystretch{1.2}
\setlength\tabcolsep{2.1pt}
\centering
\footnotesize
\caption{Performance of \tool{DeepJudge} against model finetuning and pruning attacks in the \textbf{white-box setting}. Algorithm~\ref{alg:neuron} is used to generate the test cases. For each metric, the values below (indicating `copy') or above (indicating `not copy') the threshold $\tau_\lambda$ (the last row) are highlighted in \redbox{red} (copy alert) and \greenbox{green} (no alert) respectively. `\textbf{Yes} (4/4)': all 4 metrics vote for `copy' ($p_{copy}=100\%$); `\textbf{No} (0/4)': none of the metrics vote for `copy' ($p_{copy}=0\%$).} \label{tab:white-box}
\begin{tabu}{cc|ccccc|ccccc}
\tabucline[1pt]{-}
\multicolumn{2}{c|}{\multirow{2}{*}{\textbf{Model Type}}} & \multicolumn{5}{c|}{\textbf{MNIST}} & \multicolumn{5}{c}{\textbf{CIFAR-10}} \\ \cline{3-12}
\multicolumn{2}{c|}{} & \emph{NOD} & \emph{NAD} & \emph{LOD} & \emph{LAD} & \textbf{Copy?} & \emph{NOD} & \emph{NAD} & \emph{LOD} & \emph{LAD} & \textbf{Copy?} \\ \tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Positive\\ Suspect\\ Models\end{tabular}}} & FT-LL & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \textbf{Yes} (4/4) & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \textbf{Yes} (4/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & FT-AL & \redbox{0.08$\pm$0.01} & \redbox{0.23$\pm$0.21} & \redbox{0.32$\pm$0.03} & \redbox{0.82$\pm$0.16} & \textbf{Yes} (4/4) & \redbox{0.15$\pm$0.02} & \redbox{0.30$\pm$0.12} & \redbox{0.74$\pm$0.07} & \redbox{0.21$\pm$0.04} & \textbf{Yes} (4/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & RT-AL & \redbox{0.31$\pm$0.02} & \redbox{0.37$\pm$0.20} & \redbox{0.97$\pm$0.04} & \redbox{1.27$\pm$0.29} & \textbf{Yes} (4/4) & \redbox{0.18$\pm$0.02} & \redbox{0.26$\pm$0.10} & \redbox{0.78$\pm$0.03} & \redbox{0.22$\pm$0.02} & \textbf{Yes} (4/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & P-20\% & \redbox{0.10$\pm$0.01} & \redbox{0.16$\pm$0.12} & \redbox{0.36$\pm$0.03} & \redbox{0.79$\pm$0.15} & \textbf{Yes} (4/4) & \redbox{0.28$\pm$0.03} & \redbox{0.32$\pm$0.09} & \redbox{0.77$\pm$0.06} & \redbox{0.24$\pm$0.02} & \textbf{Yes} (4/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & P-60\% & \redbox{0.11$\pm$0.01} & \redbox{0.82$\pm$0.26} & \redbox{0.43$\pm$0.03} & \redbox{1.16$\pm$0.08} & \textbf{Yes} (4/4) & \redbox{0.62$\pm$0.03} & \redbox{1.65$\pm$0.34} & \redbox{2.80$\pm$0.21} & \redbox{0.93$\pm$0.10} & \textbf{Yes} (4/4) \\ \tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Negative\\ Suspect\\ Models\end{tabular}}} & Neg-1 & \greenbox{0.77$\pm$0.07} & \greenbox{11.46$\pm$1.14} & \greenbox{1.73$\pm$0.06} & \greenbox{6.42$\pm$0.84} & \textbf{No} (0/4) & \greenbox{3.09$\pm$0.30} & \greenbox{10.94$\pm$1.74} & \greenbox{11.85$\pm$1.01} & \greenbox{5.41$\pm$0.67} & \textbf{No} (0/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & Neg-2 & \greenbox{0.79$\pm$0.08} & \greenbox{12.28$\pm$1.50} & \greenbox{1.78$\pm$0.13} & \greenbox{6.37$\pm$0.47} & \textbf{No} (0/4) & \greenbox{3.21$\pm$0.18} & \greenbox{11.09$\pm$0.71} & \greenbox{12.60$\pm$1.33} & \greenbox{5.37$\pm$0.72} & \textbf{No} (0/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & $\tau_\lambda$ & \textbf{0.45} & \textbf{6.74} & \textbf{1.03} & \textbf{3.65} & -- & \textbf{1.79} & \textbf{6.14} & \textbf{6.89} & \textbf{3.01} & -- \\ \tabucline[1pt]{-}
\end{tabu}
\vspace{1mm}
\begin{tabu}{cc|ccccc|ccccc}
\tabucline[1pt]{-}
\multicolumn{2}{c|}{\multirow{2}{*}{\textbf{Model Type}}} & \multicolumn{5}{c|}{\textbf{ImageNet}} & \multicolumn{5}{c}{\textbf{SpeechCommands}} \\ \cline{3-12}
\multicolumn{2}{c|}{} & \emph{NOD} & \emph{NAD} & \emph{LOD} & \emph{LAD} & \textbf{Copy?} & \emph{NOD} & \emph{NAD} & \emph{LOD} & \emph{LAD} &\textbf{Copy?} \\ \tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Positive\\ Suspect\\ Models\end{tabular}}} & FT-LL & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \textbf{Yes} (4/4) & \redbox{0.000$\pm$0.000} & \redbox{0.00$\pm$0.00} & \redbox{0.00$\pm$0.00} & \redbox{0.000$\pm$0.00} & \textbf{Yes} (4/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & FT-AL & \redbox{0.02$\pm$0.01} & \redbox{0.18$\pm$0.09} & \redbox{0.16$\pm$0.05} & \redbox{0.58$\pm$0.13} & \textbf{Yes} (4/4) & \redbox{0.037$\pm$0.003} & \redbox{0.05$\pm$0.02} & \redbox{0.42$\pm$0.02} & \redbox{12.82$\pm$1.00} & \textbf{Yes} (4/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & RT-AL & \redbox{0.03$\pm$0.00} & \redbox{0.30$\pm$0.07} & \redbox{0.25$\pm$0.03} & \redbox{0.78$\pm$0.05} & \textbf{Yes} (4/4) & \redbox{0.055$\pm$0.003} & \redbox{0.25$\pm$0.31} & \redbox{0.64$\pm$0.08} & \redbox{21.64$\pm$2.47} & \textbf{Yes} (4/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & P-20\% & \redbox{0.11$\pm$0.01} & \redbox{0.83$\pm$0.06} & \redbox{0.76$\pm$0.01} & \redbox{1.67$\pm$0.22} & \textbf{Yes} (4/4) & \redbox{0.038$\pm$0.002} & \redbox{0.03$\pm$0.02} & \redbox{0.44$\pm$0.02} & \redbox{14.57$\pm$3.12} & \textbf{Yes} (4/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & P-60\% & \redbox{0.77$\pm$0.01} & \redbox{3.09$\pm$0.12} & \redbox{3.41$\pm$0.03} & \redbox{6.63$\pm$0.23} & \textbf{Yes} (4/4) & \redbox{0.094$\pm$0.004} & \redbox{0.45$\pm$0.32} & \redbox{0.67$\pm$0.04} & \redbox{20.58$\pm$3.44} & \textbf{Yes} (4/4) \\ \tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Negative\\ Suspect\\ Models\end{tabular}}} & Neg-1 & \greenbox{6.55$\pm$0.78} & \greenbox{32.18$\pm$2.97} & \greenbox{35.03$\pm$3.13} & \greenbox{30.32$\pm$1.91} & \textbf{No} (0/4) & \greenbox{0.488$\pm$0.013} & \greenbox{39.61$\pm$9.74} & \greenbox{2.82$\pm$0.08} & \greenbox{64.32$\pm$2.42} & \textbf{No} (0/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & Neg-2 & \greenbox{6.25$\pm$0.39} & \greenbox{30.04$\pm$2.44} & \greenbox{44.21$\pm$3.11} & \greenbox{29.58$\pm$0.86} & \textbf{No} (0/4) & \greenbox{0.480$\pm$0.012} & \greenbox{34.84$\pm$6.07} & \greenbox{2.79$\pm$0.09} & \greenbox{62.69$\pm$1.75} & \textbf{No} (0/4) \\ \cline{2-12}
\multicolumn{1}{c|}{} & $\tau_\lambda$ & \textbf{3.48} & \textbf{17.17} & \textbf{20.74} & \textbf{17.20} & -- & \textbf{0.286} & \textbf{19.77} & \textbf{1.66} & \textbf{37.48} & -- \\ \tabucline[1pt]{-}
\end{tabu}
\end{table*}
\subsubsection{Effectiveness of \tool{DeepJudge}} The results are presented separately for black-box vs. white-box settings.
\vspace{0.5mm}
\noindent\textbf{Black-box Testing.}
In this setting, only the output probabilities of the suspect model are accessible. Here, \tool{DeepJudge} uses the two black-box metrics: \emph{RobD} and \emph{JSD}. For both metrics, the smaller the value, the more similar the suspect model is to the victim model. Table~\ref{tab:black-box} reports the results of \tool{DeepJudge} on the four datasets. Note that we repeat the experiment 6 times with different randomness for each finetuning or pruning attack and 12 times for independent training (as more negative suspect models result in a more accurate judging threshold).
Then, we report the average and standard deviation (in the form of $a\pm b$) in each entry of Table~\ref{tab:black-box}. Clearly, all positive suspect models are more similar to the victim model with significantly smaller \emph{RobD} and \emph{JSD} values than negative suspect models. Specifically, a low \emph{RobD} value indicates that the adversarial examples generated on the victim model have a high transferability to the suspect model, i.e., its decision boundary is closer to the victim model. In contrast, the \emph{RobD} values of the negative suspect models are much larger than that of the positives, which matches our intuition in Fig.~\ref{fig:adv}.
To further confirm the effectiveness of the proposed metrics, we show the ROC curve for a total of 54 models (30 positive suspect models and 24 negative suspect models) for \emph{RobD} and \emph{JSD} in Figure \ref{fig:ROC}. The AUC values are 1 for both metrics. Note that we omit the plots for the following white-box testing as the AUC values for all metrics are also 1.
\vspace{0.5mm}
\noindent\textbf{White-box Testing.}
In this setting, all intermediate-layer outputs of the suspect model are accessible. \tool{DeepJudge} can thus use the four white-box metrics (i.e., \emph{NOD, NAD, LOD, and LAD}) to test the models. Table~\ref{tab:white-box} reports the results on the four datasets. Similar to the two black-box metrics, the smaller the white-box metrics, the more likely the suspect model is a stolen copy. As shown in Table~\ref{tab:white-box}, there is a fundamental difference between the two sets (positive vs. negative) of suspect models according to each of the four metrics. That is, the two sets of models are completely separable, leading to highly accurate detection of the positive copies.
This is not surprising, as white-box testing can collect more fine-grained information from the suspect models.
In both the black-box and white-box settings, the voting in \tool{DeepJudge} overwhelmingly supports the correct final judgement (the `Copy?' column).
\noindent\textbf{Combined Visualization.} To better understand the power of \tool{DeepJudge}, we combine the black-box and white-box testing results for each suspect model into a single radar chart in Fig.~\ref{fig:radars}. Each dimension of the radar chart corresponds to a \emph{similarity score} given by one testing metric. For better visual effect,
we normalize the values of the testing metrics into the range $[0,1]$; the larger the normalized value, the more similar the suspect model is to the victim. Thus, the filled area can be viewed as the \emph{accumulated supporting evidence} collected by \tool{DeepJudge} metrics for determining whether the suspect model is a stolen copy. Clearly, \tool{DeepJudge} is able to accurately distinguish positive suspects from negative ones.
Among the positive suspect models, the areas of RT-AL and P-60\% are noticeably smaller than the other two, meaning they are harder to detect. This is because these two attacks make the most parameter modifications to the victim model. Comparing the metrics, activation-based metrics (e.g., \emph{NAD}) demonstrate better performance than output-based metrics (e.g., \emph{NOD}), while white-box metrics are stronger than black-box metrics, especially against strong attacks like RT-AL. In Appendix~\ref{subsec:generation}, we also analyze the influencing factors including adversarial test case generation and layer selection (for computing the testing metrics) via several calibration experiments. An analysis of how different levels of finetuning or pruning affect \tool{DeepJudge} is presented in Appendix~\ref{discuss}.
\vspace{0.5mm}
\noindent\textbf{Time Cost of \tool{DeepJudge}.}
The time cost of generating test cases using 1k seeds is provided in Table \ref{tab:time} in the appendix. For the black-box setting, we report the cost of PGD-based generation; for the white-box setting, we report that of Algorithm~\ref{alg:neuron}. The time cost of white-box generation is slightly higher but still very affordable in practice. The maximum time cost occurs on the SpeechCommands dataset for white-box generation, at $\sim1.2$ hours. This cost is regarded as \emph{efficient} since test case generation is a \emph{one-time effort}, and the additional time cost of scanning a suspect model with the test cases is almost negligible.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\linewidth]{roc_robd.png}
\hspace{2mm}
\includegraphics[width=0.45\linewidth]{roc_jsd.png}
\caption{The detection ROC curves of metrics \emph{RobD} and \emph{JSD} on CIFAR-10 suspect models, and $AUC=1$ for both metrics.}
\label{fig:ROC}
\end{figure}
\begin{tcolorbox}[fonttitle = \bfseries]
\textbf{Remark 1:} \textsc{DeepJudge} is effective and efficient in identifying finetuning and pruning copies.
\end{tcolorbox}
\begin{figure*}[t]
\centering
\includegraphics[width=0.4\linewidth]{radar-cifar.png}
\hspace{8mm}
\includegraphics[width=0.4\linewidth]{radar-speech.png}
\setlength{\abovecaptionskip}{3pt}
\caption{Similarities of different suspect models to the victim model on CIFAR-10 (left 3 columns) and SpeechCommands (right 3 columns). We use the \textcolor{orange}{orange} line for the positive suspect models and the \blue{blue} line for negatives. Each dimension of the radar chart corresponds to a \emph{similarity score} given by one \tool{DeepJudge} metric. The similarity score is computed by first normalizing the metric, e.g., $RobD$, to $[0,1]$ then taking $1-RobD$.}
\label{fig:radars}
\end{figure*}
\subsubsection{Comparison with existing techniques}
\label{subsec:comparison}
We compare \tool{DeepJudge} with three state-of-the-art copyright defense methods against model finetuning and pruning attacks. More details of these defense methods can be found in Appendix~\ref{subsec:watermarking}.
\begin{figure*}[]
\centering
\includegraphics[width=0.4\linewidth]{black-compss.png}
\hspace{8mm}
\includegraphics[width=0.4\linewidth]{white-compss.png}
\caption{\tool{DeepJudge} vs. three state-of-the-art copyright defense methods. \emph{Left}: a comparison with \textbf{two black-box methods} \cite{zhang2018protecting, cao2021ipguard}; \emph{Right}: a comparison with \textbf{one white-box method} \cite{uchida2017embedding}. The results are normalized into $[0,1]$ for better visualization. The higher the normalized value, the better the identification of a positive suspect model.}
\label{fig:baseline}
\end{figure*}
\vspace{0.5mm}
\noindent\textbf{Black-box: Comparison to Watermarking and Fingerprinting}. DNNWatermarking \cite{zhang2018protecting} is a black-box watermarking method based on backdoors, and IPGuard \cite{cao2021ipguard} is a black-box fingerprinting method based on targeted adversarial attacks. Here, we compare these two baselines with \tool{DeepJudge} in the black-box setting. For DNNWatermarking, we train the watermarked model (i.e., victim model) using additionally patched samples from scratch to embed the watermarks, and the \emph{TSA} (Trigger Set Accuracy) of the suspect model is calculated for ownership verification.
IPGuard first generates targeted adversarial examples for the watermarked model then calculates the \emph{MR} (Matching Rate) (between the victim and the suspect) for verification.
For \tool{DeepJudge}, we only apply the \emph{RobD} (robustness distance) metric here for a fair comparison.
The left subfigure of Fig.~\ref{fig:baseline} visualizes the results. \tool{DeepJudge} demonstrates the best overall performance in this black-box setting. DNNWatermarking and IPGuard fail to identify the positive suspect models duplicated by FT-AL, RT-AL, P-20\% and P-60\%: their scores (\emph{TSA} and \emph{MR}) drop drastically against these four attacks, which basically means that the embedded watermarks are completely removed or the fingerprint can no longer be verified. For the \emph{RobD} metric of \tool{DeepJudge}, in contrast, the gap between the negative and positive suspects remains huge, demonstrating much better robustness against diverse finetuning and pruning attacks.
\vspace{0.5mm}
\noindent \textbf{White-box: Comparison to Watermarking}. EmbeddingWatermark \cite{uchida2017embedding} is a white-box watermarking method based on signatures. It requires access to model parameters for signature extraction. We train the victim model with the embedding regularizer \cite{uchida2017embedding} from scratch to embed a 128-bits signature. The \emph{BER} (Bit Error Rate) is calculated and used to measure the verification performance. The right subfigure of Fig.~\ref{fig:baseline} visualizes the comparison results to two white-box \tool{DeepJudge} metrics \emph{NOD} and \emph{NAD}.
The three metrics demonstrate comparable performance, with \emph{NAD} winning on 4 out of the 5 positive suspects. Note that the huge gap between the positives and negatives indicates that all metrics can correctly identify the positive suspects.
Here, a single metric of \tool{DeepJudge} was able to achieve the same level of protection as EmbeddingWatermark.
\begin{tcolorbox}[fonttitle = \bfseries]
\textbf{Remark 2:} Compared to state-of-the-art defense methods, \tool{DeepJudge} performs better in the black-box setting and comparably in the white-box setting against model finetuning and pruning attacks, while not tampering with model training.
\end{tcolorbox}
\begin{table*}[t]
\scriptsize
\caption{Performance of \tool{DeepJudge} against model extraction attacks in the \textbf{black-box setting}. PGD \cite{madry2017towards} is used to generate adversarial test cases. ACC is the validation accuracy. For each metric, the values below (indicating `copy') or above (indicating `not copy') the threshold $\tau_\lambda$ (the last row) are highlighted in \redbox{red} (copy alert) and \greenbox{green} (no alert), respectively. `\textbf{Yes} (2/2)': two of the metrics vote for positive ($p_{copy}=100\%$); `\textbf{No} (0/2)': none of the metrics vote for positive ($p_{copy}=0\%$). See more details about the 3 extraction attacks in Appendix~\ref{subsec:extraction}.}
\label{tab:steal}
\centering
\setlength\tabcolsep{1.2pt}
\renewcommand\arraystretch{1.2}
\begin{tabu}{cc|cccc|cccc|cccc}
\tabucline[1pt]{-}
\multicolumn{2}{c|}{\multirow{2}{*}{\textbf{Model Type}}} & \multicolumn{4}{c|}{\textbf{MNIST}} & \multicolumn{4}{c|}{\textbf{CIFAR-10}} & \multicolumn{4}{c}{\textbf{SpeechCommands}} \\ \cline{3-14}
\multicolumn{2}{c|}{} & ACC & \emph{RobD} & \emph{JSD} &\textbf{Copy?} & ACC & \emph{RobD} & \emph{JSD} &\textbf{Copy?} & ACC & \emph{RobD} & \emph{JSD} &\textbf{Copy?} \\\tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Positive\\ Suspect\\ Models\end{tabular}}} & JBA & 83.6$\pm$1.7\% & \greenbox{0.866$\pm$0.034} & \greenbox{0.596$\pm$0.006} & \textbf{No} (0/2)& 40.3$\pm$1.5\% & \redbox{0.497$\pm$0.044} & \greenbox{0.541$\pm$0.015} & \textbf{No} (1/2) & 40.1$\pm$1.7\% & \redbox{0.381$\pm$0.030} & \greenbox{0.470$\pm$0.011} & \textbf{No} (1/2) \\ \cline{2-14}
\multicolumn{1}{c|}{} & Knock & 94.8$\pm$0.6\% & \redbox{0.491$\pm$0.032} & \redbox{0.273$\pm$0.021} & \textbf{Yes} (2/2) & 74.4$\pm$1.0\% & \redbox{0.715$\pm$0.018} & \redbox{0.436$\pm$0.019} & \textbf{Yes} (2/2) & 86.6$\pm$0.5\% & \redbox{0.618$\pm$0.012} & \redbox{0.303$\pm$0.007} & \textbf{Yes} (2/2) \\ \cline{2-14}
\multicolumn{1}{c|}{} &
ESA & 88.7$\pm$2.5\% & \redbox{0.175$\pm$0.056} & \redbox{0.141$\pm$0.042} & \textbf{Yes} (2/2) & 67.1$\pm$1.9\% & \redbox{0.144$\pm$0.031} & \redbox{0.249$\pm$0.033} & \textbf{Yes} (2/2) & $\times$ & $\times$ & $\times$ & -- \\ \tabucline[1pt]{-}
\multicolumn{1}{c|}{\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Negative\\ Suspect\\ Models\end{tabular}}} & Neg-1 & 98.4$\pm$0.3\% & \greenbox{0.968$\pm$0.014} & \greenbox{0.614$\pm$0.016} & \textbf{No} (0/2) & 84.2$\pm$0.6\% & \greenbox{0.920$\pm$0.021} & \greenbox{0.603$\pm$0.016} & \textbf{No} (0/2) & 94.9$\pm$0.7\% & \greenbox{0.817$\pm$0.025} & \greenbox{0.456$\pm$0.014} & \textbf{No} (0/2) \\ \cline{2-14}
\multicolumn{1}{c|}{} & Neg-2 & 98.3$\pm$0.2\% & \greenbox{0.949$\pm$0.029} & \greenbox{0.600$\pm$0.020} & \textbf{No} (0/2) & 84.9$\pm$0.5\% & \greenbox{0.926$\pm$0.030} & \greenbox{0.615$\pm$0.021} & \textbf{No} (0/2) & 94.5$\pm$0.8\% & \greenbox{0.832$\pm$0.024} & \greenbox{0.472$\pm$0.012} & \textbf{No} (0/2) \\ \cline{2-14}
\multicolumn{1}{c|}{} & $\tau_\lambda$ & -- & \textbf{0.852} & \textbf{0.538} & -- & -- & \textbf{0.816} & \textbf{0.537} & -- & -- & \textbf{0.727} & \textbf{0.405} & -- \\ \tabucline[1pt]{-}
\end{tabu}
\end{table*}
\subsection{Defending Against Model Extraction}
\label{subsec:extraction_exp}
Model extraction (also known as model stealing) is considered to be a more challenging threat to DNN copyright. In this part, we evaluate \tool{DeepJudge} against model extraction attacks, which has not been thoroughly studied in prior work.
\subsubsection{Attack strategies}
We consider model extraction with two different types of supporting data: auxiliary or synthetic (see Section~\ref{sec:threat_model}), and evaluate the following state-of-the-art model extraction attacks: \textbf{a) JBA (Jacobian-Based Augmentation \cite{papernot2017practical})} samples a set of seeds from the test dataset, then applies Jacobian-based data augmentation to synthesize more data from the seeds. \textbf{b) Knockoff (Knockoff Nets \cite{orekondy2019knockoff})} works with an auxiliary dataset that shares similar attributes with the original training data used to train the victim model. \textbf{c) ESA (ES Attack \cite{yuan2020attack})} requires no additional data but a huge number of queries; it utilizes an adaptive gradient-based optimization algorithm to synthesize data from random noise, and can be applied in scenarios where task-domain data are hard to access, such as personal health data. With the extracted data, the adversary trains a new model from scratch, assuming knowledge of the victim model's architecture. The new model is considered a successful steal if its performance matches that of the victim model.
\subsubsection{Failure of watermarking}
\label{subsubsec:fail}
Our experiments in Section \ref{subsec:tuning} show the effectiveness and robustness of watermarking to finetuning and pruning attacks. Unfortunately, here we show that the embedded watermarks can be removed by model extraction attacks. We show the results of DNNWatermarking and EmbeddingWatermark in Fig.~\ref{fig:failure}.
The extracted models by different extraction attacks all differ greatly from the victim model according to either TSA (from DNNWatermarking) or BER (from EmbeddingWatermark).
For example, the TSA value for the victim model is 100\%, however, the TSA values for the three extracted copies are all below 1\%.
This basically means that the original watermarks are all erased in the extracted models, which inevitably leads to failed ownership claims. This is not too surprising, as watermarks are task-irrelevant content and not the focus of model extraction.
\subsubsection{Effectiveness of \tool{DeepJudge}}
\label{subsec:valid_extraction}
Table~\ref{tab:steal} summarizes the results of \tool{DeepJudge}, which successfully identifies all positive suspect models, except when the stolen copies (by JBA) have extremely poor performance, with 15\%, 44\% and 55\% lower accuracy than the corresponding victim models. We note that model extraction does not always work, and poorly performing extractions are less likely to pose a real threat. We also observe that \tool{DeepJudge} works better when the extraction is better, which counters the ultimate goal of model extraction attacks, i.e., perfect matching.
Compared to model finetuning or pruning, the average \emph{RobD} and \emph{JSD} values on extracted models are relatively larger, meaning that the decision boundaries of extracted models are more different from that of the victim model. The reason is that extracted models are often trained from a random point, while finetuning only slightly shifts the original boundary of the victim model, as depicted in Fig.~\ref{fig:adv}.
As such, model extraction is more stealthy and more challenging for ownership verification.
Nonetheless, the two metrics, \emph{RobD} and \emph{JSD}, can still reveal the unique similarities (smaller values) of the extracted models to the victim model: the better the extraction (the higher the accuracy of the extracted model), the lower the \emph{RobD} and \emph{JSD} values. This indicates that the extracted model behaves more similarly to the victim as its decision boundary gradually approaches that of the victim, and it highlights the unique advantage of \tool{DeepJudge} against model extraction attacks. Note that the JBA attack can only recover 50\% of the original accuracy on either CIFAR-10 or SpeechCommands, which should not be considered a successful extraction.
In Fig.~\ref{fig:mnist_extraction}, we further show the evolution of the \emph{RobD} and \emph{JSD} values throughout the entire extraction process of the Knockoff, ESA and JBA attacks. We find that both the \emph{RobD} (orange line) and \emph{JSD} (red line) values decrease as the extraction progresses, again with the exception of JBA. This confirms our speculation that, when tested by \tool{DeepJudge}, a better extracted model exposes more similarities to its victim. By contrast, we also study how these two values change during the training of the negative models in Fig.~\ref{fig:mnist_extraction}, which shows that the independently trained negative suspect models tend to deviate more from the victim model and produce higher \emph{RobD} and \emph{JSD} values.
\begin{tcolorbox}[fonttitle = \bfseries]
\textbf{Remark 3:} Model extraction attacks are more challenging than finetuning or pruning attacks, however, \tool{DeepJudge} can still correctly identify those successful extractions. Moreover, the better the extraction, the easier the extracted model will be identified by \tool{DeepJudge} as a stolen copy.
\end{tcolorbox}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\linewidth]{ko_mnist.png}
\includegraphics[width=0.49\linewidth]{esa_mnist.png}
\includegraphics[width=0.49\linewidth]{jba_mnist.png}
\includegraphics[width=0.49\linewidth]{neg_mnist.png}
\caption{The \emph{RobD} (orange line) and \emph{JSD} (red line) scores between the victim and extracted models throughout the entire extraction procedure (defined by sample sizes, epochs or rounds) on MNIST.}
\label{fig:mnist_extraction}
\end{figure}
\vspace{-2mm}
\section{Robustness to Adaptive Attackers}
\label{sec:adaptive_exp}
In this section, we explore potential adaptive attacks against \tool{DeepJudge}, based on the adversary's knowledge of \tool{DeepJudge}: 1) the adversary knows both the testing metrics and the test cases, or 2) the adversary only knows the testing metrics. A contrasting evaluation of watermarking \& fingerprinting against similar adaptive attacks is in Appendix~\ref{subsec:adapt123}.
\vspace{-2mm}
\subsection{Knowing Both Testing Metrics and Test Cases}
\label{sec:adaptive_attack1}
In this threat model, the adversary has full knowledge of \tool{DeepJudge} including the testing metrics $\Lambda$ and the secret test cases $T$. We also assume the adversary has a subset of clean data. In \tool{DeepJudge}, we have two test settings, i.e., white-box testing and black-box testing. The two testings differ in the testing metrics and the generated test cases (see examples in Fig.~\ref{fig:sample}). The black-box test cases are labeled. Therefore, the adversary can mix $T$ into its clean subset to finetune the stolen model to have large testing distances (i.e., black-box testing metrics \emph{RobD} and \emph{JSD}) while maintaining good classification performance. This will fool \tool{DeepJudge} to identify the stolen model to be significantly different from the victim model. This adaptive attack against black-box testing is denoted by \emph{Adapt-B}. Since the white-box test cases are unlabeled, the adversary can use the predicted labels (by the victim model) as ground-truth and finetunes the stolen model following a similar procedure as \emph{Adapt-B}. This attack against white-box testing is denoted by \emph{Adapt-W}. Note that the suffix `\emph{-B/-W}' marks the target testing setting to attack, while both attacks are white-box adaptive attacks knowing all the information.
The results of \tool{DeepJudge} using the exposed test cases $T$ are reported in Table~\ref{tab:adapt}. They show that: 1) \tool{DeepJudge} is robust to \emph{Adapt-W}, which fails to maximize the output distance and the activation distance simultaneously while maintaining the original classification accuracy; 2) although \tool{DeepJudge} is not robust to \emph{Adapt-B} when the test cases are exposed together with their labels, it can easily recover its performance with new test cases generated from different seeds (see the ROC curves on the exposed and new test cases in Fig.~\ref{fig:old}); and 3) \tool{DeepJudge} can still correctly identify the stolen copies produced by \emph{Adapt-B} when black-box and white-box testing are combined (the final judgements are all correct). Comparing the non-trivial effort of retraining/finetuning a model to the efficient generation of new test cases, \tool{DeepJudge} holds a clear advantage in the arms race against finetuning-based adaptive attacks.
It is noteworthy that \emph{Adapt-W} did not break all white-box metrics of \tool{DeepJudge}, since the mechanism of white-box testing is inherently robust. Specifically, black-box testing characterizes the behaviors of the output layer, while white-box testing characterizes the internal behaviors of the shallower layers. Due to the over-parameterization of DNNs, it is relatively easy to finetune the model to overfit the set of black-box test cases and thereby subvert the black-box metrics. In white-box testing, however, changing the activation status of all hidden neurons on the set of white-box test cases is almost impossible without completely retraining the model. Therefore, white-box testing is inherently more robust to adaptive attacks, especially when the test cases are exposed.
\begin{figure}[t]
\centering
\includegraphics[width=0.46\linewidth]{roc_exposed.png}
\hspace{2mm}
\includegraphics[width=0.46\linewidth]{roc_newcase.png}
\caption{Detection ROC curve of \emph{RobD} with exposed (left) and new (right) test cases against \emph{Adapt-B} attack on CIFAR-10.}
\label{fig:old}
\end{figure}
\subsection{Knowing Only the Testing Metrics}
\label{sec:adaptive_attack2}
In this threat model, the adversary can still adapt in different ways. We consider two adaptive attacks: \emph{adversarial training} targeting black-box testing, and a general \emph{transfer learning} attack targeting white-box testing.
\subsubsection{Blind adversarial training}
\label{subsubsec:advtrain}
Since our black-box testing mainly relies on probing decision boundary differences using adversarial test cases, the adversary may utilize adversarial training to improve the robustness of the stolen copy. Given the PGD parameters and a subset of clean data (20\% of the original training data), the adversary iteratively trains the stolen model to smooth its decision boundaries following \cite{madry2017towards}. This type of adaptive attack is denoted by \emph{Adv-Train}. As Table~\ref{tab:adapt} shows, it can indeed circumvent our black-box testing, at the cost of $\sim$10\% accuracy (a phenomenon known as the accuracy-robustness trade-off \cite{tsipras2018robustness,zhang2019theoretically}).
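For reference, a minimal PGD-based adversarial training loop in the style of \cite{madry2017towards} is sketched below; the model and loader names are placeholders, and the $\ell_\infty$ budget and step sizes are illustrative rather than the exact PGD parameters used in our experiments.
\begin{verbatim}
# Hedged sketch of Adv-Train: PGD adversarial training (Madry et al.);
# the adversary trains the stolen copy on worst-case perturbations of
# its clean subset to smooth the decision boundary.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (x + delta.data).clamp(0, 1) - x  # valid pixel range
        delta.grad.zero_()
    return (x + delta).detach()

def adv_train_epoch(model, loader, opt):
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()
\end{verbatim}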
However, interestingly, if we replace the high-confidence seeds used in \tool{DeepJudge} with low-confidence seeds, \tool{DeepJudge} becomes effective again (as shown in Fig.~\ref{fig:advtrain}). One possible reason is that, compared to high-confidence seeds, these low-confidence seeds are natural boundary (hard) examples that lie close to the decision boundary and can thus generate more test cases that cross the adversarially smoothed decision boundary within a given perturbation budget.
Examples of high/low-confidence test seeds are provided in Fig.~\ref{fig:gini-contrast}. It is also worth mentioning that our white-box testing still performs well in this case. Overall, \tool{DeepJudge} is robust to \emph{Adv-Train}, or at least can be made robust by efficiently updating the seeds.
\begin{table*}[]
\footnotesize
\caption{Performance of \tool{DeepJudge} against several adaptive attacks on the CIFAR-10 dataset. \emph{Adapt-B}: adaptive attack against black-box testing; \emph{Adapt-W}: adaptive attack against white-box testing; \emph{Adv-Train}: adversarial training; \emph{VTL}: vanilla transfer learning. For each metric, the values below (indicating `copy') or above (indicating `not copy') the threshold $\tau_\lambda$ (the last row) are highlighted in \redbox{red} (copy alert) and \greenbox{green} (no alert), respectively.}
\label{tab:adapt}
\centering
\renewcommand\arraystretch{1.2}
\setlength\tabcolsep{4pt}
\begin{tabu}{c|c|cccccccc}
\tabucline[1pt]{-}
\multicolumn{2}{c|}{\multirow{2}{*}{\textbf{Model Type}}} & & \multicolumn{2}{c}{\textbf{Black-box Testing}} & \multicolumn{4}{c}{\textbf{White-box Testing}} \\ \cline{3-10}
\multicolumn{2}{c|}{} & ACC & \emph{RobD} & \emph{JSD} & \emph{NOD} & \emph{NAD} & \emph{LOD} & \emph{LAD} & \textbf{Copy?} \\ \tabucline[1pt]{-}
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Positive\\ Suspect\\ Models\end{tabular}}
& Adapt-B & 81.4$\pm$0.9\% & \greenbox{0.985$\pm$0.011} & \greenbox{0.665$\pm$0.007} & \redbox{0.38$\pm$0.04} & \redbox{0.44$\pm$0.15} & \redbox{1.12$\pm$0.06} & \redbox{0.40$\pm$0.05} & \textbf{Yes} (4/6) \\ \cline{2-10}
& Adapt-W & 71.9$\pm$1.8\% & \redbox{0.519$\pm$0.048} & \redbox{0.372$\pm$0.025} & \greenbox{3.11$\pm$0.12} & \redbox{1.94$\pm$0.12} & \greenbox{11.62$\pm$0.54} & \redbox{1.89$\pm$0.33} & \textbf{Yes} (4/6) \\ \cline{2-10}
& Adv-Train & 74.5$\pm$2.3\% & \greenbox{0.939$\pm$0.087} & \greenbox{0.637$\pm$0.036} & \redbox{0.68$\pm$0.11} & \redbox{0.79$\pm$0.17} & \redbox{1.89$\pm$0.14} & \redbox{0.75$\pm$0.08} & \textbf{Yes} (4/6) \\ \cline{2-10}
& VTL & 93.3$\pm$1.7\% & $\times$ & $\times$ & \redbox{0.85$\pm$0.23} & \redbox{1.08$\pm$0.14} & \redbox{2.58$\pm$0.24} & \redbox{0.64$\pm$0.15} & \textbf{Yes} (4/4) \\ \tabucline[1pt]{-}
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Negative\\ Suspect\\ Models\end{tabular}}
& Neg-1 & 84.2$\pm$0.6\% & \greenbox{0.920$\pm$0.021} & \greenbox{0.603$\pm$0.016} & \greenbox{3.09$\pm$0.30} & \greenbox{10.94$\pm$1.74} & \greenbox{11.85$\pm$1.01} & \greenbox{5.41$\pm$0.67} & \textbf{No} (0/6) \\ \cline{2-10}
& Neg-2 & 84.9$\pm$0.5\% & \greenbox{0.926$\pm$0.030} & \greenbox{0.615$\pm$0.021} & \greenbox{3.21$\pm$0.18} & \greenbox{11.09$\pm$0.71} & \greenbox{12.60$\pm$1.33} & \greenbox{5.37$\pm$0.72} & \textbf{No} (0/6) \\ \cline{2-10}
& $\tau_\lambda$ & -- & \textbf{0.816} & \textbf{0.537} & \textbf{1.79} & \textbf{6.14} & \textbf{6.89} & \textbf{3.01} & -- \\ \tabucline[1pt]{-}
\end{tabu}
\end{table*}
\begin{figure}[]
\centering
\includegraphics[width=0.46\linewidth]{roc_highconfidence.png}
\hspace{2mm}
\includegraphics[width=0.46\linewidth]{roc_lowconfidence.png}
\caption{Detection ROC curve of \emph{RobD} with adversarial test cases generated from high-confidence (left) or low-confidence seeds (right) against \emph{{Adv-Train}} attack on CIFAR-10.}
\label{fig:advtrain}
\vspace{-0.1in}
\end{figure}
\subsubsection{Transfer learning}
\label{subsubsec:vtl}
The adversary may transfer the stolen copy of the victim model to a new dataset, exploiting the main structure of the victim model as a backbone and adding more layers to it.
Here, we test a vanilla transfer learning (\emph{VTL}) strategy from the 10-class CIFAR-10 to a 5-class version of SVHN \cite{netzer2011reading}. The last layer of the CIFAR-10 victim model is first replaced by a new classification layer, and we then fine-tune all layers on the subset of SVHN data.
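A minimal sketch of this \emph{VTL} procedure is given below, assuming a standard PyTorch classifier whose final layer is exposed as an attribute named \texttt{fc}; the attribute name and hyper-parameters are assumptions for illustration.
\begin{verbatim}
# Hedged sketch of VTL: replace the victim copy's 10-way output layer
# with a new 5-way head for SVHN, then fine-tune all layers.
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

def vanilla_transfer(victim_copy, svhn_loader, n_classes=5, epochs=10):
    in_dim = victim_copy.fc.in_features            # `fc` is assumed to
    victim_copy.fc = nn.Linear(in_dim, n_classes)  # be the last layer
    opt = optim.SGD(victim_copy.parameters(), lr=1e-3, momentum=0.9)
    victim_copy.train()
    for _ in range(epochs):
        for x, y in svhn_loader:
            loss = F.cross_entropy(victim_copy(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return victim_copy
\end{verbatim}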
Note that, in this setting, the black-box metrics are no longer applicable since the suspect model has output dimensions different from those of the victim model; the white-box metrics, however, can still be applied since the shallow layers are preserved. The results are reported in Table~\ref{tab:adapt}. Remarkably, \tool{DeepJudge} succeeds in identifying transfer learning attacks with distinctively low testing distances and an $AUC=1$.
One recent work \cite{DBLP:conf/iclr/MainiYP21} observed that the knowledge of the victim model can be transferred to the stolen models, and proposed the Dataset Inference (DI) technique to probe whether the victim's knowledge (i.e., its private training data) is preserved in the suspect model. We believe such knowledge-level testing metrics could also be incorporated into \tool{DeepJudge} to make it more comprehensive. An analysis of how different levels of transfer learning affect \tool{DeepJudge} can be found in Appendix~\ref{discuss}.
\vspace{1mm}
\begin{tcolorbox}[fonttitle = \bfseries]
\textbf{Remark 4:} \tool{DeepJudge} is fairly robust to adaptive attacks based on adversarial finetuning, adversarial training, or transfer learning, although it sometimes needs to regenerate the seeds or test cases.
\end{tcolorbox}
\vspace{1mm}
\section{Conclusion}
\label{sec:conclusion}
In this work, we proposed \tool{DeepJudge}, a novel testing framework for copyright protection of deep learning models. The core of \tool{DeepJudge} is a family of multi-level testing metrics that characterize different aspects of similarities between the victim model and a suspect model.
Efficient and flexible test case generation methods are also developed in \tool{DeepJudge} to help boost the discriminating power of the testing metrics.
Compared to watermarking methods, \tool{DeepJudge} does not need to tamper with the model training process. Compared to fingerprinting methods, it can defend against more diverse attacks and is more resistant to adaptive attacks.
\tool{DeepJudge} is applicable in both black-box and white-box settings against model finetuning, pruning and extraction attacks.
Extensive experiments on multiple benchmark datasets demonstrate the effectiveness and efficiency of \tool{DeepJudge}.
We have implemented \tool{DeepJudge} as a self-contained open-source toolkit.
As a generic testing framework, new testing metrics or test case generation methods can be effortlessly incorporated into \tool{DeepJudge} to help defend future threats to deep learning copyright protection.
\vspace{1mm}
\section*{Acknowledgement}
We are grateful to the anonymous reviewers and shepherd for their valuable comments. This research was supported by the Key R\&D Program of Zhejiang (2022C01018) and the NSFC Program (62102359, 61833015).
\clearpage
\bibliographystyle{plain}
|
2,869,038,155,084 | arxiv | \section{Introduction}\label{intro}
\input{sections/1_intro.tex}
\vspace{0.3cm}
\section{Problem Formulation}\label{prob_definition}
\input{sections/2_prob_defnition}
\section{Learning to Generate Implicit Knowledge by Self-Talk}\label{method}
\input{sections/3_method}
\section{Experiment Setup} \label{setup}
\input{sections/4_setup}
\section{Results} \label{result}
\input{sections/5_results}
\section{Related Work}\label{rel_work}
\input{sections/6_rel_work}
\section{Conclusion}\label{conclusion}
\input{sections/7_conclusion}
\bibliographystyle{acl_natbib}
\subsection{Response Generation}
We follow the common dialogue response generation setup~\cite{weizenbaum1966eliza, ritter-etal-2011-data, sordoni2015neural}: given a dialogue \emph{history} $H$ (a sequence of dialogue utterances), generate an appropriate \emph{response} $R$.
Current neural RG models often frame this task as a \emph{conditional language modeling} problem. Specifically, given a \emph{history ($H$)} consisting of a sequence of $n$ dialogue turns: $X_1, X_2, ..., X_n$ (each turn refers to an utterance containing a sequence of $t_i$ tokens: $x_{i,1}, x_{i,2}, ..., x_{i, t_i}$) and a \emph{response ($R$)} sentence $Y$ comprised of a sequence of $m$ tokens $y_1, y_2, ..., y_m$, RG models aim to learn the conditional probability distribution by training on human dialogues:
\begin{equation}
P_{\theta}(R|H)=\prod_{i=1}^{m}P_{\theta}(y_i|y_{<i}, X_1,...,X_n).
\end{equation}
\subsection{Implicit Knowledge Generation}\label{2.2}
To make the implicit knowledge grounding step explicit, we introduce a new component to RG -- implicit knowledge that is \emph{conditioned on} the dialogue history $H$. For brevity, we use $I$ to denote the implicit knowledge, which contains multiple natural language (NL) statements $I=Z_1, Z_2, ...$ (each containing a sequence of tokens: $z_{i,1}, z_{i,2}, ...$) expressing commonsense knowledge. For example, in Figure~\ref{fig:motivation}, ``\emph{rose is a type of flower}'' and ``\emph{rose is a symbol of love}'' are two NL statements expressing the implicit commonsense knowledge.
To emulate a realistic conversation scenario, we also \emph{fuse} the dialogue history $H$ of traditional RG with the implicit knowledge $I$ of each turn and denote the result by $H'$, i.e., $H'=X_1, I_1, X_2, I_2, ..., X_n$, where $I_i$ denotes the implicit knowledge statements for the $i$-th turn in the dialogue history.
To externalize the knowledge grounding step, inspired by how humans communicate and inquiry-based learning~\cite{bruner1961act,shwartz-etal-2020-unsupervised}, our TBS RG paradigm requires models to first \emph{generate} implicit knowledge $I$ conditioned on $H'$,
\emph{i.e.} $P_{\theta}(I_n|H'=X_1, I_1, X_2, I_2 ..., X_n)$.
\subsection{Data Components}
This section introduces our proposed TBS method for training a generative model that can \emph{both} talk with itself to explicitly generate background commonsense knowledge ($P_{\theta}(I|H')$) and then generate the response afterwards ($P_{\theta}(R|H', I)$).
Figure~\ref{fig:method} illustrates the process to train the TBS models.
To pair each dialogue with appropriate implicit knowledge, we first define a \emph{matching process} and use ConceptNet~\cite{speer2017conceptnet} as the implicit knowledge source (Section \ref{matching}).
Then, to construct training instances, we face two key method design choices: how to represent knowledge (\ref{representation}) and how to connect the knowledge with the dialogue (\ref{transition}).
Finally, we train TBS RG models to learn $P_{\theta}(I|H')$ and $P_{\theta}(R|H', I)$ with the same parameters $\theta$.
The following sections explain these components in details.
\subsection{Knowledge-Aligned Dialogues}\label{matching}
To train TBS models, we need dialogue datasets consisting of a dialogue history, a response, and the knowledge statement connecting them.
We focus on two methods that create \emph{weakly-supervised knowledge labels} for dialogues, as they are more scalable and less costly than human annotation.
\paragraph{Hard-Matching}
The hard-matching process first \emph{lemmatizes} all the non-stop words in each utterance; it then identifies knowledge triples whose two concepts appear in an utterance and in the next turn, respectively.
This is the same as the filtering process in~\citet{zhou-etal-2021-commonsense} and is closely related to distant supervision methods for relation extraction~\cite{craven1999constructing,mintz2009distant}. For more details, refer to Appendix~\ref{appendix_matching}.
\paragraph{Soft-Matching Using Embedding Similarity}
Hard-matching only captures the surface form and neglects many important semantic relations between words.
We thus develop a soft-matching procedure using embedding similarity from SentenceBERT~\cite{reimers-2019-sentence-bert} to measure semantic relations between dialogue turns and triples in ConceptNet.
Specifically, we first extract candidate triples from ConceptNet with one concept appearing in the $i^{th}$ turn.
Next, we form a \emph{query} by concatenating the $i^{th}$ turn and the next $(i+1)^{th}$ turn response.
Finally, we encode the query and all triple candidates using SentenceBERT and use cosine similarity to find the semantically closest triples as matched knowledge. More details are presented in Appendix~\ref{appendix_matching}.
\subsection{Knowledge Representation}\label{representation}
Implicit commonsense knowledge $I$ stored in ConceptNet takes the form of \emph{(subject $s$, relation $r$, object $o$)} triples, such as \emph{(rose, TypeOf, flower)}. This format is not directly compatible with RG models, which operate on NL sentences and may not include relation tokens in their vocabulary.
Here we design two alternatives to represent the grounded knowledge and use the implicit knowledge in Figure~\ref{fig:motivation} as a running example.
\paragraph{Map Relations to Natural Language (NL)}
To convert ConceptNet triples into NL, we follow a common practice and map every relation $r$ in the triple to its NL template, and fill in $s$ and $o$ in the template~\cite{levy2017zero}. We use the same mapping as that used in COMET~\cite{bosselut-etal-2019-comet}, covering all standard types of relations in ConceptNet. For example, \emph{rose is a type of flower; rose is a symbol of love}.
\paragraph{Information-Seeking Question-Answer Pairs}
Another format to convert triples to NL sentences is through asking and answering information-seeking questions. \citet{shwartz2020unsupervised} designed templates of information-seeking questions and answers to provide background knowledge for LMs. We adopt a similar strategy and design a template for each relation in ConceptNet. For example, \emph{What is a type of flower? Rose is a type of flower. Rose is a symbol of what? Rose is a symbol of love}. The mappings we use for these two types of representations are shown in Appendix~\ref{appendix_mapping}.
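The following sketch illustrates both representations for two example relations; the template dictionary mirrors the full mapping in Appendix~\ref{appendix_mapping} but is truncated here for brevity.
\begin{verbatim}
# Hedged sketch: verbalize a ConceptNet triple (s, r, o) either as a
# plain NL statement or as an information-seeking QA pair.
NL = {"IsA": "{s} is a {o}",
      "SymbolOf": "{s} is a symbol of {o}"}
QA = {"IsA": "What is {s}? {s} is a {o}",
      "SymbolOf": "What is {s} a symbol of? {s} is a symbol of {o}"}

def verbalize(triple, style="NL"):
    s, r, o = triple
    table = NL if style == "NL" else QA
    return table[r].format(s=s, o=o)

# verbalize(("rose", "IsA", "flower"))          -> "rose is a flower"
# verbalize(("rose", "SymbolOf", "love"), "QA")
#   -> "What is rose a symbol of? rose is a symbol of love"
\end{verbatim}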
\subsection{Knowledge-Dialogue Transition}\label{transition}
To help our RG models learn the TBS paradigm and generate outputs structured similarly, i.e., implicit knowledge first and then responses, we need to properly connect knowledge and dialogues in our data. Here we consider two alternatives for creating such a transition.
\textbf{Special symbols}. Following the common practice of separating sequences in neural LMs~\cite{radford2018improving,devlin-etal-2019-bert}, we use a special symbol to serve as the separator. We enclose the implicit knowledge $I$ with special symbols ``$<$implicit$>$'' and ``$<$/implicit$>$'' and add it between $H'$ and $R$, for example, ``\emph{$<$speaker1$>$ I need to buy some flowers for my wife. $<$implicit$>$ rose is a type of flower $<$/implicit$>$ $<$speaker2$>$ Perhaps you'd be interested in red roses.''}
\textbf{Natural language prompts}. More recent work has found that NL prompts help LMs to perform better on various downstream tasks, including natural language generation (NLG)~\cite{brown2020language, liu2021pre, zheng2021exploring}. Here we use the NL prompts to prompt RG models to \emph{generate} implicit knowledge and responses. We use ``\emph{The following background knowledge is helpful for generating the response:}'' to elicit knowledge and ``\emph{Grounded on the background knowledge, what does the speaker probably say in the next response?}'' to elicit response.
\subsection{Model Training}
After constructing knowledge-aligned dialogues, each of our data instances is a sequence of tokens with three components: a dialogue history $H'$ fused with potential implicit knowledge after each turn, implicit knowledge (empty or non-empty) $I$, and a response $R$.
We split each instance $d(H', R, I) \in D$ to first train the model to generate just the knowledge $I$ based on $H'$, $P_{\theta}(I|H')$, and then train it to generate $R$ based on both $I$ and $H'$, $P_{\theta}(R|H', I)$.
Formally, we follow the standard way of modeling $P_{\theta}$ in auto-regressive neural RG models and use Maximum Likelihood Estimation (MLE) to train our model to maximize $P_{\theta}(I|H')$ (knowledge generation, KG) by minimizing the conditional negative log-likelihood (NLL) loss:
\begin{equation*}
\mathcal{L}_{KG} = - \sum_{i=1}^{m}\log P_\theta(Z_i|Z_{<i},X_1,...,X_n),
\end{equation*}
where $Z_i$ is the $i$-th statement in $I$. To model $P_{\theta}(R|H', I)$, we minimize:
\begin{equation*}
\mathcal{L}_{RG} = - \sum_{i=1}^{m}\log P_\theta(y_i|y_{<i},X_1,I_1..., X_n).
\end{equation*}
We train a single generative model on these losses in one pass, with split instances for KG and RG, instead of using multiple training phases. During inference, we only provide the dialogue history as input, and the model has to generate both the knowledge and the response.
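As a concrete illustration, the sketch below shows how one dialogue can be split into the KG and RG sub-instances under the special-symbol transition of Section~\ref{transition}; the helper is a simplified assumption that omits tokenization and speaker tokens.
\begin{verbatim}
# Hedged sketch: split one dialogue into the KG and RG training
# sub-instances under the special-symbol transition.  The KG instance
# teaches P(I | H'); the RG instance teaches P(R | H', I).  Loss is
# standard token-level NLL on the target part of each sequence.
def build_instances(history_turns, knowledge, response):
    # history_turns: utterance strings with earlier-turn knowledge
    # assumed already fused in; knowledge: verbalized NL string.
    hist = " ".join(history_turns)
    kg_input, kg_target = hist + " <implicit>", knowledge + " </implicit>"
    rg_input = hist + " <implicit> " + knowledge + " </implicit>"
    rg_target = response
    return (kg_input, kg_target), (rg_input, rg_target)
\end{verbatim}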
\subsection{Dataset}
We consider dialogues from four datasets: DailyDialog~\cite{li2017dailydialog}, EmpatheticDialogues~\cite{rashkin2019towards}, MuTual~\cite{cui2020mutual}, and SocialIQA-prompted Commonsense-Dialogues ~\cite{zhou-etal-2021-commonsense}.
For training, we use the filtered version of the four datasets from~\citet{zhou-etal-2021-commonsense}, which ensures each dialogue contains at least one commonsense knowledge triple from ConceptNet.
In total, the training data contains 31k dialogues with 159k utterances.
We reserve 10\% of data as a development set for evaluating model training and selecting hyper-parameters.
Table~\ref{tab:data_stats} shows the number of instances resulted from applying our hard- and soft-matching procedures to our training data in order to construct knowledge-aligned dialogues.
For the testing dialogues, to avoid biasing our evaluation toward cases where common sense is crucial for making the response, we use the test data from the \emph{original data distribution} of the four datasets mentioned above. The testing data consists of around 3k dialogues.
\begin{table}[]
\centering
\small
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c}
& \# Instances & Avg \# turns & Avg \# knowledge \\ \hline
Dialogues-Only & 159k & 4.3 & 0 \\
Hard-match & 57k & 4.5 & 1.4 \\
Soft-match & 71k & 4.6 & 2.8 \\ \hline
\end{tabular}
}
\caption{\small Dialogue data statistics.}
\label{tab:data_stats}
\end{table}
\subsection{Compared Methods}\label{baselines}
We use DialoGPT-medium~\cite{zhang-etal-2020-dialogpt} as our base model, which is a commonly-used end-to-end RG model.
We fine-tune \textbf{DialoGPT} using all of the 159K dialogue instances.
We also use DialoGPT as the backbone model and consider three variables in our TBS model configuration, introduced in Sections~\ref{matching} to~\ref{transition}: \textbf{hard}- or \textbf{soft}-matching, a special \textbf{symbol} separator or an NL \textbf{prompt}, and triple-converted \textbf{NL} or information-seeking \textbf{QA} pairs for representing knowledge. To justify our choice of using one model for both KG and RG, we also compare with \textbf{TBS-Two Model}, where we train separate models for knowledge generation (KG) and RG using the same training data. Our default model configuration is \emph{hard-symbol-NL}.
We also compare several knowledge-grounded RG baselines that \emph{retrieve} external knowledge or \emph{generate} knowledge with another model.
For retrieval, we follow the most common approaches in knowledge selection~\cite{zhao2017learning, wolf-etal-2020-transformers, eric2021multi} and train RoBERTa~\cite{liu2019roberta} to classify triples using our knowledge-aligned data (matched or not matched), then use it to label candidate triples during testing (\textbf{KS-RoBERTa}).
For the generative model, we use COMET~\cite{bosselut-etal-2019-comet} as a commonsense knowledge generator
(\textbf{KG-COMET}).
Furthermore, we consider RG models that take the hard-matched or soft-matched knowledge obtained from the \emph{ground-truth response} (\textbf{Hard-GT} and \textbf{Soft-GT}).
Note that although the hard- and soft-matching procedures are noisy, this setting uses the next-turn response and is thus likely to provide relevant knowledge. Implementation details for all models are given in Appendix~\ref{appendix_implementation}.
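For the KS-RoBERTa baseline, a minimal sketch using the HuggingFace \texttt{transformers} library is shown below; the training loop is omitted, and the checkpoint name \texttt{roberta-base} and the pairing of the history with a verbalized triple are illustrative assumptions.
\begin{verbatim}
# Hedged sketch of the KS-RoBERTa selector: a binary classifier over
# (dialogue history, verbalized triple) pairs, trained on matched (1)
# vs. not-matched (0) labels from the knowledge-aligned data.
import torch
from transformers import (RobertaTokenizer,
                          RobertaForSequenceClassification)

tok = RobertaTokenizer.from_pretrained("roberta-base")
clf = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)

def score_triple(history, triple_nl):
    # Probability that the triple matches the dialogue history.
    enc = tok(history, triple_nl, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = clf(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
\end{verbatim}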
\subsection{Evaluation Protocol}\label{eval_protocol}
\paragraph{Automatic Evaluation}
We use standard natural language generation metrics such as BLEU~\cite{papineni2002bleu}, METEOR~\cite{banerjee2005meteor}, ROUGE~\cite{lin2004rouge}, CIDEr~\cite{vedantam2015cider} and SkipThoughts~\cite{kiros2015skip}.
We also use GRADE~\cite{huang2020grade}, a reference-free metric shown to correlate consistently with human judgements~\cite{yeh2021comprehensive}, to ensure the validity of our experimental results.
\paragraph{Human Evaluation}
We conduct extensive human evaluation using 300 randomly sampled instances from the unseen test dialogues described above.
For \textbf{response quality}, we conduct \emph{pairwise comparisons}: we present annotators with a dialogue history and two responses produced by two different models, and ask them to choose one or select ``\emph{not sure}'' based on different criteria~\cite{zhou2018commonsense,zhang2020dialogpt}\footnote{We choose to conduct pairwise comparison since multiple previous works have shown that it produces a more reliable evaluation than directly asking humans to score the response, which is a highly subjective task~\cite{amidei2019use, callison-burch-etal-2007-meta, celikyilmaz2020evaluation}}.
We evaluate on \emph{six} dimensions: which response is more \emph{grammatical}, \emph{coherent}, \emph{engaging}, \emph{informative}, \emph{specific}, and \emph{makes common sense}~\cite{zhang2020dialogpt,roller2020recipes}.
More details of the instructions for annotators on each dimension with examples are included in Appendix~\ref{appendix_evaluation}.
For \textbf{knowledge quality}, we evaluate the generated knowledge in isolation (``\emph{does this knowledge make sense}'') and in conjunction with the context for relevance.
We perform majority voting per instance using three annotators from Amazon Mechanical Turk (AMT). We use Fleiss' Kappa ($\kappa$)~\cite{fleiss1971measuring} to measure agreement among the annotators.
\subsection{Performance of Response Generation}
\paragraph{Model variant analysis}
To find the best-performing configuration of our TBS method, we consider the alternatives discussed in Sections~\ref{matching} to~\ref{transition} and conduct three pairwise comparisons: \emph{soft} vs. \emph{hard} matching, \emph{prompt} vs. \emph{symbol} transition, and \emph{QA} vs. relation-converted \emph{NL} knowledge format.
From Table~\ref{tab:variants}, we find that using soft-matching to create knowledge-aligned dialogue dataset produces more grammatical responses and responses that make more common sense, with $\kappa$=0.64-0.73, indicating substantial agreement according to one interpretation from~\citet{landis1977measurement}.
Using QA to represent knowledge makes the responses more grammatical, coherent, and commonsensical, and it also achieves the best average performance across the six dimensions.
We also compare results that combine these alternatives, \emph{e.g., soft-symbol-QA} (due to space constraints, results are shown in Appendix~\ref{appendix_variants}), however, we do not observe significant improvements after combining these alternatives and our best configuration in terms of average improvement is still \emph{hard-symbol-QA}.
We thus use \emph{hard-symbol-QA} as our final configuration and refer to it as \emph{TBS} throughout this section.
\paragraph{Does TBS produce better responses vs. end-to-end RG?}
By comparing TBS and \textit{end-to-end} DialoGPT-ft model in Table~\ref{tab:automatic} and Figure~\ref{fig:human_eval}, we find that TBS models produce better-quality responses using both automatic and human evaluations.
Specifically, even though hard-matching only annotates about 33\% of the training instances, TBS outperforms the end-to-end RG model significantly on most automatic metrics.
From human evaluation ($\kappa$=0.62-0.69), we find that our TBS model performs on par with DialoGPT trained on more data in grammar, coherence, and engagingness, and achieves statistically significant (p$<$0.05) improvements in the informativeness, specificity, and common sense aspects of the generated responses\footnote{We also conducted direct scoring in human evaluations and observed significant improvement (on average 7.3 out of 10 for TBS vs. 5.9 for DialoGPT-ft), but since it results in lower agreement ($\kappa$=0.49), we focus on comparative evaluation.}. We argue that with weakly-supervised knowledge labels and TBS training, RG models require less data and can generate quality responses with improvements in informativeness, specificity, and common sense.
\paragraph{Is TBS knowledge generation better than other knowledge-augmented RG?}
We compare TBS models with other knowledge-augmented baselines that retrieve knowledge from ConceptNet using embedding scores (KS-SBERT) or a trained selector (KS-RoBERTa), or generate from \emph{another} model (KG-COMET). From Table~\ref{tab:automatic}, we find that these models perform similarly to the end-to-end DialoGPT model and are outperformed by TBS models on most automatic metrics. Figure~\ref{fig:human_eval} shows that while TBS methods have significant improvements on all dimensions against knowledge-selection baselines, COMET as a knowledge generator has smaller gaps on informativeness, specificity, and common sense, but is outperformed significantly on grammar, coherence, and engagingness.
Next we compare against the setup where we feed the model the knowledge that is derived using the \emph{ground-truth} response (Hard/Soft-GT), \emph{i.e.}, the provided knowledge is obtained using concepts appearing in the ground-truth response.
From Table~\ref{tab:automatic}, we surprisingly find that even though our proposed TBS model has no access to response-leaking knowledge labels and is trained on much less data, it still achieves statistically significant improvements on GRADE and BLEU-4. From the human evaluation results in Figure~\ref{fig:human_eval_GT}, the TBS model significantly improves the specificity and common sense aspects of responses while staying on par on the other evaluation dimensions compared with the hard-GT model, and improves even more compared with soft-GT.
We find that one potential explanation is that only around 55\% of Hard-GT knowledge is labeled as \emph{used in response} whereas it is 77\% in our TBS model (see Section~\ref{analysis}).
This is also related to how the RG model leverages the knowledge in training.
Further analysis is needed to understand the effect of knowledge and the relationship between knowledge and responses.
\begin{table}[tb]
\centering
\small
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|c|c|c}
\textbf{Model} & \textbf{Novel} & \textbf{Makes Sense} & \textbf{Relevant} \\ \hline
KS-SBERT & 0\% & 91.7\%* & 85.0\% \\
KS-RoBERTa & 0\% & 77.7\%* & 76.3\% \\
KG-COMET & 63.3\% & 68.3\%/63.2\% & 67.5\%/68.9\% \\ \hline
TBS-two-model & 46.3\% & 89.0\%/85.6\% & 90.7\%/90.2\% \\
TBS-one-model & 44\% & 86.3\%/85.9\% & 85.7\%/86.5\% \\ \hline
\end{tabular}
}
\caption{ \small Human evaluation on \textbf{knowledge quality}. For models that generate novel (not in ConceptNet) knowledge, we show \emph{non-novel/novel} percentages. ``*'' means knowledge is from ConceptNet (not generated).
}
\label{tab:KQ}
\end{table}
\subsection{Quality of Generated Knowledge }
We then examine how well TBS RG models learn to generate knowledge on unseen dialogues. We use human evaluation and focus on three dimensions: does the model generate \emph{novel} knowledge that does not appear in ConceptNet? Does the generated knowledge statement \emph{make sense} as a standalone fact? And is the generated knowledge \emph{relevant} to the dialogue context? For the first question, we query ConceptNet directly and report percentages.
For the latter two we follow Section~\ref{eval_protocol} and show the percentages that MTurkers think the knowledge makes sense and is relevant from the 300 sampled test instances (the same used in response quality).
We test our TBS model, the two-model variant, and other knowledge-augmented baselines introduced in Section~\ref{baselines}.
\paragraph{Around 85\% of knowledge generated from TBS makes sense and is relevant}
Table~\ref{tab:KQ} shows that TBS models can generate implicit knowledge that makes sense and is relevant to the context around 85\% of the time, as judged by human annotators ($\kappa$=0.73-0.80). Compared with knowledge-selection models that retrieve knowledge from ConceptNet, TBS generates knowledge that is similar in terms of common sense and more relevant to the dialogue history. Compared with COMET, which also generates knowledge, TBS models generate more knowledge that follows common sense and is relevant to the dialogue. Comparing the two-model and one-model TBS variants, we find that the two-model variant generates more knowledge that makes sense and is relevant, although its response quality is poorer (Table~\ref{tab:automatic} and Figure~\ref{fig:human_eval}). This might be due to model synergies when learning both knowledge generation and response generation.
\paragraph{Model generates \emph{novel} knowledge}
We find that a significant portion of the knowledge generated by the COMET and TBS models is \emph{novel}, i.e., not present in the training data. Furthermore, the quality of the generated novel knowledge is similar to that of knowledge existing in ConceptNet. COMET generates more new knowledge, but its quality (in both common sense and relevance) is significantly lower than that of the TBS models. We include some examples of generated novel knowledge in Appendix~\ref{appendix_results}. In general, we find that the new knowledge is complementary to ConceptNet, not just a paraphrased version of existing triples (since in those cases the model would directly generate the ConceptNet triple). This is a promising sign that TBS RG models can potentially generate good-quality novel knowledge labels for unseen dialogues.
\begin{table*}[tb]
\centering
\resizebox{\linewidth}{!}{
\small
\begin{tabular}{|cc|l|l|}
\hline
\multicolumn{2}{|c|}{\textbf{Dialogue History}} & \begin{tabular}[c]{@{}l@{}}A: My last day of work was yesterday. \\ I got so emotional driving home and I cried. \\ B: I'm sorry to hear you were upset! \\ Why did you have to quit? \\ A: I'm going back to school full time. \\ I was there for more than 10 years. I will miss everyone.\end{tabular} & \begin{tabular}[c]{@{}l@{}}A: I am calling friends to let you know that\\ I am moving out of state. \\ B: What happened? Why? \\ A: I got a new job.\end{tabular} \\ \hline
\multicolumn{2}{|c|}{\multirow{2}{*}{\textbf{Human Response}}} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}B: Wow that is a long time! Hopefully you can stay \\ in touch with all of your former coworkers.\end{tabular}} & \multirow{2}{*}{B: It must pay well for you to move.} \\
\multicolumn{2}{|c|}{} & & \\ \hline
\multicolumn{2}{|c|}{\textbf{DialoGPT-ft}} & B: That's a long time. I bet you'll miss your friends. & B: I am not sure where I want to live. \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{TBS}}} & Gen. Knowledge & \textit{What is school related to? school is related to college;} & \textit{\begin{tabular}[c]{@{}l@{}}What is job related to? job is related to work;\\ What is pay related to? pay is related to job;\end{tabular}} \\ \cline{2-4}
\multicolumn{1}{|c|}{} & Response & \begin{tabular}[c]{@{}l@{}}B: I'm sorry to hear that. \\ I hope you have a good college experience!\end{tabular} & B: Oh wow! Will they pay you a lot? \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{KS-RoBERTa}}} & Retr. Knowledge & \textit{\begin{tabular}[c]{@{}l@{}}year is related to book; angry is a synonym of upset; \\ cry is related to upset;\end{tabular}} & \textit{\begin{tabular}[c]{@{}l@{}}move is an antonym of easy; move is related to leave; \\ move is related to change;\end{tabular}} \\ \cline{2-4}
\multicolumn{1}{|c|}{} & Response & B: You're not alone in that. I'm sorry you had to go through that. & B: I am sure you will be able to make it work. \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{KG-COMET}}} & Gen. Knowledge & \textit{I will miss everyone causes sadness} & \textit{I got a new job causes work} \\ \cline{2-4}
\multicolumn{1}{|c|}{} & Response & B: I'm sure you will. I'm sure you'll do great. & B: I'm so sorry! I hope you are able to get a new job! \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Hard-GT}}} & Hard-Matched Knowledge & \textit{\begin{tabular}[c]{@{}l@{}}year is related to long; long is related to miss; \\ long is related to full\end{tabular}} & \textit{pay is related to job} \\ \cline{2-4}
\multicolumn{1}{|c|}{} & Response & B: I know the feeling. I'm going back to school too. & B: I am sorry to hear that. I hope you find a better job. \\ \hline
\end{tabular}
}
\caption{Randomly sampled qualitative examples from our TBS model (TBS-hard-symbol-QA).
}
\label{tab:qual_examples}
\end{table*}
\begin{table*}[tb]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{1}{|c|}{\textbf{Dialogue History}} & \multicolumn{1}{c|}{\textbf{Generated Knowledge}} & \multicolumn{1}{c|}{\textbf{Generated Response}} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textless{}speaker1\textgreater My last day of work was yesterday. \\ I got so emotional driving home and I cried. \\ \textless{}speaker2\textgreater I'm sorry to hear you were upset! \\ Why did you have to quit? \\ \textless{}speaker1\textgreater Im going back to school full time. \\ I was there for more than 10 years. I will miss everyone.\end{tabular} & What is school related to? school is related to college; & \begin{tabular}[c]{@{}l@{}}\textless{}speaker2\textgreater I'm sorry to hear that. \\ I hope you have a good college experience!\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textless{}speaker1\textgreater I am calling friends to let you know that\\ I am moving out of state. \\ \textless{}speaker2\textgreater What happened? Why? \\ \textless{}speaker1\textgreater I got a new job.\end{tabular} & \begin{tabular}[c]{@{}l@{}}What is job related to? job is related to work; \\ What is pay related to? pay is related to job;\end{tabular} & \textless{}speaker2\textgreater Oh wow! How long will you be working there? \\ \hline
\begin{tabular}[c]{@{}l@{}}\textless{}speaker1\textgreater There was something moving in the woods. \\ I looked out of the hunting shack and saw movement so I grabbed my gun.\end{tabular} & What is move related to? move is related to change; & \textless{}speaker2\textgreater Did you change your mind? \\ \hline
\begin{tabular}[c]{@{}l@{}}\textless{}speaker1\textgreater I love all of you guys! This was the most\\ special performance that I have ever had because of you all!\end{tabular} & What is happy related to? happy is related to special; & \textless{}speaker2\textgreater Awww, you're so happy. It's always nice to see a happy couple. \\ \hline
\begin{tabular}[c]{@{}l@{}}\textless{}speaker1\textgreater Two years ago, I discovered that my now ex-wife \\ was cheating on me with someone she worked with. \\ When I first discovered it, i was angry and hurt. \\ \textless{}speaker2\textgreater I bet you were, I did that to my husband\\ when I was young. I thought it was the solution to relief from abuse.\end{tabular} & \begin{tabular}[c]{@{}l@{}}What is man related to? man is related to young; \\ What is young? young is a man; \\ What is man related to? man is related to woman;\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textless{}speaker1\textgreater Yeah, I was so angry and hurt, \\ I thought I was going to be physically ill or something.\end{tabular} \\ \hline
\end{tabular}
}
\caption{Randomly sampled qualitative examples from TBS-hard-symbol-QA. We find that in most cases the generated knowledge makes sense and the responses are grounded in the knowledge rather than generic. However, we do observe some error cases; in the last example, for instance, the response does not use the knowledge.
}
\label{tab:more_examples}
\end{table*}
\subsection{Performance Analysis}\label{analysis}
\paragraph{Most responses are knowledge grounded}
To examine how TBS methods leverage knowledge for RG, we also present annotators with a history, generated knowledge, and a generated response, and ask them whether the knowledge is \emph{used in the response}.
We find that around 77\% of generated knowledge is used in the generated response, \emph{i.e.}, the response is \emph{grounded} in the knowledge generated from TBS.
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\columnwidth]{figures/noisy_knowledge_effects.pdf}
\caption{\small Effects of \textbf{noisy knowledge} on response quality.}
\label{fig:noisy_effects}
\end{figure}
\paragraph{Noisy knowledge heavily impacts quality}
To better showcase the connection between knowledge and response, we examine how the quality of knowledge generated by TBS methods affects response quality. During inference, we randomly sample \emph{noisy knowledge} from another dialogue, feed it to the model to generate a response conditioned on this irrelevant knowledge, and compare the response quality with that of responses generated from TBS knowledge.
Fig.~\ref{fig:noisy_effects} shows a statistically significant (p $\leq$ 0.05) drop in response quality on four dimensions. This indicates that the quality of the knowledge input heavily influences response quality, and that TBS models generate better responses because of their decent knowledge quality.
\paragraph{Qualitative examples and limitations}
We show several qualitative examples from different models together with human responses in Table~\ref{tab:qual_examples}. We find that TBS generates relevant knowledge and responses properly grounded in that knowledge, whereas the KS/KG models retrieve noisy knowledge and Hard-GT generates responses not grounded in the knowledge.
Here we summarize the error patterns of TBS models and discuss potential directions for improvement. More examples can be found in Table~\ref{tab:more_examples}.
First, our matching procedures do not consider multi-hop triples that might be needed for complex reasoning chains. Second, ConceptNet mostly contains taxonomic and lexical knowledge (``\emph{RelatedTo, IsA, etc.}''), limiting the diversity of the knowledge generated by TBS models. We plan to explore other knowledge resources such as ATOMIC2020~\cite{Hwang2021COMETATOMIC2O} in the future. Third, the model currently always generates implicit knowledge. In future work, we are interested in training RG models that understand \emph{when} implicit knowledge is needed based on the dialogue context.
\section*{Ethics and Broader Impact}
Our work aims to train RG models that explicitly generate implicit knowledge before responding. \citet{sheng2021nice} found biases in the responses of DialoGPT (our base model), and \citet{mehrabi2021lawyers} found representational harms in commonsense resources. We acknowledge that the generated responses from our models might contain biases.
All of the dialogue datasets and models are in English, which benefits English speakers more. We conducted human evaluation using Amazon Mechanical Turk. We pay turkers around \$15 per hour, well above the highest state minimum wage, and engage in constructive discussions if they have concerns about the process. We also allow enough time for each annotation instance so that we do not pressure annotators.
\section*{Acknowledgments}
We thank anonymous reviewers for providing insightful feedback and members from Amazon Alexa AI team and INK and JAUNTS lab from USC. Pei Zhou, Jay Pujara, and Xiang Ren’s work on this project was funded by the Defense Advanced Research Projects Agency with award N660011924033. The research was also supported by gifts from Google.
\section{TBS Framework Details}
\subsection{Matching Detail}\label{appendix_matching}
\paragraph{Hard-Matching}
This process follows that used in ~\citet{zhou-etal-2021-commonsense}. We first identify potential candidates for concepts in ConceptNet~\cite{speer2017conceptnet}. For each utterance, we use a part-of-speech (POS) tagger to find the nouns, verbs, and adjectives that are not stopwords and then construct a set of potential concepts by including the lemmatized version of these words. The POS tagger, lemmatizer, and stopword list are from the Natural Language Toolkit (NLTK) package~\cite{bird2009natural}.
This step results in a set of concept words for \emph{each turn} of a dialogue.
With a set of concepts we extract for every dialogue turn, we then identify a list of candidate triples $(e_1, r, e_2)$. We use the ConceptNet containing single-word concepts pre-processed by~\citet{zhou2018commonsense}. For each concept we identified in a turn, we store all triples in ConceptNet that contain this concept, either as subject or object.
After obtaining a list of commonsense triples $(e_1, r, e_2)$ containing concepts in a particular turn, we next examine whether the \emph{other} entity of any triple appears in the concept set of the next turn. If we find such a match, we record this triple as a commonsense assertion that might be implied in the response.
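A minimal sketch of this hard-matching step with NLTK is shown below; \texttt{triples\_by\_concept} is an assumed pre-built index from single-word concepts to ConceptNet triples, and the POS-tag set is a simplification.
\begin{verbatim}
# Hedged sketch of hard-matching: keep triples whose two concepts
# appear (after lemmatization) in turn i and turn i+1 respectively.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
STOP = set(stopwords.words("english"))
KEEP = {"NN", "NNS", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "JJ"}

def concepts(utterance):
    tagged = nltk.pos_tag(nltk.word_tokenize(utterance.lower()))
    return {lemmatizer.lemmatize(w) for w, t in tagged
            if t in KEEP and w not in STOP}

def hard_match(turn_i, turn_next, triples_by_concept):
    ci, cj = concepts(turn_i), concepts(turn_next)
    matched = []
    for c in ci:
        for (e1, r, e2) in triples_by_concept.get(c, []):
            other = e2 if e1 == c else e1
            if other in cj:
                matched.append((e1, r, e2))
    return matched
\end{verbatim}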
\paragraph{Soft-Matching}
We reuse the first several steps of hard-matching to find a set of candidate triples for each dialogue turn. Then, instead of searching for the exact words in the next turn, we use embedding similarity from SentenceBERT~\cite{reimers-2019-sentence-bert} (specifically the ``\emph{all-MiniLM-L6-v2}'' variant, described as an ``all-round model tuned for many use-cases, trained on a large and diverse dataset of over 1 billion training pairs'')\footnote{\url{https://www.sbert.net/docs/usage/semantic_textual_similarity.html}}.
To select the final matched knowledge, we choose the top 3 triples from ConceptNet with the highest similarity. After examining the distribution of embedding similarities from SBERT, we additionally require the similarity to be above 0.4 for a match, to ensure matching quality.
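A minimal sketch of this soft-matching step with the \texttt{sentence-transformers} package, using the top-3 selection and the 0.4 similarity threshold described above:
\begin{verbatim}
# Hedged sketch of soft-matching: rank candidate triples by SBERT
# cosine similarity to the concatenated (turn i, turn i+1) query and
# keep the top 3 above the 0.4 threshold.
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")

def soft_match(turn_i, turn_next, candidates_nl, k=3, thresh=0.4):
    q_emb = sbert.encode(turn_i + " " + turn_next,
                         convert_to_tensor=True)
    c_emb = sbert.encode(candidates_nl, convert_to_tensor=True)
    sims = util.cos_sim(q_emb, c_emb)[0]
    top = sims.topk(min(k, len(candidates_nl)))
    return [candidates_nl[i]
            for i, s in zip(top.indices.tolist(), top.values.tolist())
            if s >= thresh]
\end{verbatim}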
\subsection{Mappings}\label{appendix_mapping}
We show complete mappings of relations from ConceptNet for both relation-converted NL and information-seeking QA pairs in Table~\ref{tab:mappings}.
\begin{figure}[]
\centering
\includegraphics[width=\columnwidth]{figures/data_example.pdf}
\caption{\textbf{Data example.} We align implicit knowledge from ConceptNet~\cite{speer2017conceptnet} between dialogue turns and form each instance in three components.}
\label{fig:data}
\end{figure}
\section{Experimental Details}
\subsection{Implementation Details}\label{appendix_implementation}
We use base models from HuggingFace\footnote{DialoGPT-medium:~\url{https://huggingface.co/microsoft/DialoGPT-medium}} and implement TBS based on TransferTransfo~\cite{wolf2019transfertransfo}\footnote{\url{https://github.com/huggingface/transfer-learning-conv-ai}}.
We fine-tune the model for 3 epochs with batch size 4 and set the learning rate to 6.25e-5. We perform gradient accumulation for 8 steps and gradient clipping with a max norm of 1.0, and optimize using the Adam optimizer. For decoding, we use top-p nucleus sampling~\cite{holtzman2019curious} with temperature T (p = 0.9 and T = 0.7) and a maximum decoding length of 300 tokens. Note that since we also generate knowledge, this maximum length is larger than that of normal RG models.
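A sketch of the corresponding inference call with the HuggingFace \texttt{transformers} library is shown below; the base checkpoint stands in for our finetuned TBS model, and we assume the \texttt{<implicit>} markers were added to the tokenizer during finetuning.
\begin{verbatim}
# Hedged sketch of TBS inference: given only the dialogue history,
# the finetuned model generates knowledge and then a response with
# nucleus sampling (p=0.9, T=0.7, up to 300 new tokens).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/DialoGPT-medium")  # stands in for the TBS checkpoint

def tbs_generate(history_text):
    # `<implicit>` is assumed to be a special token added in training.
    ids = tok(history_text + " <implicit>",
              return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, top_p=0.9,
                         temperature=0.7, max_new_tokens=300,
                         pad_token_id=tok.eos_token_id)
    return tok.decode(out[0, ids.shape[-1]:],
                      skip_special_tokens=False)
\end{verbatim}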
Our TBS models are mostly trained on 4 Quadro RTX 8000 GPUs and take around 5 hours. For automatic metrics, we use the nlg-eval package\footnote{\url{https://github.com/Maluuba/nlg-eval}} and the GRADE repo\footnote{\url{https://github.com/li3cmz/GRADE}}.
\subsection{Evaluation Detail}\label{appendix_evaluation}
We present the MTurk interfaces we use for response quality and knowledge quality evaluation in Figures~\ref{fig:turking_gce},~\ref{fig:turking_ics}, and~\ref{fig:turking_kq}, including instructions and examples. We require turkers to have at least 500 approved HITs, an approval rate higher than 95\%, and to be located in Canada, the UK, or the US, since our data is in English.
\section{Additional Results}\label{appendix_results}
\begin{table*}[tb]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{|l|l|l|}
\hline
\textbf{Relation in ConceptNet} & \textbf{Relation-Converted NL} & \textbf{Information-Seeking QA} \\ \hline
DefinedAs & is defined as & What is \textless{}concept1\textgreater defined as? | \textless{}concept1\textgreater is defined as \textless{}concept2\textgreater{} \\ \hline
DesireOf & desires & What does \textless{}concept1\textgreater desire of? | \textless{}concept1\textgreater desires \textless{}concept2\textgreater{} \\ \hline
HasA & has a & What does \textless{}concept1\textgreater have? | \textless{}concept1\textgreater has \textless{}concept2\textgreater{} \\ \hline
HasFirstSubevent & starts with & What does \textless{}concept1\textgreater start with? | \textless{}concept1\textgreater starts with \textless{}concept2\textgreater{} \\ \hline
HasLastSubevent & ends with & What does \textless{}concept1\textgreater end with? | \textless{}concept1\textgreater ends with \textless{}concept2\textgreater{} \\ \hline
HasPrerequisite & requires & What does \textless{}concept1\textgreater require? | \textless{}concept1\textgreater requires \textless{}concept2\textgreater{} \\ \hline
HasProperty & has the property & What property does \textless{}concept1\textgreater have? | \textless{}concept1\textgreater is \textless{}concept2\textgreater{} \\ \hline
HasSubevent & requires & What subevent does \textless{}concept1\textgreater have? | \textless{}concept1\textgreater has subevent of \textless{}concept2\textgreater{} \\ \hline
IsA & is a & What is \textless{}concept1\textgreater{}? | \textless{}concept1\textgreater is a \textless{}concept2\textgreater{} \\ \hline
MadeOf & is made of & What is \textless{}concept1\textgreater made of? | \textless{}concept1\textgreater is made of \textless{}concept2\textgreater{} \\ \hline
MotivatedByGoal & is motivated by & What is \textless{}concept1\textgreater motivated by? | \textless{}concept1\textgreater is motivated by \textless{}concept2\textgreater{} \\ \hline
NotCapableOf & is not capable of & What is \textless{}concept1\textgreater not capable of? | \textless{}concept1\textgreater is not capable of \textless{}concept2\textgreater{} \\ \hline
NotDesires & does not desire & What does \textless{}concept1\textgreater not desire? | \textless{}concept1\textgreater does not desire \textless{}concept2\textgreater{} \\ \hline
NotHasA & does not have a & What does \textless{}concept1\textgreater not have? | \textless{}concept1\textgreater does not have a \textless{}concept2\textgreater{} \\ \hline
NotHasProperty & does not have the property & What property does \textless{}concept1\textgreater not have? | \textless{}concept1\textgreater does not have \textless{}concept2\textgreater{} \\ \hline
NotIsA & is not a & What \textless{}concept1\textgreater is not? | \textless{}concept1\textgreater is not a \textless{}concept2\textgreater{} \\ \hline
NotMadeOf & is not made of & What is \textless{}concept1\textgreater not made of? | \textless{}concept1\textgreater is not made of \textless{}concept2\textgreater{} \\ \hline
PartOf & is part of & What is \textless{}concept1\textgreater a part of? | \textless{}concept1\textgreater is a part of \textless{}concept2\textgreater{} \\ \hline
RelatedTo & is related to & What is \textless{}concept1\textgreater related to? | \textless{}concept1\textgreater is related to \textless{}concept2\textgreater{} \\ \hline
SymbolOf & is a symbol of & What is \textless{}concept1\textgreater a symbol of? | \textless{}concept1\textgreater is a symbol of \textless{}concept2\textgreater{} \\ \hline
UsedFor & is used for & What is \textless{}concept1\textgreater used for? | \textless{}concept1\textgreater is used for \textless{}concept2\textgreater{} \\ \hline
AtLocation & is located at & Where is \textless{}concept1\textgreater{}? | \textless{}concept1\textgreater is located at \textless{}concept2\textgreater{} \\ \hline
CapableOf & is capable of & What is \textless{}concept1\textgreater capable of? | \textless{}concept1\textgreater is capable of \textless{}concept2\textgreater{} \\ \hline
Causes & causes & What does \textless{}concept1\textgreater cause? | \textless{}concept1\textgreater causes \textless{}concept2\textgreater{} \\ \hline
CausesDesire & causes the desire to & What desire does \textless{}concept1\textgreater cause? | \textless{}concept1\textgreater causes desire of \textless{}concept2\textgreater{} \\ \hline
CreatedBy & is created by & What is \textless{}concept1\textgreater created by? | \textless{}concept1\textgreater is created by \textless{}concept2\textgreater{} \\ \hline
Desires & desires & What does \textless{}concept1\textgreater desire? | \textless{}concept1\textgreater desires \textless{}concept2\textgreater{} \\ \hline
HasPainCharacter & has pain character of & What pain character does \textless{}concept1\textgreater have? | \textless{}concept1\textgreater has pain character of \textless{}concept2\textgreater{} \\ \hline
HasPainIntensity & has pain intensity of & What pain intensity does \textless{}concept1\textgreater have? | \textless{}concept1\textgreater has pain intensity of \textless{}concept2\textgreater{} \\ \hline
InheritsFrom & inherits from & What does \textless{}concept1\textgreater inherit from? | \textless{}concept1\textgreater inherits from \textless{}concept2\textgreater{} \\ \hline
InstanceOf & is an instance of & What is \textless{}concept1\textgreater an instance of? | \textless{}concept1\textgreater is an instance of \textless{}concept2\textgreater{} \\ \hline
LocatedNear & is located near & What is \textless{}concept1\textgreater located near? | \textless{}concept1\textgreater is located near \textless{}concept2\textgreater{} \\ \hline
LocationOfAction & has location of action at & What location of action does \textless{}concept1\textgreater have? | \textless{}concept1\textgreater has location of action of \textless{}concept2\textgreater{} \\ \hline
ReceivesAction & receives action of & What action does \textless{}concept1\textgreater receive? | \textless{}concept1\textgreater received action of \textless{}concept2\textgreater{} \\ \hline
Antonym & is an antonym of & What is an antonym of \textless{}concept1\textgreater{}? | \textless{}concept1\textgreater is an antonym of \textless{}concept2\textgreater{} \\ \hline
DerivedFrom & is derived from & What is \textless{}concept1\textgreater derived from? | \textless{}concept1\textgreater is derived from \textless{}concept2\textgreater{} \\ \hline
DistinctFrom & is distinct form & What is \textless{}concept1\textgreater distinct form? | \textless{}concept1\textgreater is distinct form \textless{}concept2\textgreater{} \\ \hline
EtymologicallyRelatedTo & is etymologically related to & What is \textless{}concept1\textgreater etymologically related to? | \textless{}concept1\textgreater is etymologically related to \textless{}concept2\textgreater{} \\ \hline
FormOf & is a form of & What is \textless{}concept1\textgreater a form of? | \textless{}concept1\textgreater is a form of \textless{}concept2\textgreater{} \\ \hline
HasContext & has context of & What context does \textless{}concept1\textgreater have? | \textless{}concept1\textgreater has context of \textless{}concept2\textgreater{} \\ \hline
SimilarTo & is is similar to & What is \textless{}concept1\textgreater similar to? | \textless{}concept1\textgreater is similar to \textless{}concept2\textgreater{} \\ \hline
Synonym & is a synonym of & What is a synonym of \textless{}concept1\textgreater{}? | \textless{}concept1\textgreater is a synonym of \textless{}concept2\textgreater{} \\ \hline
dbpediacapital & has the capital city & What is the capital city of \textless{}concept1\textgreater{}? | \textless{}concept1\textgreater has capital city of \textless{}concept2\textgreater{} \\ \hline
dbpediaproduct & has product & What product does \textless{}concept1\textgreater have? | \textless{}concept1\textgreater has product of \textless{}concept2\textgreater{} \\ \hline
\end{tabular}
}
\caption{Knowledge representation mappings.
}
\label{tab:mappings}
\end{table*}
\begin{figure*}[tb]
\centering
\includegraphics[width=1.0\linewidth]{figures/Turking_GCE.png}
\caption{
\textbf{Human evaluation interface} for response quality on dimensions: grammar, coherence, and engagingness.
}
\vspace{-0.1cm}
\label{fig:turking_gce}
\end{figure*}
\begin{figure*}[tb]
\centering
\includegraphics[width=1.0\linewidth]{figures/Turking_ISC.png}
\caption{
\textbf{Human evaluation interface} for response quality on dimensions: informativeness, specificity, and common sense.
}
\vspace{-0.1cm}
\label{fig:turking_ics}
\end{figure*}
\begin{figure*}[tb]
\centering
\includegraphics[width=1.0\linewidth]{figures/Turking_knowledge.png}
\caption{
\textbf{Human evaluation interface} for knowledge quality with 3 questions: does the knowledge make sense as a standalone fact, is the knowledge relevant to the context, and does the generated response use the knowledge?
}
\vspace{-0.1cm}
\label{fig:turking_kq}
\end{figure*}
\begin{table*}[tb]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c}
\multicolumn{1}{c|}{Model Variants} & Grammatical & Coherent & Engaging & Informative & Specific & Common Sense & \textbf{Avg} \\ \hline
TBS-\textbf{soft}-symbol-NL & \textbf{53.0/10.0\%} & 46.3/8.7\% & 48.7/9.3\% & 41.7/20.6\% & 51.7/6\% & \textbf{52/7\%} & 50.5/10.3\% \\
TBS-hard-\textbf{prompt}-NL & 50.3/4\% & 49/7.3\% & 47/9\% & 49.4/6\% & 51/3\% & 48.3/2.7\% & 49.2/5.3\% \\
TBS-hard-symbol-\textbf{QA} & \textbf{53/6.7\%} & \textbf{53.6/5.6\%} & 51.3/4.7\% & 51.3/3.7\% & 51.3/5\% & \textbf{54/3.7\%} & \textbf{52.4/4.8\%} \\
TBS-\textbf{soft}-\textbf{prompt}-NL & 49.3/6.7\% & 49.78/8.7\% & 51.3/4.7\% & 50.3/2.7\% & 49.3/8.7\% & 48.2/6.7\% & 49.8/5.4\%\\
TBS-\textbf{soft}-symbol-\textbf{QA} & 51.5/4.2\% & \textbf{52.1/3.5\%} & 51.9/4.9\% & 49.2/6.7\% & 49.9/2.7\% & 45.3/6.9\% & 51.8/5.6\%\\
TBS-hard-\textbf{prompt}-\textbf{QA} & 48.3/7.2\% & 49.9/7.7\% & 50/5.2\% & 49.2/5.7\% & 48.2/6.6\% & 47.4/2.9\% & 48.8/6.4\%\\
TBS-\textbf{soft}-\textbf{prompt}-\textbf{QA} & 50.1/4.7\% & 50.2/8.7\% & 49.3/7.9\% & 48.2/8.7\% & 48.3/2.7\% & 49.9/5.7\% & 49.9/7.2\%
\end{tabular}
}
\caption{Human evaluation on \textbf{response quality} when comparing different model variants with the base model (hard-symbol-NL).
}
\label{tab:appendix_variants}
\end{table*}
\begin{table*}[tb]
\centering
\scalebox{0.75}{
\begin{tabular}{ccccccccc}
\hline
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{\textbf{Logical Corruption Average {[}Accuracy/$\Delta$ NLL{]}}} & \multicolumn{4}{c}{\textbf{Complete Corruption Average {[}Accuracy/$\Delta$ NLL{]}}} \\ \cline{2-9}
\multicolumn{1}{c|}{\multirow{-2}{*}{Models}} & DD & ED & MuTual & \multicolumn{1}{c|}{SocialIQA} & DD & ED & MuTual & SocialIQA \\ \hline
\multicolumn{9}{c}{\textit{\textbf{Inference Probing}}} \\ \hline
\multicolumn{1}{c|}{DialoGPT} & 0.57/-0.01 & 0.60/0.03 & 0.62/0.03 & \multicolumn{1}{c|}{0.64/0.03} & 0.71/0.15 & 0.77/0.25 & 0.79/0.22 & 0.87/0.40 \\
\multicolumn{1}{c|}{KS-RoBERTa} & 0.49/-0.00 & 0.50/-0.00 & 0.49/-0.00 & \multicolumn{1}{c|}{0.50/-0.00} & 0.76/0.23 & 0.79/0.24 & 0.78/0.24 & 0.81/0.27 \\
\multicolumn{1}{c|}{TBS} & 0.61/0.15 & 0.57/0.07 & 0.57/0.07 & \multicolumn{1}{c|}{0.56/0.05} & \textbf{0.88/1.38} & \textbf{0.86/1.24} & \textbf{0.87/1.14} & \textbf{0.89/1.47} \\ \hline
\multicolumn{1}{c|}{Human} & 1.0 & 1.0 & 0.9 & \multicolumn{1}{c|}{1.0} & 1.0 & 1.0 & 1.0 & 1.0 \\ \hline
\end{tabular}
}
\caption{CEDAR~\cite{zhou2021probing} results, where bold-faced numbers indicate statistically significant differences compared with the second-best model.
}
\label{tab:CEDAR}
\end{table*}
\subsection{Models Combining Variants}\label{appendix_variants}
Table~\ref{tab:appendix_variants} presents the complete results for all of our model variants. We find that the best overall configuration is hard-symbol-QA.
\subsection{CEDAR Probing: Do TBS models understand why a response makes sense?}
We follow the CEDAR probing framework from~\citet{zhou2021probing}, which analyzes whether RG models assign a higher probability to the response when provided with valid common sense in the form of explanations than with corrupted explanations. Results compared with an end-to-end RG model and a knowledge-selection model are shown in Table~\ref{tab:CEDAR}. We find that through TBS training, RG models become much more sensitive to commonsense explanations against complete corruptions, but still fall short against more subtle logical corruptions that require deeper reasoning.
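As a rough illustration of this probing setup, the following minimal Python sketch computes the $\Delta$NLL of a fixed response under valid versus corrupted knowledge, assuming a HuggingFace causal language model; the checkpoint name and all strings below are illustrative placeholders, not the models or data used in our experiments.
\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")          # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def response_nll(context: str, response: str) -> float:
    """Average negative log-likelihood of the response tokens given context."""
    n_ctx = tok(context, return_tensors="pt").input_ids.shape[1]
    ids = tok(context + response, return_tensors="pt").input_ids
    labels = ids.clone()
    labels[:, :n_ctx] = -100                 # score only the response tokens
    with torch.no_grad():
        return model(ids, labels=labels).loss.item()

valid = "A: I went running today. <knowledge> running is a form of exercise"
corrupt = "A: I went running today. <knowledge> running is an antonym of exercise"
response = " That's great, exercise keeps you fit!"

# A knowledge-sensitive model has higher NLL under corrupted knowledge,
# i.e. a positive Delta NLL.
print(response_nll(corrupt, response) - response_nll(valid, response))
\end{verbatim}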
|
2,869,038,155,085 | arxiv | \section{Introduction}
The concept of topology was introduced into condensed matter physics along with the discovery of the quantum Hall effects and topological insulators~\cite{klitzing1980new,kane2010colloquium,sczhang2011TI}. When two materials with different topologies are put in contact, edge states exist at the interface~\cite{hatsugai1993chern,sczhang2006general}. Because the topology is stable against disorders of a system, these states, which propagate in a robust unidirectional way, are called topologically protected. Photonic crystals (PCs) are analogues of solid crystals, so there is a similarity between the behavior of photons and electrons~\cite{yablonovitch1987inhibited,john1987strong}. In 2008, it was proven that analogous topological effects also exist in PCs, where the underlying physics is classical electromagnetic (EM) rather than quantum~\cite{haldane2008possible}.
Recent research has demonstrated that various photonic systems can have nontrivial topology~\cite{minghui2018photonics}. For example, a gyromagnetic PC under an external magnetic field has nontrivial bulk topology, and unidirectional backscattering-immune edge states have been observed experimentally~\cite{Joannopoulos2009observation}. However, as the gyromagnetic effect is weak at optical frequencies and not amenable to on-chip integration, topological photonic systems composed of non-gyrotropic materials, such as helical waveguide arrays~\cite{rechtsman2013photonic} or coupled ring resonators~\cite{hafezi2013imaging}, have been proposed as promising alternative platforms. Moreover, topological photonic systems that preserve the time-reversal symmetry (TRS) and exploit concepts such as the quantum spin Hall (QSH)~\cite{Khanikaev2013photonic, Ma2015guiding, jwdong2014experimental,huxiao2015scheme,barik2016two,wuying2016pseudo} or quantum valley Hall (QVH)~\cite{Ma2016all,jwdong2019a,Shalaev2019robust} effects have also been proposed. For example, by designing PCs with hexagonal $C_{6v}$ symmetry, the doubly degenerate dipole and quadrupole modes can be used to realize photonic QSH states~\cite{huxiao2015scheme}, and the pseudospin-momentum locking behavior has been experimentally observed~\cite{yves2017crystalline,hangzh2018visualization}. Besides, homogeneous media, for example, bianisotropic~\cite{hanson2017berry,karimi2018unidirectional} or hyperbolic~\cite{zhuangshuang2015topolgocial} metamaterials, can also have nontrivial topological properties. In all the topological photonic systems with TRS, the edge states are realized at the interface between two PCs with trivial and nontrivial topologies.
In this work, we propose a novel topological waveguide that is constructed from a line defect in a single two-dimensional (2D) PC with nontrivial topology, which offers important practical advantages. To numerically analyze the properties of the topological waveguide, we develop a simple and effective supercell approach based on the finite-difference (FD) method, from which the band structures can be quickly obtained. The topological line-defect states are identified in the band structure and successfully excited using two spatially-symmetric line-source arrays. The unidirectional-propagation feature of the defect states is verified through full-wave simulations. Furthermore, we find no noticeable backscattering after introducing disorders into the PC structure.
\section{Modal Analysis}
\subsection{Bulk States}
A 2D PC made using a triangular lattice of hexagonal clusters is shown in Fig.~\ref{cylinders}. $\bm{a_1}$ and $\bm{a_2}$ are the two translation vectors with the length of $a_0$, i.e. the lattice constant. Each cluster is composed of six dielectric cylinders located at the corners of a hexagon, with the side length of $R$.
\begin{figure}[!tb]
\centering
\includegraphics[width=\columnwidth]{cylinders.eps}
\caption{Geometry of the 2D PC arranged in a triangular lattice. Each cluster is composed of six cylinders, forming a hexagon with the side length of $R$. The cylinders are pure dielectric, with radius $r$ and dielectric constant $\epsilon$. The right inset shows the smallest supercell holding the periodicity along $x$ and $y$ directions.}
\label{cylinders}
\end{figure}
Only the transverse-magnetic (TM) modes are considered, i.e. electric field only has the out-of-plane component and magnetic field is confined to the $xy$ plane. The governing equation for the TM modes of the 2D PC is written as
\begin{equation}
\frac{1}{\bar{\epsilon}}\frac{\partial^2 E_z}{\partial x^2}+\frac{1}{\bar{\epsilon}}\frac{\partial^2 E_z}{\partial y^2} +k_0^2E_z= 0,
\label{master}
\end{equation}
where $k_0$ is the free-space wave number and $\bar{\epsilon}$ is the averaged dielectric constant~\cite{menglinPRA}.
This equation can be rewritten as $ME_z=k_0^2E_z$ and solved as an eigenvalue problem to obtain eigenvalues $k_0$ and eigenmodes $E_z$. Due to the periodicity of the PC, we can restrict the eigenvalue problem to a single unit cell. Nevertheless, to simplify the numerical calculation, we build a rectangular supercell with periodic boundary conditions imposed along the $x$ and $y$ directions. The corresponding Bloch wave numbers are $k_x$ and $k_y$. The lengths of the reassigned translation vectors are $a_0$ along $x$ and $\sqrt{3}a_0$ along $y$. Then, we use the FD method to construct the matrix $M$, which handles the Bloch boundary conditions much more easily than integral methods~\cite{zhengxz2014implementation}. Details on the construction of the matrix $M$ are provided in the Appendix. Once $M$ is constructed, the eigenvalue problem can be solved. The photonic band structure is obtained by sweeping $k_x$ and $k_y$ along the high symmetry directions of the irreducible Brillouin zone.
It has been found that the relative sizes of $a_0$ and $3R$ distinguish the topologies of the photonic band structures~\cite{huxiao2015scheme}. We calculate the band structures for three cases with $a_0=2.8R,\,\,3R,\,\,3.2R$. The Brillouin zone is shown in the inset of Fig.~\ref{BG_bulk}(a). When $a_0=3R$, there is no band gap and double Dirac cones with a four-fold degeneracy appear at the $\Gamma$ point. When $a_0\neq3R$, however, the four-fold degenerate states at the $\Gamma$ point split into two doubly-degenerate states and the band gap opens. The doubly degenerate states are regarded as dipole ($p_x$, $p_y$) and quadrupole ($d_{xy}$, $d_{x^2-y^2}$) states, because the $E_z$ patterns of the states in the hexagonal clusters are isomorphic to the $p_x$, $p_y$, $d_{x^2-y^2}$ and $d_{xy}$ electron orbitals. In Fig.~\ref{BG_bulk}(b), i.e. the case of $a_0=3.2R$, the $p_x$ and $p_y$ states are at the $\Gamma$ point of the lower band, and the $d_{xy}$ and $d_{x^2-y^2}$ states are at the $\Gamma$ point of the upper band. However, when $a_0<3R$, there is a band inversion: the $p$ states and $d$ states switch their positions as depicted in Fig.~\ref{BG_bulk}(c). The band inversion between $p$ and $d$ states implies the nontrivial topology of the PC. These results are consistent with the findings in the literature, thus validating our supercell approach with the imposed periodic boundary conditions along the $x$ and $y$ directions. The linear combinations of $p_x$ and $p_y$ ($d_{xy}$ and $d_{x^2-y^2}$) provide the up- and down-pseudospin eigenstates that underlie the topological edge states in the PC.
\begin{figure*}[!tb]
\centering
\includegraphics[width=2\columnwidth]{bandstructure_bulk.eps}
\caption{Band structures of the 2D PC in Fig.~\ref{cylinders} and $E_z$ of the dipole and quadrupole states at the $\Gamma$ point in the supercell when (a) $a_0=3R$, (b) $a_0=3.2R$, and (c) $a_0=2.8R$. The dielectric constant of the cylinders $\epsilon=11.7$ and radius $r=2$~mm. The side length of the hexagons $R=6$~mm.}
\label{BG_bulk}
\end{figure*}
\subsection{Topological Line-defect States}
Topological edge states have been observed at the interface of two PCs with trivial ($a_0>3R$) and nontrivial ($a_0<3R$) topologies~\cite{huxiao2015scheme,hangzh2018visualization} and in trivial-nontrivial-trivial PC structures~\cite{gao2018unidirectional}. In the following, we demonstrate topological line-defect states in a topological waveguide constructed by introducing an air gap into a single topologically nontrivial PC structure. Although air can be considered topologically trivial~\cite{silveirinha2015chern}, an air-nontrivial PC interface alone cannot support edge states because they cannot be confined there. By using the nontrivial PC-air-nontrivial PC structure, topological line-defect states with their power concentrated in the PC region are supported.
The supercell of the proposed waveguide structure is defined in Fig.~\ref{FD_Py}(a) (the solid purple rectangle) with the mirror-symmetry plane $y=0$. Figure~\ref{FD_Py}(b) illustrates the band structure calculated by sweeping $k_x$ from $-\pi/a_0$ to $\pi/a_0$. The black dashed lines mark the band gap ($7.94$~GHz to $8.67$~GHz) calculated in Fig.~\ref{BG_bulk}(c). We find three bands within the band gap. The first band has frequencies lower than $8.34$~GHz and the third band has frequencies higher than $8.37$~GHz. These two bands possess large group velocity around $k_x=0$, whereas the second band possesses nearly zero group velocity. The distributions of the corresponding time-averaged Poynting vectors at the six marked locations are plotted in Fig.~\ref{FD_Py}(c). As a result of the structural symmetry, the Poynting vectors of all the states exhibit the same mirror symmetry about $y=0$. However, there is a crucial difference in their energy flow paths. The EM energy of the modes of the first and third bands flows from one supercell to its adjacent supercell. On both the lower and upper edges of the line defect, the net energy flow is to the right for the modes marked by triangles and to the left for the modes marked by inverted triangles. The left- and right-moving paths are accompanied by half-cycle orbits. The rotation of the Poynting vectors along the half-cycle orbits contributes to the net flow of the energy. The direction of rotation correlates with the direction of the energy flow, which implies pseudospin-locked unidirectional propagation. It is similar to the helical edge states in QSH effects. Meanwhile, we can see from the location of the light lines (the red dashed lines) that these states could not be guided if they were exposed to air. Because of the symmetric line defect, the fields at the two edges are coupled and well confined in the PC region. For the modes of the second band, the energy flows are confined within each supercell, and no effective coupling path is formed between adjacent supercells, which makes them unsuitable for guided-wave applications.
\begin{figure*}[!tb]
\centering
\includegraphics[width=2\columnwidth]{bandstructure_gap.eps}
\caption{Topological line-defect states. (a) The supercell (indicated by the solid purple rectangle) constructed for the proposed topological waveguide. The width of the gap $g=21$~mm. The parameters of the PC are the same as in Fig.~\ref{BG_bulk}(c). (b) The corresponding band structure. The dashed black lines indicate the upper and lower band edges (same as the band gap calculated in Fig.~\ref{BG_bulk}(c)). The dashed red lines are the light lines $\omega=ck$. Three bands are identified in the band gap. (c) The time-averaged Poynting vector about the line defect at the marked points in (b).}
\label{FD_Py}
\end{figure*}
To further understand the band structure, in Fig.~\ref{FD_Efield}, we plot $E_z$ at the three marked points in Fig.~\ref{FD_Py}(b) with $k_x>0$. Clearly, the modes of the first and third bands have even symmetry, while the modes of the second band have odd symmetry. Based on the previous discussions, the even symmetric modes are topologically protected modes of practical interest. The magnitude of the EM field is strong within the air-gap channel for the even symmetric modes (Figs.~\ref{FD_Efield}(a) and (c)) and is nearly zero for the odd symmetric modes (Fig.~\ref{FD_Efield}(b)). We further examine the phase distributions of these modes. Importantly, for the two even symmetric modes, in each hexagon immediately adjacent to the line defect, there is a gradual phase change from $0$ to $2\pi$. The directions of the phase rotation are indicated on the right of Figs.~\ref{FD_Efield}(a) and (c). It can be seen that the EM fields within the hexagons at the upper and lower edges of the line defect are pseudospin polarized with reversed orbital angular momentum (OAM). The two even symmetric modes also carry opposite OAM.
\begin{figure}[!tb]
\centering
\includegraphics[width=\columnwidth]{modes_Ez.eps}
\caption{The electric fields (real-space distribution, phase and polarization) at the three marked points with $k_x>0$ on the band structure in Fig.~\ref{FD_Py}(b). (a) The even symmetric mode with lower mode frequency. (b) The odd symmetric mode. (c) The even symmetric mode with higher mode frequency.}
\label{FD_Efield}
\end{figure}
\section{Excitation of the Topological Waveguide}
Based on our analysis, the topological line-defect guiding states are the pseudospin-polarized symmetric modes. In the following, we excite these states by using line sources carrying OAM. COMSOL software is employed to simulate the PC structure with scattering boundary conditions enclosing the whole structure. We use a four-line-source array to generate OAM. To match the source symmetry with the eigenstate symmetry, a pair of arrays is placed inside the hexagons on the upper and lower edges, as illustrated in Fig.~\ref{Ez_sim}. A topological line-defect state of the third band in Fig.~\ref{FD_Py}(b) can be selectively excited by setting the signs of the OAM of the source array to be the same as in Fig.~\ref{FD_Efield}(c). However, it is worth noting that for the first band, even when selecting the corresponding spin directions, there will be two modes at frequencies below $8.19$~GHz. The frequency range for the excitation of a pure state is between $8.19$~GHz and $8.34$~GHz.
Simulation results of two excited topological line-defect states of the first and third bands are shown in Fig.~\ref{Ez_sim}. In each case, unidirectional energy propagation is observed and the energy flow in the opposite direction is suppressed. In Fig.~\ref{Ez_sim}(a), the OAM of the source is set identical to that of the eigenstate in Fig.~\ref{FD_Efield}(a). Therefore, the direction of the energy flow is consistent with the eigenstate in the bottom-right panel of Fig.~\ref{FD_Py}(c), i.e. in both cases, the energy flows leftward. Similarly, Fig.~\ref{Ez_sim}(b) shows the excited topological line-defect state of the third band with the energy moving rightward.
\begin{figure}[!tb]
\centering
\includegraphics[width=\columnwidth]{modes_Ez_sim.eps}
\caption{The simulated electric fields (real-space distribution) and the time-averaged Poynting vectors at (a) $f=8.3$~GHz (first band), (b) $f=8.46$~GHz (third band). In each case, there is a pair of sources carrying reversed OAM. The OAM is generated by a four-line-source array in the hexagon near the edge.}
\label{Ez_sim}
\end{figure}
\section{Robustness of the Topological Symmetric Modes}
The topological line-defect states are immune to bulk diffraction in the presence of defects, which is similar to the waveguide modes in conventional PCs. Beyond that, the topological line-defect states originate from the topology of the PC structure, which makes them more robust in the presence of disorder. To demonstrate their topological protection, we implement two simulations with the same disorder introduced to the proposed waveguide and a conventional PC waveguide, respectively. In both Figs.~\ref{Ez_pert}(a) and (b), a cylinder on the top edge is removed. The same excitation and operating frequency as in Fig.~\ref{Ez_sim}(b) are used in Fig.~\ref{Ez_pert}(a), and the stars denote the positions of the sources. As can be seen in Fig.~\ref{Ez_pert}(a), the flow of the Poynting vector is distorted around the missing cylinder, but it is reconstructed behind the disorder. The EM energy that passes through planes $1$ and $2$ (indicated by the dashed blue lines) can be calculated as $U=\frac{1}{2} \int_l \mathrm{Re} (\mathbf{E} \times \mathbf{H}^*) \cdot \mathrm{d}\mathbf{l}$. We then define the backscattering ratio as $U_1/(U_1+U_2)$; it is calculated to be $1.0\%$. The unidirectional propagation of the EM wave is well maintained in the topological waveguide. As for the conventional PC waveguide in Fig.~\ref{Ez_pert}(b), the simulation frequency is chosen so that it has the same Bloch wave number as the topological waveguide. After removing a cylinder, only $20\%$ of the power is transmitted and the rest is reflected. Therefore, the topological waveguide is more robust than the conventional PC waveguide in maintaining unidirectional propagation in the presence of disorder.
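For reference, the backscattering ratio above can be reproduced from exported field samples. The following is a minimal Python sketch under the assumption that the complex phasors $E_z$ and $H_y$ have been sampled along each vertical cut plane (the arrays below are placeholders standing in for solver output); for TM fields, the $x$-component of the time-averaged Poynting vector is $-\frac{1}{2}\mathrm{Re}(E_z H_y^*)$.
\begin{verbatim}
import numpy as np

def power_through_cut(Ez, Hy, dy):
    """Time-averaged power through a vertical cut (per unit length in z),
    integrating S_x = -Re(Ez * conj(Hy)) / 2 along the cut."""
    Sx = -0.5 * np.real(Ez * np.conj(Hy))
    return np.trapz(Sx, dx=dy)

# Placeholder samples: Ez1, Hy1 on plane 1 (reflected side), Ez2, Hy2 on
# plane 2 (transmitted side); in practice these come from the field solver.
rng = np.random.default_rng(0)
Ez1, Hy1 = 0.1 * rng.standard_normal(200), 0.1 * rng.standard_normal(200)
Ez2, Hy2 = rng.standard_normal(200), rng.standard_normal(200)

U1 = abs(power_through_cut(Ez1, Hy1, dy=1e-4))
U2 = abs(power_through_cut(Ez2, Hy2, dy=1e-4))
print("backscattering ratio:", U1 / (U1 + U2))
\end{verbatim}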
\begin{figure}[!tb]
\centering
\includegraphics[width=\columnwidth]{modes_Ez_pert1.eps}
\caption{The simulated electric fields (real-space distribution) and the time-averaged Poynting vectors when a cylinder next to the line defect is removed from (a) the proposed topological waveguide at $f=8.46$~GHz and (b) a conventional PC waveguide composed of a square lattice of cylinders at $f=7.95$~GHz. The parameters for the conventional PC: dielectric constant $\epsilon=11.7$, radius $r=2$~mm, and the lattice constant is $12$~mm.}
\label{Ez_pert}
\end{figure}
\section{Generalized Topological Line-defect States}
The number and frequencies of the line-defect states depend on the width of the air gap. When the gap size decreases, the third band in the band gap is pushed up to the higher bulk states. When the gap size increases, the first band in the band gap is pulled down to the lower bulk states. Hence, the line-defect states are different from edge states that obey the bulk-edge correspondence~\cite{graf2013bulk-edge}. Meanwhile, the key features of the states remain unchanged, such as the half-cycle orbits of the Poynting vector. However, when the gap becomes infinitely large, the edge states extend into the air and no wave-guiding channel can be formed. The topological waveguide discussed so far is symmetric about the $x$ axis, which is a special case. In fact, the air gap can be inserted into a topologically nontrivial PC at other locations, and more generally, the cut can even pass through the cylinders.
In Fig.~\ref{new}, a $60$-degree air bend with a width of $P_y/2$ is inserted into the bulk topologically nontrivial PC. Since the symmetry of the structure changes, the supported line-defect states become asymmetric. This difference in symmetry, however, does not change the key features of the topological line-defect states. The new line-defect states still have non-zero group velocity with inter-supercell energy transfer. Meanwhile, the Poynting vector rotates along half-cycle orbits in the PC at the edges. Importantly, we observe the unidirectional propagation of this state even around sharp bends. The backscattering ratio, calculated from the energy transmitted through planes $1$ and $2$, is $8.3\%$.
\begin{figure}[!tb]
\centering
\includegraphics[width=\columnwidth]{modes_Ez_bend.eps}
\caption{Demonstration of the unidirectional propagation of the topological line-defect state in a bending gap. The plotted frequency is $7.96$~GHz. The width of the air gap is $P_y/2$. The two red stars mark the positions of the two sources carrying OAM.}
\label{new}
\end{figure}
\section{Conclusion}
In summary, we proposed a topological waveguide that supports pseudospin-polarized propagating modes. Unlike existing designs that use two PCs with trivial and nontrivial topologies, our design contains only a topologically nontrivial PC with an inserted air gap. The topological line-defect states supported by this structure result from the coupling between the edge states on the upper and lower edges of the line defect. The FD supercell approach was established to calculate the band structures and analyze the field polarization and phase. Moreover, the topological line-defect states are successfully excited by using a pair of sources possessing the same symmetry as the eigenfields, and they are found to be immune to disorders in the structure. The demonstration is at microwave frequencies but can be scaled to the optical regime.
\appendices
\section{Eigenvalue analysis using FD method}
First, the supercell in Fig.~\ref{cylinders} is discretized into a grid as shown in Fig.~\ref{SI}. $\phi_{m,n}$ ($m= 1,2,3,...N_y,N_y+1; n=1,2,3,...N_x,N_x+1$) denotes the electric field $E_z$ at each sampling point. Due to the periodicity, each supercell has $N_x\times N_y$ unknowns, i.e. $\phi_{m,n}, m= 1,2,3,...N_y; n=1,2,3,...N_x$.
\begin{figure}[!tb]
\centering
\includegraphics[width=\columnwidth]{grid.eps}
\caption{The supercell with FD grids.}
\label{SI}
\end{figure}
To simulate the curved boundaries of the dielectric cylinders, and considering that the tangential $E_z$ component is continuous across the air-dielectric interface, we calculate the permittivity at each grid point by averaging the permittivity at its four surrounding points that are half a grid spacing away along both the $x$ and $y$ directions:
\begin{equation}
\begin{aligned}
\bar \epsilon = \epsilon_{m,n}=& \frac{1}{4}(\epsilon_{m-0.5,n-0.5}+\epsilon_{m+0.5,n-0.5}\\
&+\epsilon_{m-0.5,n+0.5}+\epsilon_{m+0.5,n+0.5}).
\end{aligned}
\end{equation}
Then, the differential eigenvalue equation~\eqref{master} at grid point $(m,n)$ is rewritten using the FD approximation:
\begin{equation}
\begin{aligned}
&\frac{1}{\epsilon_{m,n}}\frac{\phi_{m,n+1}+\phi_{m,n-1}-2\phi_{m,n}}{\Delta x^2}+\\
&\frac{1}{\epsilon_{m,n}}\frac{\phi_{m+1,n}+\phi_{m-1,n}-2\phi_{m,n}}{\Delta y^2}=k_0^2\phi_{m,n}.
\end{aligned}
\label{FD}
\end{equation}
Grid points that fall outside the set of unknowns are handled by the Bloch boundary conditions, i.e.
\begin{equation}
\phi_{m,n}=\phi_{m,n \pm N_x}e^{j \mp k_xP_x}, \quad \phi_{m,n}=\phi_{m \pm N_y,n}e^{j \mp k_yP_y},
\end{equation}
where $k_x$ and $k_y$ are the Bloch wave numbers.
Finally, the differential eigenvalue equation is recast into a matrix form,
\begin{equation}
\begin{aligned}
M\Phi=k_0^2\Phi, \quad \Phi=&(\phi_{11} ~\phi_{12} ~...~ \phi_{1N_x}~ \phi_{21} ~\phi_{22} ~...~ \phi_{2N_x}~...\\
&~\phi_{N_y1} ~\phi_{N_y2} ~...~ \phi_{N_yN_x})^T,
\end{aligned}
\end{equation}
where $M$ is a sparse matrix. The above can be solved by a standard eigenvalue solver in MATLAB.
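For concreteness, the following minimal Python sketch implements the procedure above and sweeps $k_x$ at $k_y=0$; the hexagon orientation, grid resolution and solver settings are illustrative choices rather than the exact values used for the figures.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

R, rc, eps_cyl = 0.006, 0.002, 11.7       # hexagon side, cylinder radius (m)
a0 = 2.8 * R                              # topologically nontrivial case
Px, Py = a0, np.sqrt(3.0) * a0            # supercell periods
Nx, Ny = 48, 84                           # grid resolution (illustrative)
dx, dy = Px / Nx, Py / Ny

# Two hexagonal clusters per rectangular supercell (triangular lattice);
# the hexagon orientation is an illustrative choice.
centres = [(hx + R * np.cos(np.pi * k / 3), hy + R * np.sin(np.pi * k / 3))
           for hx, hy in [(0.0, 0.0), (Px / 2, Py / 2)] for k in range(6)]

def eps_at(x, y):
    """Pointwise permittivity, with periodic (minimum-image) wrap-around."""
    for cx, cy in centres:
        ex = (x - cx + Px / 2) % Px - Px / 2
        ey = (y - cy + Py / 2) % Py - Py / 2
        if ex * ex + ey * ey <= rc * rc:
            return eps_cyl
    return 1.0

# Four-point permittivity average at every grid point, as described above.
xs, ys = (np.arange(Nx) + 0.5) * dx, (np.arange(Ny) + 0.5) * dy
eps = np.array([[0.25 * sum(eps_at(x + sx * dx / 2, y + sy * dy / 2)
                            for sx in (-1, 1) for sy in (-1, 1))
                 for x in xs] for y in ys])

def build_M(kx, ky):
    """Sparse FD matrix with the Bloch boundary conditions built in."""
    idx = lambda m, n: m * Nx + n
    rows, cols, vals = [], [], []
    for m in range(Ny):
        for n in range(Nx):
            i, e = idx(m, n), eps[m, n]
            rows.append(i); cols.append(i)
            vals.append(2 / (e * dx ** 2) + 2 / (e * dy ** 2))
            for (mm, nn), w in (((m, n - 1), 1 / (e * dx ** 2)),
                                ((m, n + 1), 1 / (e * dx ** 2)),
                                ((m - 1, n), 1 / (e * dy ** 2)),
                                ((m + 1, n), 1 / (e * dy ** 2))):
                ph = 1.0 + 0.0j                  # Bloch phase on wrap-around
                if nn < 0:   nn += Nx; ph *= np.exp(-1j * kx * Px)
                if nn >= Nx: nn -= Nx; ph *= np.exp(+1j * kx * Px)
                if mm < 0:   mm += Ny; ph *= np.exp(-1j * ky * Py)
                if mm >= Ny: mm -= Ny; ph *= np.exp(+1j * ky * Py)
                rows.append(i); cols.append(idx(mm, nn)); vals.append(-w * ph)
    N = Nx * Ny
    return sp.csr_matrix((vals, (rows, cols)), shape=(N, N))

c0 = 2.998e8
for kx in np.linspace(0.0, np.pi / Px, 5):       # Gamma -> X sweep, ky = 0
    k0sq = spla.eigs(build_M(kx, 0.0), k=6, sigma=-1.0,
                     return_eigenvectors=False)  # smallest eigenvalues k0^2
    f = np.sort(c0 * np.sqrt(np.abs(k0sq.real)) / (2 * np.pi))
    print(kx, f / 1e9, "GHz")
\end{verbatim}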
\section*{Acknowledgement}
This work was supported in part by the Research Grants Council of Hong Kong GRF 17209918, AOARD FA2386-17-1-0010, NSFC 61271158, HKU Seed Fund 201711159228, and Thousand Talents Program for Distinguished Young Scholars of China.
\bibliographystyle{IEEEtran}
|
2,869,038,155,086 | arxiv | \section{Introduction}
The main contribution of this paper is a theorem, called the substate
theorem; it states, roughly, that if the relative entropy, $S(\rho \|
\sigma) := \mbox{{\rm Tr} } \rho (\log \rho -\log \sigma)$, of two quantum states
$\rho$ and $\sigma$ is at most $c$,
then there is a state $\rho'$ close to $\rho$ such that $\rho'/2^{O(c)}$
{\em sits inside} $\sigma$. This implies that, as we will formalise later,
state $\sigma$ can `masquerade' as state $\rho$ with probability
$2^{-O(c)}$ in many situations. Before we
discuss the substate theorem, let us first see a setting in which it is
applied in order to get some motivation. This application concerns
the trade-off in privacy in
two-party quantum communication protocols for the set membership
problem~\cite{miltersen:roundelim}. After that, we discuss the
substate theorem
proper followed by a brief description of several subsequent applications
of the theorem.
\subsection{The set membership problem}
\newcommand{\mathsf{SetMemb}}{\mathsf{SetMemb}}
\begin{definition}
In the {\em set membership problem} $\mathsf{SetMemb}_n$, Alice is given a subset $A
\subseteq [n]$ and Bob an element $i \in [n]$. The two parties are
required to exchange messages according to a fixed protocol in order
for the last recipient of a message to determine if $i \in [n]$. We
often think of Alice's input as a
string $x \in \{0,1\}^n$ which we view as the characteristic vector of
the set $A$; the protocol
requires that in the end the last recipient output $x_i$. In this
viewpoint, Bob's input $i$ is called an {\em index} and the set
membership problem is called the {\em index function problem}.
\end{definition}
The set membership problem is a fundamental problem in communication
complexity. In the classical setting, it was studied by Miltersen,
Nisan, Safra and Wigderson~\cite{miltersen:roundelim}, who showed that
if Bob sends a total of at most $b$ bits, then Alice must send
$n/2^{O(b)}$ bits. Note that this is optimal up to constants, as there
is a trivial protocol where Bob sends the first $b$ bits of his index
to Alice, and Alice replies by sending the corresponding part of her
bit string. The proof of Miltersen {\em et al.} relied on the {\em
richness technique} they developed to analyse such protocols. However,
here is a simple round-elimination argument that gives this lower
bound, and as we will see below, this argument generalises to the
quantum setting. Fix a protocol where Bob sends a total of at most
$b$ bits, perhaps spread over several rounds. We can assume without
loss of generality that Bob is the last recipient of a message, otherwise
we can augment the protocol by making Alice send the answer to Bob at
the end, which increases Alice's communication cost by one bit.
Modify this protocol as
follows. In the new protocol, Alice and Bob use shared randomness to
guess all the messages of Bob. Alice sends her responses based on this
guess. After this, if Bob finds that the guessed messages are exactly
what he wanted to send anyway, he accepts the answer given by the
original protocol; otherwise, he aborts the protocol. Thus, if the
original protocol was correct with probability $p$, the new one-round
protocol, when it does not abort, which happens with probability at
least $2^{-b}$, is correct with probability at least $p$. A
standard information theoretic argument of Gavinsky, Kempe, Regev
and de~Wolf~\cite{deWolfKempe} now shows
that in any such protocol, Alice must send $2^{-b} \cdot n (1-H(p))$ bits.
In the quantum setting, a special case of the set membership problem
was studied by Ambainis, Nayak, Ta-Shma and Vazirani~\cite{ANTV:index},
where Bob is not allowed to send any message and there is no prior
entanglement between Alice and Bob.
They referred to this as {\em quantum random access codes}, because in
this setting the problem can be thought of as Alice encoding $n$ classical
bits $x$ using qubits in such a way that Bob is able to determine any
one $x_i$ with probability at least $p \geq \frac{1}{2}$. Note that in
the quantum setting, unlike in its classical counterpart, it is
conceivable that the measurement needed to determine $x_i$ makes the
state unsuitable for determining any of the other bits $x_j$. In fact,
Ambainis {\em et al.} exhibit a quantum random access code encoding
two classical bits $(x_1, x_2)$ into one qubit such that any single
bit $x_i$ can be recovered with probability strictly greater than $1/2$,
which is impossible classically.
Their main result, however, was that any such quantum code
must have at least $n(1-H(p))$ qubits. They also gave a classical
code with encoding length $n(1-H(p)) + O(\log n)$, thus showing that
quantum random access codes provide no substantial improvement over
classical random access codes.
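This two-into-one encoding is easy to verify numerically. The following Python sketch uses one standard realisation (pure states with Bloch vector $((-1)^{x_2},0,(-1)^{x_1})/\sqrt{2}$, chosen here for concreteness); either bit is recovered with probability $\cos^2(\pi/8)\approx 0.854$.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def encode(x1, x2):
    """Pure qubit state with Bloch vector ((-1)^x2, 0, (-1)^x1)/sqrt(2)."""
    n = ((-1) ** x2 * sx + (-1) ** x1 * sz) / np.sqrt(2)
    return (I2 + n) / 2

# Projectors onto outcome 0 of the Z and X measurements, used to decode
# the first and second bit respectively.
P0z, P0x = (I2 + sz) / 2, (I2 + sx) / 2

for x1 in (0, 1):
    for x2 in (0, 1):
        rho = encode(x1, x2)
        p1 = np.trace(rho @ (P0z if x1 == 0 else I2 - P0z)).real
        p2 = np.trace(rho @ (P0x if x2 == 0 else I2 - P0x)).real
        print(x1, x2, round(p1, 3), round(p2, 3))   # both ~0.854
\end{verbatim}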
In this paper, we study the general set membership problem,
where Alice and Bob are allowed to exchange quantum messages over several
rounds as well as share prior entanglement.
Ashwin Nayak (private communication) observed that the
classical round elimination argument described above is applicable in
the quantum setting: if Alice and Bob share prior entanglement in the
form of EPR
pairs, then using quantum teleportation~\cite{bennett:teleportation},
Bob's messages can be assumed to be classical. Now, Alice can guess
Bob's messages, and we can combine the classical round elimination
argument above with the results on random access codes to show that
Alice must send at least $2^{-(2 b + 1)} \cdot n (1-H(p))$ qubits to Bob.
We strengthen these results and show that this trade-off between the
communication required of Alice and Bob is in fact a trade-off in their
privacy: if a protocol has the property that Bob `leaks' only a
small number of bits of {\em information} about his input, then in that
protocol Alice must leak a large amount of information about her
input; in particular, she must send a large number of
qubits. Before we present our result, let us explain what we mean when
we say that Bob leaks only a small number of bits of information about
his input. Fix a protocol for set membership. Assume that
Bob's input $J$ is a random element of $[n]$. Suppose Bob operates
faithfully according to the protocol, but Alice deviates from it and
manages to get her registers, say $A$, entangled with $J$: we say that
Bob leaks only $b$ bits of information about his input if the mutual
information between $J$ and $A$, $I(J : A)$, is at most $b$. This must
hold for all strategies adopted by Alice. Note that we do not assume
that Bob's messages contain only $b$ qubits, they can be arbitrarily
long. In the quantum setting, Alice has a big bag of tricks she can
use in order to extract information from Bob. See
Section~\ref{subsec:privacy} for an example of a cheating strategy for
Alice, that exploits Alice's ability to perform quantum operations.
We show the following result.
\begin{result}[informal statement]
\label{res:privacy}
If there is a quantum protocol for the set membership problem where
Bob leaks only $b$ bits of information about his input $J$, then Alice
must leak $\Omega(n/2^{O(b)})$ bits of information about her input
$x$. In particular, this implies that Alice must send $n/2^{O(b)}$
qubits.
\end{result}
\paragraph{Related work:}
One can compare this with work on private information
retrieval~\cite{chor:pir}. There, one requires that the party holding
the database $x$ know nothing about the index $i$.
Nayak~\cite{nayak:index} sketched an argument
showing that in both classical and quantum settings, the party
holding the database
has to send $\Omega(n)$ bits/qubits to the party holding the index.
Result~\ref{res:privacy}
generalises Nayak's argument and shows a trade-off between the loss in
privacy for the database user Bob, and the loss in privacy for the
database server Alice.
Recently, Klauck~\cite{klauck:privacy} studied privacy in quantum
protocols. In Klauck's setting, two players collaborate to compute a
function, but at any point, one of the players might decide to
terminate the protocol and try to infer something about the input of
the other player using the bits in his possession. The players are
{\em honest but curious}: in a sense, they don't deviate from the
protocol in any way other than, perhaps, by stopping early. In this
model, Klauck shows that there is a protocol for the {\em set
disjointness} function where neither player reveals more than $O((\log
n)^2)$ bits of information about his input, whereas in every classical
protocol, at least one of the players leaks $\Omega(\sqrt{n}/\log n)$
bits of information about his input.
Our model of privacy is more stringent. We allow malicious players who
can deviate arbitrarily from the protocol. An immediate corollary of
our result is that for the set membership problem, one of the players
must leak $\Omega(\log n)$ bits of information. This implies a similar
loss in privacy for several other problems, including the set
disjointness problem.
\paragraph{Privacy trade-off and the substate theorem:}
We now briefly motivate the need for the substate theorem in showing
the privacy trade-off in Result~\ref{res:privacy} above.
We know from the communication trade-off argument for set membership
presented above that in any protocol for the problem, if Bob sends only
$b$ qubits, then Alice must send $n/2^{O(b)}$ qubits. Unfortunately,
this argument is not applicable when the protocol does not promise
that Bob sends only $b$ qubits, but only ensures that the number of
bits of information Bob leaks is at most $b$. So, the assumption is
weaker. On the other hand, the conclusion now is stronger, for it
asserts that Alice must leak $n/2^{O(b)}$ bits of information, which
implies that she must send at least these many qubits. The above
argument relied on the fact that Alice could generate a distribution
on messages, so that every potential message of Bob is
well-represented in this distribution: if Bob's messages are classical
and $b$ bits long, the uniform distribution is such a
distribution---each $b$ bit message appears in it with probability
$2^{-b}$. Note that we are not assuming that messages of Bob have at most
$b$ qubits, so Alice cannot guess these messages in this manner.
Nevertheless, using only the assumption that Bob leaks at most $b$
bits of information about his input, the substate theorem provides us
an alternative for the uniform distribution. It allows us to prove
the existence of a single quantum state that Alice and Bob can
generate without access to Bob's
input, after which if Bob is provided the input $i$, he can obtain the
correct final state with probability at least $2^{-O(b)}$ or abort if
he cannot. After this, a quantum information theoretic argument of
Gavinsky, Kempe, Regev and de Wolf~\cite{deWolfKempe} implies that Alice
must leak at least $n/2^{O(b)}$ bits of information about her input.
The proof is discussed in detail in Section~\ref{sec:tradeoffs}.
\subsection{The substate theorem}
\label{subsec:introsubstate}
It will be helpful to first consider the classical analogue of the
substate theorem. Let $P$ and $Q$ be probability distributions on the
set $[n]$ such that their relative entropy is bounded by $c$, that is
\begin{equation}
\label{eq:defS}
S(P \| Q) := \sum_{i\in [n]} P(i) \log_2 \frac{P(i)}{Q(i)}
~\leq~c
\end{equation}
When $c$ is small, this implies that $P$ and $Q$ are close to each other
in {\em total variation distance}; indeed, one can show that
(see e.g.~\cite[Lemma~12.6.1]{cover:infotheory})
\begin{equation}
\label{eq:SversusTr}
\totvar{P - Q} := \sum_{i\in [n]} |P(i)-Q(i)| ~\leq~
\sqrt{(2 \ln 2)c}.
\end{equation}
That is, the probability of an event ${\cal E}
\subseteq [n]$ in $P$ is close to its probability in $Q$:
$|P({\cal E})-Q({\cal E})| \leq \sqrt{(c\ln 2)/2} $. Now consider the
situation when $c \gg 1$. In that case, expression~(\ref{eq:SversusTr})
becomes
weak, and it is not hard to construct examples where $\totvar{P - Q}$
is very close to $2$. Thus by bounding $\totvar{P - Q}$ alone, we
cannot infer that an event ${\cal E}$ with probability $3/4$ in
$P$ has any non-zero probability in $Q$. But is it true that when $S(P
\| Q) < +\infty$ and $P({\cal E}) > 0$, then $Q( {\cal E}) > 0$? Yes! To
see this, let us reinterpret the expression in (\ref{eq:defS}) as the
expectation of $\log P(i)/Q(i)$ as $i$ is chosen according to
$P$. Thus, one is led to believe that if $S(P\|Q)
\leq c < +\infty$,
then $\log P(i)/Q(i)$ is typically bounded by $c$, that is,
$P(i)/Q(i)$ is typically
bounded by $2^c$. One can formalise this intuition and
show, for all $r \geq 1$,
\begin{equation}
\label{eq:weaksubstate}
\Pr_{i\in P}\left[\frac{P(i)}{Q(i)} > 2^{r(c+1)}\right] < \frac{1}{r}.
\end{equation}
We now briefly sketch a proof of the above inequality.
Let ${\sf Good} := \{i: P(i)/2^{r(c+1)} \leq Q(i)\}$,
${\sf Bad} := [n] \setminus {\sf Good}$. By concavity of
the logarithm function, we get
\[
P({\sf Good}) \log \frac{P({\sf Good})}{Q({\sf Good})} +
P({\sf Bad}) \log \frac{P({\sf Bad})}{Q({\sf Bad})}
\leq S(P \| Q) \leq c.
\]
By elementary calculus,
$P({\sf Good}) \log \frac{P({\sf Good})}{Q({\sf Good})} > -1$.
Thus we get $P({\sf Bad}) \cdot r (c+1) < c+1$, proving the
above inequality.
We now define a new probability distribution $P'$ as follows:
\[
P'(i) :=
\left\{
\begin{array}{l l}
\frac{P(i)}{P({\sf Good})} & i \in {\sf Good} \\
0 & i \in {\sf Bad}
\end{array}
\right.,
\]
that is, in $P'$ we just discard the bad values of $i$
and renormalise. Now, $\frac{r-1}{r 2^{r(c+1)}}P'$ is dominated by $Q$
everywhere. We have thus shown the classical analogue of the
desired substate theorem.
\paragraph{Result~\ref{res:substate}' (Classical substate theorem)}
{\em
Let $P, Q$ be probability distributions on the same sample space with
$S(P \| Q)\leq c$. Then for all $r > 1$, there exist distributions
$P', P''$ such that $\totvar{P-P'} \leq \frac{2}{r}$ and
$Q = \alpha P' + (1-\alpha) P''$,
where $\alpha := \frac{r-1}{r 2^{r(c+1)}}$.
}
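Since the construction of $P'$ above is completely explicit, the claimed properties can be checked directly. The following Python sketch does this for an arbitrary randomly generated pair $P, Q$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(50))          # random distributions on [50]
Q = rng.dirichlet(np.ones(50))
c = np.sum(P * np.log2(P / Q))          # relative entropy S(P||Q)
r = 4.0

good = P / Q <= 2 ** (r * (c + 1))      # the set Good
Pp = np.where(good, P, 0.0)             # discard Bad and renormalise
Pp /= Pp.sum()
alpha = (r - 1) / (r * 2 ** (r * (c + 1)))

print("S(P||Q) =", c)
print("||P - P'||_1 =", np.abs(P - Pp).sum(), "<= 2/r =", 2 / r)
print("alpha*P' <= Q everywhere:", np.all(alpha * Pp <= Q + 1e-15))
\end{verbatim}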
Let us return to our event ${\cal E}$ that occurred with some small
probability $p$ in $P$. Now, if we take $r$ to be $2/p$, then ${\cal E}$
occurs with probability at least $p/2$ in $P'$, and hence appears with
probability $p/2^{O(c/p)}$ in $Q$. Thus, we have shown that even though
$P$ and $Q$ are far apart as distributions, events that have positive
probability, no matter how small, in $P$, continue to have positive
probability in $Q$.
The main contribution of this paper is a quantum analogue of
Result~\ref{res:substate}'. To state it, we recall that the
relative entropy of two quantum states $\rho, \sigma$ in the same
Hilbert space is defined as
$S(\rho \| \sigma) := \mbox{{\rm Tr} } \rho (\log \rho - \log \sigma)$, and
the {\em trace distance} between them is defined as
$\trnorm{\rho - \sigma} := \mbox{{\rm Tr} } \sqrt{(\rho - \sigma)^2}$.
\begin{result}[Quantum substate theorem]
\label{res:substate}
Suppose $\rho$ and $\sigma$ are
quantum states in the same Hilbert space with $S(\rho \| \sigma) \leq c$.
Then for all $r >1$, there
exist states $\rho', \rho''$ such that
$\trnorm{\rho -\rho'} \leq \frac{2}{\sqrt{r}}$ and
$\sigma = \alpha \rho' + (1-\alpha)\rho''$,
where $\alpha := \frac{r-1}{r 2^{rc'}}$ and
$c' := c + 4 \sqrt{c+2} + 2 \log (c+2) + 5$.
\end{result}
The quantum substate theorem has been stated above in a form that brings
out the analogy with
the classical statement in Result~\ref{res:substate}'. In
Section~\ref{sec:substate}, we have
a more nuanced statement which is often better suited for applications.
\paragraph{Remark:}
Using the quantum substate theorem and arguing as above, one can conclude
that if an event ${\cal E}$ has probability $p$ in $\rho$, then its
probability $q$ in $\sigma$ satisfies $q \geq \frac{p}{2^{O(c/p^2)}}$, where
$c = S(\rho \| \sigma)$. Actually, one can show the stronger result
that $q \geq \frac{p}{2^{O(c/p)}}$ as follows.
Using the fact that relative
entropy cannot increase after doing a measurement, we get
\[
p \log \frac{p}{q} + (1-p) \log \frac{1-p}{1-q} \leq S(\rho \| \sigma)
\leq c.
\]
We now argue as in the proof of Result~\ref{res:substate}' to show the
stronger lower bound on $q$.
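To get a quantitative feel for this bound, one can solve the constraint numerically. The following Python sketch finds, by bisection, the smallest $q$ consistent with the inequality above for given $p$ and $c$ (the choice $p = 0.75$ is arbitrary), and compares it with the analytic lower bound $p\,2^{-(c+1)/p}$ obtained by bounding the second term below by $-1$:
\begin{verbatim}
import math

def d(p, q):
    """Binary relative entropy, base 2."""
    return p * math.log2(p / q) + (1 - p) * math.log2((1 - p) / (1 - q))

def min_q(p, c, iters=200):
    """Smallest q with d(p, q) <= c, by bisection; for q < p the function
    d(p, q) is decreasing in q, so bisection applies."""
    lo, hi = 1e-300, p
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if d(p, mid) > c else (lo, mid)
    return hi

p = 0.75
for c in (1.0, 5.0, 20.0):
    print(c, min_q(p, c), p * 2 ** (-(c + 1) / p))   # q ~ p / 2^{O(c/p)}
\end{verbatim}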
In view of this, one may wonder if there is any motivation at all in
proving a quantum substate theorem. Recall
however, that the quantum substate theorem
gives a structural relationship between $\rho$ and $\sigma$ which is useful
in many applications e.g. privacy trade-off for set membership discussed
earlier. It does not seem possible in these applications to replace
this structural relationship by considerations about the relative
probabilities of an event ${\cal E}$ in $\rho$ and $\sigma$. In our
privacy trade-off application, $\sigma$ plays the role of the
state that Alice and Bob can generate without access to Bob's input,
and $\rho$ plays the role of the correct final state of Bob in the
protocol. To prove the trade-off, $\sigma$ should be able to `masquerade'
as $\rho$ with probability $2^{-O(b)}$, $b$ being the amount of
information Bob leaks about his input. Also, Bob should {\em know} whether
the `masquerade' succeeded or not so that he can abort if it fails, and
it is this requirement that needs the substate property.
\bigskip
The ideas used to arrive at Result~\ref{res:substate}' do not
immediately generalise to prove Result~\ref{res:substate}, because
$\rho$ and
$\sigma$ need not be simultaneously diagonalisable. As it turns out,
our proof of the quantum substate theorem takes an indirect route.
First, by
exploiting the Fuchs and Caves~\cite{fuchs:fidelity} characterisation
of fidelity and a minimax theorem of game theory, we obtain a
`lifting' theorem about an `observational' version of relative
entropy; this statement is interesting on its own. Using this
`lifting' theorem, and a connection between the `observational'
version of relative entropy and actual relative entropy, we argue that
it is enough to verify the original statement when $\rho$ and $\sigma$
reside in a two-dimensional space and $\rho$ is a pure state. The two
dimensional case is then established by a direct computation.
\subsection{Other applications of the substate theorem}
The conference version of this paper~\cite{jain:substate}, in which
the substate theorem was first announced, described two
applications of the theorem.
The first application provided tight privacy trade-offs
for the set membership problem, which we have discussed above. This
application is a good illustration of the use of the substate theorem,
for several applications have the same structure. The second
application showed tight lower bounds for the {\em pointer chasing
problem}~\cite{NisanWigderson:ptr, KNTZ:ptr}, thereby establishing that the
lower bounds shown by Ponzio, Radhakrishnan and Venkatesh~\cite{ponzio:ptr}
in the
classical setting are valid also for quantum protocols without prior
entanglement.
Subsequent to \cite{jain:substate}, several applications of the
classical and quantum substate theorems
have been discovered. We briefly describe these results
now. Earlier, in related but independent work, Chakrabarti, Shi, Wirth and
Yao~\cite{chakrabarti:icost}
discovered their very influential {\em information cost}
approach for obtaining {\em direct sum} results in communication
complexity. Jain, Radhakrishnan and Sen~\cite{jain:directsum}
observed that the
arguments used by Chakrabarti {\em et al.} could be derived more
systematically using the classical substate theorem; this approach
allowed them to extend Chakrabarti {\em et al.}'s direct sum results,
which applied only to one-round
and simultaneous message protocols under product distributions on inputs,
to two-party
multiple round protocols under product distributions on inputs. Ideas
from \cite{jain:directsum} were then applied by Chakrabarti and
Regev~\cite{chakrabarti:ann} to
obtain their tight lower bound on data structures for the {\em approximate
nearest neighbour problem} on the Hamming cube.
The quantum substate theorem, the main result of this
paper, has also found several other applications. Jain, Radhakrishnan
and Sen~\cite{jain:entangred} used it to show how any two-party
multiple round quantum protocol where Alice leaks only $a$
bits of information about her input and Bob leaks only $b$ bits of
information about his, can be transformed to a one-round quantum protocol
with prior entanglement
where Alice transmits just $a2^{O(b)}$ bits to Bob.
Note that plain Schumacher compression~\cite{schumacher}
cannot be used to prove such a
result, since we require a `one-shot' as opposed to an asymptotic result,
there can be interaction in a general communication protocol, as well
as the case that the reduced state of any single party can be mixed.
Jain {\em et al.}'s compression result gives an alternative
proof of Result~\ref{res:privacy}, because the work of
Ambainis {\em et al.}~\cite{ANTV:index} implies that
in any such protocol for set
membership Alice must send $\Omega(n)$ bits to Bob.
Jain {\em et al.} also used the classical and quantum substate
theorems to prove worst case direct sum results for simultaneous message
and one round classical and quantum protocols, improving on
\cite{jain:directsum}.
More recently, using the quantum substate theorem
Jain~\cite{jain:remote} obtained a nearly tight characterisation of
the communication complexity of {\em remote state preparation}, an area
that has received considerable attention lately. The substate theorem
has also found application in the study of quantum cryptographic
protocols: using it, Jain~\cite{jain:quantumstring}
showed nearly tight bounds on the {\em binding-concealing} trade-offs
for {\em quantum string commitment} schemes.
\subsection{Organisation of the rest of the paper}
In the next section, we
recall some basic facts from classical and quantum information theory
that will be used in the rest of the paper.
In Section~\ref{sec:tradeoffs}, we
formally define our model of privacy loss in quantum communication
protocols and prove our privacy trade-off result for set membership
assuming the substate theorem. In
Section~\ref{sec:substate}, we give the actual statement of the
substate theorem that is used in our privacy trade-offs, and a
complete proof for it. Sections~\ref{sec:tradeoffs} and \ref{sec:substate}
may be read independently of each other.
In Section~\ref{sec:conclusion} we mention some open
problems, and finally in the appendix we discuss relationships between
various information theoretic quantities that arise naturally in the
context of the substate theorem. The appendix may be read independently
of Section~\ref{sec:tradeoffs}.
\section{Information theory background}
We now recall some basic definitions and facts from classical
and quantum information
theory, which will be useful later.
For excellent introductions to classical and quantum information
theory, see the books by Cover and Thomas~\cite{cover:infotheory}
and Nielsen and Chuang~\cite{nielsen:quant} respectively.
In this paper, all functions will have finite domains and ranges,
all sample spaces will be finite, all random variables
will have finite range and all Hilbert spaces
finite dimensional. All logarithms are taken to base two.
We start off by recalling the definition of a quantum state.
\begin{definition}[Quantum state]
A quantum state or a density matrix in a Hilbert space ${\cal H}$ is a
Hermitian, positive semidefinite operator on ${\cal H}$ with unit trace.
\end{definition}
Note that a classical probability distribution can be thought of as a
special case of a quantum state with diagonal density matrix.
An important class of quantum states are what are known as {\em pure}
states, which are states
of the form $\ketbra{\psi}$, where $\ket{\psi}$ is a unit vector in
${\cal H}$. Often, we abuse notation and refer to $\ket{\psi}$ itself
as the pure quantum state; note that this notation is ambiguous up to
a multiplicative unit complex number.
Let ${\cal H}, {\cal K}$ be two Hilbert spaces and $\omega$ a quantum state in
the {\em bipartite system} ${\cal H} \otimes {\cal K}$. The {\em reduced} quantum
state of ${\cal H}$ is given by {\em tracing out} ${\cal K}$, also
known as the {\em partial trace}
$
\parttr{{\cal K}} \omega :=
\sum_k (\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_{{\cal H}} \otimes \bra{k}) \omega
(\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_{{\cal H}} \otimes \ket{k})
$
where $\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}_{{\cal H}}$ is the identity operator on ${\cal H}$ and
the summation is over an orthonormal basis for ${\cal K}$. It is
easy to see that the partial trace is independent of the choice of
the orthonormal basis for ${\cal K}$. For a quantum state $\rho$ in ${\cal H}$,
any quantum state $\omega$ in ${\cal H} \otimes {\cal K}$
such that $\parttr{{\cal K}} \omega = \rho$ is said to be an {\em extension}
of $\rho$ in ${\cal H} \otimes {\cal K}$; if $\omega$ is pure, it is said, more
specifically, to be a {\em purification}.
We next define a POVM element, which formalises the notion of a single
outcome of a general measurement on a quantum state.
\begin{definition}[POVM element]
A POVM (positive operator valued measure) element
$F$ on Hilbert space ${\cal H}$ is a Hermitian positive semidefinite
operator on ${\cal H}$ such that $F \leq \leavevmode\hbox{\small1\kern-3.8pt\normalsize1}$, where $\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}$ is
the identity operator on ${\cal H}$.
\end{definition}
If $\rho$ is a quantum state in ${\cal H}$, the
success probability of $\rho$ under POVM element $F$ is given by
$\mbox{{\rm Tr} } (F \rho)$.
We now define a POVM which represents the most general form of a
measurement allowed by quantum mechanics.
\begin{definition}[POVM]
A POVM ${\cal F}$ on Hilbert space ${\cal H}$ is a finite set of POVM
elements $\{F_1, \ldots, F_k\}$ on
${\cal H}$ such that $\sum_{i=1}^k F_i = \leavevmode\hbox{\small1\kern-3.8pt\normalsize1}$, where $\leavevmode\hbox{\small1\kern-3.8pt\normalsize1}$ is the
identity operator on ${\cal H}$.
\end{definition}
If $\rho$ is a quantum state in ${\cal H}$,
let ${\cal F} \rho$ denote the probability distribution
$\{p_1, \ldots, p_k\}$ on $[k]$, where $p_i := \mbox{{\rm Tr} } (F_i \rho)$.
Typically, the distance between two probability distributions $P, Q$ on the
same sample space $\Omega$ is measured in terms of the {\em total
variation distance} defined as
$\totvar{P - Q} := \sum_{i \in \Omega} |P(i) - Q(i)|$.
The quantum analogue of the total variation distance is
known as the {\em trace distance}.
\begin{definition}[Trace distance]
Let $\rho, \sigma$ be quantum states in the same Hilbert space. Their
trace distance is defined as
$\trnorm{\rho - \sigma} := \mbox{{\rm Tr} } \sqrt{(\rho - \sigma)^2}$.
\end{definition}
If we think of probability distributions as diagonal density matrices,
then the trace distance between them is nothing but their total variation
distance. For pure states $\ket{\psi}, \ket{\phi}$ it is easy to
see that their trace distance is given by
$
\trnorm{\ketbra{\psi} - \ketbra{\phi}} =
2 \sqrt{1 - |\braket{\psi}{\phi}|^2}.
$
The following fundamental fact shows that the
trace distance between two density matrices
bounds how well one can distinguish
between them by a POVM. A proof can be found in \cite{aharonov:mixed}.
\begin{fact}
\label{fact:totvartrace}
Let $\rho, \sigma$ be density matrices in the same
Hilbert space ${\cal H}$. Let ${\cal F}$ be a POVM on ${\cal H}$. Then,
$\totvar{{\cal F} \rho - {\cal F} \sigma} \leq \trnorm{\rho - \sigma}$.
Also, there is a two-outcome orthogonal measurement that achieves
equality above.
\end{fact}
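Both parts of this fact are easy to check numerically: the optimal two-outcome measurement projects onto the positive and negative eigenspaces of $\rho - \sigma$. A small Python sketch (with arbitrary randomly generated states) follows:
\begin{verbatim}
import numpy as np

def random_state(dim, rng):
    """A random density matrix."""
    A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(2)
rho, sigma = random_state(4, rng), random_state(4, rng)

w, V = np.linalg.eigh(rho - sigma)
trace_dist = np.abs(w).sum()                    # trace distance
Pplus = V[:, w > 0] @ V[:, w > 0].conj().T      # projector, positive eigenspace
F = [Pplus, np.eye(4) - Pplus]                  # two-outcome measurement
tv = sum(abs(np.trace(Fk @ (rho - sigma)).real) for Fk in F)
print(trace_dist, tv)                           # the two values coincide
\end{verbatim}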
Another measure of distinguishability between two probability distributions
$P, Q$ on the same sample space $\Omega$ is the
{\em Bhattacharya distinguishability coefficient} defined as
$B(P, Q) := \sum_{i \in \Omega} \sqrt{P(i) Q(i)}$. Its quantum
analogue is known as {\em fidelity}. We will need several facts about
fidelity in order to prove the quantum substate theorem.
\begin{definition}[Fidelity]
Let $\rho$, $\sigma$ be density matrices in the same
Hilbert space ${\cal H}$. Their fidelity is defined as
$
B(\rho, \sigma) := \mbox{{\rm Tr} } \sqrt{\sqrt{\rho} \sigma \sqrt{\rho}}.
$
\end{definition}
The fidelity, or sometimes its square, is also referred to as the
``transition probability'' of Uhlmann.
For probability distributions, the fidelity turns out to be the same
as their Bhattacharya distinguishability coefficient.
Jozsa~\cite{jozsa:fidelity}
gave an elementary proof for finite dimensional Hilbert
spaces of the following basic and remarkable property about fidelity.
\begin{fact}
\label{fact:jozsa}
Let $\rho, \sigma$ be density matrices in the same
Hilbert space ${\cal H}$. Then,
$
B(\rho, \sigma)=\sup_{{\cal K}, \ket{\psi}, \ket{\phi}} |\braket{\psi}{\phi}|,
$
where ${\cal K}$ ranges over all Hilbert spaces and
$\ket{\psi}, \ket{\phi}$ range over all purifications of
$\rho, \sigma$ respectively in ${\cal H} \otimes {\cal K}$. Also, for any
Hilbert space ${\cal K}$ such that $\dim ({\cal K}) \geq \dim ({\cal H})$, there
exist purifications
$\ket{\psi}, \ket{\phi}$ of $\rho, \sigma$ in ${\cal H} \otimes {\cal K}$,
such that $B(\rho, \sigma) = |\braket{\psi}{\phi}|$.
\end{fact}
We will also need the following fact about fidelity, proved
by Fuchs and Caves~\cite{fuchs:fidelity}.
\begin{fact}
\label{fact:fuchscaves}
Let $\rho, \sigma$ be density matrices in the same
Hilbert space ${\cal H}$. Then
$
B(\rho, \sigma) = \inf_{{\cal F}} B({\cal F} \rho, {\cal F} \sigma),
$
where ${\cal F}$ ranges over POVMs on ${\cal H}$.
In fact, the infimum above can be attained by a complete orthogonal
measurement on ${\cal H}$.
\end{fact}
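These two facts are also easy to explore numerically. The sketch below computes $B(\rho, \sigma)$ from the definition and checks, for a few random complete orthogonal measurements, that the Bhattacharya coefficient of the induced distributions never falls below it (no attempt is made to find the optimal measurement):
\begin{verbatim}
import numpy as np
from scipy.linalg import qr, sqrtm

def random_state(dim, rng):
    """A random density matrix."""
    A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(3)
dim = 4
rho, sigma = random_state(dim, rng), random_state(dim, rng)

sr = sqrtm(rho)
B = np.trace(sqrtm(sr @ sigma @ sr)).real       # fidelity B(rho, sigma)

for _ in range(5):                              # random orthonormal bases
    U, _ = qr(rng.standard_normal((dim, dim))
              + 1j * rng.standard_normal((dim, dim)))
    p = np.array([(U[:, i].conj() @ rho @ U[:, i]).real for i in range(dim)])
    q = np.array([(U[:, i].conj() @ sigma @ U[:, i]).real for i in range(dim)])
    print(B, "<=", np.sqrt(p * q).sum())        # Bhattacharya coefficient
\end{verbatim}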
The most general operation on a density matrix allowed by quantum
mechanics is what is called a {\em completely positive trace preserving
superoperator}, or superoperator for short. Let ${\cal H}, {\cal K}$ be Hilbert
spaces. A superoperator ${\cal T}$ from ${\cal H}$ to ${\cal K}$ maps quantum states
$\rho$ in ${\cal H}$ to quantum states ${\cal T} \rho$ in ${\cal K}$, and is described
by a finite
collection of linear maps $\{A_1, \ldots, A_l\}$ from ${\cal H}$ to ${\cal K}$
called {\em Kraus operators} such that,
${\cal T} \rho = \sum_{i=1}^l A_i \rho A_i^\dagger$.
Unitary transformations, taking partial traces and POVMs are special
cases of superoperators.
We will use the notation $A \geq B$ for Hermitian operators
$A, B$ in the same Hilbert space ${\cal H}$ as
a shorthand for the statement `$A - B$ is positive semidefinite'.
Thus, $A \geq 0$ denotes that $A$ is positive semidefinite.
Let $X$ be a classical random variable. Let $P$ denote the probability
distribution induced by $X$ on its range $\Omega$.
The {\em Shannon entropy} of
$X$ is defined as
$H(X) := H(P) := -\sum_{i \in \Omega} P(i) \log P(i)$.
For any $0 \leq p \leq 1$,
the {\em binary entropy} of $p$ is defined as
$H(p) := H((p, 1-p)) = -p \log p - (1-p) \log (1-p)$.
If $A$ is a quantum system
with density matrix $\rho$, then its {\em von Neumann entropy}
$S(A) := S(\rho) := -\mbox{{\rm Tr} } \rho \log \rho$. It is obvious that
the von Neumann entropy of a diagonal density matrix, i.e. of a
probability distribution, equals its Shannon entropy.
If $A, B$ are two
disjoint quantum systems, the {\em mutual information} of $A$ and $B$
is defined as $I(A : B) := S(A) + S(B) - S(AB)$; mutual information
of two random variables is defined analogously.
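Both entropies are simple functions of eigenvalues. A small Python sketch
(with illustrative states) checking the equality of von Neumann and Shannon
entropy on diagonal states, and computing the mutual information of a
classically correlated state:
\begin{verbatim}
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def von_neumann_entropy(rho):
    return shannon_entropy(np.linalg.eigvalsh(rho))

# a diagonal density matrix is just a probability distribution
p = np.array([0.5, 0.25, 0.25])
print(von_neumann_entropy(np.diag(p)), shannon_entropy(p))  # both 1.5

# I(A:B) = S(A) + S(B) - S(AB) for a classically correlated 2-qubit state
rho_AB = np.diag([0.4, 0.1, 0.1, 0.4])
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)
rho_B = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=0, axis2=2)
I_AB = (von_neumann_entropy(rho_A) + von_neumann_entropy(rho_B)
        - von_neumann_entropy(rho_AB))
print(I_AB)  # about 0.278 bits
\end{verbatim}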
By a {\em quantum encoding} $M$ of a classical random variable
$X$ on $m$ qubits, we mean that there is a bipartite quantum system with
joint density
matrix $\sum_x \Pr[X = x] \cdot \ketbra{x} \otimes \rho_x$, where the first
system is the random variable, the second system is the quantum
encoding and an $x$ in the range of $X$ is encoded by a quantum
state $\rho_x$ on $m$ qubits. The reduced state of the first system
is nothing but the probability distribution
$\sum_x \Pr[X = x] \cdot \ketbra{x}$
on the range of $X$. The reduced state of the second system is the
{\em average code word} $\rho := \sum_x \Pr[X = x] \cdot \rho_x$.
The mutual information of this encoding is given by
\[
I(X : M) = S(X) + S(M) - S(X M) =
S(\rho) - \sum_x \Pr[X = x] \cdot S(\rho_x).
\]
We now define
the {\em relative entropy} of a pair of quantum states.
\begin{definition}[Relative entropy]
\label{def:relentropy}
If $\rho, \sigma$ are quantum states in the same
Hilbert space, their {\em relative entropy}
is defined as
$S(\rho \| \sigma) := \mbox{{\rm Tr} } (\rho (\log \rho - \log \sigma))$.
\end{definition}
For probability distributions $P, Q$ on the same sample space $\Omega$,
the above definition reduces to
$S(P \| Q) = \sum_{i \in \Omega} P(i) \log \frac{P(i)}{Q(i)}$.
The following fact lists some useful properties of relative entropy.
Proofs can be found in \cite[Chapter 11]{nielsen:quant}.
The monotonicity property
below is also called {\em Lindblad-Uhlmann monotonicity}.
\begin{fact}
\label{fact:relentprop}
Let $\rho, \sigma$ be density matrices in the same
Hilbert space ${\cal H}$. Then,
\begin{enumerate}
\item $S(\rho \| \sigma) \geq 0$, with equality iff $\rho = \sigma$;
\item $S(\rho \| \sigma) < +\infty$ iff
${\rm supp}(\rho) \subseteq {\rm supp}(\sigma)$,
where ${\rm supp}(\rho)$ denotes the {\em support} of $\rho$ i.e.
the span of the eigenvectors corresponding to non-zero
eigenvalues of $\rho$;
\item $S(\cdot \| \cdot)$ is continuous in its two arguments when
it is not infinite.
\item ({\em Unitary invariance}) If $U$ is a unitary transformation on
${\cal H}$,
$S(U \rho U^\dagger \| U \sigma U^\dagger) = S(\rho \| \sigma)$.
\item ({\em Monotonicity}) Let ${\cal L}$ be a Hilbert space and ${\cal T}$
be a completely positive trace preserving
superoperator from ${\cal H}$ to ${\cal L}$. Then,
$S({\cal T} \rho \| {\cal T} \sigma) \leq S(\rho \| \sigma)$.
\end{enumerate}
\end{fact}
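A quick numerical check of Lindblad-Uhlmann monotonicity for the simplest
superoperator, the partial trace; the random two-qubit states below are
illustrative inputs:
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

def rel_entropy(rho, sigma):
    # S(rho || sigma) = Tr rho (log rho - log sigma), in bits
    return np.trace(rho @ (logm(rho) - logm(sigma))).real / np.log(2)

def ptrace_B(rho, dA, dB):
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

rng = np.random.default_rng(2)
def rand_density(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m).real

rho, sigma = rand_density(4), rand_density(4)
print(rel_entropy(ptrace_B(rho, 2, 2), ptrace_B(sigma, 2, 2))
      <= rel_entropy(rho, sigma))  # True
\end{verbatim}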
The following fact relates mutual information to relative entropy, and
is easy to prove.
\begin{fact}
\label{fact:inforelentropy}
Let $X$ be a classical random variable and
$M$ be a quantum encoding of $X$ i.e. each $x$ in the range of $X$
is encoded by a quantum state $\rho_x$. Let
$\rho := \sum_x \Pr[X = x] \cdot \rho_x$ be the average code word.
Then, $I(X:M) = \sum_x \Pr[X = x] \cdot S(\rho_x \| \rho)$.
\end{fact}
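A small Python sketch verifying this identity on an illustrative encoding of
a uniform bit; the two code words are arbitrary full-rank qubit states, so
that $\log \rho_x$ is well defined:
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log2(ev))

def rel_entropy(rho, sigma):
    return np.trace(rho @ (logm(rho) - logm(sigma))).real / np.log(2)

probs = [0.5, 0.5]
rhos = [np.array([[0.9, 0.0], [0.0, 0.1]]),
        np.array([[0.2, 0.1], [0.1, 0.8]])]
avg = sum(p * r for p, r in zip(probs, rhos))  # average code word

holevo = vn_entropy(avg) - sum(p * vn_entropy(r)
                               for p, r in zip(probs, rhos))
via_rel = sum(p * rel_entropy(r, avg) for p, r in zip(probs, rhos))
print(holevo, via_rel)  # agree up to numerical error
\end{verbatim}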
The next fact is an extension of the random access code arguments
of \cite{ANTV:index}, and was proved by Gavinsky, Kempe, Regev
and de Wolf~\cite[Lemma 1]{deWolfKempe}.
\begin{fact}
\label{fact:randomaccess}
Let $X = X_1 \cdots X_n$ be a classical random variable of $n$ uniformly
distributed bits. Let $M$ be a quantum encoding of $X$ on $m$ qubits. For
each $i \in [n]$, suppose there is a POVM ${\cal F}_i$ on $M$ with
three outcomes
$0, 1, ?$. Let $Y_i$ denote the random variable obtained by applying
${\cal F}_i$ to $M$. Suppose there are real numbers
$0 \leq \lambda_i, \epsilon_i \leq 1$ such that
$\Pr[Y_i \neq {} ?] \geq \lambda_i$ and
$\Pr[Y_i = X_i \mid Y_i \neq {} ?] \geq 1/2 + \epsilon_i$, where the
probability arises from the randomness in $X$ as well as the randomness
of the outcome of ${\cal F}_i$. Then,
\[
\sum_{i=1}^n \lambda_i \epsilon_i^2 \leq
\sum_{i=1}^n \lambda_i (1 - H(1/2 + \epsilon_i)) \leq
I(X : M) \leq m.
\]
\end{fact}
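The first inequality uses only that $1 - H(1/2 + \epsilon) \geq \epsilon^2$
for $0 \leq \epsilon \leq 1/2$, which is easy to check numerically (the grid
below is illustrative):
\begin{verbatim}
import numpy as np

def bin_entropy(p):
    q = 1 - p
    return -(p * np.log2(p) + q * np.log2(q))

eps = np.linspace(1e-6, 0.5 - 1e-6, 1000)
print(np.all(1 - bin_entropy(0.5 + eps) >= eps ** 2))  # True
\end{verbatim}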
\section{Privacy trade-offs for set membership}
\label{sec:tradeoffs}
In this section, we prove a trade-off between privacy loss of Alice
and privacy loss of Bob for the set membership
problem $\mathsf{SetMemb}_n$ assuming the substate theorem.
We then embed index function into other functions
using the concept of VC-dimension and show privacy trade-offs for some
other problems. But first, we
formally define our model of privacy loss in quantum
communication protocols.
\subsection{Quantum communication protocols}
\label{subsec:privacy}
We consider two party quantum communication
protocols as defined by Yao~\cite{yao:quantcc}. Let ${\cal X}, {\cal Y}, {\cal Z}$
be sets and
$f: {\cal X} \times {\cal Y} \rightarrow {\cal Z}$ be a function.
There are two players Alice and Bob, who hold qubits. Alice gets
an input $x \in {\cal X}$ and Bob an input $y \in {\cal Y}$.
When the communication protocol ${\cal P}$ starts, Alice and Bob each hold
some `work qubits' initialised in the state $\ket{0}$.
Alice and Bob may also share an input independent prior entanglement.
Thus, the initial superposition is simply
$\ket{x}_A \ket{0}_A \ket{\psi} \ket{0}_B \ket{y}_B$, where
$\ket{\psi}$ is a pure state providing the input independent
prior entanglement.
Here the subscripts denote
the ownership of the qubits by Alice and Bob. Some of the qubits
of $\ket{\psi}$ belong to Alice, the rest belong to Bob.
The players take turns
to communicate to compute $f(x,y)$. Suppose it is Alice's turn.
Alice can make an
arbitrary unitary transformation on her qubits depending on $x$ only
and then send some qubits to Bob.
Sending qubits does not change the overall
superposition, but rather the ownership of the qubits, allowing
Bob to apply his next unitary transformation, which depends on $y$ only,
on his original qubits
plus the newly received qubits. At the end of the protocol, the
last recipient of qubits performs a measurement in the computational
basis of some qubits in her possession to output an answer
${\cal P}(x, y)$. For each $(x,y) \in {\cal X} \times {\cal Y}$ the unitary
transformations that are applied, as well as the qubits that
are to be sent in each round, the number of rounds, the choice of the
starting player,
and the designation of which qubits are to be treated as
`answer qubits' are specified in advance by the protocol ${\cal P}$.
We say that ${\cal P}$ computes $f$ with $\epsilon$-error
in the worst case,
if $\max_{x,y} \Pr[{\cal P}(x, y) \neq f(x,y)] \leq \epsilon$.
We say that ${\cal P}$ computes $f$ with $\epsilon$-error
with respect to a probability distribution
$\mu$ on ${\cal X} \times {\cal Y}$, if
$\mbox{{\rm E} }_\mu [\Pr [{\cal P}(x, y) \neq f(x,y)]] \leq \epsilon$.
The communication complexity of ${\cal P}$ is defined to be the total number
of qubits exchanged. Note that seemingly more general models of
communication protocols can be thought of, where the parties may
apply superoperators instead of unitary transformations and may
use an arbitrary POVM, instead of a measurement in the computational
basis, to output the answer of the protocol; but such
models can be converted to the unitary model above without changing
the error probabilities, the communication complexity, and, as we will
see later, the privacy loss to a cheating party.
Given a probability distribution $\mu$ on ${\cal X} \times {\cal Y}$
we define
$
\ket{\mu} :=
\sum_{(x,y) \in {\cal X} \times {\cal Y}} \sqrt{\mu(x,y)} \, \ket{x} \ket{y}.
$
Running protocol ${\cal P}$ with superposition $\ket{\mu}$ fed to Alice's and
Bob's inputs means that we first create the state
$
\sum_{(x,y) \in {\cal X} \times {\cal Y}} \sqrt{\mu(x,y)}
\ket{x} \ket{0}_A \ket{\psi} \ket{0}_B \ket{y},
$
then feed the middle three registers to ${\cal P}$ and let ${\cal P}$ run
its course till just before applying the final measurement to determine
the answer of the protocol.
We define the success probability of ${\cal P}$ when
$\ket{\mu}$ is fed to Alice's and Bob's inputs to be the
probability that measuring the inputs and the answer qubits
in the computational basis at the end of ${\cal P}$ produces consistent results.
Similarly, running protocol ${\cal P}$
with mixture $\mu$ fed to Alice's and Bob's inputs is defined in the
straightforward fashion.
It is easy to see that the success
probability of ${\cal P}$ on superposition $\ket{\mu}$ is the same as
the success probability on mixture $\mu$, that is, the success probability
on superposition $\ket{\mu}$ is equal to
$\mbox{{\rm E} }_\mu [\Pr [{\cal P}(x, y) = f(x, y)]]$.
Now let $\mu_{{\cal X}}, \mu_{{\cal Y}}$ be probability distributions
on ${\cal X}, {\cal Y}$, and let $\mu := \mu_{{\cal X}} \times
\mu_{{\cal Y}}$ denote the product distribution on
${\cal X} \times {\cal Y}$. Let ${\cal P}$ be the prescribed {\em honest} protocol
for $f$.
Now let us suppose that Bob turns `malicious' and deviates from the
prescribed protocol ${\cal P}$ in order to learn
as much as he can about Alice's input. Note that Alice remains honest
in this scenario i.e. she continues to follow ${\cal P}$.
Thus, Alice and Bob are now actually running a `cheating'
protocol $\widetilde{{\cal P}}$.
Let registers $A, X, B, Y$ denote Alice's
work qubits, Alice's input qubits, Bob's work qubits and Bob's input
qubits respectively at the end of $\widetilde{{\cal P}}$.
The {\em privacy leakage} from Alice to
Bob in $\widetilde{{\cal P}}$ is captured by the mutual information
$\widetilde{I}(X : B Y)$
between Alice's input register and Bob's qubits in
$\widetilde{{\cal P}}$. We want to study how large $\sup
\widetilde{I}(X : B Y)$ can be for a given function
$f$, product distribution $\mu$, and protocol ${\cal P}$, where the
supremum is taken over all `cheating' protocols $\widetilde{{\cal P}}$ wherein
Bob can be arbitrarily malicious but Alice continues to follow ${\cal P}$
honestly. We shall call this quantity the {\em privacy loss} of ${\cal P}$ from
Alice to Bob. Privacy leakage and privacy loss from Bob to Alice can
be defined similarly.
One of the ways that Bob can cheat (even without Alice realising it!)
is by running ${\cal P}$ with the superposition
$
\ket{\mu_{{\cal Y}}} :=
\sum_{y \in {\cal Y}} \sqrt{\mu_{{\cal Y}}(y)} \, \ket{y}$
fed to register $Y$.
This method of cheating gives Bob at least as much
information about Alice's input as in the `honest' run of
${\cal P}$ when the mixture $\mu_{{\cal Y}}$ is fed to $Y$.
Sometimes it can give much
more. Consider the set membership problem, where
Alice has a bit string $x$ which denotes the characteristic vector
of a subset of $[n]$ and Bob has an $i \in [n]$. Consider
a {\em clean} protocol ${\cal P}$ for the index function problem.
Recall that a protocol ${\cal P}$ is said to be clean if the work
qubits of both the players except the answer qubits are in the state
$\ket{0}$ at the end of ${\cal P}$.
We shall show a privacy trade-off result for ${\cal P}$ under the uniform
distribution on the inputs of the two players.
For simplicity, assume that ${\cal P}$ is errorless (an error of
$1/4$ will only change the privacy losses by a multiplicative
constant).
Alice can cheat by feeding a uniform superposition over bit strings
into her input register $X$, and then running ${\cal P}$.
Bob is honest, and has a random $i \in [n]$.
At the end of this `cheating' run of ${\cal P}$, Alice applies a Hadamard
transformation on each of the registers
$X_j, 1 \leq j \leq n$. Suppose she were to measure them now in
the computational basis. For all $j \neq i$,
she would measure $\ket{0}$ with probability $1$.
For $j = i$, she would measure $\ket{1}$ with probability $1/2$. Thus,
Alice has extracted about $\log n / 2$ bits of information about
Bob's index $i$. An `honest' run of ${\cal P}$ would have yielded
Alice only $1$ bit of information about $i$.
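This attack is easy to simulate. The toy Python sketch below assumes that the
clean protocol leaves behind exactly the state
$2^{-n/2} \sum_x \ket{x}_X \ket{x_i}$ with all other work qubits reset; the
values of $n$ and $i$ are illustrative. After Alice's Hadamards, measuring
$X_j$ yields $1$ with probability $0$ for $j \neq i$ and $1/2$ for $j = i$:
\begin{verbatim}
import numpy as np

n, i = 4, 2
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# state 2^{-n/2} sum_x |x>_X |x_i>_answer after the clean protocol
psi = np.zeros((2,) * (n + 1))
for x in range(2 ** n):
    bits = [(x >> (n - 1 - j)) & 1 for j in range(n)]
    psi[tuple(bits) + (bits[i],)] = 2 ** (-n / 2)

# Alice applies a Hadamard to each of her input qubits X_1,...,X_n
for j in range(n):
    psi = np.moveaxis(np.tensordot(H, psi, axes=([1], [j])), 0, j)

prob = np.abs(psi) ** 2
for j in range(n):
    p1 = prob.sum(axis=tuple(k for k in range(n + 1) if k != j))[1]
    print(j, round(p1, 3))  # ~0 for j != i, 0.5 for j == i
\end{verbatim}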
Klauck~\cite{klauck:privacy}, based on
Cleve et al.~\cite{cleve:ip},
has made a similar observation about $\Omega(n)$ privacy loss for
clean protocols computing
the inner product mod $2$ function.
The significance of our lower bounds on privacy loss is
that they make {\em no assumptions} about the protocol ${\cal P}$.
We now define a
{\em superpositional privacy loss} inspired by the above example.
We consider a `cheating' run of ${\cal P}$ when mixture
$\mu_{{\cal X}}$ is fed
to register $X$ and superposition
$\ket{\mu_{{\cal Y}}}$
to register $Y$. Let $I'(X : B Y)$ denote
the mutual information of Alice's input register $X$ with
Bob's registers $B Y$ at the
end of this `cheating' run of ${\cal P}$.
\begin{definition}[Superpositional privacy loss]
\label{def:privacy}
The superpositional privacy loss of ${\cal P}$ for function $f$ on the
product distribution $\mu$ from Alice
to Bob is defined as $L^{{\cal P}}(f, \mu, A, B) := I'(X : B Y)$.
The superpositional privacy loss from Bob to Alice,
$L^{{\cal P}}(f, \mu, B, A)$,
is defined similarly.
The superpositional privacy loss of ${\cal P}$ for $f$, $L^{{\cal P}}(f)$, is
the maximum over
all product distributions $\mu$, of
$\max\{L^{{\cal P}}(f, \mu, A, B), L^{{\cal P}}(f, \mu, B, A)\}$.
\end{definition}
\paragraph{Remarks:} \ \\
1.\ \ Our notion of superpositional privacy loss can be viewed
as a quantum analogue
of the ``combinatorial-informational'' bounded error measure
of privacy loss, $I_{{\rm c-i}}^\ast$, in Bar-Yehuda
et al.~\cite{baryehuda:privacy}. \\
2.\ \ In \cite{klauck:privacy}, Klauck defines
a similar notion of privacy loss. In his definition,
a mixture according to
distribution $\mu$ (not necessarily a product distribution)
is fed to both Alice's and Bob's input registers. He does
not consider the case of superpositions being fed to input registers.
For product distributions, our notion of privacy is
more stringent than
Klauck's, and in fact, the $L^{{\cal P}}(f, \mu, A, B)$ defined above is
an upper bound (to within an additive factor of $\log |{\cal Z}|$) on Klauck's
privacy loss function. \\
3.\ \ We restrict ourselves to product distributions because we
allow Bob to cheat by putting a superposition in his input register
$Y$. He should be able to do this
without any {\em a priori} knowledge of $x$, which implies that
the distribution $\mu$ should be a product distribution. \\
4.\ \ The (general) privacy loss defined above is trivially an upper bound
on the superpositional privacy loss.
\subsection{The privacy trade-off result}
\begin{theorem}
\label{thm:index}
Consider a quantum protocol ${\cal P}$ for $\mathsf{SetMemb}_n$ where Alice is given
a subset of $[n]$ and Bob an element of $[n]$.
Let $\mu$ denote the uniform
probability distribution on Alice's and Bob's inputs.
Suppose ${\cal P}$ has error at most $1/2-\epsilon$ with respect to $\mu$.
Suppose $L^{{\cal P}}(\mathsf{SetMemb}_n, \mu, B, A) \leq k$. Then,
\[
L^{{\cal P}}(\mathsf{SetMemb}_n, \mu, A, B)
\geq \frac{n}{2^{\epsilon^{-3}(14 k + 24)}} - 2.
\]
\end{theorem}
\begin{proof}
Let registers $A, X, B, Y$ denote Alice's work qubits,
Alice's input qubits, Bob's work qubits and Bob's input qubits
respectively, at the end of protocol ${\cal P}$.
We can assume without loss of generality that the last round
of communication in ${\cal P}$ is from Alice to
Bob, since otherwise, we can add an extra round
of communication at the end wherein Alice sends
the answer qubit to Bob. This process increases
$L^{{\cal P}}(\mathsf{SetMemb}_n, \mu, A, B)$ by at most two
and does not increase $L^{{\cal P}}(\mathsf{SetMemb}_n, \mu, B, A)$
(see e.g. the information theoretic arguments in~\cite{cleve:ip}).
Thus at the end of ${\cal P}$, Bob measures the answer qubit,
which is a qubit in the register $B$,
in the computational basis to determine
$f(x, y)$. In the proof, subscripts of pure
and mixed states will denote the registers which are in those
states.
Let $\ket{\psi_i}_{X A Y B}$ be the state vector of Alice's and
Bob's qubits
and $(\rho_i)_{X A}$ the density matrix of Alice's qubits at the end
of the protocol ${\cal P}$, when Alice is fed a uniform superposition
over bit strings
in her input register $X$ and Bob is fed $\ket{i}$ in
his input register $Y$. Let $1/2 + \epsilon_i$ be the success
probability of ${\cal P}$ in this case. Without loss of generality,
$\epsilon_i \geq 0$.
Consider a run, Run~1, of ${\cal P}$ when a uniform mixture
of indices is fed to register $Y$, and a uniform superposition over
bit strings is fed to register $X$. Let $1/2 + \epsilon$ be
the success probability of ${\cal P}$ for Run~1, which is also
the success probability of ${\cal P}$ with respect to $\mu$. Then
$\epsilon = (1/n) \sum_{i=1}^n \epsilon_i$.
Let $I_1(Y : A X)$ denote
the mutual information of register $Y$ with registers $A X$ at the
end of Run~1 of ${\cal P}$. We know that
$I_1(Y : A X) = L^{{\cal P}}(\mathsf{SetMemb}_n, \mu, B, A) \leq k$.
Let $\rho_{XA} := (1/n) \sum_{i=1}^n (\rho_i)_{XA}$ and
$k_i := S((\rho_i)_{XA} \| \rho_{XA} )$.
Note that $0 \leq k_i < \infty$ by Fact~\ref{fact:relentprop}.
By Fact~\ref{fact:inforelentropy},
\begin{displaymath}
k \geq I_1(Y : A X) = \frac{1}{n} \sum_{i=1}^n
S((\rho_i)_{XA} \| \rho_{XA})
= \frac{1}{n} \sum_{i=1}^n k_i.
\end{displaymath}
Let $k_i' := k_i + 4 \sqrt{k_i + 2} + 2 \log (k_i + 2) + 5$ and
$r_i := (2/\epsilon_i)^2$.
Let us now consider a run, Run~2, of ${\cal P}$
with uniform superpositions fed to registers $X, Y$.
Let $\ket{\phi}_{X A Y B}$ be the state vector of Alice's
and Bob's qubits at the end of Run~2 of ${\cal P}$. Then,
$\parttr{YB} \ketbra{\phi} = \rho_{XA}$, and
the success probability of ${\cal P}$ for Run~2 is
$1/2 + \epsilon$.
Let $Q$ be an additional qubit.
By the substate theorem (Theorem~\ref{thm:substate}), there exist states
$\ket{\psi'_i}_{XAYB}$, $\ket{\theta'_i}_{XAYB}$ such that
$\trnorm{\ketbra{\psi_i} - \ketbra{\psi_i'}}
\leq 2/\sqrt{r_i} = \epsilon_i$ and
$\parttr{YBQ} \ketbra{\phi_i} = \rho_{XA}$ where
\begin{displaymath}
\ket{\phi_i}_{X A Y B Q} :=
\sqrt{\frac{r_i - 1}{r_i 2^{r_i k_i'}}} \,
\ket{\psi_i'}_{X A Y B} \ket{1}_{Q} +
\sqrt{1 - \frac{r_i - 1}{r_i 2^{r_i k_i'}}} \,
\ket{\theta_i'}_{X A Y B} \ket{0}_{Q}.
\end{displaymath}
In fact, there exists
a unitary transformation $U_i$ on registers $Y B Q$,
transforming the state $\ket{\phi}_{X A Y B} \ket{0}_{Q}$
to the state $\ket{\phi_i}_{X A Y B Q}$.
For each $i \in [n]$, let $X_i'$ denote the classical random variable
obtained by measuring the $i$th bit of register $X$ in the state
$\ket{\phi}_{X A Y B}$.
We now prove the following claim.
\begin{claim}
For each $i \in [n]$,
there is a POVM ${\cal M}_i$ with three outcomes $0$, $1$, $?$ acting
on $Y B$ such that if $Z_i'$ is the result of ${\cal M}_i$ on
$\ket{\phi}_{X A Y B}$, then
$
\Pr[Z_i' \neq \; ?]
\geq 2^{-4 \epsilon_i^{-2} (k'_i+1)},
$
and
$\Pr[Z_i' = X_i' \mid Z_i' \neq \; ?] \geq 1/2 + \epsilon_i/2$.
\end{claim}
\begin{proof}
The POVM ${\cal M}_i$ proceeds by first bringing in the ancilla qubit
$Q$ initialised to $\ket{0}_Q$, then applying $U_i$ to the registers
$Y B Q$ and finally
measuring $Q$ in the computational basis.
If it observes $\ket{1}_Q$, ${\cal M}_i$ measures
the answer qubit in $B$ in the computational basis and declares
the result as $Z_i'$. If it observes $\ket{0}_Q$, ${\cal M}_i$ outputs $?$.
When applied to $\ket{\phi}_{X A Y B}$, ${\cal M}_i$ first generates
$\ket{\phi_i}_{X A Y B Q}$ and then measures $Q$ in the computational
basis.
In the case when ${\cal M}_i$ measures $\ket{1}$ for qubit $Q$, which
happens with probability
\[
\Pr[Z_i' \neq \; ?]
= \frac{r_i - 1}{r_i 2^{r_i k_i'}}
\geq 2^{-4 \epsilon_i^{-2} (k'_i+1)},
\]
the state vector of $X A Y B$ collapses to
$\ket{\psi_i'}$. In this case by Fact~\ref{fact:totvartrace},
\begin{displaymath}
\Pr[Z_i' = X_i' | Z_i' \neq \; ?]
\geq \frac{1}{2} + \epsilon_i - \frac{1}{2}
\trnorm{\ketbra{\psi_i} - \ketbra{\psi_i'}}
\geq \frac{1}{2} + \frac{\epsilon_i}{2}.
\end{displaymath}
\end{proof}
Consider now a run, Run~3, of ${\cal P}$ when a uniform mixture over
bit strings is fed to register $X$ and a uniform superposition
over $[n]$ is fed to register $Y$. Let $\rho_{X A Y B}$ denote
the density matrix of the registers $X A Y B$ at the end of
Run~3 of ${\cal P}$. In fact, measuring in the
computational basis the register $X$ in the
state $\ket{\phi}_{X A Y B}$ gives us $\rho_{X A Y B}$; also,
$\parttr{YB} \rho_{X A Y B} = \rho_{XA}$.
Let $I_3(X : Y B)$ denote the mutual information between register
$X$ and registers $Y B$ in the state $\rho_{X A Y B}$.
For each $i \in [n]$, let $X_i$ denote the classical
random variable corresponding to the $i$th bit of register $X$ in state
$\rho_{X A Y B}$. Then,
$X := X_1 \ldots X_n$ is a uniformly distributed bit string of
length $n$.
Let $Z_i$ denote the result of POVM ${\cal M}_i$ of the above claim applied
to $\rho_{X A Y B}$. Then since ${\cal M}_i$ acts only on the registers
$Y B$, we get
$
\Pr[Z_i \neq \; ?]
= \Pr[Z_i' \neq \; ?]
\geq 2^{-4 \epsilon_i^{-2} (k_i' + 1)},
$
and
$\Pr[Z_i = X_i \mid Z_i \neq \; ?]
= \Pr[Z_i' = X_i' \mid Z_i' \neq \; ?]
\geq 1/2 + \epsilon_i/2$.
Define
$
{\sf Good} :=
\{i \in [n]: k_i \leq 2 k/\epsilon, \epsilon_i \geq \epsilon/2\}.
$
By Markov's inequality, $|{\sf Good}| > n \epsilon/2$.
By Fact~\ref{fact:randomaccess},
\begin{eqnarray*}
I_3(X : Y B)
& \geq & \sum_{i=1}^n
\frac{\epsilon_i^2\cdot 2^{-4 \epsilon_i^{-2}(k_i'+1)}}{4}
\;\geq\; \sum_{i \in {\sf Good}}
\frac{\epsilon_i^2\cdot 2^{-4 \epsilon_i^{-2}(k_i'+1)}}{4} \\
& \geq & \frac{n\epsilon^3\cdot
2^{-\epsilon^{-3}(2k+4\sqrt{2k+2}+2\log(2k+2)+6)}
}
{32}
\;\geq\; \frac{n}{2^{\epsilon^{-3}(2k+4\sqrt{2k+2}+2\log(2k+2)+12)}} \\
& \geq & \frac{n}{2^{\epsilon^{-3}(14 k + 24)}}.
\end{eqnarray*}
By the arguments in the first paragraph of this proof, we have
$L^{{\cal P}}(\mathsf{SetMemb}_n, \mu, A, B) \geq I_3(X : Y B) - 2$. This
completes the proof of the theorem.
\end{proof}\\
{\bf Remark:}
This theorem is the formal version of Result~\ref{res:privacy} stated
in the introduction.
As we have mentioned earlier, this theorem has been generalised
in~\cite{jain:entangred} in a suitable manner to relate the privacy
loss for any function in terms of its one-way communication
complexity. We do not get into the details of this statement here.
Instead, we give a weaker corollary of the present theorem that relates
the privacy loss of a function to the {\em Vapnik-Chervonenkis dimension
(VC-dimension)} of its communication matrix.
\begin{definition}[VC-dimension]
For a boolean valued function
$f: {\cal X} \times {\cal Y} \rightarrow \{0, 1\}$, a set
$T \subseteq {\cal Y}$ is {\em shattered}, if for all $S \subseteq T$
there is an $x \in {\cal X}$ such that
$\forall y \in T: f(x, y) = 1 \Leftrightarrow
y \in S$. The {\em VC-dimension} of $f$ for
${\cal X}$, ${\rm VC}_{{\cal X}}(f)$, is the largest size of such a
shattered set $T \subseteq {\cal Y}$. We define ${\rm VC}_{{\cal Y}}(f)$ analogously.
\end{definition}
Informally, ${\rm VC}_{{\cal X}}(f)$ captures the size of the largest instance of
the set membership problem $\mathsf{SetMemb}_n$ that can be `embedded' into $f$.
Using this connection, one can trivially prove a privacy trade-off
result for $f$ in terms of ${\rm VC}_{{\cal X}}(f)$, ${\rm VC}_{{\cal Y}}(f)$ by
invoking Theorem~\ref{thm:index}. This generalises
Klauck's lower bound~\cite{klauck:oneround} for
the communication complexity of bounded error one-way quantum protocols
for $f$ in terms of its VC-dimension.
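On tiny instances, the VC-dimension can be computed by brute force. The
Python sketch below (exponential time, purely illustrative) checks every
$T \subseteq {\cal Y}$ for shattering and confirms that the set membership
matrix on $[n]$ has VC-dimension $n$:
\begin{verbatim}
from itertools import combinations

def vc_X(f):
    # f[x][y] in {0,1}; largest shattered T among the columns
    nX, nY = len(f), len(f[0])
    best = 0
    for size in range(1, nY + 1):
        for T in combinations(range(nY), size):
            patterns = {tuple(f[x][y] for y in T) for x in range(nX)}
            if len(patterns) == 2 ** size:
                best = size
    return best

# set membership: rows = subsets of [n], columns = elements of [n]
n = 3
M = [[(x >> y) & 1 for y in range(n)] for x in range(2 ** n)]
print(vc_X(M))  # 3
\end{verbatim}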
\begin{corollary}
\label{cor:vcdim}
Let $f: {\cal X} \times {\cal Y} \rightarrow \{0, 1\}$ be a
boolean valued function. Let ${\rm VC}_{{\cal X}}(f) = n$.
Then there is a product distribution $\mu$ on
${\cal X} \times {\cal Y}$ such that,
if ${\cal P}$ is a quantum protocol for $f$ with
average error at most $1/2 - \epsilon$ with respect to $\mu$,
\[
L^{{\cal P}}(f, \mu, B, A) \leq k
\Rightarrow
L^{{\cal P}}(f, \mu, A, B) \geq
\frac{n}{2^{\epsilon^{-3}(14 k + 24)}} - 2.
\]
An analogous statement holds for ${\rm VC}_{{\cal Y}}(f)$.
\end{corollary}
\begin{proof}
Since ${\rm VC}_{{\cal X}}(f) = n$, there is a set
$T \subseteq {\cal Y}$, $|T| = n$ which is shattered. Without
loss of generality, $T = [n]$. For any subset $S \subseteq T$,
there is an $x \in {\cal X}$ such that
$\forall y \in T: f(x, y) = 1 \Leftrightarrow
y \in S$. We now give a reduction from $\mathsf{SetMemb}_n$
to $f$ as follows: In $\mathsf{SetMemb}_n$, Alice is given a subset $S \subseteq [n]$
and Bob is given a $y \in [n]$. Alice and Bob run the protocol
${\cal P}$ for $f$ on inputs $x$ and $y$ respectively, to solve
$\mathsf{SetMemb}_n$. The corollary now follows from Theorem~\ref{thm:index}.
\end{proof}
The following consequence of Corollary~\ref{cor:vcdim} is immediate.
\begin{corollary}
\label{cor:privacy}
Quantum protocols for
set membership $\mathsf{SetMemb}_n$, set disjointness for subsets of $[n]$ and
inner product modulo $2$ in $\{0,1\}^n$ each
suffer from $\Omega(\log n)$ privacy loss.
\end{corollary}
\begin{proof}
Follows trivially from Corollary~\ref{cor:vcdim}, since all three
functions have VC-dimension $n$.
\end{proof}
\section{The substate theorem}
\label{sec:substate}
In this section, we prove the quantum substate theorem. But first, we
state a fact from game theory that will be used in its proof.
\subsection{A minimax theorem}
We will require the following minimax theorem from game theory, which
is a consequence of the Kakutani fixed point theorem in real
analysis.
\begin{fact}
\label{fact:minimax}
Let $A_1,A_2$ be non-empty, convex and compact subsets of $\mathbb R^n$
for some $n$. Let $u: A_1 \times A_2 \rightarrow \mathbb R$ be a
continuous function, such that
\begin{itemize}
\item $\forall a_2 \in A_2$, the set
$\{a_1 \in A_1:\forall a_1' \in A_1 \,
u(a_1,a_2) \geq u(a_1',a_2)\}$ is convex; and
\item $\forall a_1 \in A_1$, the set
$\{a_2 \in A_2: \forall a_2' \in A_2 \,
u(a_1,a_2) \leq u(a_1,a_2')\}$ is convex.
\end{itemize}
Then, there is an $(a_1^\ast, a_2^\ast) \in A_1 \times A_2$ such that
\begin{displaymath}
\max_{a_1\in A_1}\, \min_{a_2\in A_2} u(a_1,a_2)
= u(a_1^\ast, a_2^\ast)
= \min_{a_2 \in A_2}\, \max_{a_1 \in A_1} u(a_1,a_2).
\end{displaymath}
\end{fact}
\paragraph{Remark:}
The above statement follows by combining Proposition~20.3
(which shows the
existence of Nash equilibrium $a^\ast$ in strategic games) and
Proposition~22.2 (which connects Nash equilibrium and the min-max
theorem for games defined using a pay-off function such as $u$) of
Osborne and Rubinstein's~\cite[pages 19--22]{osborne:gametheory}
book on game theory.
\subsection{Proof of the substate theorem}
\label{subsec:substate}
We now state the quantum substate theorem as it is actually used in
our privacy lower bound proofs.
\begin{theorem}[Quantum substate theorem]
\label{thm:substate}
Consider two Hilbert spaces ${\cal H}$ and ${\cal K}$,
$\dim({\cal K}) \geq \dim({\cal H})$. Let $\mathbb C^2$ denote the
two dimensional complex Hilbert space.
Let $\rho, \sigma$ be density matrices in ${\cal H}$.
Let $r > 1$ be any real number. Let $k := S(\rho \| \sigma)$.
Let $\ket{\psi}$
be a purification of $\rho$ in ${\cal H} \otimes {\cal K}$. Then there
exist pure states
$\ket{\phi}, \ket{\theta} \in {\cal H} \otimes {\cal K}$ and
$\ket{\zeta} \in {\cal H} \otimes {\cal K} \otimes \mathbb C^2$,
depending on $r$,
such that $\ket{\zeta}$ is a purification of $\sigma$ and
$\trnorm{\ketbra{\psi} - \ketbra{\phi}} \leq 2/\sqrt{r}$, where
\begin{displaymath}
\ket{\zeta} :=
\sqrt{\frac{r - 1}{r 2^{r k'}}} \, \ket{\phi}\ket{1} +
\sqrt{1 - \frac{r - 1}{r 2^{r k'}}} \, \ket{\theta}\ket{0}
~~~ {\rm and} ~~~
k' := k + 4 \sqrt{k+2} + 2 \log (k+2) + 5.
\end{displaymath}
\end{theorem}
{\bf Remarks:} \\
1.\ \ Note that Result~\ref{res:substate}
in the introduction follows from above by
tracing out ${\cal K} \otimes \mathbb C^2$. \\
2.\ \ From Result~\ref{res:substate}, one can easily see that
$\trnorm{\rho - \sigma} \leq 2 - 2^{-O(k)}$. This implies a
$2^{-O(k)}$ lower bound on the fidelity of $\rho$ and $\sigma$.
\bigskip
\noindent
{\bf Overview of the proof of Theorem~\ref{thm:substate}:} As we
have mentioned earlier, our proof of
the quantum substate theorem goes through first by
defining a new notion of
distinguishability called {\em observational divergence},
$D(\rho \| \sigma)$, between two density matrices $\rho$, $\sigma$ in the
same Hilbert space ${\cal H}$. Informally speaking, this notion is a single
observational version of relative entropy. Truly speaking,
the substate theorem is a
relationship between observational divergence and the substate condition.
We first prove an observational divergence lifting theorem which shows
that given two
states $\rho, \sigma$ in ${\cal H}$ and any extension $\sigma'$ of $\sigma$
in ${\cal H} \otimes {\cal K}$, $\dim({\cal K}) \geq \dim({\cal H})$, one can find a
purification $\ket{\phi}$ of
$\rho$ in ${\cal H} \otimes {\cal K}$ such that
$D(\ketbra{\phi} \| \sigma) = O(D(\rho \| \sigma))$.
This theorem may be of independent
interest. This helps us reduce the statement we intend to prove only
to the case when $\rho$ is a pure state. This case is then further reduced
to analysing only a two dimensional scenario which is then resolved by a
direct calculation. The final statement of the quantum substate theorem
in terms of relative entropy
is established by showing that observational divergence is never much
bigger than relative entropy for any pair of states.
Let us begin by defining observational divergence.
\begin{definition}[Observational divergence]
\label{def:div}
Let $\rho, \sigma$ be density matrices in the same
Hilbert space ${\cal H}$. Their observational divergence
is defined as
\begin{displaymath}
D(\rho \| \sigma) :=
\sup_F \left(\mbox{{\rm Tr} } (F \rho) \log \frac{\mbox{{\rm Tr} } (F \rho)}{\mbox{{\rm Tr} } (F \sigma)}
\right),
\end{displaymath}
where $F$ above ranges over POVM elements on ${\cal H}$
such that $\mbox{{\rm Tr} } (F \sigma) \neq 0$.
\end{definition}
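For probability distributions the supremum is tractable. For diagonal
$\rho, \sigma$ one may take $F$ diagonal; since
$(p, q) \mapsto p \log (p/q)$ is jointly convex (it is the perspective of
$t \log t$) and $(p, q)$ depends affinely on $F$, the maximum over the box
$[0,1]^{|\Omega|}$ is attained at a $0/1$ vertex, i.e. at a subset event. A
brute-force Python sketch with illustrative distributions and base two
logarithms:
\begin{verbatim}
import numpy as np
from itertools import combinations

def obs_divergence(P, Q):
    # D(P||Q) = max over events S of P(S) log2( P(S) / Q(S) )
    n, best = len(P), 0.0
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            p, q = sum(P[i] for i in S), sum(Q[i] for i in S)
            if p > 0 and q > 0:
                best = max(best, p * np.log2(p / q))
    return best

P = np.array([0.6, 0.3, 0.1])
Q = np.array([0.2, 0.4, 0.4])
print(obs_divergence(P, Q))          # observational divergence
print(np.sum(P * np.log2(P / Q)))    # relative entropy S(P||Q)
\end{verbatim}
On this example $D(P \| Q) \approx 0.95$ exceeds $S(P \| Q) \approx 0.63$,
yet stays below $S(P \| Q) + 1$, consistent with
Proposition~\ref{prop:divrelentropy} below.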
The following properties of observational
divergence follow easily from the definition.
\begin{proposition}
\label{prop:propdiv}
Let $\rho, \sigma$ be density matrices in the same
Hilbert space ${\cal H}$. Then
\begin{enumerate}
\item $D(\rho \| \sigma) \geq 0$, with equality iff $\rho = \sigma$.
\item $D(\rho \| \sigma) < +\infty$ iff
${\rm supp}(\rho) \subseteq {\rm supp}(\sigma)$.
If $D(\rho \| \sigma) < +\infty$, then
there is a POVM element $F$ which achieves equality in
Definition~\ref{def:div}.
\item $D(\cdot \| \cdot)$ is continuous in its two arguments when
it is not infinite.
\item {\em (Unitary invariance)} If $U$ is a unitary transformation on
${\cal H}$,
$D(U \rho U^\dagger \| U \sigma U^\dagger) = D(\rho \| \sigma)$.
\item {\em (Monotonicity)} Suppose ${\cal K}$ is a Hilbert
space, and $\rho', \sigma'$ are extensions of $\rho, \sigma$ in
${\cal H} \otimes {\cal K}$. Then,
$D(\rho' \| \sigma') \geq D(\rho \| \sigma)$. This implies,
via unitary invariance and the Kraus representation theorem,
that if ${\cal T}$ is a completely positive trace preserving
superoperator from ${\cal H}$ to a Hilbert space ${\cal L}$, then
$D({\cal T} \rho \| {\cal T} \sigma) \leq D(\rho \| \sigma)$.
\end{enumerate}
\end{proposition}
Fact~\ref{fact:relentprop} and Proposition~\ref{prop:propdiv} seem
to suggest
that relative entropy and observational divergence are
similar quantities. In fact, the relative entropy is an upper bound
on the observational divergence to within an additive constant.
More properties of observational divergence as well as comparisons
with relative entropy are discussed in the appendix.
\begin{proposition}
\label{prop:divrelentropy}
Let $\rho, \sigma$ be density matrices in the same
Hilbert space ${\cal H}$. Then,
$D(\rho \| \sigma ) < S(\rho \| \sigma) + 1$.
\end{proposition}
\begin{proof}
By Fact~\ref{fact:relentprop} and
Proposition~\ref{prop:propdiv},
$D(\rho \| \sigma) = +\infty$
iff ${\rm supp}(\rho) \not \subseteq {\rm supp}(\sigma)$ iff
$S(\rho \| \sigma) = +\infty$. Thus,
we can henceforth assume without loss of generality that
$D(\rho \| \sigma) < +\infty$.
By Proposition~\ref{prop:propdiv},
there is a POVM element $F$ such that
$D(\rho \| \sigma) = p \log (p/q)$,
where $p := \mbox{{\rm Tr} } (F \rho)$ and $q := \mbox{{\rm Tr} } (F \sigma)$.
We now have
\begin{eqnarray*}
S(\rho \| \sigma )
& \geq & p \log \frac{p}{q} +
(1 - p) \log \frac{(1 - p)}{(1 - q)}
\; > \; p \log \frac{p}{q} +
(1 - p) \log \frac{1}{(1 - q)} - 1
\;\geq\; p \log \frac{p}{q} - 1 \\
& = & D(\rho \| \sigma) - 1.
\end{eqnarray*}
The first inequality follows from the Lindblad-Uhlmann monotonicity
of relative entropy (Fact~\ref{fact:relentprop}),
and the second
inequality follows because
$(1 - p) \log (1 - p) \geq (- \log e)/e > -1$,
for $0 \leq p \leq 1$. This completes the proof of the proposition.
\end{proof}
We now prove the following lemma, which can be thought of as a
substate theorem when the first density matrix is in fact
a pure state.
\begin{lemma}
\label{lem:twobytwo}
Let $\ket{\psi}$ be a pure state and $\sigma$ be a density matrix
in the same Hilbert space ${\cal H}$.
Let $k := D\left((\ketbra{\psi}) \| \sigma\right)$.
Then for all $r \geq 1$, there exists a pure state
$\ket{\phi}$, depending on $r$, such that
\begin{displaymath}
\trnorm{\ketbra{\psi} - \ketbra{\phi}} < \frac{2}{\sqrt{r}}
~~~ {\rm and} ~~~
\left(\frac{r-1}{r 2^{rk}}\right) \ketbra{\phi} < \sigma.
\end{displaymath}
\end{lemma}
\begin{proof}
We assume without loss of generality that $0 < k < +\infty$.
Consider $M := \sigma - (\ketbra{\psi}/2^{rk})$. Since
$-(\ketbra{\psi}/2^{rk})$ has exactly one non-zero eigenvalue and this
eigenvalue is negative viz. $-1/2^{rk}$, and $\sigma$ is positive
semidefinite, $M$ is a
Hermitian matrix with at most one negative eigenvalue.
If $M \geq 0$ we take $\ket{\phi}$ to be $\ket{\psi}$. The
lemma trivially holds in this case.
Otherwise, let $\ket{w}$ be the eigenvector corresponding to
the unique negative eigenvalue $-\alpha$ of $M$.
Thinking of $\ketbra{w}$ as a POVM element, we get
\begin{displaymath}
0 > -\alpha = \mbox{{\rm Tr} } (M \ketbra{w}) =
\unibraket{w}{\sigma} - \frac{|\braket{\psi}{w}|^2}{2^{rk}}
\Rightarrow
\unibraket{w}{\sigma} < \frac{|\braket{\psi}{w}|^2}{2^{rk}}.
\end{displaymath}
Hence
\begin{displaymath}
k = D(\ketbra{\psi} \| \sigma) \geq
|\braket{\psi}{w}|^2 \log \frac{|\braket{\psi}{w}|^2}
{\unibraket{w}{\sigma}} >
r k |\braket{\psi}{w}|^2
\Rightarrow
|\braket{\psi}{w}|^2 < \frac{1}{r} \leq 1.
\end{displaymath}
In particular, this shows that $\ket{\psi}, \ket{w}$ are linearly
independent.
Let $n := \dim ({\cal H})$.
Let $\{\ket{v}, \ket{w}\}$ be an orthonormal basis for
the two dimensional subspace of ${\cal H}$ spanned by
$\{\ket{\psi}, \ket{w}\}$. Extend it to
$\{\ket{v_1}, \ldots, \ket{v_{n-2}}, \ket{v}, \ket{w}\}$,
an orthonormal basis
for the entire space ${\cal H}$.
In this basis we have the following matrix equation,
\begin{equation}
\label{eq:matrix}
\threearray{F}{e}{d}{a}{b}{c} - \threearray{0}{0}{0}{x}{y}{z} =
\bigtwoarray{P}{l}{-\alpha},
\end{equation}
where the first, second and third matrices are $\sigma$,
$\ketbra{\psi}/2^{rk}$ and $M$ respectively.
$F$ is an $(n - 2) \times (n - 2)$ matrix, $P$ is an
$(n - 1) \times (n - 1)$ matrix, $d$, $e$ are $(n - 2) \times 1$
matrices and $l$ is an $(n - 1) \times 1$ matrix.
$a, c, x, z, \alpha$ are non-negative real numbers and $b, y$ are
complex numbers. The zeroes above denote
all zero matrices of appropriate dimensions. The dagger denotes
conjugate transpose.
\begin{claim}
\label{claim:twobytwoineq}
We have the following properties.
\begin{enumerate}
\item $b, y \in \mathbb C$, $a, c, x, z, \alpha \in \mathbb R$.
\item $b = y \neq 0$, $1/(r2^{rk}) > z = c + \alpha > c > 0$,
$\alpha > 0$, $a > 0$, $0 < x < 1/2^{rk}$, $x + z = 1/2^{rk}$,
$l = 0$ and $d = 0$.
\item $0 < \frac{x c}{|b|^2} < \frac{x z}{|y|^2} = 1$.
\end{enumerate}
\end{claim}
\begin{proof}
The first part of the claim has already been mentioned above.
Since $\ket{w}$ is an eigenvector of $M$ corresponding to eigenvalue
$-\alpha$, $l = 0$.
By inspection, we have $b = y, z = c + \alpha, d = 0$.
We have $x > 0$ since $\ket{\psi}, \ket{w}$ are linearly
independent, and $z > c \geq 0$ since $\alpha > 0$.
Now, $x + z = \mbox{{\rm Tr} } (\ketbra{\psi}/2^{rk}) = 1/2^{rk}$ and so
$x < 1/2^{rk}$. Also, $z = |\braket{\psi}{w}|^2/2^{rk} < 1/(r2^{rk})$.
Since $\sigma \geq 0$, $F \geq 0$ and
$\twoarray{a}{b}{c} \geq 0$. Hence,
\begin{displaymath}
\det \twoarray{a}{b}{c} = a c - |b|^2 \geq 0.
\end{displaymath}
Since $\ketbra{\psi}/2^{rk}$ has one dimensional support,
\begin{displaymath}
\det \twoarray{x}{y}{z} = x z - |y|^2 = 0.
\end{displaymath}
If $c = 0$ then $y = b = 0$, which implies that $xz = 0$, which is
a contradiction. Hence, $c > 0$ and $b \neq 0$. Similarly,
$a > 0$. This proves the second part of the claim. The third
part now follows easily.
\end{proof}
We can now write $\sigma = \sigma_1 + \sigma_2$, where
\begin{displaymath}
\sigma_1 := \threearray{F}{e}{0}{a - \frac{|b|^2}{c}}{0}{0}
~~~ {\rm and} ~~~
\sigma_2 := \threearray{0}{0}{0}{\frac{|b|^2}{c}}{b}{c}.
\end{displaymath}
Note that $\ket{\xi} = (0, \ldots, 0, 1, -b^\dag/c)$ is
an eigenvector of $\sigma_2$ corresponding to the eigenvalue $0$.
We have $\sigma_2 \geq 0$, and in fact, $\sigma_2$ has one
dimensional support. We now claim that
$\sigma_1 \geq 0$. For otherwise, since $F \geq 0$, there is a
vector $\ket{\theta}$ of the form
$(a_1, \ldots ,a_{n - 2}, 1, 0)$ such that
$\unibraket{\theta}{\sigma_1} < 0$.
Now consider the vector
$\ket{\theta'} := (a_1, \ldots, a_{n - 2}, 1, -b^\dag/c)$.
We have,
\begin{displaymath}
\unibraket{\theta'}{\sigma} =
\unibraket{\theta'}{\sigma_1} +
\unibraket{\theta'}{\sigma_2} =
\unibraket{\theta}{\sigma_1} +
\unibraket{\xi}{\sigma_2} < 0,
\end{displaymath}
contradicting $\sigma \geq 0$.
This shows that $\sigma_1 \geq 0$, and hence, $\sigma \geq \sigma_2 $.
We are now finally in a position to define the pure state $\ket{\phi}$:
we let $\ketbra{\phi}$ be $\sigma_2$ normalised to have
unit trace. That is,
\[
\ketbra{\phi} := \frac{\sigma_2}{\frac{|b|^2}{c} + c}.
\]
Using Claim~\ref{claim:twobytwoineq} we get,
\begin{displaymath}
\mbox{{\rm Tr} } \sigma_2 = \frac{|b|^2}{c} + c
> \frac{|b|^2}{z} + c
= x + z - \alpha
> \frac{r-1}{r 2^{rk}}.
\end{displaymath}
Hence,
$
\frac{r - 1}{r 2^{rk}} \ketbra{\phi} < \sigma_2 \leq \sigma.
$
This shows the second assertion of the lemma.
To complete the proof of the lemma,
we still need to show that
$\trnorm{\ketbra{\psi} - \ketbra{\phi}}$ is small.
Up to global phase factors, one can write $\ket{\psi}, \ket{\phi}$
as follows:
\begin{displaymath}
\ket{\psi} = \frac{\frac{b}{\sqrt{z}} \ket{v} + \sqrt{z} \ket{w}}
{\sqrt{\frac{|b|^2}{z} + z}},
~~~~~
\ket{\phi} = \frac{\frac{b}{\sqrt{c}} \ket{v} + \sqrt{c} \ket{w}}
{\sqrt{\frac{|b|^2}{c} + c}}.
\end{displaymath}
We now lower bound $|\braket{\phi}{\psi}|$ as follows,
using Claim~\ref{claim:twobytwoineq}.
\begin{eqnarray*}
|\braket{\phi}{\psi}|
& = & \frac{\frac{|b|^2}{\sqrt{cz}} + \sqrt{cz}}
{\sqrt{\frac{|b|^2}{c}+c} \cdot \sqrt{\frac{|b|^2}{z} + z}}
\;=\; \frac{|b|^2 + cz}{\sqrt{(|b|^2 + c^2)(|b|^2 + z^2)}} \\
& > & \frac{|b|^2 + cz}{\sqrt{(|b|^2 + cz)(|b|^2 + z^2)}}
\;=\; \sqrt{\frac{|b|^2 + cz}{|b|^2 + z^2}}
\;=\; \sqrt{\frac{x + c}{x + z}}
\;=\; \sqrt{1 - \frac{\alpha}{x + z}} \\
& > & \sqrt{1 - \frac{1}{r}}.
\end{eqnarray*}
This proves that
$
\trnorm{\ketbra{\psi} - \ketbra{\phi}}
= 2 \sqrt{1 - |\braket{\phi}{\psi}|^2}
< 2/\sqrt{r},
$
establishing the first assertion of the lemma and completing its
proof.
\end{proof}
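The construction in this proof is completely explicit and can be exercised
numerically. The Python sketch below uses an illustrative $\ket{\psi}$ and
$\sigma$, chosen so that $M$ has a negative eigenvalue; since the proof uses
$k$ only through the inequality
$k \geq |\braket{\psi}{w}|^2 \log (|\braket{\psi}{w}|^2 / \unibraket{w}{\sigma})$,
any upper bound on $D(\ketbra{\psi} \| \sigma)$ may play the role of $k$, and
the sketch takes $k = -\unibraket{\psi}{\log \sigma} + 1 =
S(\ketbra{\psi} \| \sigma) + 1$, which is such a bound by
Proposition~\ref{prop:divrelentropy}:
\begin{verbatim}
import numpy as np

a, delta, r = 0.1, 1e-6, 4.0
sigma = np.diag([(1 - delta) / 2, (1 - delta) / 2, delta])
psi = np.array([np.sqrt((1 - a) / 2), np.sqrt((1 - a) / 2), np.sqrt(a)])

# k = S(|psi><psi| || sigma) + 1 >= D(|psi><psi| || sigma), in bits
k = -(psi ** 2 @ np.log2(np.diag(sigma))) + 1.0

M = sigma - np.outer(psi, psi) / 2 ** (r * k)
evals, evecs = np.linalg.eigh(M)
if evals[0] >= 0:
    phi = psi                       # trivial case of the lemma
else:
    w = evecs[:, 0]                 # the unique negative eigenvector
    v = psi - (w @ psi) * w         # component of |psi> orthogonal to |w>
    v /= np.linalg.norm(v)
    b, c = v @ sigma @ w, w @ sigma @ w
    phi = (b / np.sqrt(c)) * v + np.sqrt(c) * w
    phi /= np.linalg.norm(phi)

t = (r - 1) / (r * 2 ** (r * k))
print(np.linalg.eigvalsh(sigma - t * np.outer(phi, phi))[0] > -1e-12)
print(2 * np.sqrt(1 - (psi @ phi) ** 2) < 2 / np.sqrt(r))  # both True
\end{verbatim}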
We next prove the following lemma, which can be thought of as
an `observational substate' lemma.
\begin{lemma}
\label{lem:lifting1}
Consider two Hilbert spaces ${\cal H}$ and ${\cal K}$,
$\dim({\cal K}) \geq \dim({\cal H})$.
Let $\rho, \sigma$ be density matrices in ${\cal H}$.
Let $\ket{\psi}$ be a purification of $\rho$ in ${\cal H} \otimes {\cal K}$.
Let $F$ be a POVM element on ${\cal H} \otimes {\cal K}$. Let $\beta > 1$.
Then there exists
a purification $\ket{\phi}$ of $\sigma$ in ${\cal H} \otimes {\cal K}$
such that
$q \geq \frac{p}{2^{k'/p}}$,
where $p := \mbox{{\rm Tr} } (F \ketbra{\psi})$,
$q := \mbox{{\rm Tr} } (F \ketbra{\phi})$ and
$k' := \beta D(\rho \| \sigma) - 2 \log (1 - \beta^{-1/2}) $.
\end{lemma}
\begin{proof}
We assume without loss of
generality that $0 < D(\rho \| \sigma) < +\infty$ and that $p > 0$.
Let $n := \dim ({\cal H} \otimes {\cal K})$ and $\{\ket{\alpha_i}\}_{i=1}^n$ be
the orthonormal eigenvectors of $F$ with corresponding eigenvalues
$\{\lambda_i\}_{i=1}^n$. Note that $0 \leq \lambda_i \leq 1$ and
$\ket{\alpha_i} \in {\cal H} \otimes {\cal K}$.
We have,
\begin{displaymath}
p = \sum_{i=1}^n \lambda_i |\braket{\alpha_i}{\psi}|^2
~~~ {\rm and} ~~~
q = \sum_{i=1}^n \lambda_i |\braket{\alpha_i}{\phi}|^2.
\end{displaymath}
Define,
\begin{displaymath}
\ket{\theta'} :=
\frac{\sum_{i=1}^n \lambda_i \braket{\alpha_i}{\psi} \ket{\alpha_i}}
{\sqrt{p}}
~~~ {\rm and} ~~~
\ket{\theta} := \frac{\ket{\theta'}}{\|\ket{\theta'}\|}.
\end{displaymath}
Note that $p = |\braket{\psi}{\theta}|^2 \|\ket{\theta'}\|^2$ and
$0 < \|\ket{\theta'}\|^2 \leq 1$. Using the
Cauchy-Schwarz inequality, we see that
\begin{displaymath}
|\braket{\phi}{\theta}|^2 \|\ket{\theta'}\|^2 =
|\braket{\phi}{\theta'}|^2 =
\frac{\left| \sum_{i=1}^n \lambda_i \braket{\alpha_i}{\psi}
\braket{\phi}{\alpha_i}\right|^2}
{\sum_{i=1}^n \lambda_i |\braket{\alpha_i}{\psi}|^2} \leq
\sum_{i=1}^n \lambda_i |\braket{\alpha_i}{\phi}|^2 = q.
\end{displaymath}
Thus,
\begin{displaymath}
\frac{p}{2^{k'/p}} =
\frac{|\braket{\psi}{\theta}|^2 \|\ket{\theta'}\|^2}
{2^{k'/(|\braket{\psi}{\theta}|^2 \|\ket{\theta'}\|^2)}} \leq
\frac{|\braket{\psi}{\theta}|^2 \|\ket{\theta'}\|^2}
{2^{k'/|\braket{\psi}{\theta}|^2}}.
\end{displaymath}
Hence, it will suffice to show that there exists a purification
$\ket{\phi}$ of $\sigma$ in ${\cal H} \otimes {\cal K}$ such that
\begin{displaymath}
|\braket{\phi}{\theta}|^2 \geq \frac{|\braket{\psi}{\theta}|^2}
{2^{k'/|\braket{\psi}{\theta}|^2}}.
\end{displaymath}
Define the density matrix $\tau$ in ${\cal H}$ as
$\tau := \parttr{{\cal K}} \ketbra{\theta}$.
By Facts~\ref{fact:jozsa} and \ref{fact:fuchscaves}, there is a
purification $\ket{\phi}$ of $\sigma$ in ${\cal H} \otimes {\cal K}$ and a
POVM $\{F_1, \ldots, F_l\}$ in ${\cal H}$ such that,
\begin{displaymath}
|\braket{\phi}{\theta}| =
B(\tau, \sigma) = \sum_{i=1}^l \sqrt{c_i b_i},
\end{displaymath}
where $c_i := \mbox{{\rm Tr} } (F_i \tau)$ and $b_i := \mbox{{\rm Tr} } (F_i \sigma)$.
Let $a_i := \mbox{{\rm Tr} } (F_i \rho)$. We know from
Facts~\ref{fact:jozsa} and \ref{fact:fuchscaves} that
\begin{displaymath}
0 < \sqrt{p} \leq |\braket{\psi}{\theta}| \leq
B(\tau, \rho) \leq \sum_{i=1}^l \sqrt{c_i a_i}.
\end{displaymath}
Note that the $a_i$'s are non-negative real numbers summing up to $1$,
and so are the $b_i$'s and the $c_i$'s.
For $\beta > 1$, define the set
$
S_\beta :=
\left\{i \in [l]: a_i > b_i \cdot 2^{\beta k/B(\tau, \rho)^2}\right\}$,
where $k := D(\rho \| \sigma)$.
Note that $b_i \neq 0$ for all $i \in S_\beta$, since ${\rm supp}(\rho) \subseteq
{\rm supp}(\sigma)$, $k$ being finite.
Define the POVM element $G$ on ${\cal H}$ as
$G := \sum_{i \in S_\beta} F_i$.
Let $a := \mbox{{\rm Tr} } (G \rho)$ and $b := \mbox{{\rm Tr} } (G \sigma)$. Then
$a = \sum_{i \in S_\beta} a_i$, $b = \sum_{i \in S_\beta} b_i$, $b > 0$ and
$a > b \cdot 2^{\beta k/B(\tau, \rho)^2}$. We have that
\begin{displaymath}
D(\rho \| \sigma) = k \geq a \log \frac{a}{b}
> \frac{\beta k a}{B(\tau, \rho)^2}
\Rightarrow
a < \frac{B(\tau, \rho)^2}{\beta}.
\end{displaymath}
Now, by the Cauchy-Schwarz inequality and the other inequalities
proved above, we get
\begin{eqnarray*}
B(\tau, \rho) & \leq & \sum_{i=1}^l \sqrt{c_i a_i}
\; = \; \sum_{i \in S_\beta} \sqrt{c_i a_i} +
\sum_{i \not \in S_\beta} \sqrt{c_i a_i} \\
& \leq & \sqrt{\sum_{i \in S_\beta} c_i}
\sqrt{\sum_{i \in S_\beta} a_i} +
2^{\beta k/(2 B(\tau, \rho)^2)}
\sum_{i \not \in S_\beta} \sqrt{c_i b_i}
\;\leq\; 1 \cdot \sqrt{a} + 2^{\beta k/(2 B(\tau, \rho)^2)}
B(\tau, \sigma) \\
& < & \frac{B(\tau, \rho)}{\sqrt{\beta}} +
2^{\beta k/(2 B(\tau, \rho)^2)} B(\tau, \sigma).
\end{eqnarray*}
This shows that
\begin{displaymath}
B(\tau, \rho)^2 < (1 - \beta^{-1/2})^{-2} \cdot
2^{\beta k/B(\tau, \rho)^2} B(\tau, \sigma)^2
\Rightarrow
|\braket{\psi}{\theta}|^2 < (1 - \beta^{-1/2})^{-2} \cdot
2^{\beta k/|\braket{\psi}{\theta}|^2}
|\braket{\phi}{\theta}|^2.
\end{displaymath}
Since $k' = \beta k - 2 \log (1 - \beta^{-1/2})$, we get
$
|\braket{\phi}{\theta}|^2 \geq \frac{|\braket{\psi}{\theta}|^2}
{2^{k'/|\braket{\psi}{\theta}|^2}},
$
completing the proof of the lemma.
\end{proof}
In the previous lemma, the purification $\ket{\phi}$ of $\sigma$
was a function of the POVM element $F$. We now prove a lemma which,
for any fixed $0 \leq p \leq 1$,
removes the dependence on $F$ satisfying $\mbox{{\rm Tr} } (F \ketbra{\psi}) \geq p$,
at the expense of producing an extension of $\sigma$ that is in general
mixed, rather than a pure extension, i.e. a purification.
\begin{lemma}
\label{lem:lifting2}
Consider two Hilbert spaces ${\cal H}$ and ${\cal K}$,
$\dim({\cal K}) \geq \dim({\cal H})$.
Let $\rho, \sigma$ be density matrices in ${\cal H}$ and
$\ket{\psi}$ be a purification of $\rho$ in ${\cal H} \otimes {\cal K}$.
Let $0 \leq p \leq 1$ and $\beta > 1$. Then
there exists an extension $\omega$ of $\sigma$ in ${\cal H} \otimes {\cal K}$
such that for all
POVM elements $F$ on ${\cal H} \otimes {\cal K}$ such that
$\mbox{{\rm Tr} } (F \ketbra{\psi}) \geq p$,
$\mbox{{\rm Tr} } (F \omega) \geq p/2^{k'/p}$,
where $k' := \beta D(\rho \| \sigma) - 2 \log (1 - \beta^{-1/2})$.
\end{lemma}
\begin{proof}
We assume without loss of
generality that $0 < D(\rho \| \sigma) < +\infty$ and that $p > 0$.
Consider the set $A_1$ of all extensions $\omega$ of $\sigma$ in
${\cal H} \otimes {\cal K}$ and the set $A_2$ of all POVM elements $F$ on
${\cal H} \otimes {\cal K}$ such that $\mbox{{\rm Tr} } (F \ketbra{\psi}) \geq p$.
Observe that
$A_1$, $A_2$ are non-empty, compact, convex sets; in particular,
$A_2$ contains the identity operator since $p \leq 1$.
The conditions of Fact~\ref{fact:minimax} are trivially satisfied
(note that we think
of our matrices, which in general have
complex entries, as vectors in a larger real vector space). By
Lemma~\ref{lem:lifting1}, for every $F \in A_2$, there is a purification
$\ket{\phi^F} \in {\cal H} \otimes {\cal K}$
of $\sigma$ such that
\begin{displaymath}
\mbox{{\rm Tr} } \left(F \ketbra{\phi^F}\right) \geq
\frac{\mbox{{\rm Tr} } \left(F \ketbra{\psi}\right)}
{2^{k'/\mbox{{\rm Tr} } \left(F \ketbra{\psi}\right)}} \geq
\frac{p}{2^{k'/p}}.
\end{displaymath}
Using Fact~\ref{fact:minimax}, we see that there exists an extension
$\omega$ of $\sigma$ in ${\cal H} \otimes {\cal K}$
such that
$\mbox{{\rm Tr} } (F \omega) \geq \frac{p}{2^{k'/p}}$
for all $F \in A_2$. This completes the proof.
\end{proof}
The previous lemma depends upon the parameter $p$. We now remove
this restriction by performing a `discrete integration' operation and
obtain an observational divergence `lifting' result, which may
be of independent interest.
\begin{lemma}[Observational divergence lifting]
\label{lem:lifting3}
Consider two Hilbert spaces ${\cal H}, {\cal K}$,
$\dim({\cal K}) \geq \dim({\cal H})$.
Let $\rho, \sigma$ be density matrices in ${\cal H}$, and
$\ket{\psi}$ be a purification of $\rho$ in ${\cal H} \otimes {\cal K}$.
Then there exists an extension $\omega$ of $\sigma$ in ${\cal H} \otimes {\cal K}$
such that
$
D(\left(\ketbra{\psi}\right) \| \omega)
< D(\rho \| \sigma) + 4 \sqrt{D(\rho \| \sigma) + 1} +
2 \log (D(\rho \| \sigma) + 1) + 4.
$
\end{lemma}
\begin{proof}
We assume without loss of
generality that $0 < D(\rho \| \sigma) < +\infty$.
Let $\beta > 1$ and $\gamma \geq 1$.
Define the monotonically increasing
function $f: [0,1] \rightarrow [0,1]$ as follows:
\begin{displaymath}
f(p) := \frac{p}{2^{k'/p}}
~~~ {\rm where} ~~~
0 \leq p \leq 1
~~~ {\rm and} ~~~
k' := \beta D(\rho \| \sigma) - 2 \log (1 - \beta^{-1/2}).
\end{displaymath}
For a fixed positive integer $l$, define
$T_\gamma(l) := \sum_{i=1}^l i^{\gamma-1}$. It is easy to
see by elementary calculus that
$
\gamma^{-1} \cdot l^\gamma
\leq T_\gamma(l)
\leq \gamma^{-1} \cdot (l+1)^\gamma.
$
Define
the density matrix $\omega_l$ in ${\cal H} \otimes {\cal K}$ as
$\omega_l := (T_\gamma(l))^{-1} \sum_{i=1}^l i^{\gamma-1}\omega(i/l)$,
where for $0 \leq p \leq 1$, $\omega(p)$ is an extension of $\sigma$ in
${\cal H} \otimes {\cal K}$ such that
$\mbox{{\rm Tr} } (F \omega(p)) \geq f(p)$ for all POVM elements $F$ on
${\cal H} \otimes {\cal K}$ satisfying $\mbox{{\rm Tr} } (F \ketbra{\psi}) \geq p$.
Such an $\omega(p)$ exists by Lemma~\ref{lem:lifting2}.
Then, $\parttr{{\cal K}} \omega_l = \sigma$ i.e.
$\omega_l$ is an extension of $\sigma$ in ${\cal H} \otimes {\cal K}$.
Suppose $F$ is
a POVM element on ${\cal H} \otimes {\cal K}$. Let
$j/l \leq p := \mbox{{\rm Tr} } (F \ketbra{\psi}) < (j+1)/l$, where
$0 \leq j \leq l$. We assume without loss of generality that
$p > 0$. Then,
\begin{eqnarray*}
\mbox{{\rm Tr} } (F \omega_l)
& \geq & \frac{1}{T_\gamma(l)}
\sum_{i=1}^j i^{\gamma-1} \cdot \mbox{{\rm Tr} } (F \omega(i/l))
\;\geq\; \frac{1}{T_\gamma(l)} \sum_{i=1}^j i^{\gamma-1} \cdot f(i/l) \\
& \geq & \frac{T_\gamma(j)}{T_\gamma(l)} \cdot
f\left(\frac{1}{T_\gamma(j)} \sum_{i=1}^j \frac{i^{\gamma}}{l}
\right)
\; = \; \frac{T_\gamma(j)}{T_\gamma(l)} \cdot
f\left(\frac{T_{\gamma+1}(j)}{l \cdot T_\gamma(j)}\right) \\
& \geq & \left(\frac{j}{l+1}\right)^\gamma \cdot
f\left(\frac{\gamma \cdot j^{\gamma+1}}
{l (\gamma+1) \cdot (j+1)^\gamma}
\right) \\
& \geq & \left(\frac{pl-1}{l+1}\right)^\gamma \cdot
f\left(\left(\frac{\gamma (pl-1)}{(\gamma+1)l}\right)
\left(\frac{pl-1}{pl+1}\right)^\gamma
\right).
\end{eqnarray*}
The second inequality above follows from the convexity of $f(\cdot)$.
By compactness, the set $\{\omega_l: l \in \mathbb N\}$ has limit points.
Choose a limit point $\omega$. By standard continuity arguments,
$\parttr{{\cal K}} \omega = \sigma$ and
\begin{eqnarray*}
q
& := & \mbox{{\rm Tr} } (F \omega)
\; \geq \; \lim_{l \rightarrow +\infty}
\left[
\left(\frac{pl-1}{l+1}\right)^\gamma \cdot
f\left(\left(\frac{\gamma (pl-1)}{(\gamma+1)l}\right)
\left(\frac{pl-1}{pl+1}\right)^\gamma
\right)
\right]
\; = \; p^\gamma \cdot
f\left(\frac{\gamma p}{\gamma+1}\right) \\
& = & \frac{\gamma \cdot p^{\gamma+1}}
{(\gamma+1) \cdot
2^{k'(\gamma+1) \gamma^{-1} p^{-1}}
}.
\end{eqnarray*}
Hence, $q > 0$ and
\begin{eqnarray*}
p \log \frac{p}{q}
& \leq & p \log
\left(\gamma^{-1} (\gamma+1) \cdot p^{-\gamma} \cdot
2^{k'(\gamma+1) \gamma^{-1} p^{-1}}
\right)
\; = \; p \log (1 + \gamma^{-1}) - \gamma p \log p +
(1 + \gamma^{-1}) k' \\
& < & (1 + \gamma^{-1}) k' + \gamma + 1.
\end{eqnarray*}
The second inequality follows because $- p \log p < 1$ for
$0 \leq p \leq 1$, and $\log (1 + \gamma^{-1}) \leq 1$ for all
$\gamma \geq 1$. Substituting
$k' = \beta D(\rho \| \sigma) - 2 \log (1 - \beta^{-1/2})$ gives
\[
D(\left(\ketbra{\psi}\right) \| \omega)
< \beta (1 + \gamma^{-1}) D(\rho \| \sigma) -
2 (1 + \gamma^{-1}) \log (1 - \beta^{-1/2}) + \gamma + 1.
\]
We set $\beta = (1 + (D(\rho \| \sigma)+1)^{-1/2})^2$ and
$\gamma = (D(\rho \| \sigma)+1)^{1/2}$ to get
\begin{eqnarray*}
D(\left(\ketbra{\psi}\right) \| \omega)
& < & (1 + (D(\rho \| \sigma)+1)^{-1/2})^2 \cdot
(1 + (D(\rho \| \sigma)+1)^{-1/2}) \cdot D(\rho \| \sigma) \\
& & {} +
(1 + (D(\rho \| \sigma)+1)^{-1/2}) \cdot
\log (D(\rho \| \sigma)+1) + (D(\rho \| \sigma)+1)^{1/2} + 1 \\
& < & D(\rho \| \sigma) + 4 \sqrt{D(\rho \| \sigma)+1} +
(1 + (D(\rho \| \sigma)+1)^{-1/2}) \cdot
\log (D(\rho \| \sigma)+1) + 4 \\
& < & D(\rho \| \sigma) + 4 \sqrt{D(\rho \| \sigma)+1} +
2 \log (D(\rho \| \sigma)+1) + 4.
\end{eqnarray*}
This completes the proof of the lemma.
\end{proof}
Lemma~\ref{lem:lifting3} relates the observational
divergence of a pair of density matrices
to the observational
divergence of their extensions in an extended Hilbert space,
where the extension
of the first density matrix is a pure state. Using this,
we are now finally in a position to prove
the quantum substate theorem.
\bigskip
\noindent{\bf Proof (Theorem~\ref{thm:substate}):}
By Proposition~\ref{prop:divrelentropy} and Lemma~\ref{lem:lifting3},
there exists a density matrix $\omega$ in ${\cal H} \otimes {\cal K}$ such that
$\parttr{{\cal K}} \omega = \sigma$ and
\begin{eqnarray*}
D\left(\left(\ketbra{\psi}\right) \| \, \omega\right)
& < & D(\rho \| \sigma) + 4 \sqrt{D(\rho \| \sigma)+1} +
2 \log (D(\rho \| \sigma)+1) + 4 \\
& < & S(\rho \| \sigma) + 4 \sqrt{S(\rho \| \sigma) + 2} +
2 \log (S(\rho \| \sigma)+2) + 5
\;=\;k'.
\end{eqnarray*}
By Lemma~\ref{lem:twobytwo}, there exists a pure state $\ket{\phi}$
such that
\begin{displaymath}
\trnorm{\ketbra{\psi} - \ketbra{\phi}} \leq \frac{2}{\sqrt{r}}
~~~ {\rm and} ~~~
\left(\frac{r-1}{r 2^{rk'}}\right) \ketbra{\phi} \leq \omega.
\end{displaymath}
Let $\tau_1 := \parttr{{\cal K}} \ketbra{\phi}$. By the above,
$
\left(\frac{r-1}{r 2^{rk'}}\right) \tau_1 \leq \sigma.
$
That is,
there exists a density matrix $\tau_2$ in ${\cal H}$ such that
\begin{displaymath}
\sigma = \left( \frac{r-1}{r2^{rk'}} \right) \tau_1 +
\left( 1 - \frac{r-1}{r2^{rk'}} \right) \tau_2.
\end{displaymath}
Let $\ket{\theta} \in {\cal H} \otimes {\cal K}$ be a canonical purification
of $\tau_2$. Then, $\ket{\zeta}$ defined in the statement of
Theorem~\ref{thm:substate} is a purification
of $\sigma$ in ${\cal H} \otimes {\cal K} \otimes \mathbb C^2$.
This completes the proof of Theorem~\ref{thm:substate}.
\qed
\section{Conclusion and open problems}
\label{sec:conclusion}
In this paper we have proved a theorem about relative entropy of
quantum states which gives a novel interpretation to this
information theoretic quantity. Using this theorem, we have shown a
privacy trade-off for computing set membership in the two-party
quantum communication model.
The statements of the classical and quantum substate theorems have
one important difference. For two quantum states $\rho$, $\sigma$
with $S(\rho \| \sigma) = k$, the distance between $\rho$ and $\rho'$,
where $\rho'/2^{O(k)} \leq \sigma$, is less in the classical case
than in the quantum case. More formally,
the dependence on $r$ in Theorem~\ref{thm:substate} is
$O(1/\sqrt{r})$ whereas in the classical analogue,
Result~\ref{res:substate}', the dependence is like $O(1/r)$. The better
dependence in the classical scenario enables us to prove a kind
of converse to the classical substate theorem, which is outlined in
the appendix. It will
be interesting to see if the dependence in the quantum setting
can be improved to match the classical case, enabling us to prove a
similar quantum converse.
Another open question is if there is an alternate
proof for the quantum substate theorem which
does not go through observational divergence lifting.
Finally, it will also be interesting to find yet more applications
of the classical and quantum substate theorems.
\subsection*{Acknowledgements}
We are very grateful to Ashwin Nayak for his contribution to this
work. He patiently went through several versions of our proofs; his
counter examples and insights were invaluable in arriving at our
definition of privacy. We are also grateful to K.~R.~Parthasarathy and
Rajendra Bhatia for sharing with us their insights in
operator theory.
\newcommand{\etalchar}[1]{$^{#1}$}
Given a graph $G$ and a seed node in that graph, a local graph clustering algorithm finds a good small cluster that contains the seed node without looking at the whole graph~\cite{ACL06,SH13}. Because the graphs arising from modern applications are massive in size and yet are rich in small-scale local structures~\cite{LLDM09,Jeub15}, local graph clustering has become an important scalable tool for probing large-scale graph datasets with a wide range of applications in machine learning and data analytics~\cite{G15,FWY20,MS2021}.
Traditional local graph clustering algorithms primarily focus on the structural properties of a graph dataset, i.e. nodes and edges, and consequently the analyses of these algorithms are often concerned with the combinatorial properties of the output cluster. For example, in most previous studies one is interested in the conductance of a cluster and define a good cluster as one that has low conductance~\cite{SH13,ACL06,AP09,ALM13,AGPT2016,PKDJ17,WFHM2017,FWY20,LG20}. In this case, the objective of local graph clustering is thus detecting a low conductance cluster around the seed. With the increasing availability of multi-modal datasets, it is now very common for a graph dataset to contain additional sources of information such as node attributes, which may prove to be crucial for correctly identifying clusters with rather noisy edge connections. However, nearly all existing local graph clustering algorithms do not work with attributed graphs. Moreover, in the presence of node attributes, the objective and analysis of a local graph clustering algorithm should also adjust to take into account both sources of information (i.e. graph structure and attributes) as opposed to focus solely on the combinatorial notion of conductance.
\subsection{Our contributions}
We propose a simple local graph clustering algorithm that simultaneously considers both graph structural and node attribute information. We analyze the performance of the proposed algorithm from a statistical perspective where we assume that the target cluster and the node attributes have been generated from a random data model. We provide conditions under which the algorithm is guaranteed to fully recover the target cluster with bounded false positives.
Our local graph clustering algorithm uses the recently proposed flow diffusion model on graphs~\cite{FWY20,CPW21}. The original flow diffusion was proposed to solve the local graph clustering problem on unweighted graphs without node attributes. In this work we consider flow diffusion on weighted graphs where the edge weights are designed to reflect the proximity between node attributes. A distinct characteristic of the proposed algorithm is its simplicity and flexibility. On one hand, the algorithm has few hyperparameters and thus it does not require much tuning; while on the other hand, it allows flexible initialization of source mass and sink capacities, which enables us to obtain different types of recovery guarantees.
Our main contribution is the analyses of the algorithm for the recovery of a target cluster with a single seed node. We provide high probability guarantees on the performance of the algorithm under a certain type of contextual random graph model. The data model we consider is fairly general. On the structural side, it only concerns the connectivity of nodes within the target cluster and its adjacent nodes, and hence it encompasses the stochastic block model (SBM) and the planted cluster model as special cases; on the node attribute side, it allows an attribute to be modelled by a sub-Gaussian random variable, and this includes Gaussian, uniform, Bernoulli, and any discrete or continuous random variables over a finite domain. Depending on a signal-to-noise ratio of the node attributes, we present two recovery results. Informally, if we have very good node attributes, then with overwhelming probability the algorithm fully recovers the target cluster with nearly zero false positives, irrespective of the internal connectivity of the target cluster (as long as it is connected); on the other hand, if we have good, but not too good, node attributes, then with overwhelming probability the algorithm fully recovers the target cluster, with the size of the false positives jointly controlled by both the combinatorial conductance of the target cluster and the signal-to-noise ratio of the node attributes.
Finally, we carry out experiments on synthetic data to verify all theoretical claims and on real-world data to demonstrate the advantage of incorporating node attributes.
\subsection{Previous work}
The local graph clustering problem is first introduced by \cite{SH13} where the authors proposed a random-walk based algorithm with early termination. Later \cite{ACL06} studied the same problem using approximate personalized PageRank vectors. There is a long line of work on local graph clustering where the analysis of the algorithm concerns the conductance of the output cluster~\cite{ACL06,AP09,SH13,ALM13,AGPT2016,PKDJ17,WFHM2017,FWY20,LG20}. The first statistical analysis of local graph clustering is considered in \cite{HFM2021} where the authors analyze the average-case performance of the $\ell_1$-regularized PageRank~\cite{FKSCM2017} over a random data model. None of these works studies local clustering in attributed graphs.
The idea to utilize both structural and node attribute information has been applied in the context of community detection, where the goal is to detect all clusters in a graph~\cite{YML13,jia2017node,zhe2019community,sun2020network}. These methods require processing the whole graph and hence are not suitable for local graph clustering.
Recently, contextual random graph models have been used in the literature for analyzing the performance of certain algorithms for attributed graphs. \cite{DSMM18,YS2021,BTB22,AFW22} study algorithms for community detection in the contextual stochastic block model (CSBM). \cite{BFJ2021,FLYBJ22,BFJ23} analyze the separability of nodes in CSBM by functions that are representable by graph neural networks. The random model we consider in this work is more general, and we are the first to consider the statistical performance of a local graph clustering algorithm in contextual random models.
\section{Weighted flow diffusion and local graph clustering with node attributes}\label{sec:formulation}
In this section, we start by providing an overview of flow diffusion on graphs, describing its physical interpretation as spreading mass in a graph along edges, and discussing some important algorithmic properties.
Then, we present an algorithm that uses edge-weighted flow diffusion for local graph clustering with node attributes.
\subsection{Notations and basic properties of flow diffusion}
We consider an undirected, weighted and connected graph $G = (A,W)$ which consists of $n$ nodes and $m$ edges, where $A \in \{0,1\}^{n \times n}$ is the combinatorial adjacency matrix, i.e., $A_{ij} = 1$ if node $i$ is adjacent to node $j$ and 0 otherwise, and $W \in \mathbb{R}^{m \times m}$ is a diagonal matrix of edge weights. We write $w_{ij} = W_{(i,j),(i,j)}$ for the weight of an edge $(i,j)$. If $W=I$ then $G$ reduces to an unweighted graph. We denote $V = \{1,2,\ldots,n\}$ as the set of nodes and $E$ as the set of edges, and write $i \sim j$ if $(i,j) \in E$. The combinatorial degree $\deg_G(i)$ of a node $i \in V$ is the number of edges incident to it. For a subset $C \subseteq V$, the volume of $C$ is given by $\mathrm{vol}_G(C) = \sum_{i \in C}\deg_G(i)$. We use subscripts to indicate the graph we are working with, and we omit them when the graph is clear from context. We denote $B \in \mathbb{R}^{m \times n}$ as the combinatorial signed incidence matrix under an arbitrary orientation of the graph, where the row that corresponds to the oriented edge $(i,j)$ has two nonzero entries, with $-1$ at column $i$ and $1$ at column $j$. The support of a vector $x$ is $\mathrm{supp}(x) = \{i: x_i \neq 0\}$. We use standard notations $O_n,\Omega_n,\Theta_n,o_n,\omega_n$ for asymptotic behaviors of a function with respect to $n$, and we omit the subscript when it is clear.
Given a source vector $\Delta \in \mathbb{R}^n$ and a sink capacity vector $T \in \mathbb{R}^n$, a flow diffusion on $G$ is formulated as the following optimization problem:
\begin{equation}\label{eq:primal}
\min_f \frac{1}{2}f^TWf \quad \mbox{s.t.} \; \Delta + B^TWf \le T,
\end{equation}
where $W$ is restricted to be the identity matrix in the original formulation~\cite{FWY20}. The flow variables $f \in \mathbb{R}^m$ determine the amount of mass that moves between nodes $i$ and $j$ for every edge $(i,j) \in E$. More precisely, $w_{ij}f_{ij}$ specifies the amount of mass that travels along $(i,j)$. We abuse the notation and use $f_{ij} = -f_{ji}$ for an edge $(i,j)$, so $w_{ij}f_{ij}$ is the amount of mass that moves from node $i$ to node $j$.
In a flow diffusion, we assign $\Delta_i$ source mass to node $i$ and enforce a constraint that node $i$ can hold up to $T_i$ mass. Because one may always scale $\Delta$ and $T$ by the same constant, we assume without loss of generality that $T_i \ge 1$ for all $i$. If $T_i > \Delta_i$ at some node $i$, then we need to spread the source mass along edges in the graph to satisfy the capacity constraint. The vector $\Delta+B^TWf$ measures the final mass at each node if we spread the mass according to $f$. Therefore, the goal of the flow diffusion problem~\eqref{eq:primal} is to find a feasible way to spread the mass while minimizing the cost of flow $f^TWf$. In this work we allow different edge weights as long as they are positive, i.e., $W$ consists of positive diagonal entries. In the context of flow diffusion, edge weights define the efficiencies at which mass can spread over edges. To see this, simply note that $w_{ij}f_{ij}$ determines the amount of mass that moves along the edge $(i,j)$, and thus for fixed $f_{ij}$, the higher $w_{ij}$ is, the more mass we can move along $(i,j)$.
For local graph clustering, it is usually more convenient to consider the dual problem of \eqref{eq:primal}:
\begin{equation}\label{eq:dual}
\min_{x\ge0} \frac{1}{2}x^TLx + x^T(T - \Delta)
\end{equation}
where $L = B^TWB$ is the weighted Laplacian matrix of $G$. Throughout this work we use $f^*$ and $x^*$ to denote the optimal solutions of \eqref{eq:primal} and \eqref{eq:dual}, respectively. The solution $x^* \in \mathbb{R}^n_+$ embeds the nodes on the nonnegative real line. In \cite{FWY20} the authors apply a sweep-cut rounding procedure for local graph clustering without node attributes, and they obtain a combinatorial guarantee in terms of the conductance of a cluster. In this work, in the presence of node attributes which may come from some unknown distributions, we take a natural statistical perspective and show how $\mathrm{supp}(x^*)$ recovers a target cluster generated from a contextual random graph model.
In order to compute the solution to \eqref{eq:dual} one may extend the iterative coordinate method in \cite{FWY20} to work with weighted edges. We lay out the algorithmic steps in Algorithm~\ref{alg:opt}, where we describe each coordinate-wise gradient update (i.e., $\texttt{push}(i)$) using its combinatorial interpretation as spreading mass from a node to its neighbors. In Algorithm~\ref{alg:opt}, $m_i$ represents the current mass at node $i$. At every iteration, we pick a node $i$ whose current mass $m_i$ exceeds its capacity $T_i$, and we remove the excess amount $m_i-T_i$ by sending it to the neighbors. Algorithm~\ref{alg:opt} may be viewed as an equivalent algorithmic form of flow diffusion since the iterates converge to $x^*$~\cite{FWY20}. An important property of Algorithm~\ref{alg:opt} is that it updates $x_i$ only if $x^*_i > 0$, and it updates $m_j$ only if $j \sim i$ for some $i$ such that $x^*_i > 0$. This means that the algorithm will not explore the whole graph if $x^*$ is sparse, which is usually the case in applications such as local clustering and node ranking. We state this {\em local} property in Proposition~\ref{prop:local} and provide a running time bound in Proposition~\ref{prop:runtime}. Both propositions can be proved by simply including edge weights in the original arguments from \cite{FWY20} and our assumption that $T_i \ge 1$ for all $i$.
\begin{algorithm}[tb]
\caption{Flow diffusion (algorithmic form)}
\label{alg:opt}
\begin{algorithmic}
\STATE \hspace{-3mm}{\bfseries Input:} graph $G$, source $\Delta$ and sink $T$
\begin{enumerate}[leftmargin=.2cm,noitemsep,nolistsep]
\item Initialize $x_i = 0$ and $m_i = \Delta_i$ for all $i \in V$.
\item For $t = 1,2,\ldots$ do
\begin{enumerate}[label=(\alph*),leftmargin=.5cm,noitemsep,nolistsep]
\item Pick $i \in \{j : m_j > T_j\}$ uniformly at random.
\item Apply $\texttt{push}(i)$.
\end{enumerate}
\item Return $x$.
\end{enumerate}
\end{algorithmic}
\begin{algorithmic}
\algrule
\STATE \hspace{-3mm}$\texttt{push}(i)$:
\algrule
\STATE \hspace{-3mm}Make the following updates:
\begin{enumerate}[leftmargin=.2cm,noitemsep,nolistsep]
\item $x_i \gets x_i + (m_i-T_i)/w_i$ where $w_i = \sum_{j\sim i}w_{ij}$.
\item For each node $j \sim i$: $m_j \gets m_j+ (m_i-T_i)w_{ij}/w_i$.
\item $m_i \gets T_i$.
\end{enumerate}
\end{algorithmic}
\end{algorithm}
\begin{proposition}[\cite{FWY20}]\label{prop:local}
Let $x^t$ for $t \ge 1$ be iterates generated by Algorithm~\ref{alg:opt}, then $\mathrm{supp}(x^t) \subseteq \mathrm{supp}(x^*)$. Moreover, $|\mathrm{supp}(x^*)| \le \|\Delta\|_1$.
\end{proposition}
\begin{proposition}[\cite{FWY20}]\label{prop:runtime}
Assuming $|\mathrm{supp}(x^*)| < n$, then after $\tau = O(\|\Delta\|_1\frac{\alpha}{\beta}\log \frac{1}{\epsilon})$ iterations, where $\alpha = \max_{i \in \mathrm{supp}(x^*)} w_i$ with $w_i = \sum_{j \sim i}w_{ij}$, and $\beta = \min_{(i,j) \in \mathrm{supp}(Bx^*)}w_{ij}$, one has $\mathbb{E}[F(x^{\tau})] - F(x^*) \le \epsilon$, where $F$ denotes the objective function of \eqref{eq:dual}.
\end{proposition}
Let $\bar{d}$ denote the maximum degree of a node in $\mathrm{supp}(x^*)$. Since each iteration of Algorithm~\ref{alg:opt} only touches a node $i \in \mathrm{supp}(x^*)$ and its neighbors, Proposition~\ref{prop:local} implies that the total number of nodes that Algorithm~\ref{alg:opt} will ever look at (excluding neighbors $j$ with $x^*_j = 0$) is upper bounded by the total amount of source mass $\|\Delta\|_1$. Therefore, if the source mass is small and $\bar{d}$ does not scale linearly with $n$, then Algorithm~\ref{alg:opt} only explores the graph locally, and the size of the subgraph which Algorithm~\ref{alg:opt} explores is controlled by $\|\Delta\|_1$. Proposition~\ref{prop:runtime} implies that the total running time of Algorithm~\ref{alg:opt} for computing an $\epsilon$-accurate solution is $O(\bar{d}\|\Delta\|_1\frac{\alpha}{\beta}\log\frac{1}{\epsilon})$. Therefore, if $\bar{d}, \|\Delta\|_1, \frac{\alpha}{\beta}$ are all sublinear in $n$, then Algorithm~\ref{alg:opt} takes sublinear time.
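For concreteness, the following Python sketch mirrors Algorithm~\ref{alg:opt} on an adjacency-list representation of a weighted graph; the stopping tolerance, the variable names, and the toy example are our own illustrative choices and not part of the algorithm's specification.
\begin{verbatim}
import random

def flow_diffusion(adj, Delta, T, max_iters=100000, tol=1e-6):
    # adj[i]: list of (neighbor, weight) pairs; Delta, T: lists of floats.
    n = len(adj)
    x = [0.0] * n
    m = list(Delta)                       # current mass at each node
    for _ in range(max_iters):
        active = [i for i in range(n) if m[i] > T[i] + tol]
        if not active:
            break                         # every node satisfies its capacity
        i = random.choice(active)         # step 2(a): pick an overflowing node
        w_i = sum(w for _, w in adj[i])   # weighted degree of node i
        excess = m[i] - T[i]
        x[i] += excess / w_i              # push(i), step 1
        for j, w in adj[i]:               # push(i), step 2: spread the excess
            m[j] += excess * w / w_i      # proportionally to the edge weights
        m[i] = T[i]                       # push(i), step 3
    return x

# Toy example: path graph 0-1-2, unit weights, 3 units of mass seeded at node 0.
adj = [[(1, 1.0)], [(0, 1.0), (2, 1.0)], [(1, 1.0)]]
print(flow_diffusion(adj, Delta=[3.0, 0.0, 0.0], T=[1.0, 1.0, 1.0]))
\end{verbatim}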
\subsection{Local clustering with node attributes}
In local graph clustering, we are given a seed node $s \in V$ and the goal is to identify a good cluster that contains the seed. Existing methods mostly focus on the setting where one only has access to the structural information, i.e. nodes and edges of a graph, and they do not take into account node attributes. However, it is reasonable to expect that informative node attributes should help improve the performance of a local clustering algorithm. For example, the original flow diffusion solves the local graph clustering problem by spreading source mass from the seed node to nearby nodes, and an output cluster is obtained based on where in the graph the mass diffuses to~\cite{FWY20}. In this case, node attributes may be used to guide the spread of mass so that more mass is trapped inside the ground-truth target cluster, and consequently, improve the accuracy of the algorithm.
The idea to guide the diffusion by using node attributes can be easily realized by relating edge weights to node attributes, which we describe next. Given a graph $G = (A,W)$ with a set of node attributes $X_i \in \mathbb{R}^d$ for $i \in V$, and given a seed node $s$ from an unknown target cluster $K$, the goal is to recover $K$. To do so, we construct a new graph $G' = (A,W')$ having the same structure but new edge weights $w_{ij}' = h(w_{ij}, X_i,X_j)$, where $h$ is some function chosen to improve the diffusion. For example, one may set $h(w_{ij}, X_i,X_j) = w_{ij}\rho(X_i,X_j)$ where $\rho(X_i,X_j)$ measures the proximity between $X_i$ and $X_j$. This means that, for a flow diffusion in $G'$, if two adjacent nodes $i$ and $j$ have similar attributes, then it is easier to send a lot of mass along the edge $(i,j)$. In particular, when one removes the excess mass from a node $i$ by sending it to the neighbors, the amount of mass that a neighbor $j$ receives is proportional to $w'_{ij}$ (cf. $\texttt{push}(i)$), and hence more mass will be sent to a neighbor whose attributes are close to those of node $i$. Therefore, if nodes within the target cluster $K$ share similar attributes, then a flow diffusion in $G'$, which starts from a seed node $s\in K$, would force more mass to spread within $K$ than a flow diffusion in the original graph $G$.
In this work, we focus on a particular choice of $h$ given by $w_{ij}' = h(w_{ij}, X_i,X_j) = w_{ij}\exp(-\gamma \|X_i-X_j\|_2^2)$ where $\gamma\ge0$ is a hyperparameter. For simplicity we will assume that $w_{ij} = 1$ for all $(i,j) \in E$. In this case, $h$ is the Gaussian kernel of node attributes and has proved useful in many applications, e.g., spectral clustering. In the next section we provide rigorous statistical guarantees on the performance of local graph clustering with node attributes by using the optimal solution of weighted flow diffusion~\eqref{eq:dual}, where edge weights are defined by the Gaussian kernel for an appropriately chosen $\gamma>0$. We summarize the local clustering procedure in Algorithm~\ref{alg:lgc}. As we show in the next section, suitable choices for $T$ include $T_i = 1$ or $T_i = \deg_G(i)$ for all $i$, and one may correspondingly set $\Delta_s = \alpha\sum_{i \in K}T_i$ for $\alpha > 1$ where $K$ is the target cluster.
\begin{algorithm}[tb]
\caption{Local graph clustering with node attributes}
\label{alg:lgc}
\begin{algorithmic}
\STATE \hspace{-3mm}{\bfseries Input:} unweighted graph $G=(A,I)$, node attributes $X_i$ for all $i \in V$, seed node $s\in V$, hyperparameter $\gamma \ge 0$.
\STATE \hspace{-3mm}{\bfseries Output:} a cluster $C \subseteq V$
\begin{enumerate}[leftmargin=.2cm,noitemsep,nolistsep]
\item Define weighted graph $G'=(A,W)$ whose edge weights are given by $w_{ij} = \exp(-\gamma\|X_i-X_j\|_2^2)$.
\item Set source mass $\Delta_s > 0$ and $\Delta_i = 0$ for $i \neq s$, set sink capacity $T_i$.
\item Run flow diffusion (Algorithm~\ref{alg:opt}) with input $G',\Delta,T$ and obtain output $x^{\tau}$.
\item Return $\mathrm{supp}(x^{\tau})$
\end{enumerate}
\end{algorithmic}
\end{algorithm}
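As an illustration of steps 1 and 2 of Algorithm~\ref{alg:lgc}, the following Python sketch computes the Gaussian-kernel edge weights; the edge-list representation and the value of $\gamma$ are assumptions made only for this example.
\begin{verbatim}
import math

def reweight_edges(edges, X, gamma):
    # edges: list of (i, j) pairs; X[i]: attribute vector of node i.
    w = {}
    for i, j in edges:
        d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
        w[(i, j)] = math.exp(-gamma * d2)  # w_ij = exp(-gamma ||X_i - X_j||^2)
    return w

X = [[0.0, 0.1], [0.1, 0.0], [2.0, 2.0]]
print(reweight_edges([(0, 1), (1, 2)], X, gamma=1.0))
# similar attributes give a weight near 1, dissimilar attributes near 0
\end{verbatim}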
\section{Statistical guarantees under contextual random graph model}
We assume that the node attributes and a target cluster are generated from the following random model.
\begin{definition}[Contextual local random model]\label{def:data_model}
Given a set of nodes $V$, let $K \subseteq V$ be a target cluster with cardinality $|K| = k$. For every pair of nodes $i$ and $j$, if $i,j \in K$ then we draw an edge $(i,j)$ with probability $p$; if $i \in K$ and $j \notin K$ then we draw an edge $(i,j)$ with probability $q$; otherwise, we allow any (deterministic or random) model to draw an edge. The node attributes $X_i$ for a node $i$ are given as $X_i = \mu_i + Z_i$, where $\mu_i \in \mathbb{R}^d$ is a fixed signal vector and $Z_i \in \mathbb{R}^d$ is a random noise vector whose $\ell$\textsuperscript{th} coordinate $Z_{i\ell}$ follows an independent, mean-zero sub-Gaussian distribution with variance proxy $\sigma_{\ell}$, i.e., for any $t \ge 0$ we have $\mathbb{P}(|Z_{i\ell}| \ge t) \le 2\exp(-\frac{t^2}{2\sigma_{\ell}^2})$. Though not necessary, to simplify the discussion we require $\mu_i=\mu_j$ for $i,j \in K$.
\end{definition}
This random model is fairly general. For example, if the edges that connect nodes in $V\backslash K$ have been generated from the SBM, $\mu_i = \mu_j$ for every $i,j$ that belong to the same block, and all $Z_i$'s follow the same isotropic Gaussian distribution, then we obtain the CSBM which has been extensively used in the analyses of algorithms for attributed graphs~\cite{DSMM18,BFJ2021,YS2021}. On the other hand, if the edges that connect nodes in $V\backslash K$ have been generated from the Erd\H{o}s--R\'enyi model with probability $q$, $\mu_i = \mu_j$ for $i,j \in K$ and $\mu_i = 0$ for $i \not\in K$, and all $Z_i$'s follow the same isotropic Gaussian distribution, then we obtain a natural coupling of the planted densest subgraph problem and the submatrix localization problem~\cite{CX2016}. In terms of modelling the noise of node attributes, sub-Gaussian distributions include Gaussian, Bernoulli, and any other continuous or discrete distribution over finite domains. Therefore the random model allows different types of coordinate-wise noise (and different levels of noise controlled by $\sigma_{\ell}$) which could depend on the nature of the specific attribute. For example, the noise of a continuous attribute may be Gaussian or uniform, whereas the noise of a binary-encoded categorical attribute may be Bernoulli.
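To make the model concrete, the following Python sketch samples one instance of it, specialized to Gaussian noise and to an Erd\H{o}s--R\'enyi background with probability $q$ (one admissible choice for the edges outside $K$); all numeric parameters below are illustrative.
\begin{verbatim}
import numpy as np

def sample_model(n, k, p, q, mu_in, mu_out, sigma, seed=0):
    rng = np.random.default_rng(seed)
    K = set(range(k))                     # target cluster = first k nodes
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            prob = p if (i in K and j in K) else q
            if rng.random() < prob:
                edges.append((i, j))
    # fixed signal mu_i, identical inside K and identical outside K
    mu = np.where(np.arange(n)[:, None] < k, mu_in, mu_out)
    X = mu + sigma * rng.standard_normal(mu.shape)   # X_i = mu_i + Z_i
    return edges, X, K

edges, X, K = sample_model(n=200, k=40, p=0.3, q=0.02,
                           mu_in=np.ones(5), mu_out=-np.ones(5), sigma=1.0)
\end{verbatim}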
In order for node attributes to provide useful information, nodes inside $K$ should have distinguishable attributes compared to nodes not in $K$. Denote
\begin{align*}
\hat\mu := \min_{i \in K, j\not\in K}\|\mu_i - \mu_j\|_2, \quad \hat{\sigma} := \max_{1 \le \ell \le d}\sigma_{\ell}.
\end{align*}
We make Assumption~\ref{assum:mu_sigma} which states that the relative signal $\hat\mu$ dominates the maximum coordinate-wise noise $\hat\sigma$, and that the sum of normalized noises does not grow faster than $\log n$. The latter assumption is easily satisfied, e.g., when the dimension $d$ of node attributes does not scale with the number of nodes $n$. In practice, when the set of available or measurable attributes is fixed a priori, one always has $d = O_n(1)$. This is particularly relevant in the context of local clustering where it is desirable to have sublinear algorithms, since if $d = \Omega(n)$ then even computing a single edge weight $w_{ij}$ would take time at least linear in $n$.
\begin{assumption}\label{assum:mu_sigma}
$\hat\mu = \omega(\hat\sigma\sqrt{\lambda\log n})$ for some $\lambda = \Omega_n(1)$; $\sum_{\ell=1}^d \sigma_{\ell}^2/\hat\sigma^2 = O(\log n)$.
\end{assumption}
Before we move on to discuss how exactly node attributes help to recover $K$, we need to talk about the signal and noise from the graph structure. For a node $i \in K$, the expected number of neighbors in $K$ is $p(k-1)$, and the expected number of neighbors not in $K$ is $q(n-k)$. Since mass spreads along edges, if there are too many edges connecting $K$ to $V\backslash K$, it may become difficult to prevent a lot of mass from spreading out of $K$. If too much of the mass that starts in $K$ leaks out of $K$, then $\mathrm{supp}(x^*)$ may have little overlap with $K$, and consequently Algorithm~\ref{alg:lgc} would have poor performance.
Fortunately, node attributes may be very helpful when the structural information is not strong enough, e.g., when $q(n-k) > p(k-1)$. As discussed earlier, informative node attributes should be able to guide the spread of mass in the graph. In a flow diffusion, where the mass from the source node gets spread to depends on the edge weights: the higher the weight of an edge, the easier it is to send mass along it. Therefore, in order to keep as much mass as possible inside the target cluster $K$, an ideal situation would be that edges inside $K$ have significantly larger weights than edges connecting $K$ to $V\backslash K$. It turns out that this is exactly the case when we have good node attributes. By applying concentration results on the sum of squares of sub-Gaussian random variables, Lemma~\ref{lem:edge_weight} says that, with overwhelming probability, one obtains a desirable separation of edge weights as a consequence of node attributes having more signal than noise (i.e. when Assumption~\ref{assum:mu_sigma} holds).
\begin{lemma}\label{lem:edge_weight}
Under Assumption~\ref{assum:mu_sigma}, one may pick $\gamma$ such that $\gamma\hat\sigma^2 = o(\log^{-1}n)$ and $\gamma\hat\mu^2 = \omega_n(\lambda)$. Consequently, with probability at least $1-o_n(1)$, the edge weight $w_{ij} = \exp(-\gamma \|X_i-X_j\|_2^2)$ satisfies $w_{ij} \ge 1-o_n(1)$ for all $i,j \in K$, and $w_{ij} \le \exp(-\omega_n(\lambda))$ for all $i \in K, j\not\in K$.
\end{lemma}
Not surprisingly, Lemma~\ref{lem:edge_weight} implies that the gap between edge weights is controlled by $\lambda$ which, according to Assumption~\ref{assum:mu_sigma}, measures how strong the attribute signal is. If $\lambda$ is sufficiently large, then naturally one would expect an algorithm that uses the node attributes to nearly perfectly recover $K$, irrespective of how noisy the graph is. Otherwise, the performance to recover $K$ would depend on a combination of both structural and attribute information. In what follows we present two recovery results which precisely correspond to these two scenarios. In all probability bounds, we keep explicit dependence on the cluster size $k$ because, for local graph clustering, $k$ may be a large constant and does not necessarily scale with $n$.
\begin{theorem}[Recovery with very good node attributes]\label{thm:recovery1}
Under Assumption~\ref{assum:mu_sigma}, for any $\gamma$ satisfying $\gamma\hat\sigma^2 = o(\log^{-1}n)$ and $\gamma\hat\mu^2 = \omega_n(\lambda)$, with source mass $\Delta_s = (1+\beta)\sum_{i \in K}T_i$ for any $\beta > 0$, we have that for large enough $n$,
\begin{enumerate}[leftmargin=.5cm,itemsep=0.1cm,nolistsep]
\item if $K$ is connected and $\lambda = \Omega_n(\log k + \log(q(n-k)) + \log(1/\beta))$, then with probability at least $1-o_n(1)-k^{-1/3}$, for every seed node $s \in K$ we have $K \subseteq \mathrm{supp}(x^*)$ and $\sum_{i \in \mathrm{supp}(x^*)\backslash K}T_i \le \beta\sum_{i \in K}T_i$;
\item if $p \ge \frac{(4+\epsilon)}{\delta^2}\frac{\log k}{k-1}$ for some $0<\delta<1$ and $\epsilon > 0$, and $\lambda = \Omega_n(\log k + \log(\frac{q(n-k)}{p(k-1)}) + \log(1/\beta) + \log(\frac{1}{1-\delta}))$, then with probability at least $1-o_n(1)-k^{-1/3}-ek^{-\epsilon/2}$, for every seed node $s \in K$ we have $K \subseteq \mathrm{supp}(x^*)$ and $\sum_{i \in \mathrm{supp}(x^*)\backslash K}T_i \le \beta\sum_{i \in K}T_i$.
\end{enumerate}
In particular, we obtain the following bounds on false positives: if $T_i = 1$ for all $i \in V$ then
\[
|\mathrm{supp}(x^*)\backslash K| \le \beta|K|;
\]
if $T_i = \deg_G(i)$ for all $i \in V$ then
\[
\mathrm{vol}_G(\mathrm{supp}(x^*)\backslash K) \le \beta \mathrm{vol}_G(K).
\]
\end{theorem}
Some discussions are in order. The first part of Theorem~\ref{thm:recovery1} does not assume anything about the internal connectivity of $K$. It applies as long as $K$ is connected, and this includes the extreme case when the induced subgraph on $K$ is a tree but each node in $K$ is also connected to many other nodes not in $K$. The second part of Theorem~\ref{thm:recovery1} requires a weaker condition on the strength of the attribute signal $\hat\mu$. The additive term $\log(q(n-k))$ from part 1 is relaxed to $\log(\frac{q(n-k)}{p(k-1)})$ due to the improved connectivity of $K$, under the additional assumption that $p \ge \Omega(\log k / k)$. We consider two specific choices of $T$. The first choice gives the exact bound on the number of false positives, and the second choice bounds the size of false positives in terms of volume~\cite{HFM2021}. Note that even in the case where the node attributes alone provide sufficient signal, the graph structure still plays a very important role: it allows an algorithm to return a good output without having to explore all data points. For example, during the execution of Algorithm~\ref{alg:lgc}, one only needs to query the attributes of a node whenever they are required for subsequent computations.
Let us introduce one more notion before presenting the recovery guarantee with good, but not too good, node attributes. Given the contextual random model described in Definition~\ref{def:data_model}, consider a ``population'' graph $\bar{G} = (\bar{A},\bar{W})$ where $\bar{A}_{ij} = 1$ for every pair $i,j$ such that $i\neq j$, and the edge weight $\bar{w}_{ij}$ satisfies $\bar{w}_{ij} = p\exp(-\gamma\|\mathbb{E}[X_i]-\mathbb{E}[X_j]\|_2^2) = p$ if $i,j \in K$, $\bar{w}_{ij} = q\exp(-\gamma\|\mathbb{E}[X_i]-\mathbb{E}[X_j]\|_2^2) \le qe^{-\gamma\hat\mu^2}$ if $i\in K, j \notin K$. A frequently used measure of cluster quality is conductance which quantifies the ratio between external and internal connectivity. For a set of nodes $C$ in $\bar{G}$, its conductance is defined as $\sum_{i\in C, j\notin C}\bar{w}_{ij}/\sum_{i \in C}\sum_{j \sim i}\bar{w}_{ij}$. For $0 \le c \le 1$ denote
\[
\eta(c) := \frac{p(k-1)}{p(k-1) + q(n-k)e^{-c\gamma\hat\mu^2}}.
\]
One may easily verify that the conductance of $K$ in $\bar{G}$ is upper bounded by $1-\eta(1)$. Therefore, the higher $\eta(1)$ is, the lower conductance $K$ may have in $\bar{G}$. On the other hand, in the absence of node attributes, or if all nodes share identical attributes, then the conductance of $K$ in $\bar{G}$ is exactly $1-\eta(0)$. Note that $1-\eta(c) \ge 1-\eta(0)$ for any $c \ge 0$. Intuitively, a low conductance cluster is better connected internally than externally, and thus it should be easier to detect. Therefore, the advantage of having node attributes is that they help reduce the conductance of the target cluster, making it easier to recover from the population graph. While in practice one never works with the population graph, our next theorem indicates that, with overwhelming probability, the recoverability of $K$ in the population graph transfers to a realization of the random model in Definition~\ref{def:data_model}. More specifically, Theorem~\ref{thm:recovery2} says that when the node attributes are good, i.e. Assumption~\ref{assum:mu_sigma} holds, but not too good, i.e. the conditions required in Theorem~\ref{thm:recovery1} may not hold, then Algorithm~\ref{alg:lgc} still fully recovers $K$ as long as there is sufficient internal connection. Moreover, the relative size of false positives (compared to the size of $K$) is upper bounded by $O(1/\eta(c)^2)-1$ for any $c<1$ and large enough $n$. Denote
\[
m(\delta_1,\delta_2) = \frac{(1+3\delta_1+\frac{1}{p(k-1)})^2}{(1-\delta_1)(1-\delta_2)}, \quad \ T_{\max} = \max_{i \in K}T_i.
\]
\begin{theorem}[Recovery with good node attributes]\label{thm:recovery2}
Under Assumption~\ref{assum:mu_sigma}, if $p \ge \max(\frac{(3+\epsilon_1)}{\delta_1^2}\frac{\log k}{k-1}, \frac{(2+\epsilon_2)}{\delta_2\sqrt{1-\delta_1}}\frac{\sqrt{\log k}}{\sqrt{k-1}})$ where $0< \delta_1,\delta_2 \le 1$ and $\epsilon_1,\epsilon_2 > 0$, then with probability at least $1-o_n(1)-4k^{-\epsilon_1/3}-k^{-2\epsilon_2}$, for every seed node $s \in K$ with source mass
\[
\Delta_s = c_1T_{\max}\frac{m(\delta_1,\delta_2)k}{\eta(c_2)^2}
\]
for any constants $c_1 > 1$ and $c_2<1$, it holds that for all large enough $n$, $K \subseteq \mathrm{supp}(x^*)$. Moreover, if $T_i = 1$ for all $i \in V$ then
\[
|\mathrm{supp}(x^*)\backslash K| \le \left(\frac{c_1m(\delta_1,\delta_2)}{\eta(c_2)^2} - 1\right)|K|;
\]
if $T_i = \deg_G(i)$ for all $i \in V$ then
\[
\mathrm{vol}_G(\mathrm{supp}(x^*)\backslash K) \le \left(\frac{c_1m(\delta_1,\delta_2)}{\eta(c_2)^2}\frac{(1+\delta_1)}{(1-\delta_1)}- 1\right)\mathrm{vol}_G(K).
\]
\end{theorem}
In the special case where there are no node attributes, we may simply take $\hat\mu = 0$ and Theorem~\ref{thm:recovery2} still holds. For this specific setting we obtain a recovery guarantee nearly identical (i.e. same assumption and same result) to the one previously obtained for local graph clustering using PageRank vectors without node attributes~\cite{HFM2021}, where the relative size of false positives is $O(1/\eta(0)^2-1)$. This comparison quantifies the advantage of having good node attributes, as they reduce the bound to $O(1/\eta(c)^2-1)$ for any $c < 1$, which can be substantially smaller. Note that the expression $1/\eta(c)^2$ is jointly controlled by the combinatorial conductance of $K$ and the attribute signal $\hat\mu$.
\section{Experiments}
We evaluate the performance of Algorithm~\ref{alg:lgc} for local graph clustering with node attributes. First, we empirically investigate our theoretical results over synthetic data generated from a specification of the random model described in Definition~\ref{def:data_model}. We use the synthetic experiments to demonstrate (i) the distinction between having weak and strong graph structural information, and (ii) the distinction between having very good and moderately good node attributes. In addition, the synthetic experiments indicate the necessity of Assumption~\ref{assum:mu_sigma} in order for Algorithm~\ref{alg:lgc} to have notable performance improvement over a method that does not use node attributes. Second, we carry out experiments using real-world data. We show that incorporating node attributes improves the F1 scores by an average of 4.3\% over 20 clusters from two academic co-authorship networks.
\subsection{Simulated data and results}
{\bf The generative model.} We generate random graphs using the stochastic block model with block size $k = 500$ and the total number of clusters $r = 20$. The total number of nodes is $n = kr = 10,000$. Two nodes within the same cluster are connected with probability $p$, and two nodes from different clusters are connected with probability $q$. We fix $q=0.002$ and vary $p$ to control the strength of the structural signal. We randomly pick one of the clusters as the target cluster $K$. The dimension of the node attributes is set to $d=100$. For node attributes $X_i = \mu_i + Z_i$, we sample $Z_i$ from a Gaussian distribution with mean 0 and identity covariance. Therefore $\sigma_{\ell} = 1$ for all $\ell = 1,2,\ldots,d$, and hence $\hat\sigma = 1$. We set $\mu_{i\ell} = a\hat\sigma\sqrt{\log n}/2\sqrt{d}$ for all $\ell$ if $i \in K$, and $\mu_{i\ell} = -a\hat\sigma\sqrt{\log n}/2\sqrt{d}$ for all $\ell$ if $i \not\in K$. In this way, we get that $\hat\mu = \min_{i \in K, j\not\in K}\|\mu_i-\mu_j\|_2 = a\hat\sigma\sqrt{\log n}$. We vary $a$ to control the strength of the node attribute signal.
{\bf Setup and evaluation metric.} We set the sink capacity $T_i = 1$ for all $i$. We set the source mass $\Delta_s = \alpha k$ and we allow $\alpha$ to vary. We set $\gamma = (\log^{-3/2} n) / 4\hat\sigma^2$ so that $\gamma\hat\sigma^2 = o(\log^{-1} n)$ as required by Theorem~\ref{thm:recovery1} and Theorem~\ref{thm:recovery2}. To measure the quality of an output cluster $C := \mathrm{supp}(x^\tau)$, we use precision and recall, which are defined as $|C\cap K|/|C|$ and $|C \cap K|/|K|$, respectively. The F1 score is the harmonic mean of precision and recall given by $2/(\mbox{Precision}^{-1} + \mbox{Recall}^{-1})$. For comparison we also consider the performance of unweighted flow diffusion, which does not use node attributes. There are other methods for local graph clustering without node attributes, such as the $\ell_1$-regularized PageRank~\cite{ACL06,HFM2021}. We did not consider other methods because the unweighted flow diffusion is shown to achieve state-of-the-art performance~\cite{FWY20}. Moreover, the comparison between weighted and unweighted flow diffusions, which do and do not use node attributes respectively, allows us to obtain a fair estimate of the benefits of node attributes.
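For reference, a minimal Python sketch of these evaluation metrics, where the output cluster $C$ and the target cluster $K$ are given as sets of node indices:
\begin{verbatim}
def precision_recall_f1(C, K):
    C, K = set(C), set(K)
    tp = len(C & K)                       # true positives
    precision = tp / len(C) if C else 0.0
    recall = tp / len(K)
    f1 = 0.0 if tp == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1(C={1, 2, 3, 4}, K={2, 3, 4, 5, 6}))
# (0.75, 0.6, 0.666...)
\end{verbatim}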
\begin{figure*}[h!]
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/vary_alpha_sparse_graph_good_attributes.pdf}
\caption{$p=0.01,q=0.002, \hat\mu=3\hat\sigma\log n$}
\label{fig:vary_alpha_1}
\end{subfigure}%
\hskip 0.05\textwidth
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/vary_alpha_sparse_graph_okay_attributes.pdf}
\caption{$p=0.01,q=0.002,\hat\mu=\frac{5}{2}\hat\sigma\log n$}
\label{fig:vary_alpha_2}
\end{subfigure}
\vskip 0.15in
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/vary_alpha_denser_graph_okay_attributes.pdf}
\caption{$p=0.03,q=0.002,\hat\mu=\frac{5}{2}\hat\sigma\log n$}
\label{fig:vary_alpha_3}
\end{subfigure}
\caption{Demonstration of Theorem~\ref{thm:recovery1}. The lines show average performance over 100 trials. In each trial we randomly pick a seed node $s$ from the target cluster $K$. The error bars show standard deviation. Figure~\ref{fig:vary_alpha_1} and Figure~\ref{fig:vary_alpha_3} show full recovery of $K$ as soon as $\alpha>1$ (i.e. as soon as $\beta > 0$, see first part of Theorem~\ref{thm:recovery1}). The distinction between Figure~\ref{fig:vary_alpha_2} and Figure~\ref{fig:vary_alpha_3} demonstrates that the required threshold for $\hat\mu$ depends on $p$ (cf. second part of Theorem~\ref{thm:recovery1}). With very good node attributes, the performance of flow diffusion that uses node attributes is significantly better than the performance of flow diffusion that does not use node attributes.}
\label{fig:vary_alpha}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics[width=.6\columnwidth]{plots/vary_mu_denser_graph.pdf}
\caption{Performance of Algorithm~\ref{alg:lgc} as $\hat\mu$ increases. $\hat\mu$ needs to be larger than $\hat\sigma\sqrt{\log n}$ in order for node attributes to be useful. The $x$-axis shows the value of $a$ where $\hat\mu = a\hat\sigma\sqrt{\log n}$. We average over 100 trials, each trial uses a randomly selected seed node.}
\label{fig:vary_mu}
\end{figure}
{\bf Results.} Figure~\ref{fig:vary_alpha} shows detailed views of the performance of Algorithm~\ref{alg:lgc} as we vary $\alpha$ between $[0.1,5]$ with $0.1$ increments. It is used to demonstrate the two claims of Theorem~\ref{thm:recovery1}. In Figure~\ref{fig:vary_alpha_1}, we set $p=0.01 < \log k / k$, so the target cluster $K$ is very sparse.
On average, each node $i \in K$ only has 5 neighbors inside $K$ while it has 19 neighbors outside of $K$. This means that the graph structural information alone is not very helpful for recovering $K$. On the other hand, we set $a = 3\sqrt{\log n}$ so $\hat\mu = 3\hat\sigma\log n$. This means that the node attributes contain very strong signal. In this case, observe that as soon as $\alpha$ becomes strictly larger than 1, the output cluster $C$ fully recovers $K$, i.e. Recall = 1. This demonstrates the first claim of Theorem~\ref{thm:recovery1}. As a comparison, the unweighted flow diffusion which does not use node attributes has very poor performance for every choice of $\alpha$. This is expected because edge connectivity reveals very little clustering information. In Figure~\ref{fig:vary_alpha_2}, we keep the same graph structure but slightly weaken the node attributes to $\hat\mu = \frac{5}{2}\hat\sigma\log n$ by reducing $a$. This stops the output cluster $C$ from fully recovering $K$ for values of $\alpha$ slightly larger than 1. The algorithm still performs well if $\alpha$ is chosen properly. This scenario is covered by Theorem~\ref{thm:recovery2} and we discuss it further later. In Figure~\ref{fig:vary_alpha_3}, we keep the same node attributes as in Figure~\ref{fig:vary_alpha_2} but increase $p$ from 0.01 to 0.03, which is slightly larger than $2\log k / k$. In this case, the output cluster $C$ again fully recovers $K$ as soon as $\alpha$ is strictly larger than 1. The distinction between Figure~\ref{fig:vary_alpha_2} and Figure~\ref{fig:vary_alpha_3} means that the required threshold for $\hat\mu$ to fully recover $K$ at any $\alpha>1$ decreases as $p$ increases. This demonstrates the second claim of Theorem~\ref{thm:recovery1}.
In Figure~\ref{fig:vary_mu} we consider a more realistic setting where one may not know the size of the target cluster $K$ and the node attributes may be noisy. We keep the same graph connectivity (i.e. $p=0.03$ and $q=0.002$) and vary $a$ between $[0, 8]$ with $0.5$ increments. Recall that the node attributes are set in a way such that $\hat\mu = a\hat\sigma\sqrt{\log n}$, therefore the strength of node attributes increases as $a$ increases. For each choice of $a$, given a seed node $s$, we run Algorithm~\ref{alg:lgc} multiple times with source mass $\alpha k$ for $\alpha \in \{1.1,1.6,\ldots,10.1\}$. This gives multiple output clusters, one from each choice of $\alpha$. We consider two cases for selecting a final cluster. The first is a best-case scenario where we pick the cluster that achieves the best F1 score; the second is a more realistic scenario where we pick the cluster that has the minimum conductance.\footnote{Given edge weights $w_{ij}$ and a cluster $C$, we consider weighted conductance which is the ratio $\sum_{i \in C, j \not\in C} w_{ij}/\sum_{i \in C}\sum_{j\sim i}w_{ij}$.} Figure~\ref{fig:vary_mu} illustrates the performance of Algorithm~\ref{alg:lgc} in these two cases. The $x$-axis of Figure~\ref{fig:vary_mu} is the value of $a$ where $\hat\mu = a\hat\sigma\sqrt{\log n}$. Overall, the performance improves as $\hat\mu$ increases. When the node attributes are reasonably strong, e.g. $a \ge 4$, the scenario where we select a cluster based on minimum conductance matches the best-case performance. Note that the higher $\hat\mu$ is, the lower $\eta(c)$ is for any $0 < c \le 1$, and according to Theorem~\ref{thm:recovery2}, there should be fewer false positives and hence a higher F1 score. This is exactly what Figure~\ref{fig:vary_mu} shows. In Figure~\ref{fig:vary_mu} we also plot the best-case performance of unweighted flow diffusion without node attributes. When the node attributes are very noisy, and in particular, when $\hat\mu \le \hat\sigma\sqrt{\log n}$ where Assumption~\ref{assum:mu_sigma} clearly fails, we see that using node attributes can be harmful as it can lead to worse performance than not using node attributes at all. On the other hand, once the node attributes become strong enough, e.g., $a \ge 4$, using node attributes starts to yield much better outcomes.
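A Python sketch of the minimum-conductance selection rule used above, assuming the weighted graph is stored as adjacency lists together with a dictionary of edge weights keyed by sorted node pairs (a representation we choose only for this example):
\begin{verbatim}
def weighted_conductance(C, adj, w):
    # C: set of nodes; adj[i]: neighbors of i; w[(i, j)] with i < j: edge weight.
    C = set(C)
    cut = vol = 0.0
    for i in C:
        for j in adj[i]:
            wij = w[(min(i, j), max(i, j))]
            vol += wij                    # weighted degrees summed over C
            if j not in C:
                cut += wij                # weight crossing the boundary of C
    return cut / vol if vol > 0 else 1.0

def select_min_conductance(clusters, adj, w):
    # clusters: the outputs obtained from the different choices of alpha
    return min(clusters, key=lambda C: weighted_conductance(C, adj, w))
\end{verbatim}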
\subsection{Real-world graphs and results}
We evaluate the performance of Algorithm~\ref{alg:lgc} on two co-authorship graphs based on the Microsoft Academic Graph from the KDD Cup 2016 challenge~\cite{SMBG18}.\footnote{In Appendix~\ref{sec:amazon-results} we include additional experiments using Amazon co-purchase graph~\cite{mcauley2015image} and demonstrate the performance of Algorithm~\ref{alg:lgc} when the node attributes are not strong enough. (F1 only increases by 1\% on average.)} In these graphs, nodes are authors, and two nodes are connected by an edge if they have coauthored a paper. The clusters are defined according to the most active research field of each author. The node attributes represent paper keywords for each author's papers. The first graph consists of 18,333 computer science researchers and 81,894 connections among them. Each computer science researcher belongs to one of the 15 ground-truth clusters. The second graph consists of 34,493 physics researchers and 247,962 connections among them. Each physics researcher belongs to one of the 5 ground-truth clusters. Details of node attributes and cluster sizes are found in Appendix~\ref{sec:additional_experiments}.
\begin{table}[h!]
\caption{F1 scores for local clustering in co-authorship networks}
\label{tab:real-coauthor-results}
\centering
\begin{tabular}{clrrr}
\toprule
Network & Cluster & No attr. & Use attr. & Improv.\\
\midrule
\multirow{15}{*}{\rotatebox[origin=c]{90}{Computer Science}}
& Bioinformatics & 32.1 & 39.3 & 7.2 \\
& Machine Learning & 30.9 & 37.3 & 6.4 \\
& Computer Vision & 37.6 & 35.5 & -2.1 \\
& NLP & 45.2 & 52.3 & 7.1 \\
& Graphics & 38.6 & 49.2 & 10.6 \\
& Networks & 44.1 & 47.0 & 2.9 \\
& Security & 29.9 & 35.7 & 5.8 \\
& Databases & 48.5 & 58.1 & 9.6 \\
& Data Mining & 27.5 & 28.8 & 1.3 \\
& Game Theory & 60.6 & 66.0 & 5.4 \\
& HCI & 70.0 & 77.6 & 7.6 \\
& Information Theory & 47.4 & 46.9 & -0.5 \\
& Medical Informatics & 65.7 & 70.3& 4.6 \\
& Robotics & 59.9 & 59.9 & 0.0 \\
& Theoretical CS & 66.3 & 70.7 & 4.4 \\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{Physics}}
& Phys. Rev. A & 69.4 & 70.9 & 1.5 \\
& Phys. Rev. B & 41.4 & 42.3 & 0.9 \\
& Phys. Rev. C & 79.3 & 82.1 & 2.8 \\
& Phys. Rev. D & 62.3 & 68.9 & 6.6 \\
& Phys. Rev. E & 49.5 & 53.7 & 4.2 \\
\midrule
\multicolumn{2}{c}{AVERAGE} & 50.3 & 54.6 & 4.3\\
\bottomrule
\end{tabular}
\end{table}
We consider two choices for the sink capacities $T$. The first is $T_i = \deg_G(i)$ for all $i$ and the second is $T_i = 1$ for all $i$. For each cluster $K$ in a graph, given a seed node $s \in K$, we run Algorithm~\ref{alg:lgc} with source mass $\Delta_s = \alpha\sum_{i \in K}T_i$ for $\alpha \in \{1.5,1.75,2,\ldots,5\}$. We select the output cluster that has the minimum conductance and measure the recovery quality using the F1 score. For each of the 20 target clusters we run 100 trials and each trial uses a different seed node. We report the average F1 scores using the first choice for $T$ in Table~\ref{tab:real-coauthor-results}. Additional results using the second choice for $T$, along with more details on parameter choices, are found in Appendix~\ref{sec:additional_experiments}. In most cases, incorporating node attributes improves recovery accuracy. Over the total 20 clusters in the two co-authorship networks, using node attributes increases the F1 score by 4.3\% on average.
\section{Conclusion and future work}\label{sec:conclusion}
In this work we propose and analyze a simple algorithm for local graph clustering with node attributes. We provide conditions under which the algorithm is guaranteed to work well. We empirically demonstrate the advantage of incorporating node attributes over both synthetic and real-world datasets. To the best of our knowledge, this is the first local graph clustering algorithm for attributed graphs that also has provable guarantees. The current work is a first step towards building principled tools for local learning on graphs using both structural and attribute information without processing the whole graph. An interesting future direction is to incorporate node embedding and parameter learning into local diffusion, where the attributes and their relative importance may be optimized simultaneously alongside the local diffusion process.
\section{Primal-dual solutions of flow diffusion}\label{sec:primal-dual-opt}
Recall that we denote $f^*$ and $x^*$ as the optimal solutions of the primal and dual flow diffusion problem \eqref{eq:primal} and \eqref{eq:dual}, respectively. We derive two useful properties of $x^*$ based on the primal-dual relationships between $f^*$ and $x^*$. In Appendix~\ref{sec:proofs} when we analyze the support of $x^*$, we will repeatedly use these properties to characterize the nodes covered by $\mathrm{supp}(x^*)$. Note that
\begin{align*}
& \min_f \frac{1}{2}f^TWf \quad \mbox{s.t.} \; \Delta + B^TWf \le T\\
=~& \min_f \max_{x\ge0} \frac{1}{2}f^TWf + x^T(\Delta + B^TWf - T) \\
=~& \max_{x\ge0} \min_f \frac{1}{2}f^TWf + x^T(\Delta + B^TWf - T)\\
=~& \max_{x\ge0} -\frac{1}{2}x^TB^TWBx + x^T(\Delta-T),
\end{align*}
therefore the optimal solutions $f^*$ and $x^*$ are related by $f^* = -Bx^*$. According to the physical interpretation of the flow variables $f$, this means that, in an optimal flow diffusion, the amount of mass that moves from node $i$ to node $j$ is precisely $w_{ij}(x^*_i-x^*_j)$ where $w_{ij}$ is the weight of the edge $(i,j)$. Moreover, we have $x^*_i > 0$ only if $\Delta_i + [B^TWf^*]_i = T_i$. Recall that the quantity $\Delta_i + [B^TWf^*]_i$ represents the amount of mass at node $i$ after spreading mass according to $f^*$; therefore, we get that $x^*_i > 0$ only if the final mass at node $i$ equals exactly its sink capacity $T_i$. In this case, we say that node $i$ is {\em saturated}.
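As a numerical sanity check of these relations, one can solve \eqref{eq:dual} on a tiny graph by projected gradient descent and verify that $f^* = -Bx^*$ routes the mass so that every node with $x^*_i > 0$ is saturated; the graph, edge weights, source mass, and step size below are illustrative choices.
\begin{verbatim}
import numpy as np

B = np.array([[-1.0, 1.0, 0.0],          # oriented edge (0,1)
              [0.0, -1.0, 1.0]])         # oriented edge (1,2)
W = np.diag([1.0, 0.5])                  # edge weights
L = B.T @ W @ B                          # weighted Laplacian
Delta = np.array([3.0, 0.0, 0.0])        # all source mass at node 0
T = np.ones(3)                           # unit sink capacities

x = np.zeros(3)
for _ in range(20000):                   # projected gradient descent on (2)
    grad = L @ x + (T - Delta)
    x = np.maximum(x - 0.1 * grad, 0.0)  # project onto the nonnegative orthant

f = -B @ x                               # recover the optimal flow
mass = Delta + B.T @ (W @ f)             # final mass at each node
print(np.round(x, 3), np.round(mass, 3))
# wherever x_i > 0, the final mass equals T_i (node i is saturated)
\end{verbatim}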
\section{Proofs}\label{sec:proofs}
\subsection{Proof of Lemma~\ref{lem:edge_weight}}
We have that
\begin{equation}\label{eq:edge_weight_exponent_expression}
\|X_i - X_j\|_2^2 = \left\{\begin{array}{ll}\|Z_i-Z_j\|_2^2, & \mbox{if} \; i,j \in K, \\ \|Z_i-Z_j\|_2^2 + \|\mu_i-\mu_j\|_2^2 + 2(\mu_i-\mu_j)^T(Z_i-Z_j), & \mbox{if} \; i \in K, j \not\in K.\end{array}\right.
\end{equation}
Consider the random variable
\[
\|Z_i-Z_j\|_2^2-\mathbb{E}[\|Z_i-Z_j\|_2^2] = \sum_{\ell=1}^d \bigg((Z_{i\ell}-Z_{j\ell})^2 - \mathbb{E}[(Z_{i\ell}-Z_{j\ell})^2]\bigg).
\]
Each term in the summation is sub-exponential and satisfies
\[
\|(Z_{i\ell}-Z_{j\ell})^2 - \mathbb{E}[(Z_{i\ell}-Z_{j\ell})^2]\|_{\psi_1}
\le C\|(Z_{i\ell}-Z_{j\ell})^2\|_{\psi_1}
= C\|Z_{i\ell}-Z_{j\ell}\|_{\psi_2}^2
\le 2C\|Z_{i\ell}\|_{\psi_2}^2
\le C' \sigma_{\ell}^2
\]
for some absolute constants $C,C'$, where $\|\cdot\|_{\psi_1}$ and $\|\cdot\|_{\psi_2}$ denote the sub-exponential norm and the sub-Gaussian norm, respectively~\cite{vershynin2018high}. The first inequality follows from standard centering inequality for the sub-exponential norm (e.g. see Lemma 2.6.8 and Exercise 2.7.10 in \cite{vershynin2018high}), and the second equality follows from Lemma 2.7.6 in \cite{vershynin2018high}. Therefore, we may apply a Bernstein-type inequality for the sum of sub-exponential random variables (e.g. see Theorem 2.8.1 in \cite{vershynin2018high}) and get
\begin{align*}
&\mathbb{P}\left(\Big|\|Z_i-Z_j\|_2^2-\mathbb{E}\|Z_i-Z_j\|_2^2\Big| > t\right)\\
\le~&\exp\left(-\min\left(\frac{t^2}{c\sum_{\ell=1}^d \|(Z_{i\ell}-Z_{j\ell})^2 - \mathbb{E}[(Z_{i\ell}-Z_{j\ell})^2]\|_{\psi_1}^2}, \frac{t}{c'\max_{\ell} \|(Z_{i\ell}-Z_{j\ell})^2 - \mathbb{E}[(Z_{i\ell}-Z_{j\ell})^2]\|_{\psi_1}}\right)\right)\\
\le~& \exp\left(-\min\left(\frac{t^2}{c_1\sum_{\ell=1}^d \sigma_{\ell}^4}, \frac{t}{c_2\hat\sigma^2}\right)\right)
\end{align*}
for some absolute constants $c,c',c_1,c_2$. Setting $t = c_3\hat\sigma^2\log n$ for a large enough constant $c_3$, using $\sum_{\ell=1}^d (\sigma_{\ell}/\hat\sigma)^4 \le \sum_{\ell=1}^d (\sigma_{\ell}/\hat\sigma)^2 = O(\log n)$ which follows from Assumption~\ref{assum:mu_sigma}, and taking a union bound over all $i,j \in V$, we get that with probability at least $1-o_n(1)$, for all $i,j \in V$ it holds that
\begin{equation}\label{eq:sub-exp-noise}
\begin{split}
\|Z_i-Z_j\|_2^2
&\le \mathbb{E}\|Z_i-Z_j\|_2^2 + O(\hat\sigma^2\log n)\\
&\le \tilde{c}\sum_{\ell=1}^d \|Z_{i\ell}-Z_{j\ell}\|_{\psi_2}^2 + O(\hat\sigma^2\log n)\\
&\le \tilde{c}'\sum_{\ell=1}^d \sigma_{\ell}^2 + O(\hat\sigma^2\log n)\\
&= O(\hat\sigma^2\log n),
\end{split}
\end{equation}
where $\tilde{c},\tilde{c}'$ are absolute constants.
For $i \in K$ and $j \notin K$, the term $(\mu_i-\mu_j)^T(Z_i-Z_j) = \sum_{\ell=1}^d (\mu_{i\ell}-\mu_{j\ell})(Z_{i\ell}-Z_{j\ell})$ is a sum of independent and mean zero sub-Gaussian random variables. We may apply a general Hoeffding’s inequality (see Lemma 2.6.3 in \cite{vershynin2018high}) and get that
\[
\mathbb{P}(|(\mu_i-\mu_j)^T(Z_i-Z_j)| \ge t)
\le 2\exp\left(-\frac{ct^2}{\max_{\ell}\|Z_{i\ell}-Z_{j\ell}\|_{\psi_2}^2\|\mu_i-\mu_j\|_2^2}\right)
\le 2\exp\left(-\frac{c't^2}{\hat\sigma^2\|\mu_i-\mu_j\|_2^2}\right),
\]
and hence by setting $t = c''\hat\sigma \sqrt{\log n}\|\mu_i-\mu_j\|_2$ for a large enough constant $c''$ we get that with probability at least $1-o_n(1)$,
\begin{equation}\label{eq:sub-g-noise}
(\mu_i-\mu_j)^T(Z_i-Z_j) \ge -O(\hat\sigma \sqrt{\log n}\|\mu_i-\mu_j\|_2), \; \forall i \in K, j \notin K.
\end{equation}
Combining \eqref{eq:edge_weight_exponent_expression}, \eqref{eq:sub-exp-noise}, \eqref{eq:sub-g-noise}, and using $\|\mu_i-\mu_j\|_2 \ge \hat\mu = \omega(\hat\sigma\sqrt{\log n})$, we get that with probability at least $1-o_n(1)$,
\begin{align*}
\|X_i - X_j\|_2^2 &\le O(\hat\sigma^2\log n), \; \forall i \in K, \forall j \in K,\\
\|X_i - X_j\|_2^2 &\ge \|\mu_i-\mu_j\|_2^2 - O(\hat\sigma \sqrt{\log n}\|\mu_i-\mu_j\|_2) \\
&= \|\mu_i-\mu_j\|_2^2(1-o_n(1)) \ge \hat\mu^2(1-o_n(1)), \; \forall i \in K, \forall j \not\in K.
\end{align*}
By Assumption~\ref{assum:mu_sigma}, we may pick $\gamma$ that satisfies $\gamma\hat\sigma^2 = o(\log^{-1}n)$ and $\gamma\hat\mu^2 = \omega_n(\lambda)$, and for any such $\gamma$ we have
\[
\begin{split}
\exp(-\gamma\|X_i - X_j\|_2^2) & \ge \exp(-o_n(1)), \; \forall i \in K, \forall j \in K, \\
\exp(-\gamma\|X_i - X_j\|_2^2) & \le \exp(-\gamma\hat\mu^2(1-o_n(1))) , \; \forall i \in K, \forall j \not\in K,
\end{split}
\]
as required.
\subsection{Proof of Theorem~\ref{thm:recovery1}}
We start with part 1 of the theorem. Without loss of generality let us assume that the node indices are such that $K = \{1,2,\ldots, k\}$ and that $x^*_1 \ge x^*_2 \ge \ldots \ge x^*_k$. In order to show that $K \subseteq \mathrm{supp}(x^*)$, it suffices to show that $x^*_k > 0$. Assume for the sake of contradiction that $x^*_k = 0$. Note that since the initial mass is $(1+\beta)\sum_{i \in K}T_i$, in an optimal flow routing, the amount of mass that flows over an edge cannot be greater than $(1+\beta)\sum_{i \in K}T_i$. This means that $w_{ij}|x^*_i - x^*_j| \le (1+\beta)\sum_{i' \in K}T_{i'}$ for every edge $(i,j) \in E$ (recall the basic properties of $x^*$ provided in Section~\ref{sec:primal-dual-opt}). Moreover, since $K$ is connected, for each $1 \le i \le k-1$ there is at least one edge crossing the cut between $\{1,\ldots,i\}$ and $\{i+1,\ldots,k\}$; writing $w_{i(i+1)}$ for the weight of one such edge and telescoping, we have that
\[
x^*_1
\le \sum_{i=1}^{k-1}\frac{(1+\beta)\sum_{i' \in K}T_{i'}}{w_{i(i+1)}} + x^*_k
= \sum_{i=1}^{k-1}\frac{(1+\beta)\sum_{i' \in K}T_{i'}}{w_{i(i+1)}}.
\]
It then follows from Lemma~\ref{lem:edge_weight} that with probability at least $1-o_n(1)$,
\[
x^*_1 \le (1+\beta)k(1+o_n(1))\sum_{i \in K}T_i.
\]
On the other hand, the total amount of mass that leaves $K$ is
\[
\sum_{i=1}^k \sum_{\substack{j \ge k+1\\ j \sim i}} w_{ij}(x^*_i - x^*_j)
\le \sum_{i=1}^k x^*_i \sum_{\substack{j \ge k+1\\ j \sim i}} w_{ij}
\le x^*_1\sum_{(i,j)\in \mathrm{cut}_G(K)} w_{ij} .
\]
Applying Lemma~\ref{lem:edge_weight} and Lemma~\ref{lem:external_cut} with $\epsilon=\delta=1$, and using the above bound on $x^*_1$, we get that, with probability at least $1-o_n(1)-k^{-1/3}$,
\begin{align*}
\sum_{i=1}^k \sum_{\substack{j \ge k+1\\ j \sim i}} w_{ij}(x^*_i - x^*_j)
\le (1+\beta) k^2(1+o_n(1))(2q(n-k) + 4\log k/k)\exp(-\gamma\hat\mu^2(1-o_n(1))) \sum_{i \in K}T_i.
\end{align*}
Since we started with $(1+\beta)\sum_{i \in K}T_i$ units of initial mass inside $K$ and nodes in $K$ can settle at most $\sum_{i \in K}T_i$ units, at least $\beta \sum_{i \in K}T_i$ units of mass must leave $K$. In what follows we show that this cannot be the case for appropriately chosen $\gamma$, thus arriving at the desired contradiction. Since $\hat\mu = \omega(\hat\sigma\sqrt{\log n(1+\lambda)})$, we may pick $\gamma$ such that $\gamma\hat\sigma^2 = o(\log^{-1}n)$ to satisfy the assumption required for Lemma~\ref{lem:edge_weight}, and at the same time $\gamma\hat\mu^2 = \omega(1+\lambda)$. Since $\lambda = \Omega(\log k + \log(q(n-k)) + \log(1/\beta))$, we know that for any terms $a_n = o_n(1)$ and $b_n = o_n(1)$ and for sufficiently large $n$,
\[
\gamma\hat\mu^2(1-a_n) > 2\log k + \log(2q(n-k)+ 4\log k/k) + \log(1/\beta+1) + \log(1+b_n),
\]
which implies that, for sufficiently large $n$,
\[
(1+\beta) k^2(1+o_n(1))(2q(n-k) + 4\log k/k)\exp(-\gamma\hat\mu^2(1-o_n(1))) < \beta,
\]
and hence
\[
\sum_{i=1}^k \sum_{\substack{j \ge k+1\\ j \sim i}} w_{ij}(x^*_i - x^*_j) < \beta \sum_{i \in K}T_i,
\]
which is the desired contradiction. Therefore we must have that $x^*_k >0$ and consequently $K \subseteq \mathrm{supp}(x^*)$. Now, since $x^*_i > 0$ for all $i \in K$, this means that nodes inside $K$ settle exactly $\sum_{i \in K}T_i$ units of mass, and hence exactly $\beta \sum_{i \in K}T_i$ units of mass leave $K$. Because $x^*_i > 0$ only if node $i$ is saturated with $T_i$ units of mass, we get that $\sum_{i \in \mathrm{supp}(x^*)\backslash K}T_i \le \beta\sum_{i \in K}T_i$.
Part 2 of the theorem is proved by following the same reasoning. Assume for the sake of contradiction that $x^*_k=0$. Since $p \ge \frac{(4+\epsilon)}{\delta^2}\frac{\log k}{k-1}$, we apply Lemma~\ref{lem:mincut} and get that with probability at least $1-ek^{-\epsilon/2}$, $\mathrm{cut}_K(C) \ge (1-\delta)p(k-1)$ for every $C \subseteq K$ such that $1 \le |C| \le k-1$. We will assume that this event holds. Moreover, for any $1 \le i \le k-1$, the total amount of mass that moves from $\{1,2,\ldots,i\}$ to $\{i+1, i+2, \ldots, k\}$ cannot be greater than $(1+\beta)\sum_{i \in K}T_i$. Since there are at least $(1-\delta)p(k-1)$ edges between $\{1,2,\ldots,i\}$ and $\{i+1, i+2, \ldots, k\}$, we must have that
\[
x^*_i - x^*_{i+1} \le \frac{(1+\beta)\sum_{i' \in K}T_{i'}}{(1-\delta)p(k-1)\min_{j,j'\in K, j\sim j'}w_{jj'}}, \forall i=1,2,\ldots,k-1,
\]
because, otherwise, there would be more than $(1+\beta)\sum_{i \in K}T_i$ mass that moves from $\{1,2,\ldots,i\}$ to $\{i+1, i+2, \ldots, k\}$. Applying Lemma~\ref{lem:edge_weight}, we have that, with probability at least $1-o_n(1) - ek^{-\epsilon/2}$,
\[
x^*_1
\le \sum_{i=1}^{k-1}\frac{(1+\beta)\sum_{i' \in K}T_{i'}}{(1-\delta)p(k-1)\min_{j,j'\in K, j\sim j'}w_{jj'}}
\le \frac{(1+\beta)k(1+o_n(1))\sum_{i' \in K}T_{i'}}{(1-\delta)p(k-1)}.
\]
The rest of the proof proceeds as in the proof of part 1.
\subsection{Proof of Theorem~\ref{thm:recovery2}}
To see that $K \subseteq \mathrm{supp}(x^*)$, let us assume for the sake of contradiction that $x^*_i = 0$ for some $i \in K$. This means that node $i$ receives at most $T_i \le T_{\max}$ units of mass, because otherwise we would have $x^*_i > 0$. We also know that $i\neq s$ because $T_{\max} < \Delta_s$. Denote $F := \{j \in K : j \sim s\}$. We will consider two cases depending on whether $i \in F$. If $i \in F$, then we must have that, with probability at least $1-o_n(1)$,
\[
w_{is}(x^*_s - x^*_i) \le T_{\max} \iff x^*_s \le T_{\max}/w_{is} + x^*_i \le T_{\max}(1+a_n)
\]
for some $a_n = o_n(1)$, where the last inequality follows from Lemma~\ref{lem:edge_weight} and $x^*_i = 0$. Moreover, since $c_2 < 1$ we have that
\begin{equation}\label{eq:eta_ineq}
\frac{p(k-1)}{\eta(c_2)} = p(k-1) + q(n-k)e^{-c_2\gamma\hat\mu^2} > p(k-1) + q(n-k)e^{-\gamma\hat\mu^2(1-b_n)}
\end{equation}
for any $b_n = o_n(1)$ and for all sufficiently large $n$. Therefore, with probability at least $1-o_n(1)-4k^{-\epsilon_1/3}$ and for all sufficiently large $n$, the total amount of mass that is sent out from node $s$ is
\begin{align*}
\sum_{\ell \sim s}w_{\ell s}(x^*_s - x^*_{\ell})
&= \sum_{\substack{\ell \sim s\\ \ell \in K}}w_{\ell s}(x^*_s - x^*_{\ell}) + \sum_{\substack{\ell \sim s\\ \ell \notin K}}w_{\ell s}(x^*_s - x^*_{\ell})\\
&\stackrel{\text{(i)}}{\le} \sum_{\substack{\ell \sim s\\ \ell \in K}} x^*_s + \sum_{\substack{\ell \sim s\\ \ell \notin K}}e^{-\gamma\hat\mu^2(1-b_n)}x^*_s \qquad \mbox{for some}~ b_n = o_n(1)\\
&\stackrel{\text{(ii)}}{\le} (1+\delta_1)p(k-1)x^*_s + ((1+\delta_1)q(n-k) + 2\delta_1p(k-1))e^{-\gamma\hat\mu^2(1-b_n)}x^*_s\\
&\le (1+3\delta_1)\left(p(k-1) + q(n-k)e^{-\gamma\hat\mu^2(1-b_n)}\right)x^*_s\\
&\stackrel{\text{(iii)}}{<} (1+3\delta_1)\frac{p(k-1)}{\eta(c_2)}x^*_s\\
&\le (1+3\delta_1)\frac{p(k-1)}{\eta(c_2)}T_{\max}(1+a_n)\\
&\stackrel{\text{(iv)}}{<} c_1(1+3\delta_1)\frac{p(k-1)}{\eta(c_2)}T_{\max},
\end{align*}
where (i) follows from Lemma~\ref{lem:edge_weight} and $x^* \ge 0$, (ii) follows from Lemma~\ref{lem:degree}, (iii) follows from \eqref{eq:eta_ineq}, and (iv) follows from the assumption that $c_1 > 1$, and hence for all sufficiently large $n$ we have $c_1 \ge 1+a_n$, where $a_n = o_n(1)$. Since the initial mass equals the sum of $T_s$ and the total amount of mass that is sent out from $s$, we get that the total amount of initial mass is
\[
\Delta_s < c_1(1+3\delta_1)\frac{p(k-1)}{\eta(c_2)}T_{\max} + T_{\max} < c_1T_{\max}\left(\frac{(1+3\delta_1)(1+\frac{1}{k-1})k}{\eta(c_2)}\right) < c_1T_{\max}\frac{m(\delta_1,\delta_2)k}{\eta(c_2)^2} = \Delta_s,
\]
which is a contradiction. Therefore, we must have $i \not\in F$.
Suppose now that $i \not\in F$. Then we know that node $i$ receives at most $T_i \le T_{\max}$ units of mass from its neighbors. In particular, node $i$ receives at most $T_{\max}$ units of mass from nodes in $F$, that is, $\sum_{\substack{j \in F \\ j\sim i}}w_{ij}x^*_j \le T_{\max}$. By Lemma~\ref{lem:connectivity}, we know that with probability at least $1-2k^{-\epsilon_1/3}-k^{-2\epsilon_2}$, node $i$ has at least $(1-\delta_1)(1-\delta_2)p^2(k-1)$ neighbors in $F$. Applying Lemma~\ref{lem:edge_weight}, we get that, with probability at least $1-o_n(1)-2k^{-\epsilon_1/3}-k^{-2\epsilon_2}$,
\begin{align*}
\sum_{\substack{j \in F \\ j\sim i}}w_{ij}x^*_j \le T_i
&\implies (1-\delta_1)(1-\delta_2)p^2(k-1) \cdot \min_{\substack{j \in F \\ j\sim i}} x^*_j\le T_{\max}\cdot\max_{\substack{j \in F \\ j\sim i}} \frac{1}{w_{ij}}\\
&\implies \min_{\substack{j \in F \\ j\sim i}} x^*_j \le \frac{T_{\max}(1+a_n)}{(1-\delta_1)(1-\delta_2)p^2(k-1)}\\
&\implies \min_{j \in F} x^*_j \le \frac{T_{\max}(1+a_n)}{(1-\delta_1)(1-\delta_2)p^2(k-1)}
\end{align*}
for some $a_n = o_n(1)$. Let $j \in F$ be a node such that $x^*_j \le x^*_{\ell}$ for all $\ell \in F$; then
\begin{equation}\label{eq:x_j_upper_bound}
x^*_j \le \frac{T_{\max}(1+a_n)}{(1-\delta_1)(1-\delta_2)p^2(k-1)}.
\end{equation}
By Lemma~\ref{lem:connectivity}, with probability at least $1-2k^{-\epsilon_1/3}-k^{-2\epsilon_2}$, node $j$ has at least $(1-\delta_1)(1-\delta_2)p^2(k-1)-1$ neighbors in $F$. Since $x^*_j \le x^*_{\ell}$ for all $\ell \in F$ and $x^*_j \le x^*_s$, we know that
\begin{equation}\label{eq:x_j_neighbors}
|\{\ell \in K: x^*_{\ell} \ge x^*_j\}| \ge (1-\delta_1)(1-\delta_2)p^2(k-1).
\end{equation}
Therefore, for all sufficiently large $n$, with probability at least $1-o_n(1)-4k^{-\epsilon_1/3}-k^{-2\epsilon_2}$, the maximum amount of mass that node $j$ can send out is
\begin{align*}
\sum_{\ell \sim j} w_{j\ell}(x^*_j-x^*_{\ell})
&=\sum_{\substack{\ell\sim j \\ \ell \in K}}w_{j\ell} (x^*_j-x^*_{\ell}) + \sum_{\substack{\ell\sim j \\\ell \not\in K}}w_{j\ell} (x^*_j-x^*_{\ell})\\
&\stackrel{\text{(i)}}{\le}\sum_{\substack{\ell\sim j \\ \ell \in K}}w_{j\ell}(x^*_j-x^*_{\ell}) + \sum_{\substack{\ell\sim j \\\ell \not\in K}}e^{-\gamma\hat\mu^2(1-b_n)}(x^*_j-x^*_{\ell}) \qquad \mbox{for some}~ b_n = o_n(1)\\
&\stackrel{\text{(ii)}}{\le} \Big((1+\delta_1)p(k-1)-(1-\delta_1)(1-\delta_2)p^2(k-1)\Big)x^*_j \\
&\qquad + \Big((1+\delta_1)q(n-k) + 2\delta_1p(k-1)\Big)e^{-\gamma\hat\mu^2(1-b_n)}x^*_j\\
&\le \left[(1+3\delta_1)\left(p(k-1) + q(n-k)e^{-\gamma\hat\mu^2(1-b_n)}\right) - (1-\delta_1)(1-\delta_2)p^2(k-1)\right]x^*_j\\
&\stackrel{\text{(iii)}}{\le} \left[(1+3\delta_1)\frac{p(k-1)}{\eta(c_2)} - (1-\delta_1)(1-\delta_2)p^2(k-1)\right]x^*_j\\
&\stackrel{\text{(iv)}}{\le} \left[(1+3\delta_1)\frac{p(k-1)}{\eta(c_2)} - (1-\delta_1)(1-\delta_2)p^2(k-1)\right]\frac{T_{\max}(1+a_n)}{(1-\delta_1)(1-\delta_2)p^2(k-1)}\\
&\le T_{\max}(1+a_n)\frac{(1+3\delta_1)}{(1-\delta_1)(1-\delta_2)}\frac{1}{p\eta(c_2)} - T_{\max},
\end{align*}
where (i) follows from Lemma~\ref{lem:edge_weight}, (ii) follows from Lemma~\ref{lem:degree} and \eqref{eq:x_j_neighbors}, (iii) follows from \eqref{eq:eta_ineq}, and (iv) follows from \eqref{eq:x_j_upper_bound}. Now, since node $j$ settles at most $T_j \le T_{\max}$ units of mass, the maximum amount of mass that node $j$ receives is
\[
T_{\max}(1+a_n)\frac{(1+3\delta_1)}{(1-\delta_1)(1-\delta_2)}\frac{1}{p\eta(c_2)} - T_{\max} + T_{\max} = T_{\max}(1+a_n)\frac{(1+3\delta_1)}{(1-\delta_1)(1-\delta_2)}\frac{1}{p\eta(c_2)}.
\]
This means that
\begin{align*}
&w_{js}(x^*_s-x^*_j) \le T_{\max}(1+a_n)\frac{(1+3\delta_1)}{(1-\delta_1)(1-\delta_2)}\frac{1}{p\eta(c_2)}\\
\implies & x^*_s \le \frac{T_{\max}(1+a'_n)}{(1-\delta_1)(1-\delta_2)}\left(\frac{1}{p^2(k-1)}+ \frac{(1+3\delta_1)}{p\eta(c_2)}\right)
\end{align*}
for some $a'_n = o_n(1)$, where we have applied Lemma~\ref{lem:edge_weight} to $w_{js}$. Applying the same reasoning as before, we get that, with probability at least $1-o_n(1)-4k^{-\epsilon_1/3}-k^{-2\epsilon_2}$ and for all sufficiently large $n$, the total amount of mass that is sent out from node $s$ is
\begin{align*}
\sum_{\ell \sim s}w_{\ell s}(x^*_s - x^*_{\ell})
&< (1+3\delta_1)\frac{p(k-1)}{\eta(c_2)}x^*_s\\
&\le \frac{T_{\max}(1+a'_n)}{(1-\delta_1)(1-\delta_2)}\left(\frac{(1+3\delta_1)}{p\eta(c_2)} + \frac{(1+3\delta_1)^2(k-1)}{\eta(c_2)^2}\right)\\
&\le c_1T_{\max}\frac{(1+3\delta_1)}{(1-\delta_1)(1-\delta_2)}\frac{(1+3\delta_1+\frac{1}{p(k-1)})}{\eta(c_2)^2}(k-1)\\
&\le c_1T_{\max}\frac{m(\delta_1,\delta_2)(k-1)}{\eta(c_2)^2},
\end{align*}
but then this means that the total amount of initial mass is
\[
\Delta_s < c_1T_{\max}\frac{m(\delta_1,\delta_2)(k-1)}{\eta(c_2)^2} + T_{\max} < c_1T_{\max}\frac{m(\delta_1,\delta_2)k}{\eta(c_2)^2} = \Delta_s
\]
which is a contradiction. Therefore $i$ can lie neither in $F$ nor in $K\backslash F$, which contradicts our assumption that $i \in K$. Since the choices of $i \in K$ and $s \in K$ were arbitrary, this means that $x^*_i > 0$ for all $i \in K$ and for any seed node $s \in K$.
Finally, the upper bound on the number of false positives follows directly from the fact that $x^*_i > 0$ only if node $i$ is saturated with exactly $T_i$ units of mass. When $T_i = 1$ for all $i$, the result follows directly from $\Delta_s = c_1m(\delta_1,\delta_2)k/\eta(c_2)^2$. When $T_i = \deg(i)$ for all $i$, we may apply Lemma~\ref{lem:degree} and get that
\[
\Delta_s \le \frac{c_1m(\delta_1,\delta_2)}{\eta(c_2)^2}(1+\delta_1)k(p(k-1)+q(n-k)) \le \frac{c_1m(\delta_1,\delta_2)}{\eta(c_2)^2}\frac{(1+\delta_1)}{(1-\delta_1)}\mathrm{vol}(K)
\]
from which the result follows.
\section{Technical lemmas}
\begin{lemma}[Lower bound of internal cut]\label{lem:mincut}
For any $0 < \delta \le 1$ and $\epsilon > 0$, if $p \ge \frac{(4+\epsilon)}{\delta^2}\frac{\log k}{k-1}$ and $k\ge20$, then with probability at least $1-ek^{-\epsilon/2}$ we have that $\mathrm{cut}_K(C) \ge (1-\delta)p(k-1)$ for all nonempty proper subsets $C \subset K$.
\end{lemma}
\begin{proof}
Consider integers $j$ such that $1 \le j \le k/2$. First fix some such $j$ and let $C \subset K$ be such that $|C| = j$. Note that $\mathrm{cut}_K(C)$ is the sum of $j(k-j)$ independent Bernoulli random variables with expectation $\mathbb{E}[\mathrm{cut}_K(C)] = pj(k-j)$. Therefore we may apply the Chernoff bound and get
\begin{align*}
\mathbb{P}(\mathrm{cut}_K(C) \le (1-\delta)p(k-1)) \le e^{-pj(k-j)}\left(\frac{ej(k-j)}{(1-\delta)(k-1)}\right)^{(1-\delta)p(k-1)}.
\end{align*}
By a union bound over all subsets $C \subset K$ such that $|C| = j$ we get that
\begin{align}
&\mathbb{P}\left(\mathrm{cut}_K(C) \le (1-\delta)p(k-1) \ \mbox{for some} \ C \subset K \ \mbox{s.t.} \ |C| = j\right)\nonumber\\
\le~&{k \choose j} e^{-pj(k-j)}\left(\frac{ej(k-j)}{(1-\delta)(k-1)}\right)^{(1-\delta)p(k-1)}\nonumber\\
\le~&\left(\frac{ek}{j}\right)^j \exp\left[-pj(k-j) + (1-\delta)p(k-1) + (1-\delta)p(k-1)\log\left(\frac{j(k-j)}{(1-\delta)(k-1)}\right)\right]\nonumber\\
=~&\exp\left[-pj(k-j) + (1-\delta)p(k-1) + (1-\delta)p(k-1)\log\left(\frac{j(k-j)}{(1-\delta)(k-1)}\right) + j + j\log\left(\frac{k}{j}\right)\right]\label{eq:mincut_prob_exp}.
\end{align}
Now consider the exponent in \eqref{eq:mincut_prob_exp},
\begin{align*}
f(j) = -pj(k-j) + (1-\delta)p(k-1) + (1-\delta)p(k-1)\log\left(\frac{j(k-j)}{(1-\delta)(k-1)}\right) + j + j\log\left(\frac{k}{j}\right).
\end{align*}
We will show that $f(j) \le -(1+\epsilon/2)\log k + 1$ for all $1 \le j \le k/2$ and $k\ge20$. Let us first consider the interval $[1, 3k/8]$. The derivative of $f(j)$ with respect to $j$ is
\[
f'(j) = -p(k-2j) + (1-\delta)p(k-1)\frac{(k-2j)}{j(k-j)} + \log\left(\frac{k}{j}\right),
\]
and we have that $f'(j) \le 0$ for all $1 \le j \le 3k/8$. To see this, note that for $1 \le j \le k/2$ we have
\begin{equation}\label{eq:mincut_display1}
\begin{split}
\frac{(k-1)}{j(k-j)} \le 1
&\iff \frac{(1-\delta)p(k-1)(k-2j)}{j(k-j)} \le (1-\delta)p(k-2j)\\
&\iff -p(k-2j) + (1-\delta)p(k-1)\frac{(k-2j)}{j(k-j)} \le -\delta p(k-2j),
\end{split}
\end{equation}
moreover, since $p \ge \frac{(4+\epsilon)}{\delta^2}\frac{\log k}{k-1}$, for $1 \le j \le 3k/8$ and $k\ge2$ we have
\begin{equation}\label{eq:mincut_display2}
-\delta p(k-2j) \le -\frac{\delta pk }{4} \le -\frac{(4+\epsilon)k}{4\delta(k-1)}\log k \le -\log k \le -\log (k/j),
\end{equation}
and thus, by combining \eqref{eq:mincut_display1} and \eqref{eq:mincut_display2}, we get $f'(j) \le -\delta p(k-2j) + \log (k/j) \le 0$ for all $1 \le j \le 3k/8$. This implies that $f(j)$ achieves its maximum at $j = 1$ over the interval $[1,3k/8]$. Therefore, for all $1 \le j \le 3k/8$,
\begin{align*}
f(j) \le f(1)
&= -p(k-1) + (1-\delta)p(k-1) - (1-\delta)p(k-1) \log(1-\delta) + 1 + \log k\\
&= -p(k-1)(\delta + (1-\delta)\log(1-\delta)) + 1 + \log k\\
&\le -p(k-1)\delta^2/2 + 1 + \log k\\
&\le -(2+\epsilon/2)\log k + 1 + \log k\\
&= -(1+\epsilon/2) \log k + 1
\end{align*}
where the second inequality follows from the numeric inequality $\delta + (1-\delta)\log(1-\delta) \ge \delta^2/2$ for $\delta \in (0,1)$, and the third inequality follows from the assumption that $p \ge \frac{(4+\epsilon)}{\delta^2}\frac{\log k}{k-1}$.
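For completeness, the numeric inequality used in the second step admits a short calculus verification (this derivation is our addition, not part of the original source): let $g(\delta) := \delta + (1-\delta)\log(1-\delta) - \delta^2/2$. Then $g(0)=0$ and
\[
g'(\delta) = 1 + \big(-\log(1-\delta) - 1\big) - \delta = -\log(1-\delta) - \delta \ge 0 \quad \mbox{for } \delta \in (0,1),
\]
since $-\log(1-\delta) = \sum_{m\ge1}\delta^m/m \ge \delta$. Hence $g(\delta)\ge0$ on $(0,1)$, which is the claimed inequality.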
Next, consider the value of $f(j)$ over the interval $[3k/8, k/2]$. We have that for $3k/8 \le j \le k/2$ and $k\ge 20$,
\begin{align*}
f(j)
&\le -p\left(\frac{3k}{8}\right)\left(\frac{5k}{8}\right) + (1-\delta)p(k-1)\left(1 + \log\left(\frac{k^2/4}{(1-\delta)(k-1)}\right)\right) + \frac{k}{2} + \frac{3k}{8}\log\left(\frac{8}{3}\right)\\
&\le -\frac{15}{64}pk^2 + p(k-1)\left(1 + (1-\delta)\log\left(\frac{k^2/4}{k-1}\right)\right) + \frac{22}{25}k\\
&\le -pk\left(\frac{41}{256}k - 1 - \log\left(\frac{k^2/4}{k-1}\right)\right) - k\left(\frac{19}{256}pk - \frac{22}{25}\right)\\
&\le -\frac{1}{2}pk\\
&\le -(2+\epsilon/2)\log k.
\end{align*}
In the above, the first inequality follows from the fact that the term $j\log(k/j)$ is decreasing over the interval $[3k/8, k/2]$, the second inequality follows from the numeric inequality $(1-\delta) - (1-\delta)\log(1-\delta) \le 1$ for $\delta \in (0,1)$, which in turn follows from the fact that $\log x \ge 1 - 1/x$ for $x > 0$, and the fourth inequality follows from $k \ge 20$.
Therefore, the exponent in \eqref{eq:mincut_prob_exp} satisfies $f(j) \le -(1+\epsilon/2)\log k + 1$ for all $1 \le j \le k/2$ and $k \ge 20$. Finally, since $\mathrm{cut}_K(C) = \mathrm{cut}_K(K\backslash C)$, it suffices to consider subsets of size at most $k/2$, and applying a union bound over $j$ we get that
\begin{align*}
&\mathbb{P}(\mathrm{cut}_K(C) \le (1-\delta)p(k-1) \ \mbox{for some} \ C \subset K \ \mbox{s.t.} \ 1\le |C| \le k-1 )\\
&\le\sum_{j=1}^{\lfloor k/2 \rfloor} \mathbb{P}(\mathrm{cut}_K(C) \le (1-\delta)p(k-1) \ \mbox{for some} \ C \subset K \ \mbox{s.t.} \ |C|=j )\\
&\le\exp\Big(\max_{1 \le j \le k/2} f(j) + \log k\Big)
\le \exp\left(-\frac{\epsilon}{2}\log k + 1\right) = ek^{-\epsilon/2}
\end{align*}
which proves the required result.
\end{proof}
\begin{lemma}[Upper bound of external cut]\label{lem:external_cut}
For any $0 < \delta \le 1$ and $\epsilon > 0$, with probability at least $1 - k^{-\epsilon/3}$ we have that $\mathrm{cut}_G(K) \le (1+\delta)qk(n-k) + (e\epsilon/\delta^2 + \epsilon/3)\log k$.
\end{lemma}
\begin{proof}
Note that $\mathrm{cut}_G(K)$ is the sum of $k(n-k)$ independent Bernoulli random variables with mean $\mathbb{E}[\mathrm{cut}_G(K)] = qk(n-k)$. We consider two cases depending on the value of $qk(n-k)$. If $qk(n-k) \ge \epsilon\log k/\delta^2$, then by the multiplicative Chernoff bound we have that,
\begin{equation}\label{eq:external_cut_prob1}
\mathbb{P}(\mathrm{cut}_G(K) \ge (1+\delta)qk(n-k)) \le \exp\left(-\frac{\delta^2}{3}qk(n-k)\right) \le \exp\left(-\epsilon\log k/3\right).
\end{equation}
Next consider the case $qk(n-k) \le \epsilon\log k/\delta^2$. Denote $c(\epsilon,\delta) := e\epsilon/\delta^2 + \epsilon/3$ and observe that
\[
\frac{\epsilon}{\delta^2}
= \frac{c(\epsilon,\delta) - \epsilon/3}{e}
= \left(1-\frac{\epsilon/3}{c(\epsilon,\delta)}\right)\frac{c(\epsilon,\delta)}{e}
\le \exp\left(-\frac{\epsilon/3}{c(\epsilon,\delta)}\right)\frac{c(\epsilon,\delta)}{e}.
\]
This means that
\[
qk(n-k) \le \frac{\epsilon}{\delta^2}\log k \le \exp\left(-\frac{\epsilon/3}{c(\epsilon,\delta)}-1\right)c(\epsilon,\delta)\log k,
\]
and thus
\[
\frac{qk(n-k)}{c(\epsilon,\delta)\log k} \le \exp\left(-\frac{\epsilon/3}{c(\epsilon,\delta)}-1\right)
\ \iff \
c(\epsilon,\delta) + c(\epsilon,\delta)\log\left(\frac{qk(n-k)}{c(\epsilon,\delta)\log k}\right) \le - \epsilon/3.
\]
Therefore the Chernoff bound yields
\begin{equation}\label{eq:external_cut_prob2}
\begin{split}
\mathbb{P}\left(\mathrm{cut}_G(K) \ge c(\epsilon,\delta)\log k\right)
&\le e^{-qk(n-k)}\left(\frac{eqk(n-k)}{c(\epsilon,\delta)\log k}\right)^{c(\epsilon,\delta)\log k}\\
&=\exp\left(-qk(n-k) + c(\epsilon,\delta)\log k\left(1 + \log\left(\frac{qk(n-k)}{c(\epsilon,\delta)\log k}\right)\right)\right)\\
&\le\exp\left(\log k\left( c(\epsilon,\delta) + c(\epsilon,\delta)\log\left(\frac{qk(n-k)}{c(\epsilon,\delta)\log k}\right) \right)\right)\\
&\le\exp(-\epsilon\log k/3).
\end{split}
\end{equation}
Combining \eqref{eq:external_cut_prob1} and \eqref{eq:external_cut_prob2} gives the required result.
\end{proof}
\begin{lemma}[Concentration of degrees]\label{lem:degree}
If $p \ge \frac{(3+\epsilon)}{\delta^2}\frac{\log k}{k-1}$ for some $\epsilon>0$ and $0<\delta\le1$, then with probability at least $1-2k^{-\epsilon/3}$ we have that
\[
(1-\delta)p(k-1) \le \deg_K(i) \le (1+\delta)p(k-1), \forall i \in K.
\]
Similarly, with probability at least $1-2k^{-\epsilon/3}$ we have that
\[
(1-\delta)(p(k-1)+q(n-k)) \le \deg_G(i) \le (1+\delta)(p(k-1)+q(n-k)), \forall i \in K.
\]
\end{lemma}
\begin{proof}
For each node $i \in K$, $\deg_K(i)$ is the sum of $k-1$ independent Bernoulli random variables with mean $\mathbb{E}[\deg_K(i)] = p(k-1)$; therefore, applying the multiplicative Chernoff bound, we have
\[
\mathbb{P}(|\deg_K(i) - p(k-1)| \ge \delta p(k-1)) \le 2\exp(-\delta^2p(k-1)/3) \le 2\exp(-(3+\epsilon)\log k/3) = 2k^{-1-\epsilon/3}.
\]
By taking a union bound over all $i \in K$ we obtain the required concentration result for $\deg_K(i)$ for all $i \in K$. The result for $\deg_G(i)$ for all $i \in K$ is obtained similarly.
\end{proof}
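As an illustrative sanity check (not part of the proof), this concentration of within-cluster degrees is easy to simulate; the Python sketch below uses arbitrary parameter values and compares the empirical degree range against the $(1\pm\delta)p(k-1)$ bounds.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k, p, delta = 500, 0.2, 0.3              # illustrative values

# Sample the within-cluster adjacency matrix of G(k, p).
upper = np.triu(rng.random((k, k)) < p, 1)
adj = upper | upper.T                    # symmetrize

deg = adj.sum(axis=1)                    # deg_K(i) for each i in K
lo, hi = (1 - delta) * p * (k - 1), (1 + delta) * p * (k - 1)
print(f"degree range [{deg.min()}, {deg.max()}], "
      f"Chernoff bounds [{lo:.1f}, {hi:.1f}]")
\end{verbatim}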
\begin{lemma}[Well-connected cluster]\label{lem:connectivity}
If $p \ge \max(\frac{(3+\epsilon_1)}{\delta_1^2}\frac{\log k}{k-1}, \frac{(2+\epsilon_2)}{\delta_2\sqrt{1-\delta_1}}\frac{\sqrt{\log k}}{\sqrt{k-1}})$ for some $\epsilon_1,\epsilon_2>0$ and $0<\delta_1,\delta_2\le1$, then with probability at least $1-2k^{-\epsilon_1/3}-k^{-2\epsilon_2}$ we have that, for all $s \in K$ and all $i \in K\backslash\{s\}$, there are at least $(1-\delta_1)(1-\delta_2)p^2(k-1)$ paths connecting node $i$ to node $s$ such that the path lengths are at most 2 and the paths are mutually non-overlapping, i.e., an edge appears in at most one of the paths.
\end{lemma}
\begin{proof}
Let $s \in K$ and denote by $F$ the set of neighbors of $s$ in $K$. By Lemma~\ref{lem:degree} and our assumption on $p$, we know that $|F| \ge (1-\delta_1)p(k-1)$ with probability at least $1-2k^{-\epsilon_1/3}$. Let us denote by $E(A,B)$ the set of edges between $A \subseteq K$ and $B \subseteq K$. Let $i \in K\backslash\{s\}$. If $i \not\in F$, then $|E(\{i\}, F)|$ is the sum of independent Bernoulli random variables with mean $\mathbb{E}[|E(\{i\}, F)|] = |F|p$. Applying the multiplicative Chernoff bound, we get that
\[
\mathbb{P}(|E(\{i\}, F)| \le (1-\delta_2)|F|p) \le \exp\left(-\frac{\delta_2^2}{2}|F|p\right) \le \exp\left(-\frac{\delta_2^2(1-\delta_1)}{2}p^2(k-1)\right) \le \exp(-(2+2\epsilon_2)\log k)
\]
where the last inequality is due to our assumption that $p \ge \frac{(2+\epsilon_2)}{\delta_2\sqrt{1-\delta_1}}\frac{\sqrt{\log k}}{\sqrt{k-1}}$. If $i \in F$, then the edge $(i,s)$ is a path of length 1 between node $i$ and node $s$, moreover,
\[
\mathbb{P}(|E(\{i\}, F\backslash\{i\})| + 1 \le (1-\delta_2)|F|p) \le \mathbb{P}(|E(\{i'\}, F)| \le (1-\delta_2)|F|p)
\]
for any node $i' \in K\backslash F$ with $i'\neq s$. Note that, for $i \in K\backslash\{s\}$, each edge $(i,j)$ in $E(\{i\}, F\backslash\{i\})$ identifies a unique path $(i,j,s)$, and all these paths have no overlapping edges. Therefore, denoting by $P(i,s)$ the set of mutually non-overlapping paths of length at most 2 between $i$ and $s$, and taking union bounds over all $i \in K\backslash\{s\}$ and then over all $s \in K$, we get that
\[
\mathbb{P}(|P(i,s)| \le (1-\delta_2)|F|p \ \mbox{for some} \ s \in K \ \mbox{and} \ i \in K\backslash\{s\}) \le k^{-2\epsilon_2}.
\]
Finally, a union bound over the above event and the event that $|F| \le (1-\delta_1)p(k-1)$ gives the required result.
\end{proof}
\section{Dataset details, empirical setup and additional results}\label{sec:additional_experiments}
The co-authorship graphs are based on the Microsoft Academic Graph from the KDD Cup 2016 challenge~\cite{SMBG18}. In these graphs, nodes are authors, and two nodes are connected by an edge if they have coauthored a paper. The clusters are defined according to the most active research field of each author. The node attributes represent paper keywords for each author's papers. The first graph consists of 18,333 computer science researchers and 81,894 connections among them. Each computer science researcher belongs to one of the 15 ground-truth clusters. The node attributes consist of 6,805 keywords. The second graph consists of 34,493 physics researchers and 247,962 connections among them. Each physics researcher belongs to one of the 5 ground-truth clusters. The node attributes consist of 8,415 keywords. The cluster sizes are given in Table~\ref{tab:real-coauthor-stats}.
\begin{table}[h!]
\caption{Cluster statistics in co-authorship graphs}
\label{tab:real-coauthor-stats}
\centering
\setlength\tabcolsep{2.5pt}
\begin{tabular}{clrr}
\toprule
Network & Cluster & Number of nodes & Volume \\
\midrule
\multirow{16}{*}{\rotatebox[origin=c]{90}{Computer Science}}
& Bioinformatics & 708 & 3767 \\
& Machine Learning & 462 & 4387\\
& Computer Vision & 2050 & 20384\\
& NLP & 429 & 2476\\
& Graphics & 1394 & 15429\\
& Networks & 2193 & 18364\\
& Security & 371 & 2493\\
& Databases & 924 &9954\\
& Data Mining & 775 & 7573\\
& Game Theory & 118 & 362 \\
& HCI & 1444 & 15145\\
& Information Theory & 2033 & 16007 \\
& Medical Informatics & 420 & 3838\\
& Robotics & 4136 & 33708\\
& Theoretical CS & 876 & 9901\\
\cmidrule(l{2pt}r{2pt}){2-4}
& TOTAL & 18333 & 163788 \\
\midrule
\multirow{6}{*}{\rotatebox[origin=c]{90}{Physics}}
& Phys. Rev. A & 5750 & 52151\\
& Phys. Rev. B & 5045 & 54853\\
& Phys. Rev. C & 17426 & 325475\\
& Phys. Rev. D & 2753 & 40451\\
& Phys. Rev. E & 3519 & 22994\\
\cmidrule(l{2pt}r{2pt}){2-4}
& TOTAL & 34493& 495924\\
\bottomrule
\end{tabular}
\end{table}
For both datasets, we preprocess the node attributes by applying PCA to reduce the dimension to 128. In addition, for each node we enhance its attributes by taking a uniform average over its own attributes and its neighbors' attributes. Uniform averaging of neighborhood attributes has been shown to improve the signal-to-noise ratio in the CSBM~\cite{BFJ2021}. This operation does not break the local nature of Algorithm~\ref{alg:lgc} because it only needs to be done when it becomes necessary for subsequent computations, i.e., when a node is visited by Algorithm~\ref{alg:lgc}.
We consider two ways for setting the sink capacities. The first is $T_i = \deg_G(i)$ for all $i$. The corresponding local clustering results are reported in Table~\ref{tab:real-coauthor-results} in the main text. The second is $T_i = 1$ for all $i$. The additional results are presented in Table~\ref{tab:real-coauthor-results-complete}. For each cluster $K$ in a graph, given a seed node $s \in K$, we run Algorithm~\ref{alg:lgc} with source mass $\Delta_s = \alpha\sum_{i \in K}T_i$ for $\alpha \in \{1.5,1.75,2,\ldots,5\}$. We select the cluster that has the minimum edge-weighted conductance. Given edge weights $w_{ij}$ for $(i,j) \in E$ and a cluster $C \subseteq V$, the edge-weighted conductance of $C$ is the ratio
\[
\frac{\sum_{i \in C, j \not\in C} w_{ij}}{\sum_{i \in C}\sum_{j \sim i}w_{ij}}.
\]
We measure recovery quality using the F1 score. For each cluster we run 100 trials, and for each trial we randomly select a seed node from the target cluster. We report average F1 scores over the 100 trials. We set $\gamma = 0.02$ so that the edge weights are reasonably distributed between 0 and 1, that is, not all edge weights are arbitrarily close to 1 and not all edge weights are arbitrarily close to 0. We find that the results do not change much for other choices of $\gamma$ within a reasonable range, e.g., $\gamma \in [0.005,0.1]$. For both choices of $T$, using node attributes generally improves the recovery accuracy. Overall, setting the sink capacities to $T_i = \deg_G(i)$ leads to higher F1 scores than setting $T_i = 1$.
\begin{table}[h!]
\caption{F1 scores for local clustering in co-authorship networks under different settings of flow diffusion}
\label{tab:real-coauthor-results-complete}
\centering
\begin{tabular}{clrrrrrr}
\toprule
& & \multicolumn{3}{c}{$T_i=\deg_G(i)$ for all $i$} & \multicolumn{3}{c}{$T_i = 1$ for all $i$} \\
\cmidrule(l{2pt}r{2pt}){3-5} \cmidrule(l{2pt}r{2pt}){6-8}
Network & Cluster & No attr. & Use attr. & Improv. & No attr. & Use attr. & Improv.\\
\midrule
\multirow{15}{*}{\rotatebox[origin=c]{90}{Computer Science}}
& Bioinformatics & 32.1 & 39.3 & 7.2 & 23.5 & 31.7 & 8.2\\
& Machine Learning & 30.9 & 37.3 & 6.4 & 27.5 & 34.4 & 6.9\\
& Computer Vision & 37.6 & 35.5 & -2.1 & 40.4 & 37.8 & -2.6 \\
& NLP & 45.2 & 52.3 & 7.1 & 34.3 & 37.2 & 2.9\\
& Graphics & 38.6 & 49.2 & 10.6 & 39.1 & 41.3 & 2.2\\
& Networks & 44.1 & 47.0 & 2.9 & 43.0 & 44.1 & 1.1\\
& Security & 29.9 & 35.7 & 5.8 & 23.0 & 26.2 & 3.2\\
& Databases & 48.5 & 58.1 & 9.6 & 41.9 & 42.6 & 0.7\\
& Data Mining & 27.5 & 28.8 & 1.3 & 26.2 & 28.6 & 2.4\\
& Game Theory & 60.6 & 66.0 & 5.4 & 56.9 & 62.6 & 5.7\\
& HCI & 70.0 & 77.6 & 7.6 & 44.0 & 63.1 & 19.1\\
& Information Theory & 47.4 & 46.9 & -0.5 & 41.6 & 41.4 & -0.2\\
& Medical Informatics & 65.7 & 70.3& 4.6 & 62.7 & 68.1 & 5.4\\
& Robotics & 59.9 & 59.9& 0.0 & 58.8 & 55.9 & -2.9\\
& Theoretical CS & 66.3 & 70.7 & 4.4 & 54.9 & 59.1 & 4.2\\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{Physics}}
& Phys. Rev. A & 69.4 & 70.9 & 1.5 & 53.5 & 60.9 & 7.4\\
& Phys. Rev. B & 41.4 & 42.3 & 0.9 & 40.4 & 41.1 & 0.7\\
& Phys. Rev. C & 79.3 & 82.1 & 2.8 & 84.9 & 85.9 & 1.0\\
& Phys. Rev. D & 62.3 & 68.9 & 6.6 & 63.6 & 70.0 & 6.4\\
& Phys. Rev. E & 49.5 & 53.7 & 4.2 & 30.1 & 34.9 & 4.8\\
\midrule
\multicolumn{2}{c}{AVERAGE} & 50.3 & 54.6 & 4.3 & 44.5 & 48.3 & 3.8\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Additional experiments with Amazon co-purchase graph}\label{sec:amazon-results}
We carry out additional experiments using a segment of the Amazon co-purchase graph~\cite{mcauley2015image,SMBG18}. In this graph, nodes represent products, and two products are connected by an edge if they are frequently bought together. The clusters are defined according to the product category. The node attributes are bag-of-words encoded product reviews. The cluster sizes are given in Table~\ref{tab:real-amazon-stats}. We use exactly the same empirical settings as before. The local clustering results are reported in Table~\ref{tab:real-amazon-results}.
\begin{table}[h!]
\caption{Cluster statistics in the Amazon co-purchase graph}
\label{tab:real-amazon-stats}
\centering
\setlength\tabcolsep{2.5pt}
\begin{tabular}{lrr}
\toprule
Cluster & Number of nodes & Volume \\
\midrule
Film Photography & 365 & 13383\\
Digital Cameras & 1634 & 32208\\
Binoculars \& Scopes & 686 & 21611\\
Lenses & 901 & 26479\\
Tripods \& Monopods & 872 & 26133\\
Video Surveillance & 798 & 17959\\
Lighting \& Studio & 1900 & 86989\\
Flashes & 331 & 13324\\
\cmidrule(l{2pt}r{2pt}){1-3}
TOTAL & 7487 & 238086 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[h!]
\caption{F1 scores for local clustering in a segment of the Amazon co-purchase graph}
\label{tab:real-amazon-results}
\centering
\begin{tabular}{lrrrrrr}
\toprule
& \multicolumn{3}{c}{$T_i=\deg_G(i)$ for all $i$} & \multicolumn{3}{c}{$T_i = 1$ for all $i$} \\
\cmidrule(l{2pt}r{2pt}){2-4} \cmidrule(l{2pt}r{2pt}){5-7}
Cluster & No attr. & Use attr. & Improv. & No attr. & Use attr. & Improv.\\
\midrule
Film Photography & 69.0 & 71.9 & 2.9 & 70.4 & 74.0 & 3.6\\
Digital Cameras & 54.4 & 56.0 & 1.6 & 42.7 & 43.1 & 0.4\\
Binoculars & 83.3 & 85.1 & 1.8 & 81.8 & 82.7 & 0.9\\
Lenses & 39.0 & 40.4 & 1.4 & 32.2 & 32.9 & 0.7\\
Tripods \& Monopods & 46.3 & 47.8 & 1.5 & 37.9 & 38.1 & 0.2\\
Video Surveillance & 94.7 & 94.9 & 0.2 & 94.0 & 93.8 & -0.2\\
Lighting \& Studio & 49.6 & 49.5 & -0.1 & 53.7 & 53.5 & -0.2\\
Flashes & 33.3 & 32.7 & -0.6 & 27.0 & 25.8 & -1.2\\
\midrule
AVERAGE & 58.7 & 59.8 & 1.1 & 55.0 & 55.5 & 0.5\\
\bottomrule
\end{tabular}
\end{table}
We estimate an average signal-to-noise ratio in each dataset as follows. Let $K_1,K_2,\ldots,K_C$ denote a partition of nodes into distinct clusters. Let $X_i$ be the node attributes of node $i$. For $1 \le r \le C$ let
\[
\bar\mu_r := \frac{1}{|K_r|}\sum_{i \in K_r} X_i
\]
be the empirical mean of node attributes in the cluster $K_r$. Denote by
\[
\bar\lambda_r := \min_{1\le s \le C, s \neq r}\|\bar\mu_r - \bar\mu_s \|_2
\]
the empirical minimum pairwise mean distance between cluster $K_r$ and the other clusters. Let $\bar\sigma_{\ell}$ denote the empirical standard deviation of the $\ell$th attribute and let $\bar\sigma = \frac{1}{d}\sum_{\ell=1}^d \bar\sigma_{\ell}$, where $d$ is the dimension of the node attributes. Then we compute an average relative signal strength for the entire dataset as
\[
\mbox{ratio} := \frac{1}{C}\sum_{r = 1}^C\bar\lambda_r/\bar\sigma.
\]
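This estimate is straightforward to compute; a Python sketch under our own naming, where \texttt{X} is the $n \times d$ attribute matrix and \texttt{labels} holds the cluster assignment:
\begin{verbatim}
import numpy as np

def relative_signal_strength(X, labels):
    clusters = np.unique(labels)
    means = {r: X[labels == r].mean(axis=0) for r in clusters}  # mu_r
    sigma_bar = X.std(axis=0).mean()                            # sigma
    ratios = [min(np.linalg.norm(means[r] - means[s])
                  for s in clusters if s != r) / sigma_bar      # lambda_r / sigma
              for r in clusters]
    return float(np.mean(ratios))
\end{verbatim}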
The computed results are shown in Table~\ref{tab:snr}. Observe that the ratio is much smaller for the Amazon co-purchase graph than for the two co-authorship graphs. This means that the relative strength of the attribute signal is much smaller for the Amazon co-purchase graph, which explains why there is only a very small improvement when using node attributes.
\begin{table}[h!]
\caption{Relative signal strength for each dataset}
\label{tab:snr}
\centering
\begin{tabular}{lr}
\toprule
graph & ratio\\
\midrule
Co-authorship (Computer Science) & 41.69\\
Co-authorship (Physics) & 77.09\\
Amazon co-purchase & 7.58 \\
\bottomrule
\end{tabular}
\end{table}
The results we observe in the experiments with real-world datasets indicate that a very interesting direction for future work is to incorporate node embedding and parameter learning into the local flow diffusion pipeline (to improve the signal-to-noise ratio of node attributes), where the attributes and their relative importance may be optimized simultaneously alongside the local diffusion process.
|
2,869,038,155,088 | arxiv | \section{Introduction}
Hate speech represents written or oral communication that in any way discredits a person or a group based on characteristics such as race, color, ethnicity, gender, sexual orientation, nationality, or religion \cite{warner2012detecting}. Hate speech targets disadvantaged social groups and harms them both directly and indirectly \cite{waldron2012harm}. Social networks like Twitter and Facebook, where hate speech frequently occurs, receive much criticism for not doing enough to deal with it. As the connection between hate speech and actual hate crimes is strong \cite{bleich2011rise}, the importance of detecting and managing hate speech is not questionable. Early identification of users who promote such kind of communication can prevent an escalation from speech to action. However, automatic hate speech detection is difficult, especially when the text does not contain explicit hate speech keywords. Lexical detection methods tend to have low precision because, during classification, they do not take into account the contextual information the messages carry \cite{davidson2017automated}. Recently, contextual word and sentence embedding methods, which capture semantic and syntactic relations among the words, have improved prediction accuracy.
Recent works on combining probabilistic Bayesian inference and neural network methodology have attracted much attention in the scientific community \cite{myshkov2016posterior}. The main reason is the ability of probabilistic neural networks to quantify the trustworthiness of predicted results. This information can be important, especially in tasks where decision making plays an important role \cite{inproceedings}. Areas which can significantly benefit from prediction uncertainty estimation include text classification tasks which trigger specific actions. Hate speech detection is an example of a task where reliable results are needed to remove harmful contents and possibly ban malicious users without preventing the freedom of speech. In order to assess the uncertainty of the predicted values, the neural networks require a Bayesian framework.
On the other hand, Srivastava et al. \cite{srivastava2014dropout} proposed a regularization approach, called dropout, which has a considerable impact on the generalization ability of neural networks. The approach drops randomly selected nodes from the neural network during the training process. Dropout increases the robustness of networks and prevents overfitting. Different variants of dropout have improved classification results in various areas \cite{baldi2013understanding}. Gal and Ghahramani \cite{gal2016dropout} exploited the interpretation of dropout as a Bayesian approximation and proposed a Monte Carlo dropout (MCD) approach to estimate the prediction uncertainty. In this paper, we analyze the applicability of Monte Carlo dropout to assessing the predictive uncertainty.
Our main goal is to accurately and reliably classify different forms of text as hate or non-hate speech, giving a probabilistic assessment of the prediction uncertainty in a comprehensible visual form. We also investigate the ability of deep neural network methods to provide good prediction accuracy on small textual data sets. The outline of the proposed methodology is presented in Figure \ref{fig:int}.
\begin{figure}[b!!]
\centering
\includegraphics[width=0.8 \linewidth]{name/dia5.pdf}
\caption{The diagram of the proposed methodology.
}
\label{fig:int}
\end{figure}
Our main contributions are:
\begin{itemize}
\item investigation of prediction uncertainty assessment to the area of text classification,
\item implementation of hate speech detection with reliability output,
\item evaluation of different contextual embedding approaches in the area of hate speech,
\item a novel visualization of prediction uncertainty and errors of classification models.
\end{itemize}
The paper consists of six sections. In Section 2, we present related works on hate speech detection, prediction uncertainty assessment in text classification context, and visualization of uncertainty. In Section 3, we propose the methodology for uncertainty assessment using dropout within neural network models, as well as our novel visualization of prediction uncertainty. Section 4 presents the data sets and the experimental scenario. We discuss the obtained results in Section 5 and present conclusions and ideas for further work in Section 6.
\section{Related Work}
We shortly present the related work in three areas which constitute the core of our approach: hate speech detection, recurrent neural networks with Monte Carlo dropout for assessment of prediction uncertainty in text classification, and visualization of predictive uncertainty.
\subsection{Hate Speech Detection}
Techniques used for hate speech detection are mostly based on supervised learning. The most frequently used classifier is the Support Vector Machines (SVM) method \cite{schmidt2017survey}. Recently, deep neural networks, especially recurrent neural network language models \cite{mehdad2016characters}, became very popular.
Recent studies compare (deep) neural networks \cite{rother2018ulmfit,corazza2018comparing,del2017hate} with the classical machine learning methods.
Our experiments investigate embeddings and neural network architectures that can achieve superior predictive performance to SVM or logistic regression models. More specifically, our interest is to explore the performance of MCD neural networks applied to the hate speech detection task.
\subsection{Prediction Uncertainty in Text Classification}
Recurrent neural networks (RNNs) are a popular choice in text mining. The dropout technique was first introduced to RNNs in 2013 \cite{wang2013fast}, but further research revealed a negative impact of dropout in RNNs, especially within language modeling. For example, dropout in RNNs employed on a handwriting recognition task disrupted the ability of recurrent layers to effectively model sequences \cite{pham2014dropout}. Dropout was successfully applied to language modeling by \cite{zaremba2014recurrent}, who applied it only to fully connected layers. The then state-of-the-art results were explained by the fact that with dropout much deeper neural networks can be constructed without the danger of overfitting.
Gal and Ghahramani \cite{gal2016theoretically} implemented variational inference based dropout which can also regularize recurrent layers. Additionally, they provide a solution for dropout within word embeddings. The method mimics Bayesian inference by combining probabilistic parameter interpretation and deep RNNs. The authors introduce the idea of augmenting probabilistic RNN models with prediction uncertainty estimation. Recent works further investigate how to estimate prediction uncertainty within different data frameworks using RNNs \cite{zhu2017deep}. One of the first investigations of the probabilistic properties of SVM predictions is described in the work of Platt \cite{Platt99probabilisticoutputs}. The application of the Bayes by Backprop (BBB) method to RNNs was investigated by \cite{fortunato2017bayesian}.
Our work combines the existing MCD methodology with the latest contextual embedding techniques and applies them to hate speech classification task. The aim is to obtain high quality predictions coupled with reliability scores as means to understand the circumstances of hate speech.
\subsection{Prediction Uncertainty Visualization in Text Classification}
Visualizations help humans in making decisions, e.g., to select a driving route, evacuate before a hurricane strikes, or identify optimal methods for allocating business resources. One of the first attempts to obtain and visualize the latent space of predicted outcomes was the work of Berger et al. \cite{berger2011uncertainty}. Prediction values were also visualized in geo-spatial research on hurricane tracks \cite{cox2013visualizing,ruginski2016non}. The importance of visualization for prediction uncertainty estimation in the context of decision making was discussed in \cite{liu2019visualizing,liu2016uncertainty}.
We are not aware of any work on prediction uncertainty visualization for text classification or hate speech detection. We present visualization of tweets in a two dimensional latent space that can reveal relationship between analyzed texts.
\section{Deep Learning with Uncertainty Assessment}
\label{sec:MCD}
Deep learning has received significant attention in both NLP and other machine learning applications. However, standard deep neural networks do not provide information on the reliability of predictions. The Bayesian neural network (BNN) methodology can overcome this issue by a probabilistic interpretation of model parameters. Apart from prediction uncertainty estimation, BNNs offer robustness to overfitting and can be efficiently trained on small data sets \cite{kucukelbir2017automatic}. However, neural networks that apply Bayesian inference can be computationally expensive, especially the ones with complex, deep architectures. Our work is based on the Monte Carlo Dropout (MCD) method proposed by \cite{gal2016dropout}. The idea of this approach is to capture prediction uncertainty using dropout as a regularization technique.
In contrast to classical RNNs, Long Short-Term Memory (LSTM) neural networks introduce additional gates within the neural units. There are two sources of information for a specific instance $t$ that flow through all the gates: the input values $x_t$ and the recurrent values that come from the previous instance, $h_{t-1}$. Initial attempts to introduce dropout within the recurrent connections were not successful, reporting that dropout breaks the correlation among the input values. Gal and Ghahramani \cite{gal2016theoretically} solve this issue using a predefined dropout mask which is the same at each time step. This opens the possibility to perform dropout during each forward pass through the LSTM network, estimating the whole distribution for each of the parameters. The posterior distribution of the parameters approximated with such a network structure, $q(\omega)$, is used in constructing the posterior predictive distribution for a new instance $y^*$:
\begin{equation}
p (y^*|x^*,D) \approx \int p\big (y^*|f^\omega(x^*)\big) \; q(\omega) d\omega,
\end{equation}
where $p\big (y^*|f^\omega(x^*)\big)$ denotes the likelihood function. In regression tasks, this distribution is summarized by reporting means and standard deviations, while for classification tasks the mean probability is calculated as:
\begin{equation}
\dfrac{1}{K}\sum_{k=1}^K p(y^*|x^*,\hat{\omega}_k)
\end{equation}
where $\hat{\omega}_k$ $\sim$ $q(\omega)$. Thus, the dropout used throughout the network during the training phase is also applied in the testing phase, generating (sampling) $K$ predicted values for each test instance through $K$ stochastic forward passes. The benefit of such results is not only more accurate prediction estimates but also the possibility to visualize the test instances within the generated outcome space.
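Operationally, this sampling reduces to keeping dropout active at prediction time; below is a minimal Keras-style sketch, where the \texttt{model} object, the test data \texttt{x\_test}, and the number of passes are illustrative assumptions:
\begin{verbatim}
import numpy as np

K_PASSES = 100   # number of stochastic forward passes (illustrative)

# `model` is assumed to be a Keras model whose dropout layers stay
# active when called with training=True; x_test holds encoded texts.
samples = np.stack([model(x_test, training=True).numpy()
                    for _ in range(K_PASSES)])

mean_prob = samples.mean(axis=0)   # Monte Carlo estimate of p(y*|x*, D)
spread = samples.std(axis=0)       # per-instance prediction uncertainty
\end{verbatim}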
\subsection{Prediction Uncertainty Visualization}
For each test instance, the neural network outputs a vector of probability estimates corresponding to the samples generated through Monte Carlo dropout. This creates an opportunity to visualize the variability of individual predictions. With the proposed visualization, we show the correctness and reliability of individual predictions, including false positive results that can be just as informative as correctly predicted ones. The creation of visualizations consists of the following five steps, elaborated below.
\begin{enumerate}
\item Projection of the vector of probability estimates into a two dimensional vector space.
\item Point coloring according to the mean probabilities computed by the network.
\item Determining point shapes based on correctness of individual predictions (four possible shapes).
\item Labeling points with respect to individual documents.
\item Kernel density estimation of the projected space --- this step attempts to summarize the instance-level samples obtained by the MCD neural network.
\end{enumerate}
As the MCD neural network produces hundreds of probability samples for each target instance, it is not feasible to directly visualize such a multi-dimensional space. To solve this, we leverage the recently introduced UMAP algorithm \cite{mcinnes2018umap}, which projects the input $d$-dimensional data into an $s$-dimensional (in our case $s=2$) representation using computational insights from manifold theory. The result of this step is a matrix with two columns, where each column represents a latent dimension into which the input samples were projected, and each row represents a text document.
In the next step, we overlay the obtained representation with other relevant information, obtained during sampling. Individual points (documents) are assigned the mean probabilities of samples, thus representing the reliability of individual predictions. We discretize the $[0,1]$ probability interval into four bins of equal size for readability purposes. Next, we shape individual points according to the correctness of predictions. We take into account four possible outcomes (TP - true positives, FP - false positives, TN - true negatives, FN - false negatives).
As the obtained two dimensional projection represents an approximation of the initial sample space, we compute the kernel density estimation in this subspace and thereby outline the main neural network's predictions. We use two dimensional Gaussian kernels for this task.
The obtained estimations are plotted alongside individual predictions and represent densities of the neural network's focus, which can be inspected from the point of view of correctness and reliability.
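Steps 1 and 5 of the pipeline can be reproduced with standard libraries; the sketch below assumes \texttt{samples} is the (documents $\times$ passes) matrix of MCD probability samples:
\begin{verbatim}
import numpy as np, umap                      # pip install umap-learn
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

proj = umap.UMAP(n_components=2).fit_transform(samples)

kde = gaussian_kde(proj.T)                    # 2D Gaussian kernels
xs, ys = np.mgrid[proj[:, 0].min():proj[:, 0].max():100j,
                  proj[:, 1].min():proj[:, 1].max():100j]
dens = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

plt.contour(xs, ys, dens)                     # isometric density lines
plt.scatter(proj[:, 0], proj[:, 1], c=samples.mean(axis=1))
plt.colorbar(label="mean predicted probability")
plt.show()
\end{verbatim}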
\section{Experimental Setting}
We first present the data sets used for the evaluation of the proposed approach, followed by the experimental scenario. The results are presented in Section \ref{sec:Results}.
\subsection{Hate Speech Data Sets}
We use three data sets related to the hate speech.
\subsubsection{1 - HatEval} data set is taken from the SemEval task "Multilingual detection of hate speech against immigrants and women in Twitter (hatEval)\footnote{\href{https://competitions.codalab.org/competitions/19935}{https://competitions.codalab.org/competitions/19935}}". The competition was organized for two languages, Spanish and English; we only processed the English data set. The data set consists of 100 tweets labeled as 1 (hate speech) or 0 (not hate speech).
\subsubsection{2 - YouToxic} data set is a manually labeled text toxicity data, originally containing \SI{1000} comments crawled from YouTube videos about the Ferguson unrest in 2014\footnote{\href{https://zenodo.org/record/2586669\#.XJiS8ChKi70}{https://zenodo.org/record/2586669\#.XJiS8ChKi70}}. Apart from the main label describing if the comment is hate speech, there are several other labels characterizing each comment, e.g., if it is a threat, provocative, racist, sexist, etc. (not used in our study). There are 138 comments labeled as a hate speech and 862 as non-hate speech. We produced a data set of 300 comments using all 138 hate speech comments and randomly sampled 162 non-hate speech comments.
\subsubsection{3 - OffensiveTweets}
data set\footnote{\href{https://github.com/t-davidson/hate-speech-and-offensive-language}{https://github.com/t-davidson/hate-speech-and-offensive-language}} originates in a study regarding hate speech detection and the problem of offensive language \cite{davidson2017automated}. Our data set consists of \num{3000} tweets. We took 1430 tweets labeled as hate speech and randomly sampled 1670 tweets from the collection of the remaining \num{23353} tweets.
\subsubsection{Data Preprocessing}
Social media texts use specific language and contain syntactic and grammar errors. Hence, in order to get correct and clean text data, we applied several preprocessing techniques without removing text documents based on their length. The pipeline for cleaning the data was as follows (a minimal sketch is given after the list):
\begin{itemize}
\item Noise removal: user-names, email address, multiple dots, and hyper-links are considered irrelevant and are removed. \item Common typos are corrected and typical contractions and hash-tags are expanded.
\item Stop words are removed and the words are lemmatized.
\end{itemize}
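A compact sketch of such a cleaning function using NLTK is given below; the regular expressions and resources are illustrative rather than the exact ones from our implementation, and NLTK's stopword and WordNet data must be downloaded beforehand.
\begin{verbatim}
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def clean(text):
    # user-names, e-mail addresses and hyper-links are removed
    text = re.sub(r"https?://\S+|@\w+|\S+@\S+", " ", text)
    text = re.sub(r"\.{2,}", ".", text)       # multiple dots
    text = re.sub(r"#(\w+)", r"\1", text)     # expand hash-tags
    tokens = re.findall(r"[a-z']+", text.lower())
    return " ".join(lemmatizer.lemmatize(t)
                    for t in tokens if t not in stop_words)
\end{verbatim}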
\subsection{Experimental Scenario}
We use logistic regression (LR) and Support Vector Machines (SVM) from the scikit-learn library \cite{sklearn_api} as the baseline classification models. As a baseline RNN, the LSTM network from the Keras library was applied \cite{chollet2015keras}. Both the LSTM and MCD LSTM networks consist of an embedding layer, an LSTM layer, and a fully connected layer when used with the word2vec and ELMo embeddings. The embedding layer was not used with TF-IDF and the Universal Sentence Encoder.
To tune the parameters of LR (i.e., the \textit{liblinear} and \textit{lbfgs} solver functions and the regularization parameter $C$ from $0.01$ to $100$) and SVM (i.e., the \textit{rbf} kernel function, the parameter $C$ from $0.01$ to $100$, and the $\gamma$ values from $0.01$ to $100$), we utilized the random search approach \cite{bergstra2012random} implemented in scikit-learn. To obtain the best architectures for the LSTM and MCD LSTM models, the number of units, batch size, dropout rates, and other hyper-parameters were fine-tuned.
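For the SVM baseline, this random search corresponds to the following scikit-learn sketch (the data variables are placeholders; the parameter ranges match those quoted above):
\begin{verbatim}
from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

param_dist = {"kernel": ["rbf"],
              "C": loguniform(0.01, 100),
              "gamma": loguniform(0.01, 100)}
search = RandomizedSearchCV(SVC(), param_dist, n_iter=20,
                            cv=5, random_state=0)
search.fit(X_train, y_train)   # X_train, y_train: encoded texts, labels
print(search.best_params_)
\end{verbatim}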
\section{Evaluation and Results}
\label{sec:Results}
We first describe experiments comparing different word representations, followed by sentence embeddings, and finally the visualization of predictive uncertainty.
\subsection{Word Embedding}
In the first set of experiments, we represented the text with word embeddings (sparse TF-IDF \cite{sparck1972statistical}, dense word2vec \cite{mikolov2013efficient}, and ELMo \cite{peters2018deep}). We utilize the gensim library \cite{rehurek_lrec} for the word2vec model, scikit-learn for TF-IDF, and the pretrained ELMo model from TensorFlow Hub\footnote{https://tfhub.dev/google/elmo/2}. We compared different classification models using these word embeddings. The results are presented in Table \ref{tab3}.
The architecture of the LSTM and MCD LSTM neural networks contains an embedding layer, an LSTM layer, and a fully-connected (dense) layer for the word2vec and ELMo word embeddings. In the LSTM, recurrent dropout is applied to the units for the linear transformation of the recurrent state, and classical dropout is used for the units with the linear transformation of the inputs.
The number of units, recurrent dropout, and dropout probabilities for the LSTM layer were obtained by fine-tuning (i.e., we used $512$, $0.2$, and $0.5$ for word2vec and TF-IDF, and $1024$, $0.5$, and $0.2$ for ELMo in the experiments with the MCD LSTM architecture). The search ranges for hyper-parameter tuning are described in Table \ref{table:nn_hyperparameters}.
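A minimal Keras sketch of this architecture with the word2vec hyper-parameter values quoted above (the vocabulary size and embedding dimension are placeholders):
\begin{verbatim}
from tensorflow.keras import layers, models

def build_lstm(vocab_size, embed_dim=300):
    model = models.Sequential([
        layers.Embedding(vocab_size, embed_dim),
        # `dropout` acts on the input transformation,
        # `recurrent_dropout` on the recurrent state transformation
        layers.LSTM(512, dropout=0.5, recurrent_dropout=0.2),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
\end{verbatim}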
\begin{table}[H]
\footnotesize
\centering
\caption{Comparison of classification accuracy (with standard deviation in brackets) for word embeddings, computed using 5-fold cross-validation. All the results are expressed in percentages and the best ones for each data set are in bold.}
\renewcommand{\arraystretch}{1}
\setlength{\tabcolsep}{3pt}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|rrr|rrr|rrr}
&\multicolumn{3}{c|}{ \textbf{HatEval}} & \multicolumn{3}{c|}{\textbf{YouToxic}} & \multicolumn{3}{c}{\textbf{OffensiveTweets}}\\
\textbf{Model} & \textbf{TF-IDF} & \textbf{W2V} & \textbf{ELMo} & \textbf{TF-IDF} & \textbf{W2V} & \textbf{ELMo} & \textbf{TF-IDF} & \textbf{W2V} & \textbf{ELMo} \\
\hline
\textbf{Logistic Regression} & 68.0 [2.4] & 54.0 [13.6] & 62.0 [6.8]& 69.3 [3.0] & 54.0 [3.0] & \textbf{76.6 [6.1]} & \textbf{77.2 [1.1]} & 68.0 [2.4] & 75.6 [1.2] \\
\textbf{SVM} & 63.0 [5.1]& 66.0 [3.7] & 62.0 [12.9] & 70.6 [4.2] & 55.0 [3.4] & 73.3 [5.5] & 77.0 [0.7] & 59.6 [1.5] & 73.0 [1.9]\\
\textbf{LSTM} & 69.0 [7.3] & 67.0 [6.8]& 66.0 [12.4] & 66.6 [2.3] & 59.3 [4.6] & 74.3 [2.7] & 73.4 [0.8] & 75.0 [1.7] & 74.7 [1.9] \\
\textbf{MCD LSTM} & 67.0 [10.8]& \textbf{69.0 [6.6]} & 67.0 [9.8] & 66.0 [3.7] & 59.3 [3.8] & 75.3 [5.5] & 71.1 [1.6] & 72.0 [1.6] & 75.2 [0.9] \\ \hline
\end{tabular}
}
\label{tab3}
\end{table}
\begin{table}[htbp]
\caption{Hyper-parameters for LSTM and MCD LSTM models}
\renewcommand{\arraystretch}{0.75}
\setlength{\tabcolsep}{3pt}
\label{table:nn_hyperparameters}
\centering
\begin{tabular}{l|l|l}
\textbf{Name} & \textbf{Parameter type} & \textbf{Values}\\
\hline
\textbf{Optimizers} & Categorical & Adam, rmsprop\\
\textbf{Batch size} & Discrete & 4 to 128, step=4\\
\textbf{Activation function} & Categorical & tanh, relu and linear\\
\textbf{Number of epochs} & Discrete & 10 to 100, step=5\\
\textbf{Number of units} & Discrete & 128, 256, 512, or 1024\\
\textbf{Dropout rate} & Float & 0.1 to 0.8, step=0.05\\
\hline
\end{tabular}
\end{table}
The classification accuracy for the HatEval data set is reported in Table \ref{tab3} (left). The difference between logistic regression and the two LSTM models indicates an accuracy improvement once recurrent layers are introduced. On the other hand, as the ELMo embedding already uses an LSTM layer to take into account the semantic relationship among the words, no notable difference between logistic regression and the LSTM models can be observed with this embedding.
Results for YouToxic and OffensiveTweets data sets are presented in Table \ref{tab3} (middle) and (right), respectively. Similarly to the HatEval data set, there is a difference between the logistic regression and the two LSTM models using the word2vec embeddings. For all data sets, the results with ELMo embeddings are similar across the four classifiers.
\subsection{Sentence Embedding}
In the second set of experiments, we compared different classifiers using sentence embeddings \cite{cer2018universal} as the representation. Table \ref{tab4} (left) displays the results for HatEval. We can notice improvements in classification accuracy for all classifiers compared to the word embedding representations in Table \ref{tab3}. The best model for this small data set is MCD LSTM. For the larger YouToxic and OffensiveTweets data sets, all the models perform comparably. Apart from the prediction accuracy, the four models were compared using precision, recall, and F1 score \cite{chi}.
We use the Universal Sentence Encoder module\footnote{https://tfhub.dev/google/universal-sentence-encoder-large/3} to encode the data. The architecture of the LSTM and MCD LSTM networks contains an LSTM layer and a dense layer. In the experiments with the MCD LSTM architecture, the number of units, the recurrent dropout, and the dropout value for the LSTM layer are $1024$, $0.75$, and $0.5$, respectively. The dense layer has the same number of units as the LSTM layer, and the applied dropout rate is $0.5$. The hyper-parameters used to tune the LSTM and MCD LSTM models are presented in Table \ref{table:nn_hyperparameters}.
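Encoding with the Universal Sentence Encoder amounts to a few lines of TensorFlow Hub code; note that the sketch below uses the TF2-compatible version 5 of the module, whereas our experiments used version 3:
\begin{verbatim}
import tensorflow_hub as hub

embed = hub.load(
    "https://tfhub.dev/google/universal-sentence-encoder-large/5")
sentences = ["burundian refugees should go home!",
             "seriously, amy and cindy are bffs."]
embeddings = embed(sentences)   # shape (2, 512): one vector per sentence
\end{verbatim}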
\begin{table}[H]
\centering
\caption{Comparison of predictive models using sentence embeddings. We present average classification accuracy, precision, recall and $F_1$ score (and standard deviations), computed using 5-fold cross-validation. All the results are expressed in percentages and the best accuracies are in bold.
}
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{4pt}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|rrrr|rrrr|rrrr}
&\multicolumn{4}{c}{ \textbf{HatEval}} & \multicolumn{4}{c}{\textbf{YouToxic}} & \multicolumn{4}{c}{\textbf{OffensiveTweets}}\\
\textbf{Model} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F1} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F1} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F1} \\
\hline
\textbf{LR} & 66.0 [12.4] & 67.3 [15.3] & 65.2 [15.9] & 65.2 [13.1] & 77.3 [4.1]& 74.3 [7.3] & 77.3 [3.6] & 75.7 [5.3] & 80.8 [1.0]& 79.6 [1.9] & 84.9 [1.2] & 82.2 [1.1] \\
\textbf{SVM} & 67.0 [12.1] & 68.2 [15.2] & 65.0 [15.8] & 65.8 [13.3] & 77.3 [6.2]& 72.6 [8.6] & 80.7 [7.4] & 76.3 [7.6] & 80.7 [1.3]& 78.6 [2.0] & 86.7 [1.0] & 82.4 [1.2] \\
\textbf{LSTM} & 70.0 [8.4] & 70.8 [11.0] & 63.1 [17.5] & 66.2 [14.4] & 76.6 [8.6] & 73.4 [11.2] & 79.2 [8.0] & 75.8 [8.6] & 80.7 [1.6]& 82.8 [2.1] & 79.7 [2.3] & 81.1 [1.5] \\
\textbf{MCD LSTM} & \textbf{74.0 [10.7]} & 73.4 [12.7] & 78.4 [13.6] & 74.9 [10.0] & \textbf{78.7 [5.8]} & 74.7 [9.2] & 80.9 [6.5] & 77.5 [7.4] & \textbf{81.0 [1.2]} & 81.5 [1.8] & 82.5 [2.7] & 81.9 [1.3] \\
\hline
\end{tabular}}
\label{tab4}
\end{table}
\subsection{Visualizing Predictive Uncertainty}
In Figure~\ref{fig:viz} we present a new way of visualizing dependencies among the test tweets. The relations result from applying the MCD LSTM network to the HatEval data set. This allows further inspection of the results as well as interpretation of correct and incorrect predictions. To improve the comprehensibility of predictions and errors, each point in the visualization is labeled with a unique identifier, making the point traceable to the original document, given in Table~\ref{tbl:tweets}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5 \linewidth,height=4.5cm]{umap}
\caption{Visualization of individual predictions along with their probability ranges. The numeric labels besides each point correspond to individual tweets, summarized in Table ~\ref{tbl:tweets}. The size of the points corresponds to the neural network's mean probability for a given prediction. True positives are marked with circles, true negatives with crosses, false positives with squares, and false negatives as pluses.
}
\label{fig:viz}
\end{figure}
\begin{table}[hbt]
\centering
\caption{Test documents (one fold) used for the visualization in Figure \ref{fig:viz}.}
\resizebox{1\textwidth}{!}{
\begin{tabular}{r|l}
Index (see Figure \ref{fig:viz}) & Corresponding text \\
\hline
0 & ok if you fu***** said leave block me. but dm me first i am gonna kick your ass. shut the f*** up you \\
1 & wow mo c*** got hard. want to pull your panties down, push you against these door and let you feel my c*** deep in your wet p**** \\
2 & no i said hope. i hope you women learn your place! sitdowninthekitchen \\
3 & so this b**** blocked me but check her pinned tweet out, she is accusing men of rape on tumblr with \dots \\
4 & this is not rape, or anything like it. this a crude pass. this is not 'rape culture.' it is w**** culture. \\
5 & rape is bad but alota h*** be lying bout that s*** women are crazy dawg \& will f*** ya life up be careful \\
6 & lol. they did not applaud a rape threat .how long are you going to ride that s***? \\
7 & you had a great chance to show everyone that youre not a c*** on tv tonight and you f***** it up. \\
8 & lmao!!! stuff i say. some stuff be so damn funny.especially when it comes to illegal immigrants who are full fledged criminals coming to america \dots \\
9 & the german government pays for 3 week vacation for refugees to go home - \\
10 & burundian refugees should go home! \\
11 & no benefits to any immigrants !! come here \& make it on your own, or go back home !! \\
12 & illegal aliens waiting for first class msc container-shippping in antwerpen stop immigration \\
13 & sick barstewards! this is what happens when we put up the refugees welcome signs! they not only rape our wives or girlfriends $\dots$ \\
14 & i am not forcing anyone. i am just putting out my view. people believe it or not is their prerogative.even if a single person agree with me $\dots$ \\
15 & tajikistani charged with beating new york university professor to death another animal./ \\
16 & child prostitution: feds break up mexico-to-queens sex trafficking ring via \dots \\
17 & home office guilty of a serious breach of the duty of candour and cooperation regarding children entitled to enter uk. where did these children go? \dots \\
18 & p.s why do you not pay unemployed people who do endless hours of voluntary work they do that to give something to the community \\
19 & seriously, amy and cindy are bffs, i know that for sure. hmm, mmm. \\
\end{tabular}
}
\label{tbl:tweets}
\end{table}
As Figure \ref{fig:viz} shows, the tweets are grouped into two clusters. According to the kernel density isometric lines, two centers are identified: the tweets assigned a lower probability of being hate speech and the tweets with a higher probability of being hate speech. Let us focus on the wrongly classified tweets and their positions in the graph (tweets 8, 16, and 18). While for tweets 8 and 18 the classifier was not certain and a mistake seems possible according to the plot, tweet 16 was predicted to be hate speech with high probability. Analyzing the words that form this tweet, we notice that not only do most of them often appear in hate speech, but also that this combination of words used together is very characteristic of offensive language.
Our short demonstration shows the utility of the proposed visualization, which can identify different types of errors and helps to explain weaknesses of the classifier or wrongly labeled data.
\section{Conclusions}
We present the first successful approach to the assessment of prediction uncertainty in hate speech classification. Our approach uses an LSTM model with Monte Carlo dropout and shows performance comparable to the best competing approaches using word embeddings, and superior performance using sentence embeddings. We demonstrate that the reliability of predictions and the errors of the models can be comprehensively visualized. Further, our study shows that pretrained sentence embeddings outperform even state-of-the-art contextual word embeddings and can be recommended as a suitable representation for this task. The full Python code is publicly available\footnote{https://github.com/KristianMiok/Hate-Speech-Prediction-Uncertainty}.
As persons spreading hate speech might be banned, penalized, or monitored so as not to put their threats into action, prediction uncertainty is an important component of decision making and can help human observers avoid false positives and false negatives. Visualization of prediction uncertainty can provide a better understanding of the textual context within which hate speech appears. Plotting the tweets that are incorrectly classified and inspecting them can identify the words that trigger wrong classifications.
Prediction uncertainty estimation is rarely implemented for text classification and other NLP tasks; hence, our future work will go in this direction. The recent emergence of cross-lingual embeddings possibly opens new opportunities to share data sets and models between languages. As evaluation in rare languages is difficult, the assessment of predictive reliability for such problems might serve as an auxiliary evaluation approach. In this context, we also plan to investigate convolutional neural networks with probabilistic interpretation.
\subsubsection*{Acknowledgments.}
The work was partially supported by the Slovenian Research Agen\-cy (ARRS) core research programme P6-0411. This project has also received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825153 (EMBEDDIA).
\bibliographystyle{splncs03}
\section{Total Cross Sections}
The increase of total proton-proton (pp) cross sections with center-of-mass energy ($\sqrt{s}$) was first observed at the CERN ISR in 1973~\cite{amaldi73}. The total pp cross sections cannot be calculated using perturbative quantum chromodynamics (QCD); however,
unitarity, analyticity, and factorization arguments suggest that the total hadronic cross sections should rise slower than $ln^2(s)$~\cite{froissart61}.
ATLAS~\cite{ref:ATLAS} measured the inelastic pp cross section at $\sqrt{s}=13$ TeV by selecting events with rings of plastic scintillators in the forward region (2.07$<$$|\eta|$$<$3.86)~\cite{ATLAS:2016ygv}, in the fiducial phase space defined by $M^2_x/s$ $>$10$^{-6}$, to be 68.1$\pm$1.4 mb. Here, $M_x$ is the larger of the invariant masses of the two final-state hadronic systems separated by the largest rapidity gap in the event. Using the $\sqrt{s}=7$ TeV measurements, this measurement is extrapolated to obtain the total inelastic cross section of 78.1$\pm$2.9 mb. The measurements are in good agreement with \PYTHIA~\cite{pythia8} and {\textsc{epos}}~\cite{Pierog:2013ria} in the LHC region and are consistent with the inelastic cross section increasing logarithmically with $\sqrt{s}$ up to $\sim$60 TeV, also including the cosmic ray observations~\cite{auger} (see Fig.~\ref{fig:totalcrosssection}).
\begin{figure}[ht]
\centering
\includegraphics[width=7.5cm]{fig_03.pdf}
\caption{The inelastic proton-proton cross section as a function of $\sqrt{s}$. Measurements from other hadron collider experiments and the Pierre Auger observatory are also shown. See Ref.~\cite{ATLAS:2016ygv} for the corresponding references.}
\label{fig:totalcrosssection}
\end{figure}
\section{Minimum Bias, Underlying Event, and Color Reconnection}
Minimum bias (MB) at 13 TeV is measured by ATLAS~\cite{atlas_mb}.
The data are compared with predictions from \PYTHIA~(with A2~\cite{ATLAS:2011krm} and Monash~\cite{Skands:2014pea} tunes), {\textsc{epos}}, and {\textsc{qgsjet-ii}}~\cite{Ostapchenko:2010vb}. {\textsc{epos}}~provides the best description of the data, also as a function of $\sqrt{s}$ (in $|\eta|<0.2$).
\PYTHIA~provides a reasonable description of the data, while {\textsc{qgsjet-ii}}~performs the worst.
Underlying event (UE) measurements in bins of several different variables are made by ATLAS at $\sqrt{s}=13$ TeV~\cite{ATLAS:2017blj}.
The measurements compared to predictions from \PYTHIA~(with A2, A14~\cite{TheATLAScollaboration:2014rfk}, and Monash tunes), \HERWIG~(UE-MMHT tune)~\cite{Bellm:2015jjp},
and {\textsc{epos}}~show that the data can be described to $\sim$5\% accuracy, significantly larger than the data uncertainties of $\sim$1\%. Deviations of central values of the predictions in various observables indicate that improved MC tunes are needed for the interpretation of the LHC data.
There is no best model, but in particular {\textsc{epos}}, specialised for the simulation of inclusive soft QCD processes, displays significant discrepancies as the \ensuremath{p_{\mathrm{T}}}~scale increases and thus may not be adequate for modelling multiple-parton interactions (MPI) at LHC.
Sets of new CMS~\cite{ref:CMS} \PYTHIA~(CPi, i=1-5) and \HERWIG~(CHi, i=1-3) tunes~\cite{CMS:2019csb,CMS:2020dqt} are obtained assuming different $\alpha_S$$(M_Z)$ values used in the modelling of the initial-state and final-state radiation, hard scattering, and MPI, as well as the order of its evolution as a function of the four-momentum squared $Q^2$. The new tunes are distinguished according to the order of the NNPDF3.1 PDF set~\cite{Ball:2017nwa} used: LO, NLO, or NNLO. CP1 and CP2 are based on the LO, CP3 on the NLO, and CP4 and CP5 on the NNLO PDF set for \PYTHIA. All CHi tunes are based on the NNLO PDF with the corresponding $\alpha_S$$(M_Z)$ value for the PS component. The MPI and remnants components of CH1 use the NNLO PDF with $\alpha_S$$(M_Z)$=0.118; CH2 uses the LO PDF but with $\alpha_S$$(M_Z)$=0.118; and CH3 uses the LO PDF and its corresponding $\alpha_S$$(M_Z)$ of 0.130.
Predictions of the (N)NLO-PDF-based tunes describe the central values of the MB and UE data as reliably as the LO-PDF tunes, simultaneously with charged-particle distributions in diffractive and inelastic collisions. The new tunes describe the data significantly better than the old tunes derived from data at lower collision energies. Double parton scattering is described worse by the (N)NLO CPi tunes, and none of the tunes describes the very forward region (-6.6$<$$\eta$$<$-5.2) better than $\sim$10\%.
New tunes are tested also against $\mathrm{t}\overline{\mathrm{t}}$, $\mathrm{t}\overline{\mathrm{t}}$~jet shapes,
Drell-Yan, dijet, V+jets for both CPi and CHi tunes, and also inclusive jet and event shape observables from LEP for the CHi tunes.
UE activity in $\mathrm{t}\overline{\mathrm{t}}$~dilepton events is measured by CMS at $\sqrt{s}=13$ TeV
removing charged particles associated with the decay products of the $\mathrm{t}\overline{\mathrm{t}}$~event candidates as well as with pileup interactions
for each event.
The observables and categories chosen for the measurements enhance the sensitivity to $\mathrm{t}\overline{\mathrm{t}}$~modelling, MPI, color reconnection (CR) and $\alpha_S$($M_Z$) in \PYTHIA.
Most of the comparisons indicate a fair agreement between the data and the
\POWHEG~\cite{powheg}+\PYTHIA~setup with the \textsc{cuetp8m2t4}~tune,
but disfavor the setups in which MPI and CR are switched off or
the default configurations of \POWHEG+\textsc{herwig++/7}, and \SHERPA~\cite{sherpa}.
The UE measurements in $\mathrm{t}\overline{\mathrm{t}}$~events test the hypothesis of universality of UE at an energy scale two times the top quark mass (\mt), considerably higher than the ones at which UE models have been studied in detail.
The results also show that a value of $\alpha_S^{FSR}$$(M_{Z})$=0.120$\pm$0.006
is consistent with the data and the corresponding uncertainties
translate to a variation of the renormalization scale by a factor of $\sqrt{2}$.
New sets of tunes for two of the CR models implemented in \PYTHIA, QCD-inspired (CR1) and gluon-move (CR2), are derived by ATLAS~\cite{ATLAS:2017wln} and more recently by CMS~\cite{CMS-PAS-GEN-17-002}.
The ATLAS tunes are derived using ATLAS data taken at $\sqrt{s}=0.9$, 7, and 13 TeV, while CMS used $\sqrt{s}=$1.96 TeV CDF, and 7 and 13 TeV CMS data.
It is observed that the new ATLAS CR models describe most observables within $\sim20\%$.
The MPI-based CR and CR1 tunes of ATLAS perform significantly better than CR2. New CMS CR tunes are based on CP5 tune and they are tested against LEP, CDF, and 7-13 TeV data with MB, UE, forward energy flow, strange particle production, $p/\pi$,
$\mathrm{t}\overline{\mathrm{t}}$~jet shapes (see Fig.~\ref{deltaRg}) and color flow in $\mathrm{t}\overline{\mathrm{t}}$. The new CMS CR tunes for MB and UE describe the data significantly better than the ones with the default parameters (also in the forward region).
\begin{figure}
\centering
\includegraphics[width=8.0cm]{CMS-PAS-GEN-17-002_Figure_018-b.pdf}
\caption{The normalized jet substructure angle between the groomed subjets.
Data are compared with the predictions from
CP5 and the corresponding CR tunes~\cite{CMS-PAS-GEN-17-002}. The colored band in the ratio plot represents the total experimental uncertainty in the data.
}
\label{deltaRg}
\end{figure}
The new CMS CR tunes are also used to estimate the CR uncertainty on the
\mt~measurement with semi-leptonic $\mathrm{t}\overline{\mathrm{t}}$~events.
The largest deviation between the \mt~predictions is found between the CP5 and CP5-CR2 tunes (with ERD), with a value of 0.32 GeV, similar to the value obtained in the dedicated \mt~measurement at 13 TeV (using the CUETP8M2T4 tune~\cite{CMS:2016kle} and respective CR tunes).
Therefore, CP5 with its new CR tunes does not improve or degrade the precision of the \mt~measurements and more detailed studies are needed.
The CMS CR tunes are also tested against strange particle production. Although they describe well the rapidity distribution of $K_s^0$, they do not improve the description of the $\Lambda$ baryon rapidity distribution.
This may indicate that improved hadronization models are needed.
It is observed that baryon/meson ratios are well-described by the new CR tunes at different $\sqrt{s}$ of 10, 90 GeV, and 13 TeV. However, none of the tunes describe the $p/\pi$ yield ratio as a function of \ensuremath{p_{\mathrm{T}}}~ (in the range \ensuremath{p_{\mathrm{T}}}=0.4-1.2 GeV) in MB events at \text{13 TeV}.
Moreover, none of the tunes describe the jet shapes well and with similar predictions. Some differences are observed with respect to color-flow data which is particularly sensitive to the ERD option in the CR models. It is verified that the impact of CR models is negligible in Drell-Yan events. To improve the description of Drell-Yan events higher-order corrections and improved MC codes are needed as shown by ATLAS~\cite{ATLAS-CONF-2021-033}; upgraded generators \MG(v.2.6.5) with FxFx merging simulating up to 3 jets at NLO, and \SHERPA(v.2.2.11) simulating up to 2 partons at NLO and up to 5 partons at LO and including NLO virtual electroweak corrections provide the best modelling.
\section{Top Quark Pair Spin Correlation and Common $\mathrm{t}\overline{\mathrm{t}}$~Setting for ATLAS and CMS}
The LHC Top Physics Working Group (LHC$top$WG) made comparisons of the ATLAS and CMS normalized cross sections in bins of $|\Delta\phi(\ell^+,\ell^-)|$ at the parton level\footnote{\url{https://twiki.cern.ch/twiki/bin/view/LHCPhysics/LHCTopWGSummaryPlots}}.
Very good agreement between the ATLAS and CMS data, and between the ATLAS and CMS main MC predictions, is observed. A good agreement of the data with \MG~with FxFx merging~\cite{mg5,fxfx} is also observed, as well as a fair agreement with the NNLO calculation \cite{behring19}. These comparisons pave the way for the first $\sqrt{s}$=13 TeV ATLAS+CMS combination from the LHC$top$WG. In a related note, ATLAS and CMS presented the first studies towards common $\mathrm{t}\overline{\mathrm{t}}$~MC settings~\cite{commonttbarmc_cmsatlas} that would facilitate ATLAS+CMS combinations and the understanding of the different MC configurations used in the two experiments.
\section{Introduction}
The main aim of this paper is to introduce a limit definition of the
derivative of a function which obeys classical properties including:
linearity, the Product Rule, the Quotient Rule, the Chain Rule, Rolle's
Theorem and the Mean Value Theorem.
Today, there are many fractional integral and fractional derivative
definitions such as Riemann-Liouville, Caputo, Gr\"{u}nwald-Letnikov,
Hadamard, and Riesz. For these, please see \cite{Kilbas}, \cite{Katugampola1},
\cite{Samko}. For more information on the Fractional Calculus, please see (%
\cite{Akkurt}, \cite{Abdel}, \cite{iyiola}, \cite{Hammad}, \cite{Hammad1}).
However, these fractional derivatives do not satisfy classical properties
such as the Product Rule, the Quotient Rule, the Chain Rule, Rolle's Theorem
and the Mean Value Theorem.
To overcome some of these and other difficulties, Khalil et al. \cite{Khalil}
came up with an interesting idea that extends the familiar limit
definition of the derivative of a function, given by the following $T_{\alpha
}$:%
\begin{equation}
T_{\alpha }\left( f\right) \left( t\right) =\lim_{\varepsilon \rightarrow 0}%
\frac{f\left( t+\varepsilon t^{1-\alpha }\right) -f\left( t\right) }{%
\varepsilon }. \label{a}
\end{equation}
In \cite{Almeida}, Almeida et al. introduced limit definition of the
derivative of a function as follows,%
\begin{equation}
f^{\left( \alpha \right) }\left( t\right) =\lim_{\varepsilon \rightarrow 0}%
\frac{f\left( t+\varepsilon k\left( t\right) ^{1-\alpha }\right) -f\left(
t\right) }{\varepsilon }. \label{b}
\end{equation}
Recently, in \cite{Katugampola} Katugampola introduced the idea of
fractional derivative%
\begin{equation}
D_{\alpha }\left( f\right) \left( t\right) =\lim_{\varepsilon \rightarrow 0}%
\frac{f\left( te^{\varepsilon t^{-\alpha }}\right) -f\left( t\right) }{%
\varepsilon }. \label{c}
\end{equation}
\section{Generalized new fractional derivative}
In this paper, we introduce a new fractional derivative which generalizes
the results obtained in \cite{Almeida}, \cite{Katugampola}, \cite{Khalil}.
In this section we present the definition of the generalized new fractional
derivative and introduce the generalized new fractional integral. We
prove the Product Rule, Quotient Rule, Chain Rule,
Rolle's Theorem and Mean Value Theorem. Also, we give some applications.
\begin{definition}
\label{d1} Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0,$ whenever $t>a.$ Given a function $%
f:[a,b]\rightarrow
\mathbb{R}
\ $and $\alpha \in \left( 0,1\right) \ $a real, we say that the generalized
fractional derivative\ of $f$ of order $\alpha $ is defined by,%
\begin{equation}
D^{\alpha }\left( f\right) \left( t\right) :=\lim_{\epsilon \rightarrow 0}%
\frac{f\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{%
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }%
}\right) -f\left( t\right) }{\epsilon } \label{1.8}
\end{equation}%
exists$.\ $If $f$ is $\alpha -$differentiable in some $\left( 0,a\right) ,\
\alpha >0,\ $and $\lim\limits_{t\rightarrow 0^{+}}f^{\left( \alpha \right) }\left(
t\right) \ $exists, then define%
\begin{equation}
f^{\left( \alpha \right) }\left( 0\right) =\lim\limits_{t\rightarrow
0^{+}}f^{\left( \alpha \right) }\left( t\right) . \label{1.9}
\end{equation}%
We can write $f^{\left( \alpha \right) }\left( t\right) $ for $D^{\alpha
}\left( f\right) \left( t\right) $ to denote the generalized fractional
derivatives of $f$ of order $\alpha $.
\end{definition}
\begin{remark}
When $k\left( t\right) =t\ $in (\ref{1.8}), it turns out to be the
definition of the derivative of a function given in \cite{Katugampola}.
\end{remark}
\begin{remark}
When $\alpha \rightarrow 1\ $and $k\left( t\right) =t\ $in (\ref{1.8}), it
turns out to be the classical definition for derivatives of a function,\ $%
f^{\left( \alpha \right) }\left( t\right) =f^{\prime }\left( t\right) .$
\end{remark}
\begin{theorem}
Let $f:[a,b]\rightarrow
\mathbb{R}
\ $be a differentiable function and $t>a.\ $Then, $f\ $is $\alpha -$%
differentiable at $t\ $and%
\begin{equation*}
f^{\left( \alpha \right) }\left( t\right) =\frac{\left( k\left( t\right)
\right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\frac{df}{dt}(t).
\end{equation*}%
Also, if $f^{\prime }$\ is continuous at $t=a,\ $then%
\begin{equation*}
f^{\left( \alpha \right) }\left( a\right) =k^{\prime }\left( a\right) \left(
k\left( a\right) \right) ^{1-\alpha }\frac{df}{dt}(a).
\end{equation*}
\end{theorem}
\begin{proof}
From definition \ref{d1}, we have%
\begin{eqnarray*}
D^{\alpha }\left( f\right) \left( t\right) &=&\lim_{\epsilon \rightarrow 0}%
\frac{f\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{%
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }%
}\right) -f\left( t\right) }{\epsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\frac{f\left( t-k\left( t\right) +k\left( t\right) %
\left[ 1+\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{%
k^{\prime }\left( t\right) }+\frac{\left( \varepsilon \frac{\left( k\left(
t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }\right) ^{2}}{2!}%
+...\right] \right) -f\left( t\right) }{\varepsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\frac{f\left( t+\epsilon \frac{\left(
k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\left[
1+O\left( \epsilon \right) \right] \right) -f\left( t\right) }{\epsilon }.
\end{eqnarray*}%
Taking%
\begin{equation*}
h=\epsilon \frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime
}\left( t\right) }\left[ 1+O\left( \epsilon \right) \right]
\end{equation*}%
we have,%
\begin{eqnarray*}
D^{\alpha }\left( f\right) \left( t\right) &=&\lim_{\epsilon \rightarrow 0}%
\frac{f\left( t+h\right) -f\left( t\right) }{\frac{k^{\prime }\left(
t\right) \left( k\left( t\right) \right) ^{\alpha -1}h}{1+O\left( \epsilon
\right) }} \\
&& \\
&=&\frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left(
t\right) }\frac{df}{dt}(t).
\end{eqnarray*}
\end{proof}
\begin{theorem}
If a function $f:[a,b]\rightarrow
\mathbb{R}
\ $is $\alpha -$differentiable at $a>0,\ \alpha \in \left( 0,1\right] ,\ $%
then $f$\ is continuous at $a.$
\end{theorem}
\begin{proof}
Since%
\begin{equation*}
f\left( a-k\left( a\right) +k\left( a\right) e^{\varepsilon \frac{\left(
k\left( a\right) \right) ^{-\alpha }}{k^{\prime }\left( a\right) }}\right)
-f\left( a\right) =\tfrac{f\left( a-k\left( a\right) +k\left( a\right)
e^{\varepsilon \frac{\left( k\left( a\right) \right) ^{-\alpha }}{k^{\prime
}\left( a\right) }}\right) -f\left( a\right) }{\epsilon }\epsilon ,
\end{equation*}%
we have%
\begin{equation*}
\lim_{\epsilon \rightarrow 0}\left[ f\left( a-k\left( a\right) +k\left(
a\right) e^{\varepsilon \frac{\left( k\left( a\right) \right) ^{-\alpha }}{%
k^{\prime }\left( a\right) }}\right) -f\left( a\right) \right]
=\lim_{\epsilon \rightarrow 0}\tfrac{\left[ f\left( a-k\left( a\right)
+k\left( a\right) e^{\varepsilon \frac{\left( k\left( a\right) \right)
^{-\alpha }}{k^{\prime }\left( a\right) }}\right) -f\left( a\right) \right]
}{\epsilon }\lim_{\epsilon \rightarrow 0}\epsilon .
\end{equation*}%
Let $h=\epsilon \frac{\left( k\left( t\right) \right) ^{1-\alpha }}{%
k^{\prime }\left( t\right) }\left[ 1+O\left( \epsilon \right) \right] .\ $%
Then,%
\begin{equation*}
\lim_{h\rightarrow 0}\left[ f\left( a+h\right) -f\left( a\right) \right]
=D^{\alpha }\left( f\right) \left( a\right) \cdot 0=0
\end{equation*}%
and%
\begin{equation*}
\lim_{h\rightarrow 0}f\left( a+h\right) =f\left( a\right) .
\end{equation*}%
This completes the proof.
\end{proof}
\begin{theorem}
Let $\alpha \in \left( 0,1\right] $ and $f,g$ be $\alpha -$differentiable at
a point $t>0$. Then,
\end{theorem}
$1.\ D^{\alpha }\left( af+bg\right) \left( t\right) =aD^{\alpha }\left(
f\right) \left( t\right) +bD^{\alpha }\left( g\right) \left( t\right) ,\ $%
for all $a,b\in
\mathbb{R}
\ $(linearity)$.$
\bigskip
$2.\ D^{\alpha }\left( t^{n}\right) =\frac{\left( k\left( t\right) \right)
^{1-\alpha }}{k^{\prime }\left( t\right) }nt^{n-1}\ $for all $n\in
\mathbb{R}
.$
\bigskip
$3.\ D^{\alpha }\left( c\right) =0,\ $for all constant functions\ $f\left(
t\right) =c.$
\bigskip
$4.\ D^{\alpha }\left( fg\right) \left( t\right) =f\left( t\right) D^{\alpha
}\left( g\right) \left( t\right) +g\left( t\right) D^{\alpha }\left(
f\right) \left( t\right) \ $(Product Rule)$.$
\bigskip
$5.\ D^{\alpha }\left( \dfrac{f}{g}\right) \left( t\right) =\dfrac{g\left(
t\right) D^{\alpha }\left( f\right) \left( t\right) -f\left( t\right)
D^{\alpha }\left( g\right) \left( t\right) }{\left[ g\left( t\right) \right]
^{2}}\ $(Quotient Rule)$.$
\bigskip
$6.\ D^{\alpha }\left( f\circ g\right) \left( t\right) =\frac{\left( k\left(
t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }f^{\prime }\left(
g\left( t\right) \right) g^{\prime }\left( t\right) $ (Chain
Rule).
\begin{proof}
Part (1) and (3) follow directly from the definition. Let us prove (2), (4),
(5) and (6) respectively. Now, for fixed $\alpha \in \left( 0,1\right] ,\
n\in
\mathbb{R}
\ $and $t>0,\ $we have%
\begin{eqnarray*}
D^{\alpha }\left( t^{n}\right) &=&\lim_{\epsilon \rightarrow 0}\frac{\left(
t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left(
t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) ^{n}-t^{n}%
}{\epsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\frac{\left( t+\epsilon \frac{\left( k\left(
t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\left[ 1+O\left(
\epsilon \right) \right] \right) ^{n}-t^{n}}{\epsilon } \\
&& \\
&=&\frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left(
t\right) }nt^{n-1}.
\end{eqnarray*}%
This completes proof of (2). Then, we shall prove (4). To this end, since $%
f,g$ are $\alpha -$differentiable at $t>0$, note that,%
\begin{eqnarray*}
&&D^{\alpha }\left( fg\right) \left( t\right) \\
&=&\lim_{\epsilon \rightarrow 0}\tfrac{f\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{%
k^{\prime }\left( t\right) }}\right) g\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{%
k^{\prime }\left( t\right) }}\right) -f\left( t\right) g\left( t\right) }{%
\epsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\left[ \tfrac{f\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) g\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) -f\left( t\right) g\left(
t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left(
t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{%
\epsilon }\right. \\
&& \\
&&+\left. \tfrac{f\left( t\right) g\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{%
k^{\prime }\left( t\right) }}\right) -f\left( t\right) g\left( t\right) }{%
\epsilon }\right] \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\left[ \tfrac{f\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) -f\left( t\right) }{%
\epsilon }g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{%
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }%
}\right) \right] \\
&& \\
&&+f\left( t\right) \lim_{\epsilon \rightarrow 0}\tfrac{g\left( t-k\left(
t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right)
\right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) -g\left( t\right)
}{\epsilon } \\
&& \\
&=&D^{\alpha }\left( f\right) \left( t\right) \lim_{\epsilon \rightarrow 0}
\left[ g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{%
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }%
}\right) \right] +f\left( t\right) D^{\alpha }\left( g\right) \left( t\right)
\\
&& \\
&=&g\left( t\right) D^{\alpha }\left( f\right) \left( t\right) +f\left(
t\right) D^{\alpha }\left( g\right) \left( t\right) .
\end{eqnarray*}%
Since $g$ is continuous at $t$, $\lim_{\epsilon \rightarrow 0}\left[ g\left(
t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right)
\right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) \right] =g\left( t\right) .$ This completes the proof of
(4). Next, we prove (5). Similarly,
\begin{eqnarray*}
&&D^{\alpha }\left( \frac{f}{g}\right) \left( t\right) \\
&=&\lim_{\epsilon \rightarrow 0}\frac{\frac{f\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{g\left( t-k\left(
t\right) +k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right)
\right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }-\frac{f\left(
t\right) }{g\left( t\right) }}{\epsilon } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\tfrac{f\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{%
k^{\prime }\left( t\right) }}\right) g\left( t\right) -f\left( t\right)
g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left(
k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{%
\epsilon g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{%
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }%
}\right) g\left( t\right) } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\tfrac{f\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{%
k^{\prime }\left( t\right) }}\right) g\left( t\right) -f\left( t\right)
g\left( t\right) +f\left( t\right) g\left( t\right) -f\left( t\right)
g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{\left(
k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{%
\epsilon g\left( t-k\left( t\right) +k\left( t\right) e^{\varepsilon \frac{%
\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime }\left( t\right) }%
}\right) g\left( t\right) } \\
&& \\
&=&\lim_{\epsilon \rightarrow 0}\frac{1}{g\left( t-k\left( t\right) +k\left(
t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{%
k^{\prime }\left( t\right) }}\right) g\left( t\right) } \\
&& \\
&&\times \left[ \tfrac{f\left( t-k\left( t\right) +k\left( t\right)
e^{\varepsilon \frac{\left( k\left( t\right) \right) ^{-\alpha }}{k^{\prime
}\left( t\right) }}\right) -f\left( t\right) }{\epsilon }g\left( t\right)
+f\left( t\right) \tfrac{g\left( t\right) -g\left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) }{\epsilon }\right] \\
&& \\
&=&\dfrac{g\left( t\right) D^{\alpha }\left( f\right) \left( t\right)
-f\left( t\right) D^{\alpha }\left( g\right) \left( t\right) }{\left(
g\left( t\right) \right) ^{2}}.
\end{eqnarray*}%
We have implicitly assumed here that $f^{\left( \alpha \right) }$ and $%
g^{\left( \alpha \right) }$ exist and that $g\left( t\right) \neq 0.\ $%
Finally, we prove (6). We have from the definition that%
\begin{eqnarray*}
D^{\alpha }\left( f\circ g\right) \left( t\right) &=&\lim_{\epsilon
\rightarrow 0}\frac{\left( f\circ g\right) \left( t-k\left( t\right)
+k\left( t\right) e^{\varepsilon \frac{\left( k\left( t\right) \right)
^{-\alpha }}{k^{\prime }\left( t\right) }}\right) -\left( f\circ g\right)
\left( t\right) }{\epsilon } \\
&=&\lim_{\epsilon \rightarrow 0}\frac{\left( f\circ g\right) \left(
t+\epsilon \frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime
}\left( t\right) }\left[ 1+O\left( \epsilon \right) \right] \right) -\left(
f\circ g\right) \left( t\right) }{\epsilon }.
\end{eqnarray*}%
Let $h=\epsilon \frac{\left( k\left( t\right) \right) ^{1-\alpha }}{%
k^{\prime }\left( t\right) }\left[ 1+O\left( \epsilon \right) \right] \ $%
such that%
\begin{eqnarray*}
D^{\alpha }\left( f\circ g\right) \left( t\right) &=&\lim_{\epsilon
\rightarrow 0}\frac{\left( f\circ g\right) \left( t+\epsilon \frac{\left(
k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\left[
1+O\left( \epsilon \right) \right] \right) -\left( f\circ g\right) \left(
t\right) }{\epsilon } \\
&=&\lim_{h\rightarrow 0}\frac{\left( f\circ g\right) \left( t+h\right)
-\left( f\circ g\right) \left( t\right) }{\frac{k^{\prime }\left( t\right)
\left( k\left( t\right) \right) ^{\alpha -1}h}{1+O\left( \epsilon \right) }}.
\end{eqnarray*}%
Therefore, we have%
\begin{equation*}
D^{\alpha }\left( f\circ g\right) \left( t\right) =\frac{\left( k\left(
t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }f^{\prime }\left(
g\left( t\right) \right) g^{\prime }\left( t\right) .
\end{equation*}
This completes the proof of the theorem.
\end{proof}
Now, we will give the derivatives of some special functions.
\begin{theorem}
\label{thm1} Let $a,n\in
\mathbb{R}
\ $and $\alpha \in \left( 0,1\right] .\ $Then we have the following results.
\end{theorem}
$1.\ D^{\alpha }\left( 1\right) =0,$
\bigskip
$2.\ D^{\alpha }\left( e^{ax}\right) =a\frac{\left( k\left( x\right) \right)
^{1-\alpha }}{k^{\prime }\left( x\right) }e^{ax},$
\bigskip
$3.\ D^{\alpha }\left( \sin (ax)\right) =a\frac{\left( k\left( x\right)
\right) ^{1-\alpha }}{k^{\prime }\left( x\right) }\cos (ax),$
\bigskip
$4.\ D^{\alpha }\left( \cos (ax)\right) =-a\frac{\left( k\left( x\right)
\right) ^{1-\alpha }}{k^{\prime }\left( x\right) }\sin (ax),$
\bigskip
$5.\ D^{\alpha }\left( \log _{a}bx\right) =\dfrac{1}{x}\frac{\left( k\left(
x\right) \right) ^{1-\alpha }}{k^{\prime }\left( x\right) }\frac{1}{\ln a},$
\bigskip
$6.\ D^{\alpha }\left( a^{bx}\right) =b\frac{\left( k\left( x\right) \right)
^{1-\alpha }}{k^{\prime }\left( x\right) }a^{bx}\ln a.$
When $\alpha =1\ $and $k\left( t\right) =t\ $in Theorem \ref{thm1}, it turns
out to be the classical derivatives of these functions.
\begin{theorem}[Rolle's theorem for $\protect\alpha -$generalized Fractional
Differentiable functions]
\label{thm2} Let $a>0\ $and $f:[a,b]\rightarrow
\mathbb{R}
$ be a function with the properties that,
\end{theorem}
1. $f$ is continuous on $[a,b],$
2. $f$ is $\alpha $-differentiable on $\left( a,b\right) \ $for some $%
\alpha \in \left( 0,1\right) ,$
3. $f(a)=f(b).$
Then, there exist $c\in \left( a,b\right) ,\ $such that $D^{\alpha }\left(
f\right) \left( c\right) =0.$
\begin{proof}
Since $f$ is continuous
on $[a,b]$ and $f(a)=f(b)$, there is $c\in \left( a,b\right) $ at which the
function attains a local extremum. Then,%
\begin{equation*}
D^{\alpha }\left( f\right) \left( c\right) =\lim_{\epsilon \rightarrow 0^{-}}%
\tfrac{\left[ f\left( c-k\left( c\right) +k\left( c\right) e^{\varepsilon
\frac{\left( k\left( c\right) \right) ^{-\alpha }}{k^{\prime }\left(
c\right) }}\right) -f\left( c\right) \right] }{\epsilon }=\lim_{\epsilon
\rightarrow 0^{+}}\tfrac{\left[ f\left( c-k\left( c\right) +k\left( c\right)
e^{\varepsilon \frac{\left( k\left( c\right) \right) ^{-\alpha }}{k^{\prime
}\left( c\right) }}\right) -f\left( c\right) \right] }{\epsilon }.
\end{equation*}%
But, the two limits have opposite signs. Hence, $D^{\alpha }\left( f\right)
\left( c\right) =0.$
\end{proof}
When $\alpha =1\ $and $k\left( t\right) =t\ $in Theorem \ref{thm2}, it turns
out to be the classical Rolles's Theorem.
\begin{theorem}[Mean value theorem for Generalized fractional differentiable
functions]
Let $\alpha \in (0,1]$ and $f:[a,b]\rightarrow
\mathbb{R}
$ be a continuous on $[a,b]$ and an $\alpha $-generalized fractional
differentiable mapping on $\left( a,b\right) $ with $0\leq a<b.\ $Let $%
k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.$ Then, there exists $c\in (a,b)$, such that%
\begin{equation}
D^{\alpha }\left( f\right) \left( c\right) =\frac{f(b)-f(a)}{\frac{k^{\alpha
}\left( b\right) }{\alpha }-\frac{k^{\alpha }\left( a\right) }{\alpha }}.
\label{m1}
\end{equation}
\end{theorem}
\begin{proof}
Let $h$ be a constant. Consider the function,%
\begin{equation}
G\left( x\right) =f\left( x\right) +h\frac{k^{\alpha }\left( x\right) }{%
\alpha }. \label{m2}
\end{equation}%
$G$ is continuous on $[a,b]\ $and $\alpha -$differentiable on $\left(
a,b\right) $. Here, choosing $h$ such that $G\left( a\right) =G\left( b\right) ,\ $%
we have%
\begin{equation*}
f\left( a\right) +h\frac{k^{\alpha }\left( a\right) }{\alpha }=f\left(
b\right) +h\frac{k^{\alpha }\left( b\right) }{\alpha }.
\end{equation*}%
Thus,%
\begin{equation}
h=-\frac{f\left( b\right) -f\left( a\right) }{\frac{k^{\alpha }\left(
b\right) }{\alpha }-\frac{k^{\alpha }\left( a\right) }{\alpha }}. \label{m3}
\end{equation}%
Using (\ref{m3}) in (\ref{m2}), it follows that%
\begin{equation}
G\left( x\right) =f\left( x\right) -\frac{f\left( b\right) -f\left( a\right)
}{\frac{k^{\alpha }\left( b\right) }{\alpha }-\frac{k^{\alpha }\left(
a\right) }{\alpha }}\frac{k^{\alpha }\left( x\right) }{\alpha }. \label{m4}
\end{equation}%
\begin{eqnarray*}
D^{\alpha }\left( G\right) \left( x\right) &=&D^{\alpha }\left( f\right)
\left( x\right) -\frac{f\left( b\right) -f\left( a\right) }{\frac{k^{\alpha
}\left( b\right) }{\alpha }-\frac{k^{\alpha }\left( a\right) }{\alpha }}%
D^{\alpha }\left( \frac{k^{\alpha }\left( x\right) }{\alpha }\right) \\
&=&D^{\alpha }\left( f\right) \left( x\right) -\frac{f\left( b\right)
-f\left( a\right) }{\frac{k^{\alpha }\left( b\right) }{\alpha }-\frac{%
k^{\alpha }\left( a\right) }{\alpha }}\frac{\left( k\left( x\right) \right)
^{1-\alpha }}{k^{\prime }\left( x\right) }\frac{d}{dx}\left( \frac{k^{\alpha
}\left( x\right) }{\alpha }\right) \\
&=&D^{\alpha }\left( f\right) \left( x\right) -\frac{f\left( b\right)
-f\left( a\right) }{\frac{k^{\alpha }\left( b\right) }{\alpha }-\frac{%
k^{\alpha }\left( a\right) }{\alpha }}.
\end{eqnarray*}%
Then, the function $G$ satisfies the conditions of the generalized fractional
Rolle's theorem.\ Hence, there exists $c\in \left( a,b\right) ,$ such that $%
D^{\alpha }\left( G\right) \left( c\right) =0.$\ Using the fact that\ $%
D^{\alpha }\left( \frac{k^{\alpha }\left( x\right) }{\alpha }\right) =1$, we
have%
\begin{equation*}
f^{\left( \alpha \right) }\left( c\right) =\frac{f\left( b\right) -f\left(
a\right) }{\frac{k^{\alpha }\left( b\right) }{\alpha }-\frac{k^{\alpha
}\left( a\right) }{\alpha }}.
\end{equation*}%
Therefore, we get desired result.
\end{proof}
When $\alpha =1\ $and $k\left( t\right) =t\ $in the above theorem, it turns
out to be the classical Mean Value Theorem.
\section{Generalized new fractional integral}
Now we introduce the generalized fractional integral as follows:
\begin{definition}[Generalized Fractional Integral]
\label{d2} Let $a\geq 0\ $and $t\geq a.\ $Also, let $f$ be a function
defined on $(a,t]$\ and $\alpha \in
\mathbb{R}
.\ $Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.$\ Then, the $\alpha -$generalized fractional
integral of $f$ is defined by,%
\begin{equation*}
I^{\alpha }\left( f\right) \left( t\right) =\int\limits_{a}^{t}\frac{%
k^{\prime }\left( x\right) f\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx
\end{equation*}%
if the Riemann improper integral exists.
\end{definition}
\begin{theorem}[Inverse property]
Let $a\geq 0\ $and $\alpha \in (0,1).\ $Also, let $f$ be a continuous function
such that $I^{\alpha }f$ exists. Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.\ $Then, for all $t>a,$ we have%
\begin{equation*}
D^{\alpha }\left[ I^{\alpha }f\left( t\right) \right] =f\left( t\right) .
\end{equation*}
\end{theorem}
\begin{proof}
Since $f$ is continuous, $I^{\alpha }f\left( t\right) $ is clearly
differentiable. Hence,%
\begin{eqnarray*}
D^{\alpha }\left[ I^{\alpha }\left( f\right) \left( t\right) \right] &=&\frac{\left(
k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left( t\right) }\frac{d}{%
dt}I^{\alpha }\left( f\right) (t) \\
&& \\
&=&\frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left(
t\right) }\frac{d}{dt}\int\limits_{a}^{t}\frac{f\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx \\
&& \\
&=&\frac{\left( k\left( t\right) \right) ^{1-\alpha }}{k^{\prime }\left(
t\right) }\frac{f\left( t\right) k^{\prime }\left( t\right) }{\left( k\left(
t\right) \right) ^{1-\alpha }} \\
&& \\
&=&f\left( t\right) .
\end{eqnarray*}
\end{proof}
\begin{theorem}
\label{T2} Let $f:(a,b)\rightarrow
\mathbb{R}
$ be differentiable and $0<\alpha \leq 1$. Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.\ $Then, for all $t>a$ we have%
\begin{equation}
I^{\alpha }\left[ D^{\alpha }\left( f\right) \left( t\right) \right] =f\left( t\right)
-f\left( a\right) . \label{1.12}
\end{equation}
\end{theorem}
\begin{proof}
\begin{eqnarray*}
I^{\alpha }\left[ D^{\alpha }\left( f\right) \left( t\right) \right] &=&\int%
\limits_{a}^{t}\frac{k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}D^{\alpha }\left( f\right) (x)dx \\
&& \\
&=&\int\limits_{a}^{t}\frac{k^{\prime }\left( x\right) }{\left( k\left(
x\right) \right) ^{1-\alpha }}\frac{\left( k\left( x\right) \right)
^{1-\alpha }}{k^{\prime }\left( x\right) }\frac{df}{dx}(x)dx \\
&& \\
&=&\int\limits_{a}^{t}\frac{df}{dx}(x)dx \\
&& \\
&=&f\left( t\right) -f\left( a\right) .
\end{eqnarray*}
\end{proof}
\begin{theorem}
(\textbf{Integration by parts}) Let $f,g:[a,b]\rightarrow
\mathbb{R}
$ be two functions such that $fg$ is differentiable. Then%
\begin{equation*}
\int_{a}^{b}f\left( x\right) D^{\alpha }\left( g\right) \left( x\right)
d_{\alpha }x=\left. fg\right\vert _{a}^{b}-\int_{a}^{b}g\left( x\right)
D^{\alpha }\left( f\right) \left( x\right) d_{\alpha }x.
\end{equation*}
\end{theorem}
\begin{proof}
The proof is done in a similar way in \cite{Abdel}.
\end{proof}
\begin{theorem}
\label{T1} Let $f$ and $g$ be functions satisfying the following
\end{theorem}
$\left( a\right) $ continuous on $[a,b],$
$\left( b\right) \ $bounded and integrable functions on $[a,b],$
In addition$,\ $let $g(x)$ be nonnegative (or nonpositive) on $[a,b]$. Let $%
k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.$ Let us set $m=\inf \{f(x):x\in \lbrack a,b]\}$ and
$M=\sup \{f(x):x\in \lbrack a,b]\}.\ $Then there exists a number $\xi $ in $%
\left[ m,M\right] $ such that%
\begin{equation}
\int\limits_{a}^{b}\frac{f\left( x\right) g\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx=\xi
\int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx. \label{t1}
\end{equation}%
If $f$ is continuous on $[a,b],$ then there exists $x_{0}\in \left[ a,b\right] $ such that%
\begin{equation}
\int\limits_{a}^{b}\frac{f\left( x\right) g\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx=f\left(
x_{0}\right) \int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left(
x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx. \label{t2}
\end{equation}
\begin{proof}
Let $m=\inf f$, $M=\sup f\ $and $g(x)\geq 0\ $in $[a,b].\ $Then, we get%
\begin{equation}
mg(x)\leq f(x)g(x)\leq Mg(x). \label{t4}
\end{equation}%
Multiplying (\ref{t4}) by $\frac{k^{\prime }\left( x\right) }{\left( k\left(
x\right) \right) ^{1-\alpha }}\ $and integrating (\ref{t4}) with respect to $%
x$ over $(a,b)$, we obtain:%
\begin{equation}
m\int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx\leq \int\limits_{a}^{b}\frac{%
f(x)g\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx\leq M\int\limits_{a}^{b}\frac{g\left( x\right)
k^{\prime }\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx.
\label{t5}
\end{equation}%
Then there exists a number $\xi $ in $\left[ m,M\right] $ such that%
\begin{equation*}
\int\limits_{a}^{b}\frac{f(x)g\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx=\xi \int\limits_{a}^{b}%
\frac{g\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx.
\end{equation*}%
When $g(x)\leq 0$, the proof is done in a similar way.
By the intermediate value theorem, $f$ attains every value of the interval $%
[m,M]$, so for some $x_{0}\ $in $[a,b]$ we have $f\left( x_{0}\right) =\xi .\ $Then%
\begin{equation*}
\int\limits_{a}^{b}\frac{f(x)g\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx=f\left( x_{0}\right)
\int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx.
\end{equation*}%
If\ $g(x)=0,\ $equality (\ref{t1}) becomes obvious; if $g(x)>0,\ $then (\ref%
{t5}) implies%
\begin{equation*}
m<\dfrac{\int\limits_{a}^{b}\frac{f(x)g\left( x\right) k^{\prime }\left(
x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx}{%
\int\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx}<M
\end{equation*}%
there exists a point $x_{0}$ in $(a,b)$ such that%
\begin{equation*}
m<f\left( x_{0}\right) <M,
\end{equation*}%
which yields the desired result (\ref{t1}). In particular, when $g(x)=1$, we
get from Theorem \ref{T1} the following result%
\begin{eqnarray*}
\int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx &=&f\left( x_{0}\right)
\int\limits_{a}^{b}\frac{k^{\prime }\left( x\right) }{\left( k\left(
x\right) \right) ^{1-\alpha }}dx \\
&& \\
&=&f\left( x_{0}\right) \left( \frac{k^{\alpha }\left( b\right) }{\alpha }-%
\frac{k^{\alpha }\left( a\right) }{\alpha }\right) .
\end{eqnarray*}%
Thus, we have%
\begin{equation}
f\left( x_{0}\right) =\frac{1}{\frac{k^{\alpha }\left( b\right) }{\alpha }-%
\frac{k^{\alpha }\left( a\right) }{\alpha }}\int\limits_{a}^{b}\frac{%
f\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx. \label{t10}
\end{equation}%
The quantity (\ref{t10}) is called the mean value of the function $f$.
\end{proof}
For $\alpha =1$ and $k\left( t\right) =t$ this reduces to the classical mean
value theorem of integral calculus,%
\begin{equation*}
\int\limits_{a}^{b}f\left( x\right) dx=\left( b-a\right) f\left(
x_{0}\right) .
\end{equation*}
\begin{theorem}
Let $a\geq 0\ $and $\alpha \in (0,1].\ $Also, let $f,g:\left[ a,b\right]
\rightarrow
\mathbb{R}
$ be a continuous function. Let $k:[a,b]\rightarrow
\mathbb{R}
$ be a continuous nonnegative map such that $k\left( t\right) ,$ $k^{\prime
}\left( t\right) \neq 0.\ $Then,
\end{theorem}
$i.\ \int\limits_{a}^{b}\left( f\left( x\right) +g\left( x\right) \right)
\frac{k^{\prime }\left( x\right) }{\left( k\left( x\right) \right)
^{1-\alpha }}dx=\int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left(
x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx+\int%
\limits_{a}^{b}\frac{g\left( x\right) k^{\prime }\left( x\right) }{\left(
k\left( x\right) \right) ^{1-\alpha }}dx,$
$ii.\ \int\limits_{a}^{b}\lambda \frac{f\left( x\right) k^{\prime }\left(
x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx=\lambda
\int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx,\ \lambda \in
\mathbb{R}
,$
$iii.\ \int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right)
}{\left( k\left( x\right) \right) ^{1-\alpha }}dx=-\int\limits_{b}^{a}\frac{%
f\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx,$
$iv.\ \int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right)
}{\left( k\left( x\right) \right) ^{1-\alpha }}dx=\int\limits_{a}^{c}\frac{%
f\left( x\right) k^{\prime }\left( x\right) }{\left( k\left( x\right)
\right) ^{1-\alpha }}dx+\int\limits_{c}^{b}\frac{f\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}dx,$
$v.\ \int\limits_{a}^{a}\frac{f\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx=0,$
$vi.\ $if $f(x)\geq 0$ for all $x\in \lbrack a,b]$, then $%
\int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime }\left( x\right) }{%
\left( k\left( x\right) \right) ^{1-\alpha }}dx\geq 0,$
$vii.\ \left\vert \int\limits_{a}^{b}\frac{f\left( x\right) k^{\prime
}\left( x\right) }{\left( k\left( x\right) \right) ^{1-\alpha }}%
dx\right\vert \leq \int\limits_{a}^{b}\frac{\left\vert f\left( x\right)
\right\vert k^{\prime }\left( x\right) }{\left( k\left( x\right) \right)
^{1-\alpha }}dx.$
\begin{proof}
The relations follow from Definition \ref{d2} and Theorem \ref{T2},
analogous properties of generalized fractional integral, and the properties
of section 2 for the generalized fractional derivative.
\end{proof}
\begin{acknowledgement}
M.E. Yildirim was partially supported by the Scientific and Technological
Research Council of Turkey (TUBITAK Programme 2228-B).
\end{acknowledgement}
\section{Introduction}
\label{sec.Intro}
Dialect refers to a variant of a given language, which may be defined by factors such as regional speech patterns, social class or ethnicity~\cite{lyons1981language}. Beyond pronunciation, a dialect is also distinguished by its textual expression~\cite{wong2018register}. For instance, Mandarin (\textsc{Man}) and Cantonese (\textsc{Can}) are the official language and the most widely used dialect of China, respectively~\cite{lee1998cancorp}. As seen in Fig.~\ref{Fig.Examples}, although the two sentences have exactly the same semantic meaning, they have distinct attributes with respect to their textual expression. Correspondingly, in this task we attempt to build an automatic translation system for dialects.
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=0.45\textwidth]{fig_example.pdf}
\caption{An example of \textsc{Can}-\textsc{Man} translation.}
\label{Fig.Examples}
\end{figure}
An intuitive way is to leverage advanced machine translation systems, which have recently yielded human-level performance with the use of neural networks~\cite{Chen:2018:ACL,li2018multi}. Nevertheless, in contrast to traditional machine translation, there are two main challenges in dialect translation. First, the success of supervised neural machine translation depends on large-scale parallel training data, while dialect translation is not equipped with such a prerequisite. This makes our task fall into the unsupervised learning category~\cite{artetxe2018unsupervised,lample2018unsupervised,lample2018phrase}. Second, dialects are closely related and, despite their differences, often share similar grammar, e.g., morphology and syntax~\cite{chambers_trudgill_1998}. The extraction of {\em commonality} is beneficial to unsupervised mapping~\cite{lample2018word} and model robustness~\cite{firat2016multi}; meanwhile, preserving the explicit {\em diversity} plays a crucial role in our dialect translation. Consequently, it is challenging to balance the commonality and diversity for dialect translation and thus to improve its performance.
We approach the mentioned problems by proposing an unsupervised neural dialect translation model, which is trained merely on monolingual corpora and sufficiently leverages the commonality and diversity of dialects.
Specifically, we train an advanced NMT model, \textsc{Transformer}~\cite{vaswani2017attention}, with denoising reconstruction~\cite{vincent2008extracting} and back-translation~\cite{sennrich2016improving}, which aim at building a common language model and at mapping different attributes, respectively.
We introduce several strategies into the translation model for balancing the commonality and diversity: 1) parameter-sharing, which forces dialects to share the same latent space; 2) pivot-private embedding, which models similarities and differences at the lexical level; and 3) layer coordination, which enhances the interaction of features between the two sides of translation.
In order to evaluate the effectiveness of the proposed model, we construct a monolingual dialect corpus which consists of 20 million colloquial sentences for each of~\textsc{Man}\footnote{For simplification, we regard the official language as a dialect.} and~\textsc{Can}.
The sentences are extracted from conversations and comments in forums, social media as well as subtitles, and carefully filtered during data preprocessing.\footnote{Our codes and data are released at:~\url{https://github.com/NLP2CT/Unsupervised_Dialect_Translation}.}
Empirical results on the two directions of the \textsc{Man}-\textsc{Can} translation task demonstrate that the proposed model significantly outperforms existing unsupervised NMT~\cite{lample2018phrase} with even fewer parameters. The quantitative and qualitative analyses verify the necessity of commonality and diversity modeling for dialect translation.
\section{Preliminary}
\label{sec.Preliminary}
Neural machine translation (NMT) aims to use a neural network to build a translation model, which is trained to maximize the conditional distribution of sentence pairs~\cite{bahdanao2014neural,sennrich2016neural,vaswani2017attention}.
Given a source sentence $\mathbf{X}=\{\mathbf{x}_1,\cdots,\mathbf{x}_{I}\}$, conditional probability of its corresponding translation $\mathbf{Y}=\{\mathbf{y}_1,\cdots,\mathbf{y}_{J}\}$ is defined as:
\begin{align}
\mathbf{P}(\mathbf{Y} | \mathbf{X})&=\prod \limits_{j=1}^{|{J}|}\mathbf{P}(\mathbf{y}_j|\mathbf{Y}_{<j},\mathbf{X};\theta),
\end{align}
where $\mathbf{y}_j$ indicates the $j$-th target token. $\theta$ denotes the parameters of NMT model, which are optimized to minimize the following loss function over the training corpus $\mathbf{D}$:
\begin{align}
\label{eq:lossbase}
\mathcal{L} &= \mathbb{E}_{(\mathbf{X},\mathbf{Y})\sim \mathbf{D}}[-\log \mathbf{P} (\mathbf{Y} | \mathbf{X};\theta)]
\end{align}
Such kind of auto-regressive translation process is generally achieved upon the encoder-decoder framework~\cite{sutskever2014sequence}. Specifically, the inputs of encoder $\textbf{S}^0$ and decoder $\textbf{T}^0$ are obtained by looking up source and target embeddings according to the input sentences $\textbf{X}$ and $\textbf{Y}$, respectively:
\begin{align}
\textbf{S}^0 &= {\rm{Emb}}_{src}(\textbf{X}) & \in \mathbb{R}^{I \times d} \label{eq:embs}\\
\textbf{T}^0 &= {\rm{Emb}}_{trg}(\textbf{Y}) & \in \mathbb{R}^{J \times d} \label{eq:embr}
\end{align}
where $d$ indicates the dimensionality. The encoder is composed of a stack of $N$ identical layers. Given the input layer $\mathbf{S}^{n-1}\in \mathbb{R}^{I \times d}$, the output of the $n$-th layer can be formally expressed as:
\begin{align}
{\rm\mathbf{S}}^{n} &={{\rm{Layer}}^{n}_{enc} ({\rm\mathbf{S}}^{n-1})} & \in \mathbb{R}^{I \times d}
\end{align}
The decoder is also composed of a stack of $N$ identical layers. Contrary to the encoder which takes all the tokens into account, the decoder merely summarizes the forward representations in the input layer $\mathbf{T}^{n-1}\in {\mathbb{R}^{J \times d}}$ at each decoding step, since the subsequent representations are invisible. Besides, the generation process considers the contextual information of source sentence, by feeding the top layer of the encoder $\mathbf{S}^{N}$. Accordingly, the $j$-th representation in $n$-th decoding layer $\mathbf{T}^{n} = \{\mathbf{t}^{n}_1,\cdots,\mathbf{t}^{n}_J\}$ is calculated as:
\begin{align}
\label{eq:dec}
{\rm \mathbf{t}}^{n}_j &={\rm{Layer}}^{n}_{dec} (\mathbf{T}^{n-1}_{\leqslant j}, {\rm{Att}}^{{n}}
(\mathbf{t}^{n-1}_j, \mathbf{S}^{N})) & \in \mathbb{R}^{d}
\end{align}
where $\rm{Att}(\cdot)$ indicates the attention model~\cite{bahdanao2014neural} which has recently been a basic module to allow a deep learning model to dynamically select related representations as needed. Finally, the conditional probability of the $j$-th target word $\mathbf{y}_j$ is calculated using a non-linear function $\rm{Softmax(\cdot)}$:
\begin{align}
\mathbf{P}(\mathbf{y}_j|\mathbf{Y}_{<j},\mathbf{X};\theta) &= {\rm{Softmax}} ({\rm{Proj}} (\mathbf{t}^N_j))
\end{align}
\begin{figure*}[t!]
\centering
\hspace{-5pt}
\begin{subfigure}[c]{\columnwidth}
{
\centering\includegraphics[keepaspectratio,height=0.65\columnwidth]{fig_model_1.pdf}
\caption{Conventional NMT model.}
}
\end{subfigure}
\quad
\hspace{15pt}
\begin{subfigure}[c]{\columnwidth}
{
\centering\includegraphics[keepaspectratio,height=0.65\columnwidth]{fig_model_2.pdf}
\caption{The proposed model.}
}
\end{subfigure}
\caption{Illustration of (a) conventional NMT model and (b) the proposed model. As seen, we propose pivot-private embedding, which learns commonality (${\rm{Emb}}^{pi}$) and diversity (${\rm{Emb}}^{pr}_{src}$ and ${\rm{Emb}}^{pr}_{trg}$) at lexical level. Besides, the decoder attends to source representations layer by layer, rather than merely from the topmost layer.}
\label{Fig.Model}
\end{figure*}
In this section, we propose unsupervised neural dialect translation. We first cast dialect translation as an unsupervised learning task to tackle the low-resource problem. Moreover, concerning the commonality and diversity between dialects, we introduce pivot-private embedding and layer coordination to improve the dialect translation model.
\subsection{Dialect Translation with Unsupervised Learning}
\label{subsec.UnsupervisedNeuralMachineTranslation}
Despite the success of NMT over the past years, the performance of an NMT model relies on a large-scale parallel training corpus~\cite{sennrich2016improving,artetxe2018unsupervised}. As a low-resource translation task, dialect translation cannot leverage the conventional training strategy, since parallel resources are normally inaccessible. The scarcity of bilingual corpora makes it extraordinarily challenging to build translation models for dialects. On the contrary, monolingual corpora are relatively easier to collect.
Partially inspired by recent studies on unsupervised NMT~\cite{lample2018unsupervised,artetxe2018unsupervised,lample2018phrase}, we propose to build the dialect translation model with unsupervised learning, which merely depends on monolingual data.
Generally, most of the features with respect to dialects are similar, while only a few of the surface information is different. To this end, we propose to divide the training process into two parts: 1) commonality modeling which learns to capture general features of all dialects; and 2) diversity modeling which builds connections between different expressions.
\paragraph{Commonality Modeling}
This procedure aims at offering our model the ability to extract the universal features of the two dialects. Intuitively, commonality modeling can be trained by reconstructing the two dialects using one model. \citeauthor{artetxe2018unsupervised}~\shortcite{artetxe2018unsupervised} and \citeauthor{lample2018unsupervised}~\shortcite{lample2018unsupervised} suggest that denoising autoencoding is beneficial to language modeling. More importantly, it can prevent our model from simply copying the input sentence to the output. Contrary to~\citeauthor{artetxe2018unsupervised}~\shortcite{artetxe2018unsupervised} and \citeauthor{lample2018unsupervised}~\shortcite{lample2018unsupervised}, who employ a distinct model for each language, we train one model for both dialects, thus encouraging the different dialects to be modeled under a common latent space. Consequently, the loss function is defined as:
\begin{align}
\mathcal{L}_{com} =& \mathbb{E}_{\mathbf{X}\sim \mathbf{D}_X}[-\log \mathbf{P} (\mathbf{X} | \mathbf{X}^{noise};\theta)] + \nonumber\\
& \mathbb{E}_{\mathbf{Y}\sim \mathbf{D}_Y}[-\log \mathbf{P} (\mathbf{Y} | \mathbf{Y}^{noise};\theta)]
\end{align}
where $\mathbf{D}_X$ and $\mathbf{D}_Y$ are monolingual corpora for two dialects, $\mathbf{X}^{noise}$ and $\mathbf{Y}^{noise}$ denote noised inputs.\footnote{We add noises to inputs by swapping, dropping and blanking words following~\citeauthor{lample2018unsupervised}~\shortcite{lample2018unsupervised}, except that we swap two words rather than three, which shows better empirical results in our experiments.}
As seen, the two reconstruction tasks share the same parameters $\theta$.
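For concreteness, the following is a minimal Python sketch of such a corruption function, consistent with the footnote above; the drop and blank probabilities are illustrative assumptions rather than reported values:
\begin{verbatim}
import random

def add_noise(tokens, p_drop=0.1, p_blank=0.1, blank="<BLANK>"):
    """Corrupt a token sequence for denoising autoencoding (sketch).

    Swaps one pair of adjacent words, then drops or blanks words
    with small (assumed) probabilities, following the recipe in
    the footnote above.
    """
    out = list(tokens)
    # Swap a single pair of adjacent words.
    if len(out) > 1:
        i = random.randrange(len(out) - 1)
        out[i], out[i + 1] = out[i + 1], out[i]
    noised = []
    for tok in out:
        r = random.random()
        if r < p_drop:
            continue                 # drop the word
        if r < p_drop + p_blank:
            noised.append(blank)     # blank the word
        else:
            noised.append(tok)
    return noised
\end{verbatim}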
\paragraph{Diversity Modeling} Although the difference between dialects is marginal, transferring this diversity is the key problem of dialect translation. In contrast to a supervised NMT model, which learns the correspondence between source and target from parallel data, a dialect translation model cannot directly establish the functional mapping from the source latent space to the target one. An alternative is to exploit back-translation~\cite{sennrich2016improving,edunov-etal-2018-understanding}. Specifically, $\mathbf{X}$ and $\mathbf{Y}$ are first translated to their candidate translations $\mathbf{Y}^{bak}$ and $\mathbf{X}^{bak}$, respectively.
The mapping of cross-dialect latent spaces can be learned by minimizing:
\begin{align}
\mathcal{L}_{div} =& \mathbb{E}_{\mathbf{X}\sim \mathbf{D}_X}[-\log \mathbf{P} (\mathbf{X} | \mathbf{Y}^{bak};\theta)] + \nonumber\\
& \mathbb{E}_{\mathbf{Y}\sim \mathbf{D}_Y}[-\log \mathbf{P} (\mathbf{Y} | \mathbf{X}^{bak};\theta)]
\end{align}
Finally, the loss function in Equation~\ref{eq:lossbase} is modified as:
\begin{align}
\mathcal{L} &= \lambda_{com}\mathcal{L}_{com} + \lambda_{div}\mathcal{L}_{div}
\end{align}
where $\lambda_{com}$ and $\lambda_{div}$ are hyper-parameters balancing the importance of commonality and diversity modeling, respectively.
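The overall update can be sketched as follows; the callables \texttt{denoise\_nll}, \texttt{backtranslate} and \texttt{bt\_nll} are hypothetical stand-ins for the shared model rather than part of any released code:
\begin{verbatim}
from typing import Callable, Sequence

Sentence = Sequence[str]

def combined_loss(
    denoise_nll: Callable[[Sentence], float],          # -log P(X | X^noise)
    backtranslate: Callable[[Sentence, str], Sentence],
    bt_nll: Callable[[Sentence, Sentence, str], float],
    x: Sentence, y: Sentence,
    lam_com: float, lam_div: float,
) -> float:
    """One unsupervised objective: L = lam_com*L_com + lam_div*L_div."""
    # Commonality: denoising reconstruction of each dialect, shared theta.
    loss_com = denoise_nll(x) + denoise_nll(y)
    # Diversity: translate to the other dialect, then score the round trip.
    y_bak = backtranslate(x, "x2y")                    # X -> Y^bak
    x_bak = backtranslate(y, "y2x")                    # Y -> X^bak
    loss_div = bt_nll(x, y_bak, "y2x") + bt_nll(y, x_bak, "x2y")
    return lam_com * loss_com + lam_div * loss_div
\end{verbatim}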
\subsection{Pivot-Private Embedding}
\label{subsec.PivotEmbedding}
An open problem in unsupervised NMT is the initialization of the translation model, which plays a crucial role in the iterative training~\cite{lample2018unsupervised,artetxe2018unsupervised} and affects the final performance of unsupervised learning~\cite{lample2018phrase}.
For two languages with different vocabularies, a common solution in recent studies is to map identical tokens, which are then used as seeds for aligning other words~\cite{artetxe2018unsupervised,lample2018unsupervised}. For example,~\citeauthor{artetxe2018unsupervised}~\shortcite{artetxe2018unsupervised} employ unsupervised bilingual word embeddings~\cite{artetxe2017learning}, while~\citeauthor{lample2018phrase}~\shortcite{lample2018phrase} utilize the representations of shared tokens~\cite{Mikolov2013Distributed} in different languages to initialize the lookup tables.
Fortunately, dialect translation largely avoids this problem since most tokens are shared among dialects.
Therefore, we propose pivot and private embeddings: the former learns to share a part of the features, while the latter captures the word-level characteristics of each dialect.
\paragraph{Pivot Embedding}
Since the vocabularies of the two dialects are almost the same, we join their monolingual corpora and extract all tokens from the joint corpus. In order to build connections between source and target, we assign a pivot embedding with $d_s$ dimensions as the initial alignment:
\begin{align}
\textbf{S}^{pi} &= {\rm{Emb}}^{pi}(\textbf{X}) & \in \mathbb{R}^{I \times d_s} \\
\textbf{T}^{pi} &= {\rm{Emb}}^{pi}(\textbf{Y}) & \in \mathbb{R}^{J \times d_s}
\end{align}
where the embedding lookup function ${\rm{Emb}}^{pi}(\cdot)$ shares parameters across dialects.
\paragraph{Private Embedding} Apart from the common features, there also exist differences between dialects. We argue that such differences mainly lie in word-level surface information. To this end, we introduce a private embedding for each translation side to distinguish and maintain the characteristics of each dialect:
\begin{align}
\textbf{S}^{pr} &= {\rm{Emb}}_{src}^{pr}(\textbf{X}) & \in \mathbb{R}^{I \times (d-d_s)} \\
\textbf{T}^{pr} &= {\rm{Emb}}_{trg}^{pr}(\textbf{Y}) & \in \mathbb{R}^{J \times (d-d_s)}
\end{align}
Contrary to the pivot embedding, ${\rm{Emb}}_{src}^{pr}(\cdot)$ and ${\rm{Emb}}_{trg}^{pr}(\cdot)$ are assigned distinct parameters. Thus, the final input embeddings in Equations~\ref{eq:embs} and \ref{eq:embr} are modified as:
\begin{align}
\textbf{S}^0 &= \textbf{S}^{pr} \oplus \textbf{S}^{pi} & \in \mathbb{R}^{I \times d} \\
\textbf{T}^0 &= \textbf{T}^{pr} \oplus \textbf{T}^{pi} & \in \mathbb{R}^{J \times d}
\end{align}
where $\oplus$ is the concatenation operator. Note that, since each token has $d_s$ and $d-d_s$ dimensions for the associated pivot and private embeddings, the final input is still a $d$-dimensional vector. ${\rm{Emb}}^{pi}(\cdot)$, ${\rm{Emb}}_{src}^{pr}(\cdot)$ and ${\rm{Emb}}_{trg}^{pr}(\cdot)$ are all pretrained, and co-optimized under the translation objective. In this way, we expect the pivot embedding to enhance the commonality modeling of the translation model, while the private embeddings improve its ability to capture the diversity of different dialects~\cite{liu19shared}.
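A minimal PyTorch sketch of this pivot-private lookup is given below; the class and argument names are our own illustration, and fastText pretraining and co-optimization are omitted:
\begin{verbatim}
import torch
import torch.nn as nn

class PivotPrivateEmbedding(nn.Module):
    """Concatenate a shared pivot embedding with a per-dialect private one."""

    def __init__(self, vocab_size: int, d: int = 512, d_s: int = 256):
        super().__init__()
        self.pivot = nn.Embedding(vocab_size, d_s)       # Emb^pi, shared
        self.private = nn.ModuleDict({
            "src": nn.Embedding(vocab_size, d - d_s),    # Emb^pr_src
            "trg": nn.Embedding(vocab_size, d - d_s),    # Emb^pr_trg
        })

    def forward(self, ids: torch.Tensor, side: str) -> torch.Tensor:
        # Output is still d-dimensional: (d - d_s) private + d_s pivot dims.
        return torch.cat([self.private[side](ids), self.pivot(ids)], dim=-1)
\end{verbatim}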
\subsection{Layer Coordination}
\label{subsec.CrossAttentionCoordinate}
Recent studies have pointed out that multiple neural network layers are able to capture different types of syntactic and semantic information~\cite{Peters:2018:NAACL,Li:2019:NAACL}. For example,~\citeauthor{Peters:2018:NAACL}~\shortcite{Peters:2018:NAACL} demonstrate that higher-level layer states capture context-dependent aspects of word meaning while lower-level states model aspects of syntax, and that simultaneously exposing all of these signals is highly beneficial.
To fully interact these features, an alternative is to perform attention from each decoder layer to its corresponding encoder layer, rather than only from the topmost layer. Accordingly, the $n$-th decoding layer (Equation~\ref{eq:dec}) is changed to:
\begin{align}
\mathbf{t}^{n}_j &={{\rm{Layer}}^n_{dec}} (\mathbf{T}^{n-1}_{\leqslant j}, {{\rm{Att}}^n} (\mathbf{t}^{n-1}_j, \mathbf{S}^{n})) & \in \mathbb{R}^{d}
\end{align}
This technique has proven effective \cite{he2018layer,yang2019context,Hao:2019:NAACL} on NMT tasks by shortening the path of gradient propagation, thus stabilizing the training of extremely deep models.
However, the improvements on traditional translation tasks become marginal when layer coordination is applied to models with fewer than 6 layers \cite{he2018layer}. We attribute this to the fact that directly interacting lexical- and syntactic-level information between different languages hampers their diversity modeling, since it forces the two languages to share the same latent space layer by layer. Different from prior studies, our work focuses on a pair of languages with extremely similar grammar. We examine whether layer coordination is conducive to the commonality modeling of dialects and to translation quality; a schematic sketch follows.
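For concreteness, a minimal PyTorch sketch of layer coordination is given below; the class is our own illustration, and the layers are assumed to be standard cross-attending decoder layers such as \texttt{nn.TransformerDecoderLayer}:
\begin{verbatim}
import torch
import torch.nn as nn

class CoordinatedDecoder(nn.Module):
    """Layer coordination: decoder layer n cross-attends to encoder layer n."""

    def __init__(self, layers: nn.ModuleList):
        super().__init__()
        # Assumed: each layer takes (target states, encoder memory),
        # e.g., instances of nn.TransformerDecoderLayer.
        self.layers = layers

    def forward(self, t: torch.Tensor, enc_states: list) -> torch.Tensor:
        # enc_states[n] holds S^n, the output of the n-th encoder layer,
        # instead of feeding only the topmost encoder representation.
        for n, layer in enumerate(self.layers):
            t = layer(t, enc_states[n])
        return t
\end{verbatim}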
\section{Datasets}
In this section, we first introduce the \textsc{Can} and \textsc{Man} datasets collected for our experiments, and then report basic statistics of the training corpora.
\paragraph{Monolingual Corpora}
The lack of \textsc{Can} monolingual corpora with strong colloquial features is a serious obstacle to our research.
Existing \textsc{Can} corpora, such as HKCanCor \cite{luke2015hong} and CANCORP \cite{lee1998cancorp}, all have the following shortcomings:
1) they were collected rather early, and their linguistic features differ from current usage due to language evolution; and
2) they are too small for data-intensive unsupervised training.
Since colloquial corpora possess more distinctive linguistic features of \textsc{Can}, we collect \textsc{Can} sentences from scratch across domains including talks, comments and dialogues.\footnote{\url{https://www.wikipedia.org}, \url{https://www.cyberctm.com}, \url{http://discuss.hk} and \url{https://lihkg.com}.}
In order to maintain the consistency of the training sets, the \textsc{Man} corpora are derived from the same domains as \textsc{Can}, namely from ChineseNlpCorpus and the Large Scale Chinese Corpus for NLP.\footnote{\url{https://github.com/brightmart/nlp_chinese_corpus} and \url{https://github.com/SophonPlus/ChineseNlpCorpus}.}
\begin{table}[t]
\centering
\begin{tabular}{c|ccc}
\hline
Dialect & \# Sents & Vocab size & Unique \\
\hline \hline
\textsc{Can} & 20M & 9,025 & 541 \\
\textsc{Man} & 20M & 8,856 & 372 \\
\hline
\end{tabular}
\caption{Statistics of the two monolingual corpora after preprocessing. We conduct experiments at the character level; the joint vocabulary size is exactly 9,397.}
\label{Tab.CorporaStats}
\end{table}
\paragraph{Parallel Corpus}
We collect parallel corpora for the development and evaluation of models. Parallel sentence pairs from dialogues are manually selected by native \textsc{Can} and \textsc{Man} speakers. Consequently, 1,227 and 1,085 sentence pairs are selected as the development and test sets, respectively.
\paragraph{Data Preprocessing \& Statistics}
As there is no well-performing \textsc{Can} segmentation toolkit, we conduct all experiments at the character level.
In order to exploit the commonality of both languages and reduce the vocabulary size, we convert all texts into simplified Chinese.\footnote{We also attempted to transform all texts into traditional characters. It does not work well since some simplified characters have multiple corresponding traditional characters, and such one-to-many mapping results in ambiguity and data sparsity.}
For reasons of computational efficiency, we keep sentences whose length lies between 4 and 32, and remove sentences containing low-frequency characters. Finally, each of the \textsc{Man} and \textsc{Can} monolingual training corpora consists of 20M sentences. The statistics of the training sets are summarized in Tab.~\ref{Tab.CorporaStats}.
As seen, \textsc{Can} has a larger vocabulary and more unique characters than \textsc{Man}.
To quantify the commonality and diversity of \textsc{Can} and \textsc{Man}, we compute Spearman's rank correlation coefficient \cite{zhelezniak2019correlation} between the two vocabularies ranked by their frequencies within each corpus.
The coefficient over the two full vocabularies is $0.81$ ($p<0.001$), indicating a strong and statistically significant relation.
In contrast, the coefficient over the 250 most frequent tokens is $0.26$ ($p<0.001$), indicating a much weaker, though still significant, relation.
These results support our hypothesis that dialects share considerable commonality with each other, but diverge on the most frequent tokens. A sketch of this computation is given below.
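This measurement can be realized, for example, as in the following Python sketch (assuming character-tokenized corpora; restricting the comparison to the jointly most frequent tokens is one of several reasonable choices):
\begin{verbatim}
from collections import Counter
from scipy.stats import spearmanr

def rank_correlation(corpus_can, corpus_man, top_k=None):
    """Spearman correlation between token-frequency ranks of two corpora.

    corpus_can / corpus_man are iterables of token lists; top_k
    optionally restricts the test to the most frequent shared tokens.
    """
    freq_can = Counter(tok for sent in corpus_can for tok in sent)
    freq_man = Counter(tok for sent in corpus_man for tok in sent)
    shared = [t for t, _ in (freq_can + freq_man).most_common(top_k)
              if t in freq_can and t in freq_man]
    # spearmanr ranks the raw frequencies internally.
    rho, p = spearmanr([freq_can[t] for t in shared],
                       [freq_man[t] for t in shared])
    return rho, p
\end{verbatim}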
\begin{table*}[t]
\centering
\begin{tabular}{l|c c c}
\hline
Model & \textsc{Can}$\Rightarrow$\textsc{Man} & \textsc{Man}$\Rightarrow$\textsc{Can} & \# Params (M) \\
\hline
\hline
\multicolumn{4}{c}{\textit{Baseline}} \\
\hline
Character-level Rule-based Transition & 42.18 & 42.27 & - \\
Unsupervised Style Transfer \cite{hu2017toward} & 41.97 & 42.03 & 14.40 \\
Unsupervised PB-SMT \cite{lample2018phrase} & 42.12 & 42.20 & - \\
Unsupervised NMT \cite{lample2018phrase} & 42.90 & 42.39 & 39.08 \\
\hline
\hline
\multicolumn{4}{c}{\textit{Ours}} \\
\hline
Layer Coordination & 48.45 & 43.11 & 39.08 \\
Pivot-Private Embedding & 52.74 & 46.69 & 36.65 \\
Pivot-Private Embedding + Layer Coordination & \textbf{54.95} & \textbf{47.45} & 36.65 \\
\hline
\end{tabular}
\caption{Experimental results on unsupervised dialect neural machine translation. \# Params (M): number of parameters in millions. Layer coordination provides an improvement over the baseline in both directions, and pivot-private embedding further improves the result by almost 10 BLEU points on \textsc{Can}$\Rightarrow$\textsc{Man}. Combining layer coordination and pivot-private embedding gives the best result, exceeding the baseline NMT system by 12 and 5 BLEU points in the two directions, respectively.}
\label{Tab.AllResults}
\end{table*}
\section{Experiments}
\label{sec.Experiments}
\subsection{Experimental Setting}
\label{subsec.ModelTraining}
We use \textsc{Transformer}~\cite{vaswani2017attention} as our model architecture, and follow the base model setting for our model dimensionalities.
We follow the parameter setting of~\citeauthor{lample2018phrase}~\shortcite{lample2018phrase}, and implement our approach on top of their source code.\footnote{\url{https://github.com/facebookresearch/UnsupervisedMT}}
We use BLEU score as the evaluation metric.
The training of each model was early-stopped to maximize BLEU score on the development set.
All the embeddings are pretrained using fastText \cite{bojanowski2017enriching},\footnote{\url{https://github.com/facebookresearch/fastText}} and pivot embeddings are derived from the concatenated training corpora.
During training, $\lambda_{div}$ is set to 1.0, while $\lambda_{com}$ is linearly decayed from 1.0 at the beginning of training to 0.0 at step 200k, as in the schedule sketched below.
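The schedule amounts to the following one-liner (the step budget is the value stated above):
\begin{verbatim}
def lambda_com(step: int, decay_steps: int = 200_000) -> float:
    """Linearly decay the commonality weight from 1.0 to 0.0."""
    return max(0.0, 1.0 - step / decay_steps)
\end{verbatim}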
\paragraph{Baseline}
We compare our model with four systems:
\begin{itemize}
\item We collect simple \textsc{Can}-\textsc{Man} conversion rules and regard the resulting character-level transition system as one of our baselines.
\item Since our model is built upon unsupervised NMT methods, we choose one of the most widely used architectures \cite{lample2018phrase} as our baseline system.
\item Moreover, unsupervised phrase-based statistical MT \cite{lample2018phrase} has comparable performance to its NMT counterpart. Therefore, we also take the unsupervised PB-SMT model into account.
\item For reference, we also examine whether a style transfer system \cite{hu2017toward} can handle the dialect translation task.
\end{itemize}
\subsection{Overall Performances}
\label{sec.Results}
Tab.~\ref{Tab.AllResults} lists the experimental results. As seen,
the character-level rule-based translation system performs comparably with the conventional unsupervised NMT system.
This accords with~\citeauthor{lample2018phrase}~\shortcite{lample2018phrase}: the training process of unsupervised NMT is fragile, because no aligned information between the languages is available to model training. In comparison, character transition rules offer adequate aligned references and thus yield fair results. Besides, the unsupervised PB-SMT model performs slightly worse than the NMT system; a possible reason is that it is hard to extract a well-performing phrase table from colloquial data~\cite{laurens97lexicalist}. We also evaluate a style transfer system~\cite{hu2017toward}. The model underperforms the unsupervised NMT baseline, indicating that, to some extent, style transfer is not adequate for dialect translation.
As to our proposed methods, layer coordination improves the performance by more than 5 BLEU points in the \textsc{Can}$\Rightarrow$\textsc{Man} direction, showing that sharing coordinate information at the same semantic level among dialects is effective. Besides, using pivot-private embedding gives a further increase of nearly 10 BLEU points while reducing the model size, verifying that jointly modeling the commonality and diversity of both dialects is both effective and efficient. Furthermore, combining the two gives more than 12 BLEU points of improvement over the baseline NMT system, revealing that pivot-private embedding and layer coordination are complementary to each other. In the \textsc{Man}$\Rightarrow$\textsc{Can} direction, we also observe improvements from our proposed methods. Translating \textsc{Man} to \textsc{Can} is more difficult since it contains more one-to-many character-level transition cases than the reverse direction. Despite this, our best approach still gains 5 BLEU points over the baseline systems on \textsc{Man}$\Rightarrow$\textsc{Can} translation, revealing the universal effectiveness of our proposed method.
\begin{table}[t]
\centering
\begin{tabular}{l| c c}
\hline
Model & \textsc{Can}$\Rightarrow$\textsc{Man} & \textsc{Man}$\Rightarrow$\textsc{Can} \\
\hline
\hline
Baseline & 1.80 $\pm$ 0.44 & 2.57 $\pm$ 0.50 \\
Our Model & ~~~2.50 $\pm$ 0.87 $\uparrow$ & ~~~3.16 $\pm$ 0.61 $\uparrow$ \\
\hline
\end{tabular}
\caption{Human assessment on our experimental results. $\uparrow$: improvement is strongly significant ($p<0.01$). }
\label{Tab.human}
\end{table}
\paragraph{Human Assessment}
Since the BLEU metric may be insufficient to reflect the quality of oral sentences, we randomly extract 50 \textsc{Can}$\Rightarrow$\textsc{Man} and 50 \textsc{Man}$\Rightarrow$\textsc{Can} examples from the test set for human evaluation. Each example contains the source sentence and the translated sentences from the unsupervised NMT model (``baseline'') and our proposed model. Each native speaker is asked to assign a score ranging from 1 to 4 to the translation quality of each translated result within each example. Each reported result is the average score assessed by 10 native speakers. As seen in Tab.~\ref{Tab.human}, the results show that the proposed method significantly outperforms the baseline NMT system ($p<0.01$) in both the \textsc{Can}$\Rightarrow$\textsc{Man} and \textsc{Man}$\Rightarrow$\textsc{Can} directions.
\subsection{Effectiveness of Pivot-Private Embedding}
\label{subsec.AnalysisPivot}
\begin{figure}[t!]
\hspace{-5pt}
\centering
\includegraphics[keepaspectratio,height=0.55\columnwidth]{fig_pivot.pdf}
\caption{Model performance with various pivot embedding dimensionalities on the dev set. \# Params (M): number of parameters in millions. Applying an adequate dimensionality to the pivot embedding is effective, compared with sharing no dimensions between the two dialects (dimensionality 0) or sharing all dimensions (dimensionality 512).}
\label{Fig.PivotEmbedding}
\end{figure}
To investigate the effectiveness of pivot-private embedding, we conduct further research on the dimensionality of the pivot embedding. As seen in Fig.~\ref{Fig.PivotEmbedding}, adequately sharing part of the word embedding among dialects greatly improves performance, while using two independent sets of embeddings for the dialects, or sharing all embedding dimensions, leads to poor results. This indicates the importance of balancing commonality and diversity for dialect translation. Moreover, the more dimensions assigned to the pivot embedding, the fewer parameters the model requires. We argue that pivot-private embedding is not only an efficient way to augment the ability of a dialect translation system to model diversity, but also offers an alternative way to relieve the effect of over-parameterization.
Compared to the model with dimensionality 128, the model with 256 pivot embedding dimensions yields comparable results in the two translation directions while using fewer parameters. Consequently, we apply 256 as our default setting for the pivot embedding dimensionality.
\subsection{Effectiveness of Layer Coordination}
\label{subsec.AnalysisCoCAN}
\begin{figure}[t!]
\hspace{-5pt}
\centering
\includegraphics[keepaspectratio,height=0.55\columnwidth]{fig_convergence.pdf}
\caption{Learning curves of models on the dev set. The model with layer coordination (w) reaches convergence at around step 240k, while the model without (w/o) converges at around step 200k. As seen in this figure, applying layer coordination improves the performance of the dialect translation model and significantly stabilizes the training process.}
\label{Fig.Convergence}
\end{figure}
\begin{figure*}[t!]
\centering
\hspace{-10pt}
\begin{subfigure}[c]{\columnwidth}
{
\centering\includegraphics[keepaspectratio,width=0.85\columnwidth]{fig_coor_1.pdf}
\caption{\textsc{Man}$\Rightarrow$\textsc{Can}}
}
\end{subfigure}
\quad
\hspace{15pt}
\begin{subfigure}[c]{\columnwidth}
{
\centering\includegraphics[keepaspectratio,width=0.85\columnwidth]{fig_coor_2.pdf}
\caption{\textsc{Can}$\Rightarrow$\textsc{Man}}
}
\end{subfigure}
\caption{Experiments on the number of shared encoder/decoder layers on the dev set. Here w and w/o denote with and without layer coordination, respectively. From both figures, we can see that even without any shared layers, the model with layer coordination remains trainable, whereas the model without it does not. Models without layer coordination gain significant improvements when an adequate number of layers is shared between the two dialects, while performance decreases if all layers are shared. With the proposed layer coordination, the more layers are shared between the two dialects, the better the model performs.}
\label{Fig.Coor}
\end{figure*}
Layer coordination intuitively interacts features across dialects, helping the model to capture the commonality of linguistic features at the corresponding semantic level \cite{Peters:2018:NAACL}.
\citeauthor{he2018layer}~\shortcite{he2018layer} reveal that layer coordination offers more aligned features at the same level, from lexical, through syntactic, to semantic. In this section, we investigate how layer coordination affects translation quality.
\paragraph{Stability Analysis}
We first visualize the convergence of models with and without layer coordination. From Fig.~\ref{Fig.Convergence} we observe that the model with layer coordination enjoys a steady training process, whereas the training process of the model without layer coordination is fragile, dropping nearly 5 BLEU points on the dev set in the middle of training.
We attribute this to the fact that layer coordination provides coordinate semantic information~\cite{he2018layer}, which is beneficial to the commonality modeling in our dialect translation task. Since the two dialects share similar features, each decoder layer can leverage more fine-grained information from the source side at the same semantic level, instead of only exploiting top-level representations.
\paragraph{Parameter Sharing}
For further investigation, we also conduct analyses on the effect of shared layers.
As visualized in Fig.~\ref{Fig.Coor}, the baseline system performs worse when fewer than one layer is shared, and models with 3 shared layers perform better. This is consistent with the findings of \citeauthor{lample2018phrase}~\shortcite{lample2018phrase}, who suggest sharing the top 3 layers of the encoder and the bottom 3 layers of the decoder.
For the proposed model, sharing more layers is profitable in both \textsc{Can}-\textsc{Man} translation directions, and the model with all layers shared gives the best performance in both directions. This demonstrates that \textsc{Can} and \textsc{Man} share more linguistic characteristics in numerous aspects than distant languages do \cite{artetxe2018unsupervised,lample2018unsupervised}, and that layer coordination also contributes to balancing commonality and diversity modeling in the dialect translation task.
\section{Related Work}
\label{sec.RelatedWork}
In this section, we will give an account of related research.
\paragraph{Dialect Translation} To the best of our knowledge, studies related to dialect translation have been carried out for many languages. For example, for Arabic \cite{baniata2018neural} and Indian languages \cite{chakraborty2018bengali}, applying syllable symbols is effective for sharing information across languages. Compared to these tasks, our work mainly focuses on handling the problems in the \textsc{Can}-\textsc{Man} translation task. \textsc{Can} and \textsc{Man} have little syllable information in common, as even the same character can diverge widely in pronunciation~\cite{lee1998cancorp,wong2018register}. To push the difference further, a set of \textsc{Can} characters is quite rarely seen in \textsc{Man}, because \textsc{Can} is a dialect without formal regulation of written characters~\cite{lee1998cancorp}. Moreover, younger \textsc{Can} speakers tend to use phonetic labels (e.g., ``d'' corresponds to ``di'') or homophonic character symbols instead of the standard characters, which raises intractable issues when building the translation model.
\paragraph{Unsupervised Learning}
Our work draws on quantitative research on unsupervised machine translation \cite{lample2018unsupervised,artetxe2018unsupervised,lample2018phrase}, which composes a well-designed training schedule for unsupervised translation tasks.
The difference between our research and theirs mainly lies in the similarity of the involved languages: the dialects in our research are far more similar to each other than the language pairs in typical unsupervised NMT tasks.
Moreover, our research is closely related to studies on style transfer~\cite{hu2017toward,prabhumoye2018style}. There are two main differences between our task and style transfer. Firstly, the source and target sides in style transfer belong to the same language, with the difference mainly contributed by style, e.g., sentiment~\cite{hu2017toward}, while dialect translation has to preserve the semantics identically between the two sides. Secondly, there are more commonalities between source and target in style transfer than in dialect translation. The former focuses on the transition between different styles, where the two sides can sometimes be distinguished by only a few words. In contrast, dialects have wide discrepancies ranging from vocabulary and word frequency to syntactic structure.
Methodologically, compared to the studies mentioned above, we are motivated by the similarity and difference between dialects, and propose pivot-private embedding and layer coordination to jointly balance {\em commonality} and {\em diversity}.
\section{Conclusions and Future Work}
\label{sec.Conclusion}
In this study, we investigate the feasibility of building a dialect machine translation system.
Due to the lack of a parallel training corpus, we approach the problem with unsupervised learning. Considering the characteristics of dialect translation, we further improve our translation model by contributing pivot-private embedding and layer coordination, thus enriching the mutual linguistic information shared across the dialects (\textsc{Can}-\textsc{Man}). Our experimental results confirm that our improvements are universally effective and complementary to each other.
Our contributions are mainly in:
\begin{itemize}
\item We propose the dialect translation task, and collect massive monolingual corpora for the spoken \textsc{Man} and \textsc{Can} dialects;
\item We apply an unsupervised learning algorithm to accomplish the \textsc{Can}-\textsc{Man} dialect translation task. We leverage {\em commonality} and {\em diversity} modeling, including pivot-private embedding and layer coordination, to strengthen the translation functionality across dialects;
\item Our approach outperforms the conventional unsupervised NMT system by over 12 BLEU points, achieving considerable performance and establishing a new benchmark for the proposed \textsc{Can}-\textsc{Man} translation task.
\end{itemize}
In the future, it will be interesting to validate our principles, i.e., commonality and diversity modeling, on other tasks, such as conventional machine translation and style transfer.
Another promising direction is to incorporate linguistic knowledge into unsupervised learning procedure, e.g. phrasal pattern~\cite{xu19leveraging}, word order information~\cite{yang19assessing} and syntactic structure~\cite{yang2019improving}.
\section{Acknowledgements}
This work was supported in part by the National Natural Science Foundation of China (Grant No. 61672555), the Joint Project of the Science and Technology Development Fund, Macau SAR and National Natural Science Foundation of China (Grant No. 045/2017/AFJ), the Science and Technology Development Fund, Macau SAR (Grant No. 0101/2019/A2), and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2017-00087-FST). We thank all the reviewers for their insightful comments.
\bibliographystyle{aaai}
|
2,869,038,155,093 | arxiv | \section{Introduction}
\label{sec:intro}
D-branes \cite{DLP,Polchinski,PolchRev}, ``solitonic'' objects
carrying Ramond-Ramond charges, play crucial roles in non-perturbative
understanding of string theories based on dualities \cite{Duality}.
Their existence was expected from various dualities including
$SL(2,\bm{Z})$ duality in type IIB closed superstring theory
\cite{SchwarzSen},
and the corresponding classical solutions with Ramond-Ramond charges
were found in supergravity theory which describes the low energy
dynamics of superstring theory \cite{Schwarz}.
Then a microscopic construction of such ``solitonic'' objects was
given as hypersurfaces on which Dirichlet open strings end
\cite{DLP}.
Though first introduced as fixed hypersurfaces, D-branes become
dynamical objects with internal degrees of freedom originating from
Dirichlet open strings.
For example, a part of the massless gauge fields (``photons'') are
converted into the degrees of freedom representing the collective
coordinates for translation of D-branes.
The purpose of this paper is to introduce D-branes in covariant
bosonic string field theory (SFT), a formulation of string theory
as a straightforward extension of gauge field theories.
The most ambitious and interesting approach to D-branes using SFT
would be to identify D-branes as
classical solutions (solitons) in closed SFT and carry out the
quantization around D-branes using the technique familiar in local
field theories \cite{Soliton}.
This attempt, however, seems very hard and might be impossible in view
of the $1/g$ dependence of the D-brane tension on the closed string
coupling constant $g$.
Instead, we shall adopt another way of introducing D-branes into
closed SFT. This is to add to the SFT action a term describing
the interaction between D-brane and closed string.
This interaction will be given as a product $B\cdot\Phi$ of closed
string field $\Phi$ and the ``boundary state'' $B$
\cite{ADGNSV,CLNY1,CLNY2,CLNY3,CallanKlebanov}.
The latter has been known to describe the initial (final) state of a
closed string emitted from (absorbed by) the D-brane.
Of course, we need a principle for introducing such a new
interaction. Our principle here is to keep the stringy local gauge
invariance present in the original closed SFT.
This stringy invariance includes, for example, the general coordinate
invariance and the gauge invariance associated with massless
anti-symmetric tensor field.
In order to preserve the gauge invariance after introducing the
$B\cdot\Phi$ interaction, the boundary state $B$ must also transform.
We shall see that this is realized by defining the transformation law
of the dynamical variables $W$ associated with the boundary state.
The gauge invariance requirement also fixes the $W$-dependence of the
normalization factor of the boundary state as well as the purely
$W$-term which we have to add to the SFT action besides the
$B\cdot\Phi$ interaction.
In this paper we take as $W$, the dynamical variable associated with
a D-brane, constant field strength $F_{\mu\nu}$ and the parameter
$\theta_\mu^i$ representing the tilt of the D-brane.
Unfortunately, we do not yet have a fully satisfactory quantum
theory of covariant closed SFT. The origin of the problem is that,
although we have closed SFT actions having gauge invariance
\cite{HIKKOlettclosed,HIKKOclosed,Non-Poly},
their naive path-integral quantization leads to theories where
gauge invariance and unitarity are broken.
One way to remedy this defect is to add quantum corrections to
classical SFT action \cite{HataBV,Zwiebach} by following the
Batalin-Vilkovisky formalism \cite{BV}.
However, the resulting theories become too complicated to be used for
practical analysis.
In this paper, we ignore this quantization problem and construct a
``closed SFT + D-brane'' system on the basis of covariant closed SFT
proposed in refs.\ \cite{HIKKOlettclosed,HIKKOclosed}.
We hope that the essentials of this paper remain valid in a more
complete formulation.
Finally, we mention another way of describing D-branes in the
framework of SFT which we do not adopt in this paper.
This is to consider a field theory of Dirichlet open string.
Given a SFT for Neumann open string (e.g., the ones given in
\cite{WittenSFT,HIKKOopen}), the Dirichlet open SFT is obtained by
T-duality transformation \cite{GPR,KZ}: in the BRST charge and the
string vertices we have only to replace the center-of-mass momentum in
the transverse directions with the difference between the coordinates
of the two D-branes on which the open string end.
However, in this approach the closed string degrees of freedom are
treated rather indirectly since they appear dynamically as
loop effects in covariant open SFT.
The organization of the rest of this paper is as follows.
In Sec.\ \ref{sec:dLm}, we construct ``closed SFT + D-brane'' system
using the gauge invariance principle mentioned above.
The gauge transformation considered is the one which shifts the
anti-symmetric tensor field in $\Phi$ by a constant.
In Sec.\ \ref{sec:dLp}, we examine the gauge invariance under linear
coordinate transformation.
Though in Secs.\ \ref{sec:dLm} and \ref{sec:dLp} we take only
$F_{\mu\nu}$ as dynamical variable associated with D-brane, in
Sec.\ \ref{sec:tilt} we introduce the variable $\theta_\mu^i$ specifying
the tilt of the D-brane. In Sec.\ \ref{sec:sigma}, the correspondence
between the $\sigma$-model approach and the present SFT approach is
studied. The final section is devoted to a summary and discussions.
In Appendix \ref{app:sft}, we summarize various formulas in SFT used
in the text, and in Appendix \ref{app:star}, we present the details
of the calculation of the star products used in Secs.\ \ref{sec:dLm},
\ref{sec:dLp} and \ref{sec:tilt}.
\clearpage
\section{Introducing D-brane to SFT}
\label{sec:dLm}
\subsection{Source term and gauge invariance}
We start with the system of closed SFT field $\Phi$
described by the action \cite{HIKKOclosed},
\begin{equation}
S_0[\Phi]
=\frac{1}{g^2}\left\{\frac{1}{2}\,\Phi\cdot Q_{\rm B}\Phi
+ \frac{1}{3}\Phi\cdot(\Phi\star\Phi)\right\},
\label{eq:Sz}
\end{equation}
which has an invariance under the stringy local gauge transformation
$\delta_\Lambda$,
\begin{eqnarray}
\delta_\Lambda\Phi = Q_{\rm B} \Lambda +2\Phi\star\Lambda .
\label{eq:dL_Phi}
\end{eqnarray}
In eqs.\ (\ref{eq:Sz}) and (\ref{eq:dL_Phi}) the meaning of the
products $\cdot$ and $\star$ are as given in ref.\ \cite{HIKKOclosed}.
Compared with the closed string field $\Phi$ in ref.\
\cite{HIKKOclosed}, we have rescaled it by the coupling constant $g$
for later convenience.
We would like to extend this closed string field system
to the one containing the D-brane degrees of freedom.
As explained in Sec.\ \ref{sec:intro},
our principle of the extension is to keep the gauge invariance
(\ref{eq:dL_Phi}) intact.
Since D-brane can be regarded as a source of closed strings,
let us add to (\ref{eq:Sz}) the following source term
\begin{equation}
S_{\rm source} = B[W]\cdot\Phi =\int\! dz_0\braket{B[W]}{\Phi} .
\label{eq:Ss}
\end{equation}
Here, $\ket{B[W]}$ represents the state for the emission and
absorption of closed strings. It is a function of new
dynamical degrees of freedom associated with the D-brane, which we
denote collectively by $W$.
The integration measure $dz_0$ is over the zero-modes of the string
coordinates
$Z(\sigma)\equiv\left(X^M(\sigma),c(\sigma),\overline{c}(\sigma)\right)$ and
the string-length parameter.
In the following we adopt the $\pi_c^0$-omitted formulation
\cite{HIKKOclosed} and the
representation $z_0\equiv(x^M,\overline{c}_0,\wt{\alpha})$, where
$\wt{\alpha}$ is the variable conjugate to the string-length parameter
$\alpha$.\footnote{
String field $\Phi$ in SFT of refs.\ \cite{HIKKOclosed,HIKKOopen}
contains as its argument the string-length parameter $\alpha$ in
addition to $\left(X^M(\sigma),c(\sigma),\overline{c}(\sigma)\right)$.
The $\wt{\alpha}$-representation is obtained from the
$\alpha$-representation by the Fourier transformation $\int\!d\alpha
\exp\left(i\alpha\wt{\alpha}\right)$:
$\alpha$ is a momentum-like variable while $\wt{\alpha}$ is a
coordinate-like one.
Physical quantities do not depend on $\wt{\alpha}$, and we have an
infinite number of equivalent worlds specified by $\wt{\alpha}$.
}
Since the string field $\Phi$ and the measure $dz_0$ have ghost number
$N_{\rm gh}[\Phi]=-1$ and $N_{\rm gh}[dz_0]=1$, respectively, $B[W]$ should carry
$N_{\rm gh}[B]=0$.
The Grassmann integration over $\overline{c}_0$ is defined by
$\int\!d\overline{c}_0\,\overline{c}_0=1$.
Assuming that the transformation law (\ref{eq:dL_Phi}) of the closed
string field $\Phi$ remains unchanged after introducing $S_{\rm source}$,
the requirement that the system described by the action $S_0+S_{\rm source}$
be invariant under the gauge transformation implies that the equation,
\begin{equation}
0=\delta_\Lambda\left(S_0 +S_{\rm source}\right)=\delta_\Lambda S_{\rm source} =
\delta_\Lambda B\cdot\Phi + B\cdot\left(Q_{\rm B}\Lambda+2\Phi\star\Lambda
\right) ,
\label{eq:dL_Ss=0}
\end{equation}
holds for any $\Phi$ and any $\Lambda$. This leads to the following
two conditions:
\begin{eqnarray}
&&Q_{\rm B} B[W] =0 \label{eq:QB_B=0} ,\\
&&\delta_\Lambda B[W] = 2 B[W]\star\Lambda .
\label{eq:dL_B}
\end{eqnarray}
Namely, $B[W]$ must be a BRST invariant state annihilated by $Q_{\rm B}$,
and the gauge transformation law $\delta_\Lambda W$ must be determined so as to
satisfy the second condition (\ref{eq:dL_B}) (of course, there is no
guarantee at this stage that there exists $\delta_\Lambda W$ satisfying
eq.\ (\ref{eq:dL_B})).
Since (\ref{eq:QB_B=0}) should hold for arbitrary $W$, $W$ can be
regarded as arbitrariness in the solution of $Q_{\rm B} B=0$ just like
the collective coordinates of soliton solutions in conventional field
theories.
Although in the complete treatment $W$ is expected to represent the
same number of degrees of freedom as Dirichlet open strings,
we have not yet succeeded in developing a systematic
method of treating within our framework all the degrees of freedom
associated with D-brane.
In this section, we shall consider the simplest case of taking as $W$
only the {\em constant} field strength $F_{\mu\nu}$ of the massless
gauge field on the D-$p$-brane. In Sec.\ \ref{sec:tilt} we shall add
to $W$ the degrees of freedom representing the tilt of the D-brane.
Let $\mu=0,1,\cdots,p$ and $i=p+1,\cdots,d-1$
denote the space-time indices parallel and perpendicular to
the $p$-brane, respectively.
Then, the state $\ket{B(F)}$ satisfying the BRST-invariance
condition (\ref{eq:QB_B=0}) has been known as the boundary state
\cite{ADGNSV,CLNY1,CLNY2,CLNY3,CallanKlebanov}:
\begin{equation}
\ket{B(F)(x^M,\overline{c}_0,\wt{\alpha})}=
N(F)\ket{\B{N}(F)}\otimes\ket{\B{D}}\otimes\ket{\B{gh}} ,
\label{eq:B}
\end{equation}
with the factor states $\ket{\B{N,D,gh}}$ given by\footnote{
The Lorentz indices $M=(\mu,i)$ are raised/lowered by using the flat
metric $\eta^{MN}=\eta_{MN}=(\eta_{\mu\nu},\delta_{ij})
={\rm diag}(-1,1,\cdots,1)$.
}
\begin{eqnarray}
&&\ket{\B{N}(F)}=\exp\left\{
-\sum_{n\ge 1}\frac{1}{n}\a_{-n}^{(+)\mu}
{\cal O}(F)_\mu{}^\nu\a_{-n\,\nu}^{(-)}
\right\}\ket{0}_{p+1} ,
\label{eq:B_N}\\
&&\ket{\B{D}}=\exp\left\{
\sum_{n\ge 1}\frac{1}{n}\a_{-n}^{(+)\,i}
\a_{-n\,i}^{(-)}\right\}\ket{0}_{d-p-1}
\delta^{d-p-1}\bigl(x^i\bigr) ,
\label{eq:B_D}\\
&&\ket{\B{gh}}=\exp\left\{
\sum_{n\ge 1}\left(c_{-n}^{(+)}\overline{c}_{-n}^{(-)}
+ c_{-n}^{(-)}\overline{c}_{-n}^{(+)}\right)\right\}\ket{0}_{\rm gh} .
\label{eq:B_gh}
\end{eqnarray}
In eq.\ (\ref{eq:B_N}), ${\cal O}$ is a $(p+1)\times(p+1)$ matrix
satisfying the orthonormality condition
\begin{equation}
{\cal O}_\mu{}^\rho\eta_{\rho\lambda}{\cal O}_{\nu}{}^\lambda
=\eta_{\mu\nu} ,
\label{eq:OetaO=O}
\end{equation}
and is expressed in terms of an anti-symmetric constant matrix
$F_{\mu\nu}$ as
\begin{equation}
{\cal O}(F)_\mu{}^\nu\equiv
\left[\left(1 + F\right)^{-1}
\left(1 - F\right)\right]_\mu^{\;\nu} .
\label{eq:calO}
\end{equation}
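As a consistency check, write (\ref{eq:OetaO=O}) in matrix notation as
${\cal O}\,\eta\,\tp{{\cal O}}=\eta$. The anti-symmetry of $F_{\mu\nu}$
implies $\tp{F}=-\eta F\eta$ for the mixed-index matrix $F_\mu{}^\nu$
(note $\eta^2=1$), so that
$\eta\left(1\mp\tp{F}\right)=\left(1\pm F\right)\eta$ and hence
\begin{displaymath}
{\cal O}\,\eta\,\tp{{\cal O}}
=(1+F)^{-1}(1-F)\,\eta\,(1-\tp{F})(1+\tp{F})^{-1}
=(1+F)^{-1}(1-F)(1+F)(1-F)^{-1}\,\eta=\eta ,
\end{displaymath}
where we also used the fact that $(1+F)$ and $(1-F)$ commute.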
The state $\ket{B(F)}$ is characterized by the
following conditions for the string coordinates,
\begin{eqnarray}
&&X^i(\sigma)\ket{B(F)}=0 ,
\label{eq:X_B=0}\\
&&\left(P_\mu(\sigma)-F_{\mu\nu}\Drv{}{\sigma}
X^\nu(\sigma)\right)\ket{B(F)}=0 ,
\label{eq:(P-FX)B=0}\\
&&\pi_c(\sigma)\ket{B(F)}=\pi_{\overline{c}}(\sigma)\ket{B(F)}=0 .
\label{eq:pi_c_B=0}
\end{eqnarray}
They are equivalently expressed in terms of the oscillation modes as
\begin{eqnarray}
&&\left(\a^{(+)\,i}_{n} - \a^{(-)\,i}_{-n}\right)\ket{B(F)}=0 ,
\label{eq:X^i_B=0_oscl}\\
&&\left(\a^{(+)}_{n\,\mu}+{\cal O}_\mu{}^\nu\a^{(-)}_{-n\,\nu}
\right)\ket{B(F)}
=\left(\a^{(-)\,\mu}_{n}+\a^{(+)\,\nu}_{-n}{\cal O}_\nu{}^\mu
\right)\ket{B(F)}=0 ,
\label{eq:(P-FX)B=0_oscl}\\
&&\left(c^{(+)}_n + c^{(-)}_{-n}\right)\ket{B(F)}
=\left(\overline{c}^{(+)}_n - \overline{c}^{(-)}_{-n}\right)\ket{B(F)}=0 ,
\label{eq:pi_c_B=0_oscl}\\
&&\left(x^i, \Pdrv{}{x^\mu}, \Pdrv{}{\overline{c}_0}\right)\ket{B(F)}=0 ,
\label{eq:zero-modes_B=0}
\end{eqnarray}
with $n=\pm 1, \pm 2, \cdots$.
The BRST invariance of $\ket{B(F)}$,
\begin{equation}
Q_{\rm B}\ket{B(F)}=0 ,
\label{eq:QB_B(F)=0}
\end{equation}
may be understood from the form of $Q_{\rm B}$ (see eq.\ (\ref{eq:QB_sc})
for the complete expression),
\begin{equation}
Q_{\rm B}=2\sqrt{\pi}\int_0^{2\pi}\!d\sigma\left\{
i\pi_{\overline{c}}\left[\cdots\right]
-c\left(P_M X'^M + c'\pi_c + \pi'_{\overline{c}}\overline{c}\right)\right\} ,
\label{eq:QB_rough}
\end{equation}
and the properties (\ref{eq:X_B=0}), (\ref{eq:(P-FX)B=0}) and
(\ref{eq:pi_c_B=0}), and in particular, the anti-symmetric nature of
$F_{\mu\nu}$.\footnote{
Although the normal ordering for $Q_{\rm B}$ is ignored in this argument
based on eq.\ (\ref{eq:QB_rough}), the part of $Q_{\rm B}$ where the normal
ordering is relevant is $L\left(\partial/\partial\overline{c}_0\right)$ (see
(\ref{eq:QB-II})), which annihilates $\ket{B(F)}$ since it is
independent of $\overline{c}_0$.
}
The front factor $N(F)$ in eq.\ (\ref{eq:B}) cannot be determined at
this stage from the requirement of the BRST-invariance
(\ref{eq:QB_B(F)=0}) alone.
In the next subsection we shall fix $N(F)$ using the gauge invariance
requirement. The resultant $N(F)$ will agree with the one obtained
previously from quite a different argument \cite{CLNY3}.
\subsection{Determination of $N(F)$
and the transformation law of $F_{\mu\nu}$}
In this subsection we shall examine eq.\ (\ref{eq:dL_B}), the
condition to determine $\delta_\Lambda W$. Since we have restricted $W$ to a
subspace of {\em constant} field strength $F$, we are not allowed to
consider an arbitrary gauge transformation functional $\Lambda$:
$\delta_\Lambda W$ is in general not confined to our subspace.
As one of the ``allowed'' $\Lambda$,
let us consider the following $\Lambda_-$:
\begin{equation}
\ket{\Lambda_{-}(x,\overline{c}_0,\wt{\alpha})}
=i\,\overline{c}_0\left(
\a_{-1}^{(+)\mu}\overline{c}_{-1}^{(-)}-\a_{-1}^{(-)\mu}\overline{c}_{-1}^{(+)}
\right)\ket{0}\zeta_\mu(x,\wt{\alpha}) ,
\label{eq:Lambda_-}
\end{equation}
with $\zeta_\mu$ linear in $x^\mu$,
\begin{equation}
\zeta_\mu(x,\wt{\alpha})=a_{\mu\nu}x^\nu .
\label{eq:zeta_mu}
\end{equation}
The gauge transformation (\ref{eq:dL_Phi}) for this $\Lambda_-$
induces the shift
$\delta_{\Lambda_-}B_{\mu\nu}
=\partial_\mu\zeta_\nu-\partial_\nu\zeta_\mu + \ldots$
on massless anti-symmetric tensor $B_{\mu\nu}$ contained in
$\Phi$ (see Sec.\ \ref{sec:sigma}).
Since $\Lambda_-$ (\ref{eq:Lambda_-}) does not depend on
$\wt{\alpha}$, which implies that it has vanishing string-length
$\alpha=0$, the star product $\Psi\star\Lambda_-$ for any string
functional $\Psi$ is expressed as
\begin{equation}
\ket{\Psi\star\Lambda_{-}}
=\frac{1}{2}\, a_{\mu\nu}{\cal D}_{-}^{\mu\nu}\ket{\Psi} ,
\label{eq:Psi*L-}
\end{equation}
in terms of the linear anti-hermitian operator ${\cal D}_{-}^{\mu\nu}$
given by
\begin{eqnarray}
&&{\cal D}_{-}^{\mu\nu}=-i\int_0^{2\pi}\!\!d\sigma\,
\Drv{X^{\mu}(\sigma)}{\sigma}X^{\nu}(\sigma)
-\eta^{\mu\nu}\int_0^{2\pi}\!\!d\sigma\,
i\pi_c(\sigma)|_{\rm oscl}\cdot{}i\pi_{\overline{c}}(\sigma)|_{\rm oscl}
\nonumber\\
&&=-\frac{1}{2}\sum_{n\ge 1}\frac{1}{n}\left(
\a_{-n}^{(+)\mu}\a_{n}^{(+)\nu}
-\a_{-n}^{(-)\mu}\a_{n}^{(-)\nu}
+\a_{n}^{(+)\mu}\a_{n}^{(-)\nu}
-\a_{-n}^{(+)\mu}\a_{-n}^{(-)\nu}
-[\mu\leftrightarrow\nu]
\right)
\nonumber\\
&&\qquad+ \mbox{(ghost coordinates part)} ,
\label{eq:Dminus}
\end{eqnarray}
where $i\pi_c(\sigma)\vert_{\rm oscl}$ denotes the non-zero mode part
of $i\pi_c(\sigma)$.
Details of the derivation of (\ref{eq:Psi*L-}) are presented in
Appendix \ref{app:star}.
The first term of ${\cal D}_{-}^{\mu\nu}$,
$-i\int_0^{2\pi}d\sigma\left(dX^\mu/d\sigma\right)
\zeta_\mu\left(X\right)$,
is a geometrically natural one.
We do not know the intuitive interpretation of the ghost coordinate
part of ${\cal D}_{-}^{\mu\nu}$.
Applying eq.\ (\ref{eq:Psi*L-}) for $\Psi=B(F)$, we obtain
\begin{eqnarray}
&&\ket{B(F)\star\Lambda_{-}}
=\Biggl\{-\frac{1}{2}\zeta(0)\,\mathop{\rm tr}\left[\left(a-\tp{a}\right)\frac{1}{1+F}\right]
\nonumber\\
&&\hspace*{3.5cm}
+\sum_{n\ge 1}\frac{1}{n}\a_{-n}^{(+)\mu}
\left[\frac{1}{1+F}\left(a-\tp{a}\right)\frac{1}{1+F}\right]_{\mu\nu}
\a_{-n}^{(-)\nu}\Biggr\}\ket{B(F)} ,
\label{eq:B*L-}
\end{eqnarray}
where $\tp{a}$ is the transposition of matrix $a$, and
$\zeta(0)$ is the value of the zeta-function at the origin;
$\zeta(0)=\sum_{n=1}^\infty 1$.
In deriving eq.\ (\ref{eq:B*L-}) we have used the fact that
$\ket{B(F)}$ is annihilated by the ghost coordinates part of
${\cal D}_{-}^{\mu\nu}$ due to (\ref{eq:pi_c_B=0}).
We would like to determine the transformation law $\delta_{\Lambda_-} F$ of the
(constant) field strength $F$ under the present gauge transformation.
$\delta_{\Lambda_-} F$ should satisfy (\ref{eq:dL_B}), namely,
\begin{equation}
\delta_{\Lambda_-}\ket{B(F)}=2 \ket{B(F)\star\Lambda_-} .
\label{eq:dLm_B}
\end{equation}
It is easily seen that, if such a $\delta_{\Lambda_-} F$ exists, it should be given
by
\begin{equation}
\delta_{\Lambda_-} F_{\mu\nu}=\partial_\nu\zeta_\mu-\partial_\mu\zeta_\nu
=a_{\mu\nu}-a_{\nu\mu} ,
\label{eq:dLm_F}
\end{equation}
since the $\a_{-n}^{(+)}\a_{-n}^{(-)}$ term on the RHS of
(\ref{eq:B*L-}) is nothing but the variation of the exponent of
$\ket{\B{N}(F)}$ under this $\delta_{\Lambda_-}$ (\ref{eq:dLm_F}).
In order for eq.\ (\ref{eq:dLm_B}) to be satisfied completely,
the first term on the RHS of eq.\ (\ref{eq:B*L-}) must be equal to
$\delta_{\Lambda_-}\ln N(F)$:
\begin{equation}
\delta_{\Lambda_-}\ln N(F)= - \zeta(0)\,\mathop{\rm tr}\left[\left(a-\tp{a}\right)\frac{1}{1+F}\right] ,
\label{eq:dLm_lnN(F)}
\end{equation}
This fixes $N(F)$ to be
\begin{equation}
N(F)=\frac{T_p}{4}\left[\det\left(1+F\right)\right]^{-\zeta(0)} ,
\label{eq:N(F)}
\end{equation}
where $T_p$ is a constant, and the factor $1/4$ is for the convenience
of the comparison with the $\sigma$-model approach in Sec.\
\ref{sec:sigma}.
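Indeed, using the variation of the log-determinant,
$\delta\ln\det\left(1+F\right)
=\mathop{\rm tr}\left[\left(1+F\right)^{-1}\delta F\right]$,
together with the transformation rule (\ref{eq:dLm_F}), the ansatz
(\ref{eq:N(F)}) reproduces eq.\ (\ref{eq:dLm_lnN(F)}):
\begin{displaymath}
\delta_{\Lambda_-}\ln N(F)
=-\zeta(0)\,\mathop{\rm tr}\left(\frac{1}{1+F}\,\delta_{\Lambda_-} F\right)
=-\zeta(0)\,\mathop{\rm tr}\left[\left(a-\tp{a}\right)\frac{1}{1+F}\right] .
\end{displaymath}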
Thus we have determined $\delta_{\Lambda_-} F$ as well as the front factor $N(F)$
which satisfy eq.\ (\ref{eq:dLm_B}).
The form of the front factor (\ref{eq:N(F)}) agrees with the one
determined by different arguments \cite{CLNY3}.
\subsection{Born-Infeld action}
One might think that the invariance of the system $S_0+S_{\rm source}$
under the gauge transformation $\delta_{\Lambda_-}$ has been established since we
have eqs.\ (\ref{eq:QB_B(F)=0}) and (\ref{eq:dLm_B}). However, this
is not the case due to the fact that $Q_{\rm B} B(F)=0$ does not ensure
$B(F)\cdot\overrightarrow{Q_{\rm B}}\Lambda_-=0$ for the present $\Lambda_-$
(the right-arrow over $Q_{\rm B}$ indicates that it should operate on the
right). In fact, we have
\begin{equation}
B(F)\cdot\overrightarrow{Q_{\rm B}}\Lambda_-
=\int\!dz_0\bra{B(F)}\overrightarrow{Q_{\rm B}}\ket{\Lambda_-}
=2 V_{p+1}V_{\wt{\alpha}}\,
\mathop{\rm tr}\left[\left(a-\tp{a}\right)\frac{1}{1+F}\right]N(F)
\ne 0 ,
\label{eq:B_QB_L-}
\end{equation}
where $V_{p+1}=\int\!d^{p+1}x$ is the space-time volume of the
D-$p$-brane and $V_{\wt{\alpha}}=\int\!d\wt{\alpha}$ is the
volume of the $\wt{\alpha}$-space.
The impossibility of reversing the direction of the arrow over $Q_{\rm B}$
in (\ref{eq:B_QB_L-}) may be explained as follows.
Note that the part of $Q_{\rm B}$ contributing to $Q_{\rm B}\ket{\Lambda_-}$ is
$i\sum_\pm
\left(c_{-1}^{(\pm)}\a_{1}^{(\pm)\nu}+c_{1}^{(\pm)}\a_{-1}^{(\pm)\nu}
\right)\!\partial/\partial x^\mu$.
Since $\Lambda_-\propto x$, we have, extracting the $x$-part of the
inner product,
\begin{equation}
B(F)\cdot\overrightarrow{Q_{\rm B}}\Lambda_-
\sim\int\! dx\, 1\cdot\frac{\overrightarrow\partial}{\partial x}\, x
\ne
-\int\! dx\, 1\cdot\frac{\overleftarrow\partial}{\partial x}\, x
\sim B(F)\cdot\overleftarrow{Q_{\rm B}}\Lambda_- =0 .
\label{eq:symbolic}
\end{equation}
To achieve the full gauge invariance under $\delta_{\Lambda_-}$, we have to add
to $S_0 + S_{\rm source}$ a purely $F$-dependent term $I(F)$ whose $\delta_{\Lambda_-}$ transformation
cancels (\ref{eq:B_QB_L-}):
\begin{equation}
\delta_{\Lambda_-} I(F) + B(F)\cdot\overrightarrow{Q_{\rm B}}\Lambda_- =0 .
\label{eq:dLm_I+B_QB_L-=0}
\end{equation}
In view of eq.\ (\ref{eq:dLm_lnN(F)}), the desired $I(F)$ satisfying
eq.\ (\ref{eq:dLm_I+B_QB_L-=0}) is seen to be given by
\begin{equation}
I(F)=\frac{2 V_{p+1}V_{\wt{\alpha}}}{\zeta(0)}N(F)
=\frac{2}{\zeta(0)}\int\! d^{p+1}x\!\int\! d\wt{\alpha}\,N(F) .
\label{eq:I(F)}
\end{equation}
This is nothing but the Born-Infeld action if we adopt the
zeta-function regularization $\zeta(0)=-1/2$.
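Explicitly, with $\zeta(0)=-1/2$, eqs.\ (\ref{eq:N(F)}) and
(\ref{eq:I(F)}) become
\begin{displaymath}
N(F)=\frac{T_p}{4}\,\sqrt{\det\left(1+F\right)}\, ,
\qquad
I(F)=-T_p\int\! d^{p+1}x\!\int\! d\wt{\alpha}\,
\sqrt{\det\left(1+F\right)}\, ,
\end{displaymath}
which is indeed of the Born-Infeld form.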
\subsection{Total action}
Summarizing the results of the previous subsections, the final
expression of our closed SFT system coupled to a D-brane is described
by the action $S_{\rm tot}[\Phi,F]$,
\begin{equation}
\S{tot}[\Phi,F]=S_0[\Phi] + B(F)\cdot\Phi + I(F) .
\label{eq:Stot}
\end{equation}
$\S{tot}$ is invariant under the gauge transformation $\delta_{\Lambda_-}$
with $\Lambda_-$ given by (\ref{eq:Lambda_-}) and (\ref{eq:zeta_mu}).
The transformation rules of $\Phi$ and $F$ are given respectively by
eqs.\ (\ref{eq:dL_Phi}) and (\ref{eq:dLm_F}).
\section{Linear coordinate transformation}
\label{sec:dLp}
As another gauge transformation which is closed within the constant
$F_{\mu\nu}$, let us consider the one corresponding to the linear
coordinate transformation,
$x^\mu\to x^\mu - \xi^\mu(x,\wt{\alpha})$ with
\begin{equation}
\xi^\mu(x,\wt{\alpha})=b^\mu{}_\nu\,x^\nu .
\label{eq:xi^mu}
\end{equation}
Such a coordinate transformation is generated by
\begin{equation}
\ket{\Lambda_{+}(x,\overline{c}_0,\wt{\alpha})}
=i\,\overline{c}_0\left(
\a_{-1\,\mu}^{(+)}\overline{c}_{-1}^{(-)}+\a_{-1\,\mu}^{(-)}\overline{c}_{-1}^{(+)}
\right)\ket{0}\xi^\mu(x,\wt{\alpha}) ,
\label{eq:Lambda_+}
\end{equation}
which is symmetric with respect to the left- and the right-moving
oscillators. Similarly to (\ref{eq:Psi*L-}), we have for any $\Psi$
(see Appendix \ref{app:star}),
\begin{equation}
\ket{\Psi\star\Lambda_{+}}
=\frac{1}{2}\, b_{\mu\nu}{\cal D}_{+}^{\mu\nu}\ket{\Psi} ,
\label{eq:Psi*L+}
\end{equation}
where ${\cal D}_{+}^{\mu\nu}$ is a linear operator given by
\begin{eqnarray}
&&{\cal D}_{+}^{\mu\nu}=
\frac{i}{2}\int_0^{2\pi}\!\!d\sigma
\Bigl\{X^\nu(\sigma),P^\mu(\sigma)\Bigr\}
- \eta^{\mu\nu}\left(
\frac{1}{2}\left\{\wt{\alpha},\Pdrv{}{\wt{\alpha}}\right\}
+{\cal G}\right) .
\label{eq:Dplus}
\end{eqnarray}
On the RHS of eq.\ (\ref{eq:Dplus}), the oscillator expression of the
first term is
\begin{eqnarray}
&&\frac{i}{2}
\int_0^{2\pi}\!d\sigma\Bigl\{X^\nu(\sigma),P^\mu(\sigma)\Bigr\}
=\frac{1}{2}\left\{x^\nu,\Pdrv{}{x_\mu}\right\}
\nonumber\\
&&\qquad
-\frac{1}{2}\sum_{\pm}\sum_{n\ge 1}\frac{1}{n}\left(
\a_{-n}^{(\pm)\mu}\a_{n}^{(\pm)\nu}-\a_{-n}^{(\pm)\nu}\a_{n}^{(\pm)\mu}
+\a_{n}^{(\pm)\mu}\a_{n}^{(\mp)\nu}-\a_{-n}^{(\pm)\mu}\a_{-n}^{(\mp)\nu}
\right) ,
\label{eq:int_PX}
\end{eqnarray}
while the second term ${\cal G}$ consists solely of the ghost coordinates:
\begin{eqnarray}
&&{\cal G}=
\frac{1}{2}\int_0^{2\pi}\!\!d\sigma
\Bigl[\overline{c}|_{\rm oscl}(\sigma), i\pi_{\overline{c}}|_{\rm oscl}(\sigma)\Bigr]
+ \widetilde{N}_{\rm gh}
\nonumber\\
&&\phantom{{\cal G}}
=\frac{1}{2}\sum_\pm\sum_{n\ge 1}\left(
\overline{c}_n^{(\pm)}c_n^{(\mp)}+\overline{c}_{-n}^{(\pm)}c_{-n}^{(\mp)}
\right)+\frac{1}{2}\widetilde{N}_{\rm gh} ,
\label{eq:G}
\end{eqnarray}
with $\widetilde{N}_{\rm gh}$ being the oscillator part of the ghost
number operator (\ref{eq:wtNFP}).
Note that the first term of ${\cal D}_{+}^{\mu\nu}$,
$(1/2)\int_0^{2\pi}d\sigma\left\{X^\nu,\delta/\delta X_\mu\right\}$,
is in fact the operator of linear coordinate transformation.
For $b^\mu{}_\nu$ with non-vanishing trace,
$b_{\mu\nu}{\cal D}_{+}^{\mu\nu}$
also contains the $\wt{\alpha}$ and the ghost coordinate parts,
whose geometrical interpretation is not clear to us.
Applying eq.\ (\ref{eq:Psi*L+}) to $\Psi=B(F)$, we have
\begin{eqnarray}
&&2\ket{B(F)\star\Lambda_{+}}
=\Biggl\{\frac{1}{2}\,
\zeta(0)\,\mathop{\rm tr}\left[\left(b+\tp{b}\right)\left(\frac{1-F}{1+F}+1\right)\right]
\nonumber\\
&&\qquad\quad
+ 2\sum_{n\ge 1}\frac{1}{n}\a_{-n}^{(+)\mu}\left[
\frac{1}{1+F}\left(\tp{b}F + Fb\right)\frac{1}{1+F}\right]_{\mu\nu}
\a_{-n}^{(-)\nu}
\Biggr\}\ket{B(F)} .
\label{eq:B*L+}
\end{eqnarray}
In deriving eq.\ (\ref{eq:B*L+}), we have used in particular that
\begin{equation}
{\cal G}\ket{B(F)}= -\zeta(0)\ket{B(F)} ,
\label{eq:G_B(F)}
\end{equation}
and that the zero-mode part of ${\cal D}_{+}^{\mu\nu}$,
\begin{eqnarray}
&&{\cal D}_+^{\mu\nu}\Big|_{\rm 0-modes}
=\frac{1}{2}\left\{x^\nu,\Pdrv{}{x_\mu}\right\}
-\frac{1}{2}\eta^{\mu\nu}\left\{\wt{\alpha},\Pdrv{}{\wt{\alpha}}\right\}
=x^\nu\Pdrv{}{x_\mu}-\eta^{\mu\nu}\wt{\alpha}\Pdrv{}{\wt{\alpha}} ,
\label{eq:calDplus_0}
\end{eqnarray}
annihilates $\ket{B(F)}$ since it depends on neither $x^\mu$ nor
$\wt{\alpha}$ for a constant $F$.
Our next task is to identify the transformation rule of $F$ under the
present gauge transformation $\delta_{\Lambda_+}$. The equation for determining
$\delta_{\Lambda_+} F$ is
\begin{equation}
\delta_{\Lambda_+}\ket{B(F)}= 2 \ket{B(F)\star\Lambda_+} .
\label{eq:dLp_B}
\end{equation}
Since $\delta_{\Lambda_+}$ corresponds to the linear coordinate transformation, it
is natural to take
\begin{equation}
\delta_{\Lambda_+} F_{\mu\nu}=\partial_\mu\xi^\lambda F_{\lambda\nu}
+ \partial_\nu\xi^\lambda F_{\mu\lambda} + \xi^\lambda\partial_\lambda F_{\mu\nu}
= b^\lambda{}_\mu F_{\lambda\nu}
+ b^\lambda{}_\nu F_{\mu\lambda} ,
\label{eq:dLp_F}
\end{equation}
or equivalently $\delta_{\Lambda_+} F = \tp{b}\,F+F b$ in matrix notation.
However, since under (\ref{eq:dLp_F}) we have
\begin{eqnarray}
&&\delta_{\Lambda_+}\ket{B(F)}=\Biggl\{
\frac{1}{2}\,\zeta(0)\mathop{\rm tr}\left[
\left(b + \tp{b}\right)\left(\frac{1-F}{1+F} -1\right)\right]
\nonumber\\
&&\hspace*{3cm}
+ 2\sum_{n\ge 1}\frac{1}{n}\a_{-n}^{(+)\mu}\left[
\frac{1}{1+F}\left(\tp{b}F + Fb\right)\frac{1}{1+F}\right]_{\mu\nu}
\a_{-n}^{(-)\nu}
\Biggr\}\ket{B(F)} ,
\label{eq:dLp_B(F)}
\end{eqnarray}
eq.\ (\ref{eq:dLp_B}) holds only for a traceless $b$ satisfying
$b^\mu{}_\mu=0$ due to disagreement between the oscillator independent
terms of eqs.\ (\ref{eq:B*L+}) and (\ref{eq:dLp_B(F)}).\footnote{
The tracelessness restriction $b^\mu{}_\mu=0$ persists even if we take
into account the change of the measure $dz_0$ since the variations of
$d^{p+1}x$ and $d\wt{\alpha}$ cancel each other as seen from
eq.\ (\ref{eq:calDplus_0}).
}
Note that the first term on the RHS of (\ref{eq:dLp_B(F)}) is the
contribution of $\delta_{\Lambda_+}\ln N(F)$:
\begin{equation}
\delta_{\Lambda_+}\ln N(F)=-\zeta(0)\,\mathop{\rm tr}\left(\frac{1}{1+F}\,\delta_{\Lambda_+} F\right)
=\frac{1}{2}\,\zeta(0)\,\mathop{\rm tr}\left[
\left(b + \tp{b}\right)\left(
\frac{1-F}{1+F} -1\right)\right] .
\label{eq:dLp_lnN(F)}
\end{equation}
To establish the invariance under $\delta_{\Lambda_+}$, we also have to confirm
\begin{equation}
\delta_{\Lambda_+} I(F) + B(F)\cdot\overrightarrow{Q_{\rm B}}\Lambda_+ =0 ,
\label{eq:dLp_I+B_QB_L+=0}
\end{equation}
for $I(F)$ of eq.\ (\ref{eq:I(F)}).
(Note that, since $\Lambda_+$ is proportional to $x$, reversing the
direction of the operation of $Q_{\rm B}$ in the inner product
$B(F)\cdot\overrightarrow{Q_{\rm B}}\Lambda_+$ is not allowed as in the case
of $\Lambda_-$.)
Using eq.\ (\ref{eq:dLp_lnN(F)}) and
\begin{equation}
B(F)\cdot\overrightarrow{Q_{\rm B}}\Lambda_+
=\int\!dz_0\bra{B(F)}\overrightarrow{Q_{\rm B}}\ket{\Lambda_+}
= - V_{p+1}\,V_{\wt{\alpha}}\,
\mathop{\rm tr}\left[\left(b+\tp{b}\right)\left(\frac{1-F}{1+F} +1\right)\right]N(F) ,
\label{eq:B_QB_L+}
\end{equation}
we find that eq.\ (\ref{eq:dLp_I+B_QB_L+=0}) holds for traceless
$b$.
\section{Tilting the D-brane}
\label{sec:tilt}
So far we have considered a D-$p$-brane fixed at $x^i=0$
($i=p+1,\cdots,d-1$).
In this section we shall allow the D-brane to ``tilt'',
namely, consider a D-brane
\begin{equation}
x^i=\theta_\mu^i x^\mu ,
\label{eq:tilted_D-brane}
\end{equation}
specified by $\theta_\mu^i$ ($\theta_\mu^i$ with $\mu\ne 0$ is the tilt
angle of the D-brane and $\theta_0^i$ is its velocity).
The boundary state $\ket{B(F,\theta)}$ corresponding to such a tilted
D-brane is obtained from $\ket{B(F)}$
of the previous sections as\footnote{
Note that $\ket{B(F)}$ itself can be expressed as
$\ket{B(F)}=\exp\left(
\frac{1}{2}F_{\mu\nu}{\cal D}_{-}^{\mu\nu}\right)\ket{B(F=0)}$.
}
\begin{equation}
\ket{B(F,\theta)}=U(\theta)\ket{B(F)} ,
\label{eq:B=UB}
\end{equation}
where $U(\theta)$ is a unitary operator,
\begin{equation}
U(\theta)=\exp\left(-\theta_\mu^i{\cal D}_{+\,i}{}^\mu\right) ,
\label{eq:U}
\end{equation}
with ${\cal D}_{+\,i}{}^\mu$ given by
\begin{equation}
{\cal D}_{+\,i}{}^\mu = i\int_0^{2\pi}\!d\sigma
X^\mu(\sigma)P_i(\sigma) .
\label{eq:calD_t}
\end{equation}
In fact, $U(\theta)$ effects the following transformation on the
string coordinates and their conjugates,
\begin{equation}
U(\theta)\pmatrix{X^\mu\cr P_\mu\cr X^i\cr P_i} U(\theta)^{-1}
=\pmatrix{X^\mu\cr P_\mu + \theta_\mu^j P_j\cr
X^i - \theta_\nu^i X^\nu\cr P_i} ,
\label{eq:U(P,X)U}
\end{equation}
and hence $\ket{B(F,\theta)}$ (\ref{eq:B=UB}) satisfies,
instead of eqs.\ (\ref{eq:X_B=0}) and (\ref{eq:(P-FX)B=0}), the
following two:
\begin{eqnarray}
&&\left(X^i(\sigma)-\theta_\nu^i X^\nu(\sigma)
\right)\ket{B(F,\theta)}=0 ,
\label{eq:(X-fX)B=0}\\
&&\left(P_\mu(\sigma) + \theta_\mu^j P_j(\sigma)
- F_{\mu\nu}(dX^\nu(\sigma)/d\sigma)
\right)\ket{B(F,\theta)}=0 .
\label{eq:(P+fP-FX')B=0}
\end{eqnarray}
Our new boundary state $\ket{B(F,\theta)}$ is also BRST invariant:
\begin{equation}
Q_{\rm B}\ket{B(F,\theta)}=0 .
\label{eq:QB_B(F,tlt)=0}
\end{equation}
This is easily seen by noting that $P_M(\sigma)X'^M(\sigma)$
contained in $Q_{\rm B}$ is invariant under the transformation of $U(\theta)$
(recall eq.\ (\ref{eq:QB_rough})):
\begin{equation}
U(\theta)\, P_M X'^M\,U(\theta)^{-1}= P_M X'^M .
\label{eq:UdXPU}
\end{equation}
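Indeed, splitting $P_M X'^M=P_\mu X'^\mu + P_i X'^i$ and using
(\ref{eq:U(P,X)U}), the two $\theta$-dependent terms cancel each other:
\begin{equation}
U(\theta)\, P_M X'^M\, U(\theta)^{-1}
=\left(P_\mu + \theta_\mu^j P_j\right)X'^\mu
+ P_i\left(X'^i - \theta_\nu^i X'^\nu\right)
= P_M X'^M .
\end{equation}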
We would like to repeat the construction of the ``closed SFT + D-brane''
system of Sec.\ \ref{sec:dLm} by taking $F_{\mu\nu}$ {\em and}
$\theta_\mu^i$ as dynamical variables associated with the D-brane.
The total action of this system should be given by
\begin{equation}
\S{tot}[\Phi,F,\theta]=S_0[\Phi] + B(F,\theta)\cdot\Phi + I(F,\theta) ,
\label{eq:Stot(F,tlt)}
\end{equation}
and we shall determine $I(F,\theta)$ and a possible
$(F,\theta)$-dependent factor multiplied on $\ket{B(F,\theta)}$
so that the system has gauge invariance as before.
Let us consider the gauge transformation $\delta_{\Lambda_-}$.
$B(F,\theta)$ and $I(F,\theta)$ have to satisfy the following two
conditions:
\begin{eqnarray}
&&\delta_{\Lambda_-}\ket{B(F,\theta)}=2\ket{B(F,\theta)\star\Lambda_-}
=a_{\mu\nu}{\cal D}_{-}^{\mu\nu}\ket{B(F,\theta)} ,
\label{eq:dLm_B(F,tlt)}\\
&&\delta_{\Lambda_-} I(F,\theta)+B(F,\theta)\cdot\overrightarrow{Q_{\rm B}}\Lambda_- =0 .
\label{eq:dLm_I(F,tlt)+B(F,tlt)_QB_L-=0}
\end{eqnarray}
{}From (\ref{eq:dLm_B(F,tlt)}) we find that the transformation rule of
$F$ is as before and $\theta$ is inert under $\delta_{\Lambda_-}$,
\begin{equation}
\delta_{\Lambda_-} F_{\mu\nu}= a_{\mu\nu} - a_{\nu\mu} ,
\quad
\delta_{\Lambda_-}\theta_\mu^i =0 ,
\label{eq:dLm_(F,tlt)}
\end{equation}
since ${\cal D}_{-}^{\mu\nu}$ (\ref{eq:Dminus}) commutes with $U(\theta)$,
$\left[U(\theta), {\cal D}_{-}^{\mu\nu}\right]=0$.
We need no extra $(F,\theta)$-dependent factor multiplying
$\ket{B(F,\theta)}$ of eq.\ (\ref{eq:B=UB}).
Our next task is the determination of $I(F,\theta)$ satisfying
(\ref{eq:dLm_I(F,tlt)+B(F,tlt)_QB_L-=0}). We shall do this in a manner
different from Sec.\ \ref{sec:dLm}.
For this purpose, observe that
\begin{equation}
\overline{c}_0\bra{0}a_{\mu\nu}{\cal D}_{-}^{\mu\nu}\ket{B(F,\theta)}
=-\frac{1}{2}\zeta(0)\bra{\Lambda_-}\overleftarrow{Q_{\rm B}}\ket{B(F,\theta)} .
\label{eq:Obs}
\end{equation}
This is easily understood by noticing that ${\cal D}_-^{\mu\nu}$ and the
exponent of $\ket{B(F,\theta)}$ are given as sums over the oscillator
level number $n$, and that each term in the $n$-summation in
${\cal D}_{-}^{\mu\nu}$ (\ref{eq:Dminus}) gives equal contribution to
the LHS of (\ref{eq:Obs}).
Then, comparing $\braket{0}{\,\mbox{eq.\ (\ref{eq:dLm_B(F,tlt)})}}$
and (\ref{eq:dLm_I(F,tlt)+B(F,tlt)_QB_L-=0}), we find that the desired
$I(F,\theta)$ is given by
\begin{equation}
I(F,\theta)=\frac{2}{\zeta(0)}\int\! d^{d}x\int\! d\wt{\alpha}
\braket{0}{B(F,\theta)} .
\label{eq:I=braket0B}
\end{equation}
To calculate the inner product
$\braket{0}{B(F,\theta)}=\bra{0}U(\theta)\ket{B(F)}$, let us express
$U(\theta)$ (\ref{eq:U}) as
\begin{equation}
U(\theta)=
e^{-i\theta_\mu^i x^\mu p_i} e^{-u^\dagger(\theta)} e^{u(\theta)} ,
\label{eq:U=e^ue^u}
\end{equation}
with $u(\theta)$ given by
(note that $\left[u(\theta),u^\dagger(\theta)\right]=0$)
\begin{equation}
u(\theta)=\frac{1}{2}\,\theta_\mu^i\sum_{n\ge 1}\frac{1}{n}
\left(\a_{-n}^{(+)} + \a_n^{(-)}\right)_i
\left(\a_n^{(+)} - \a_{-n}^{(-)}\right)^\mu .
\label{eq:u}
\end{equation}
Then, using eqs.\ (\ref{eq:X^i_B=0_oscl}) and
(\ref{eq:(P-FX)B=0_oscl}) to express the annihilation operators in
$u(\theta)$ in terms of the creation ones and making use of the formula,
\begin{equation}
\bra{0}\exp\left(\frac{1}{2} a_a M_{ab} a_b\right)
\exp\left(\frac{1}{2} a_a^\dagger N_{ab} a_b^\dagger\right)\ket{0}
=\left[\det\left(1 - NM\right)\right]^{-1/2} ,
\label{eq:0expexp0}
\end{equation}
valid for creation/annihilation operators $(a_a^\dagger, a_a)$
with $[a_a,a_b^\dagger]=\delta_{a,b}$,
we obtain
\begin{equation}
\braket{0}{B(F,\theta)}=
N(F)\left[\det\left(
\delta^\mu_{\,\nu} + \frac{1}{2}\left(
\eta^{\mu\lambda}+{\cal O}^{\mu\lambda}\right)
\theta_\lambda^i \theta_\nu^i
\right)\right]^{-\zeta(0)}\!\!
\delta^{d-p-1}\!\left(x^i-\theta_\mu^i x^\mu\right) .
\end{equation}
Therefore, $I(F,\theta)$ is given by
\begin{equation}
I(F,\theta)=\frac{T_p}{2\zeta(0)}\int\! d^{p+1}x\int\! d\wt{\alpha}
\left[-\det\left(\eta_{\mu\nu} + \theta_\mu^i\theta_\nu^i
+ F_{\mu\nu}\right)\right]^{-\zeta(0)} .
\label{eq:I(F,tlt)}
\end{equation}
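As a check of the formula (\ref{eq:0expexp0}), consider a single pair
$(a^\dagger,a)$ with scalars $M=m$ and $N=n$. Expanding both exponentials
and using $\bra{0}a^{2k}(a^\dagger)^{2k}\ket{0}=(2k)!$, only the diagonal
terms survive and
\begin{equation}
\bra{0}\exp\left(\frac{m}{2}\,a^2\right)
\exp\left(\frac{n}{2}\,(a^\dagger)^2\right)\ket{0}
=\sum_{k\ge 0}\left(\frac{mn}{4}\right)^k\frac{(2k)!}{(k!)^2}
=\left(1-mn\right)^{-1/2} ,
\end{equation}
the last equality being the binomial series
$\sum_{k\ge 0}{2k\choose k}x^k=(1-4x)^{-1/2}$ at $x=mn/4$.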
For use in the next section, we also present a more explicit
expression of $\ket{B(F,\theta)}$:
\begin{equation}
\ket{B(F,\theta)}=N\left(F+\theta\tlt\right)
\exp\left\{\sum_{n\ge 1}\frac{1}{n}\a_{-n}^{(+)M}A_{MN}\a_{-n}^{(-)N}
\right\}\ket{0}_{d}
\otimes\ket{\B{gh}}\delta^{d-p-1}\left(x^i-\theta_\mu^i x^\mu\right) ,
\label{eq:B(F,tlt)}
\end{equation}
where $F+\theta\tlt$ is short for $F_{\mu\nu}+\theta_\mu^i\theta_\nu^i$, and
the $d\times d$ matrix $A_{MN}$ is given by
\begin{equation}
\begin{array}{ll}
\displaystyle A_{\mu\nu}=\eta_{\mu\nu}
-\left(2/\left(\wt{\eta}+F\right)\right)_{\mu\nu} ,
&\displaystyle A_{\mu i}=
-\left(2/\left(\wt{\eta}+F\right)\right)_{\mu}^{\;\;\lambda}
\,\theta_\lambda^i ,
\\[10pt]
\displaystyle A_{i\mu}= -\theta_\lambda^i
\left(2/\left(\wt{\eta}+F\right)\right)^\lambda_{\;\;\mu} ,
&\displaystyle A_{ij}=\delta_{ij}-\theta_\mu^i
\left(2/\left(\wt{\eta}+F\right)\right)^{\mu\nu}\theta_\nu^j ,
\end{array}
\label{eq:A_MN}
\end{equation}
with
\begin{equation}
\wt{\eta}(\theta)_{\mu\nu}=\eta_{\mu\nu} + \theta_\mu^i\theta_\nu^i .
\label{eq:wteta}
\end{equation}
$\ket{B(F,\theta)}$ of (\ref{eq:B(F,tlt)}) has the correct
normalization and satisfies the conditions (\ref{eq:(X-fX)B=0}) and
(\ref{eq:(P+fP-FX')B=0}).
Here, we have considered only the gauge transformation $\delta_{\Lambda_-}$.
It is straightforward to confirm the invariances under other gauge
transformations closed within constant $F_{\mu\nu}$ and $\theta_\mu^i$.
For example, under the transformation $\delta_{\Lambda_+}$ of Sec.\ \ref{sec:dLp},
$F_{\mu\nu}$ and $\theta_\mu^i$ should transform as
$\delta_{\Lambda_+} F_{\mu\nu}=b^\lambda{}_\mu F_{\lambda\nu}
+b^\lambda{}_\nu F_{\mu\lambda}$
and $\delta_{\Lambda_+}\theta_\mu^i=b^\nu{}_\mu\theta_\nu^i$, respectively.
Furthermore, the introduction of $\theta_\mu^i$ allows the gauge
transformation $\delta_{\Lambda_t}$ generated by
\begin{equation}
\ket{\Lambda_{t}}=i\,\overline{c}_0\left(
\overline{c}_{-1}^{(+)}\a_{-1\,i}^{(-)} +\overline{c}_{-1}^{(-)}\a_{-1\,i}^{(+)}
\right)\ket{0}b_\mu^i x^\mu .
\label{eq:Lambda_t}
\end{equation}
For this $\Lambda_{t}$ we have
$\ket{\Psi\star\Lambda_{t}}
=\frac{1}{2}\theta_\mu^i{\cal D}_{+\,i}{}^\mu\ket{\Psi}$,
and the action $\S{tot}[\Phi,F,\theta]$ (\ref{eq:Stot(F,tlt)}) is
invariant under $\delta_{\Lambda_t}$ if the transformation law of
$(F_{\mu\nu},\theta_\mu^i)$ is defined by
$\delta_{\Lambda_t} F_{\mu\nu}=0$ and $\delta_{\Lambda_t}\theta_\mu^i=-b_\mu^i$.
\section{Comparison with the $\sigma$-model approach}
\label{sec:sigma}
In this section we shall examine the correspondence between our
``SFT + D-$p$-brane'' system (\ref{eq:Stot(F,tlt)}) and the
$\sigma$-model approach \cite{CLNY1,CLNY2,CLNY3}. We show that the
actions in both approaches coincide with each other to the first
non-trivial orders in the expansion in powers of the massless fields of
the closed string.
First, let us consider the $\sigma$-model approach.
It is known that the low energy dynamics of the system of closed
string coupled to a D-$p$-brane is described by the following effective
action \cite{PolchRev},
\begin{equation}
\S{eff}=\S{bulk} + \S{D} ,
\label{eq:S=S+S}
\end{equation}
with the bulk part $\S{bulk}$ and the D-brane action $\S{D}$ given
respectively by
\begin{eqnarray}
&&\S{bulk}=\frac{1}{g^2}
\int\! d^dx\sqrt{-G}\,e^{-2D}\left\{R + 4\left(\nabla D\right)^2
- \frac{1}{12}H^2\right\} ,
\label{eq:S_bulk}\\[5pt]
&&S_{\rm D}= - T_p\int\!d^{p+1}\sigma\, e^{-D}
\sqrt{-\det\left(
\wt{G}_{\mu\nu}+\wt{B}_{\mu\nu}+F_{\mu\nu}\right)} ,
\label{eq:S_D}
\end{eqnarray}
where $G_{MN}$ and $D$ are the metric and the dilaton fields,
respectively, and $H_{MNP}$ is the field strength of the
anti-symmetric tensor $B_{MN}$, $H=dB$.
In eq.\ (\ref{eq:S_D}), $\wt{G}_{\mu\nu}$ and $\wt{B}_{\mu\nu}$ are
the induced fields on the D-$p$-brane parameterized by the coordinate
$\sigma^\mu$ ($\mu=0,\cdots,p$).
Namely, letting $Y^M(\sigma)$ denote the D-brane space-time
coordinate, the induced metric $\wt{G}_{\mu\nu}(\sigma)$ is
\begin{equation}
\wt{G}_{\mu\nu}(\sigma)=
\partial_\mu Y^M(\sigma)\,\partial_\nu Y^N(\sigma)
\,G_{MN}\left(Y(\sigma)\right) .
\label{eq:wtG}
\end{equation}
The expression of $\wt{B}_{\mu\nu}$ is quite similar.
For comparing (\ref{eq:S=S+S}) with our SFT approach, let us make the
Weyl rescaling,
\begin{equation}
G_{MN}\to e^{4D/(d-2)}G_{MN} ,
\label{eq:Weyl}
\end{equation}
under which the bulk action (\ref{eq:S_bulk}) is reduced to
\begin{equation}
\S{bulk}=\frac{1}{g^2}\int\! d^dx\sqrt{-G}\left\{
R - \frac{4}{d-2}\left(\nabla D\right)^2
- \frac{1}{12}e^{-8D/(d-2)}H^2\right\} .
\label{eq:S_bulk_new}
\end{equation}
As for the D-brane action (\ref{eq:S_D}), we expand it in powers of
the massless fields associated with closed string: $D$, $B_{MN}$ and
the metric fluctuation $h_{MN}$ defined for the Weyl rescaled
$G_{MN}$ by
\begin{equation}
G_{MN}=\eta_{MN}+ h_{MN} .
\label{eq:G=eta+h}
\end{equation}
Adopting the static gauge with $\sigma^\mu=x^\mu$, $Y^M(x)$ for the
tilted D-brane (\ref{eq:tilted_D-brane}) is
\begin{equation}
Y^\mu(x)=x^\mu,\quad Y^i(x)=\theta_\mu^i x^\mu ,
\label{eq:Y}
\end{equation}
and hence the induced metric $\wt{G}_{\mu\nu}$ is given by
\begin{equation}
\wt{G}_{\mu\nu}=\wt{\eta}_{\mu\nu} + \wt{h}_{\mu\nu} ,
\label{eq:wtG=wteta+wth}
\end{equation}
in terms of $\wt{\eta}_{\mu\nu}$ of (\ref{eq:wteta}) and
$\wt{h}_{\mu\nu}$ defined by
\begin{equation}
\wt{h}_{\mu\nu}=h_{\mu\nu}+\theta_\mu^i h_{i\nu}+\theta_\nu^j h_{\mu j}
+ \theta_\mu^i\theta_\nu^j h_{ij} .
\label{eq:wth}
\end{equation}
The induced $\wt{B}_{\mu\nu}$ is also given by (\ref{eq:wth}) with $h$
replaced by $B$.
Then, keeping only the terms independent of and linear in
$\left(h_{MN}, D, B_{MN}\right)$, we have
\begin{equation}
\S{D}|_{\rm linear}
= - T_p\int\!d^{p+1}x\sqrt{-\det\left(\wt{\eta}+F\right)}
\left\{1 - D
+ \frac{2D}{d-2}\mathop{\rm tr}\!\left(\frac{\wt{\eta}}{\wt{\eta}+F}\right)
+ \frac{1}{2}\mathop{\rm tr}\!\left(\frac{\wt{h}+\wt{B}}{\wt{\eta}+F}\right)
\right\} .
\label{eq:S_D_exp}
\end{equation}
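Eq.\ (\ref{eq:S_D_exp}) is obtained from the first order expansion of the
determinant, $\det(M+H)=\det M\left(1+\mathop{\rm tr}(M^{-1}H)+O(H^2)\right)$.
After the Weyl rescaling (\ref{eq:Weyl}) the induced metric reads, to
linear order in the fluctuations,
\begin{equation}
\wt{G}_{\mu\nu}\simeq\wt{\eta}_{\mu\nu}+\wt{h}_{\mu\nu}
+\frac{4D}{d-2}\,\wt{\eta}_{\mu\nu} ,
\end{equation}
and therefore
\begin{equation}
e^{-D}\sqrt{-\det\left(\wt{G}+\wt{B}+F\right)}
\simeq\left(1-D\right)\sqrt{-\det\left(\wt{\eta}+F\right)}
\left\{1+\frac{2D}{d-2}\mathop{\rm tr}\!\left(\frac{\wt{\eta}}{\wt{\eta}+F}\right)
+\frac{1}{2}\mathop{\rm tr}\!\left(\frac{\wt{h}+\wt{B}}{\wt{\eta}+F}\right)\right\} ,
\end{equation}
which reproduces (\ref{eq:S_D_exp}).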
Next, let us consider our SFT approach.
The string field $\Phi$ consists of two parts, $\phi$ and $\psi$:
\begin{equation}
\ket{\Phi}= -\overline{c}_0\ket{\phi} + \ket{\psi} .
\label{eq:Phi=phi+psi}
\end{equation}
They are expanded in terms of the component fields as
follows (we keep only the massless fields):
\begin{eqnarray}
&&\ket{\phi(x)}=\biggl\{
-\frac{1}{2}\hat{h}_{MN}(x)\left(\a^{M}_{-1}\a^{N}_{-1}\right)^{(+-)}
+\frac{1}{2} B_{MN}(x)\left(\a^{M}_{-1}\a^{N}_{-1}\right)^{[+-]}
\nonumber\\
&&\qquad\qquad
-\hat{D}(x)\left(c_{-1}\overline{c}_{-1}\right)^{(+-)}
+f(x)\left(c_{-1}\overline{c}_{-1}\right)^{[+-]}+\ldots \biggr\}\ket{0}
\label{eq:phi}\\[5pt]
&&\ket{\psi(x)}=\frac{i}{2}\left\{
b_M(x)\left(\a^M_{-1}\overline{c}_{-1}\right)^{(+-)}
+e_M(x)\left(\a^M_{-1}\overline{c}_{-1}\right)^{[+-]}
+\ldots \right\}\ket{0} ,
\label{eq:psi}
\end{eqnarray}
where we have used the (anti-)symmetrization symbol,
\begin{equation}
\left(ab\right)^{(+-)}\equiv
a^{(+)}b^{(-)}+a^{(-)}b^{(+)},\quad
\left(ab\right)^{[+-]}\equiv
a^{(+)}b^{(-)}-a^{(-)}b^{(+)} .
\label{eq:(anti)symm}
\end{equation}
In this section we omit the $\wt{\alpha}$-dependence of the fields.
Similarly, the component expansion of the gauge transformation
functional $\Lambda$ in (\ref{eq:dL_Phi}) is given as
\begin{eqnarray}
&&\ket{\Lambda}=i\overline{c}_0\left\{
\xi_M(x)\left(\a_{-1}^M\overline{c}_{-1}\right)^{(+-)}
+\zeta_M(x)\left(\a_{-1}^M\overline{c}_{-1}\right)^{[+-]}+\ldots\right\}\ket{0}
\nonumber\\
&&\qquad\qquad
+\,\eta(x)\,\overline{c}_{-1}^{(+)}\overline{c}_{-1}^{(-)}\ket{0}
+ \ldots\ .
\label{eq:Lambda}
\end{eqnarray}
In the following we shall consider only the lowest non-trivial parts
of $\S{tot}$ (\ref{eq:Stot}) in the power series expansion in the
closed string massless component fields.
Therefore, in $S_0$ (\ref{eq:Sz}) we keep only the
kinetic term $(1/2g^2)\Phi\cdot Q_{\rm B}\Phi$.
Then, after integrating out the auxiliary fields $b_M$ and $e_M$ and
gauging $f$ away by using the gauge freedom of $\eta$ in
(\ref{eq:Lambda}), we obtain
\begin{equation}
\frac{1}{2g^2}\left.\Phi\cdot Q_{\rm B}\Phi\right|_{\rm massless}
=\frac{1}{g^2}\int\!d^dx\left\{
\left(\sqrt{-G}\,R\right)_{\rm quadratic}
- \frac{4}{d-2}\left(\partial D\right)^2 - \frac{1}{12}H_{MNP}H^{MNP}
\right\} ,
\label{eq:PhiQBPhi}
\end{equation}
where we have reexpressed $\hat{h}_{MN}$ and $\hat{D}$ in
(\ref{eq:phi}) in terms of new $h_{MN}$ and $D$ as
\begin{equation}
\hat{h}_{MN}=h_{MN} +\frac{4}{d-2}D\,\eta_{MN},\quad
\hat{D}=\frac{4}{d-2}D + \frac{1}{2} h_M{}^M .
\label{eq:hat-nonhat}
\end{equation}
For the first term on the RHS of (\ref{eq:PhiQBPhi}), we have
used the formula
\begin{equation}
\left(\sqrt{-G}\,R\right)_{\rm quadratic}
=\frac{1}{4}h^{MN}\left(\Box h_{MN}-2\partial_N\partial^P h_{MP}
+2\partial_M\partial_Nh^P_{\;P} -\eta_{MN}\Box h^P_{\;P}\right) .
\label{eq:sqrtG_R}
\end{equation}
We see that eq.\ (\ref{eq:PhiQBPhi}) coincides with the part of
$\S{bulk}$ (\ref{eq:S_bulk_new}) quadratic in the fluctuations
$(h_{MN},D,B_{MN})$.
Then, let us consider the D-$p$-brane parts of $\S{tot}$
(\ref{eq:Stot(F,tlt)}).
Using eq.\ (\ref{eq:B(F,tlt)}) and keeping only the
massless component fields in $\Phi$, we obtain
\begin{eqnarray}
&&B(F,\theta)\cdot\Phi|_{\rm massless}
=2\int\!d^{p+1}x N(F+\theta\tlt)\left\{\frac{1}{2}\hat{h}_M{}^M
-\mathop{\rm tr}\!\left(\frac{\wt{\hat{h}}+\wt{B}}{\wt{\eta}+F}\right)
-\hat{D}\right\}
\nonumber\\
&&=-4\int\!d^{p+1}x\,N(F+\theta\tlt)\left\{
- D + \frac{2D}{d-2}\mathop{\rm tr}\!\left(\frac{\wt{\eta}}{\wt{\eta}+F}\right)
+ \frac{1}{2}\mathop{\rm tr}\!\left(\frac{\wt{h}+\wt{B}}{\wt{\eta}+F}\right)
\right\} ,
\end{eqnarray}
where $\wt{\hat{h}}_{\mu\nu}$ is defined by (\ref{eq:wth}) with $h$
replaced by $\hat{h}$.
Adopting the zeta function regularization $\zeta(0)=-1/2$, we find that
$B(F,\theta)\cdot\Phi|_{\rm massless} + I(F,\theta)$ in the SFT
approach indeed coincides with $\S{D}|_{\rm linear}$
(\ref{eq:S_D_exp}) for a common $T_p$.
Finally, we shall mention the gauge transformation properties of
the component fields.
The massless component fields appearing in (\ref{eq:PhiQBPhi})
transform under $\delta_\Lambda|_{\rm free}\Phi\equiv Q_{\rm B}\Lambda$ as
\begin{eqnarray}
&&\delta_\Lambda|_{\rm free} h_{MN}=\partial_M\xi_N+\partial_N\xi_M ,
\nonumber\\
&&\delta_\Lambda|_{\rm free} B_{MN}=\partial_M\zeta_N-\partial_N\zeta_M ,
\nonumber\\
&&\delta_\Lambda|_{\rm free} D=0 ,
\label{eq:dL_free}
\end{eqnarray}
and the free action (\ref{eq:PhiQBPhi}) is in fact invariant under
(\ref{eq:dL_free}). The transformation rule of the induced field
$\wt{B}_{\mu\nu}=B_{\mu\nu}+\theta_\mu^i B_{i\nu}+\theta_\nu^j B_{\mu j}
+\theta_\mu^i\theta_\nu^j B_{ij}$ under $\zeta_\mu$ of (\ref{eq:zeta_mu})
and $\zeta_i=0$ is
$\delta_{\Lambda_-}\wt{B}_{\mu\nu}=\partial_\mu\zeta_\nu-\partial_\nu\zeta_\mu=
a_{\nu\mu}-a_{\mu\nu}$ as is expected from the fact that $\wt{B}$ and
$F_{\mu\nu}$ appear in $\S{D}$ (\ref{eq:S_D}) in the combination
$\wt{B}_{\mu\nu}+F_{\mu\nu}$.
We have seen the equivalence between our SFT approach and the
$\sigma$-model approach to the first non-trivial orders in the
expansion in powers of the massless closed string fields.
To discuss the equivalence to higher orders, we have to carry out the
integrations over the massive fields in our SFT approach.
\section{Summary and discussions}
\label{sec:summary}
We have constructed a system of closed SFT coupled to a D-brane
on the basis of the gauge invariance principle.
Invariance under stringy local gauge transformation requires that the
state $B$ coupled to closed string field be annihilated by the BRST
charge $Q_{\rm B}$. The gauge invariance requirement also gives the equation
which determines the transformation law of the dynamical variable $W$
associated with the D-brane.
Adopting as $B$ the BRST invariant boundary state $B(F,\theta)$,
the invariance requirement under a special gauge
transformation which shifts the anti-symmetric tensor by a constant
fixes the $(F,\theta)$-dependence of the front factor of $B$ as well as
the gauge transformation laws of the field strength $F_{\mu\nu}$
and the tilt angle $\theta_\mu^i$ of the D-brane. Furthermore, due to
the unboundedness of this gauge transformation at infinity, we need to
introduce the Born-Infeld action to realize the gauge invariance.
The invariance under linear coordinate transformation has also been
studied.
We have checked the correspondence between the action of our ``closed
SFT + D-brane'' system and the effective action in the $\sigma$-model
approach.
Our construction here is still incomplete and there are many subjects
to be studied.
One of the most important among them is to extend the dynamical
variables associated with the D-brane. In this paper, we considered only
{\em constant} $F_{\mu\nu}$ and $\theta_\mu^i$.
Since we know that D-branes have the same number of degrees of freedom
as Dirichlet open string, we should be able to incorporate all of them
in the present formalism. This extension includes generalizing
constant $(F_{\mu\nu},\theta_\mu^i)$ to $x^\mu$-dependent ones
$(A_\mu(x),Y^i(x))$,
as well as introducing massive degrees of freedom on D-branes.
One way of introducing non-constant $F_{\mu\nu}$ is to
make use of gauge transformation.
Assuming that the gauge transformation (\ref{eq:Lambda_-})
with a general $\zeta_\mu(x)$ generates
$\delta_{\Lambda_-} F_{\mu\nu}(x)=\partial_\mu\zeta_\nu(x)-\partial_\nu\zeta_\mu(x)$,
we can determine the boundary state for an (infinitesimally)
non-constant field strength $F_{\rm new}(x)=F + d\zeta(x)$ as
$B(F_{\rm new})=B(F) + 2B(F)\star\Lambda_-$.
Details of this extension will be given in a separate paper
\cite{HH2}.
As another problem left in our formalism, we have to determine
the D-brane tension $T_p$.
In the world sheet approach,
the D-brane tension has been determined by either using
Lovelace-Fischler-Susskind mechanism \cite{Lovelace,FS} or by
comparing the one-loop vacuum energy of Dirichlet open string with the
amplitude of massless field exchange in the low energy effective
action $\S{eff}$ (\ref{eq:S=S+S}) \cite{PolchRev}.
In our SFT approach, the boundary state $B$ has been introduced
as a state satisfying the BRST invariance condition (\ref{eq:QB_B=0}),
which is a linear equation and does not fix the absolute
magnitude of $B$.
It would be most interesting if we could ``improve'' our formalism in
such a way that the boundary state is determined by a non-linear
equation which allows the interpretation of the D-brane as a soliton
in closed SFT.
Although we do not know how to determine the absolute value of $T_p$
within our formalism, its dependence on the string coupling constant
$g$ can be deduced from the relation between the string coupling
constant and the dilaton expectation value. This well-known relation
is expressed in closed SFT as the property that $S_0$ (\ref{eq:Sz}) is
invariant under the following transformation of the string field $\Phi$ and
the coupling constant $g$
\cite{Yoneya,HataNagoshi,KZ,Kawano,HataDilaton}:
\begin{eqnarray}
&&\delta_{\rm D}\ket{\Phi} =\left({\cal D}+\frac{d-2}{2}\right)\ket{\Phi}
+ 2\sqrt{d-2}\ket{\mbox{Dilaton}} ,
\label{eq:dD_Phi}\\
&&\delta_{\rm D} g = \frac{d-2}{2}\,g ,
\label{eq:dD_g}
\end{eqnarray}
where ${\cal D}$ is the dilatation operator defined by eq.\ (6)
of ref.\ \cite{HataDilaton}, and $\ket{\mbox{Dilaton}}$ is the zero
momentum dilaton state. In our ``closed SFT + D-brane'' system,
we can show that the total action $\S{tot}$ of eqs.\ (\ref{eq:Stot})
and (\ref{eq:Stot(F,tlt)}) is invariant under $\delta_{\rm D}$ of eqs.\
(\ref{eq:dD_Phi}) and (\ref{eq:dD_g}) and suitably defined
$\delta_{\rm D}\!\left(F_{\mu\nu},\theta_\mu^i\right)$, if $\delta_{\rm D} T_p$ is given by
\begin{equation}
\delta_{\rm D} T_p= -\frac{d-2}{2}T_p .
\label{eq:dD_T_p}
\end{equation}
Eqs.\ (\ref{eq:dD_g}) and (\ref{eq:dD_T_p}) imply that
$T_p\propto 1/g$.
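Indeed, the two variations cancel in the combination $g\,T_p$,
\begin{equation}
\delta_{\rm D}\left(g\,T_p\right)
=\frac{d-2}{2}\,g\,T_p-\frac{d-2}{2}\,g\,T_p=0 ,
\end{equation}
so that $g\,T_p$ is $\delta_{\rm D}$-invariant and $T_p$ scales as $1/g$.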
\vspace{.7cm}
\noindent
{\Large\bf Acknowledgments}\\[.2cm]
We would like to thank T.\ Kugo and T.\ Takahashi for valuable
discussions.
\newpage
\vspace{1.5cm}
\centerline{\Large\bf Appendix}
\section{Introduction}
The standard model of spinor and gauge boson fields has higher symmetry
than does Einstein gravitational theory\cite{MAN06}.
For massless fields with definite conformal character
action integrals are invariant under local Weyl (conformal) scaling,
$g_{\mu\nu}(x) \to g_{\mu\nu}(x) e^{2\alpha(x)}$\cite{MAN06}.
A conformal energy-momentum tensor is traceless, while the Einstein
tensor is not.
\par Compatibility can be imposed in gravitational theory by replacing
the Einstein-Hilbert field action by a uniquely determined action
integral $I_g$ constructed using the conformal Weyl tensor\cite{MAN06}.
Conformal gravity accounts for anomalous galactic rotation velocities
without invoking dark matter\cite{MAN06}. Relativistic phenomenology
at the distance scale of the solar system is preserved.
\par An inherent conflict between gravitational and elementary particle
theory is removed if all massless elementary fields have conformal
symmetry. Standard cosmology\cite{DOD03} postulates uniform, isotropic
geometry, described by the Robertson-Walker (RW) metric tensor.
In RW geometry, conformal gravitational
${\cal L}_g$ vanishes identically\cite{MAN06}, but the residual
gravitational effect of a conformal scalar field is consistent with
Hubble expansion\cite{MAN06}, dominated in the current epoch by dark
energy, with negligible spatial curvature\cite{NES11,KOM09}.
\par
In electroweak theory, the Higgs mechanism introduces an SU(2) doublet
scalar field $\Phi$ that generates gauge boson mass\cite{PAS95,CAG98}.
Postulating universal conformal symmetry for massless elementary
fields, these two scalar fields can be identified\cite{NES10}.
Lagrangian density ${\cal L}_\Phi$ for conformal scalar field
$\Phi(x)\to\Phi(x)e^{-\alpha(x)}$ includes a term dependent on Ricci
scalar $R=g_{\mu\nu}R^{\mu\nu}$, where $R^{\mu\nu}$ is the
gravitational Ricci tensor\cite{MAN06}.
In uniform, isotropic geometry this determines a modified
Friedmann cosmic evolution equation\cite{NES11} consistent with
cosmological data back to the microwave background epoch\cite{KOM09}.
\par Implications for the standard electroweak model are examined here.
The Higgs model Lagrangian density contains
$\Delta{\cal L}_\Phi=(w^2-\lambda\Phi^\dag\Phi)\Phi^\dag\Phi$, where
$w^2$ and $\lambda$ are undetermined positive constants\cite{CAG98}.
Units here set $\hbar=c=1$.
Lagrangian term $\lambda(\Phi^\dag\Phi)^2$ is conformally covariant.
$w^2\Phi^\dag\Phi$ breaks conformal symmetry, but can be generated
dynamically\cite{NES10}. Conformal symmetry requires a term
$-\frac{1}{6} R\Phi^\dag\Phi$\cite{MAN06}. Empirical cosmological
$R>0$\cite{NES11}, so $-\frac{1}{6} R$ and $w^2$ have opposite signs. A
consistent theory must include $(w^2-\frac{1}{6} R)\Phi^\dag\Phi$\cite{NES11}.
\par The conformal scalar field equation has exact solutions such that
$\Phi^\dag\Phi=\phi_0^2=(w^2-\frac{1}{6} R)/2\lambda$, if this ratio is
positive and $R$ is treated as a constant.
Only the magnitude of $\Phi$ is determined. For $\phi_0^2>0$,
a modified Friedmann cosmic evolution equation has been
derived\cite{NES11} and solved to determine cosmological parameters.
The residual constant term in conformal
energy-momentum tensor $\Theta^{\mu\nu}_\Phi$ defines a
cosmological constant (dark energy)\cite{MAN06,NES11}.
Nonzero $\phi_0^2$ produces gauge boson masses\cite{CAG98}.
\par Conformal theory identifies $w^2$ with the empirically positive
cosmological constant\cite{NES11}, but does not specify the algebraic
sign of parameter $\lambda$. For the Higgs mechanism, condition
$\phi_0^2=(w^2-\frac{1}{6} R)/2\lambda>0$ requires the sign of $\lambda$ to
agree with $w^2-\frac{1}{6} R$. The scalar field energy density determined
by the coupled equations derived here is necessarily finite for any
real value of $\lambda$. This precludes destabilization of the vacuum.
\par Fluctuations $\delta\phi\to0$ about an exact solution of the scalar
field equation satisfy
$\partial_\mu\partial^\mu\delta\phi\to-4\lambda\phi_0^2\delta\phi$.
If $\lambda>0$ this is a Klein-Gordon equation with
$m_H^2=4\lambda\phi_0^2=2(w^2-\frac{1}{6} R)$,
which defines a Higgs boson\cite{CAG98} if $R<6w^2$.
In the conformal Higgs model, empirical values of parameters $w^2$,
$R$, and $\phi_0^2$ determine parameter $\lambda$. It is
argued here that these parameters, now well-established from
cosmological and electroweak data, imply $\lambda<0$,
consistent with an earlier formal argument\cite{MAN06}.
Hence fluctuations of a conformal Higgs scalar field do not satisfy a
Klein-Gordon equation. This rules out a standard Higgs particle of any
real mass. Negative $m_H^2$, or finite pure imaginary mass, would
define a tachyon\cite{FEI67}, if such a particle or field could exist,
and might justify an experimental search for such a tachyon.
\section{The modified Friedmann equation}
\par In cosmological theory, a uniform, isotropic universe is described
by Robertson-Walker (RW) metric\\
$ds^2=dt^2-a^2(t)(\frac{dr^2}{1-kr^2}+r^2d\omega^2)$,
if $c=\hbar=1$
and $d\omega^2=d\theta^2+\sin^2\theta d\phi^2$.
Gravitational field equations are determined by Ricci tensor
$R^{\mu\nu}$ and scalar $R$.
The RW metric defines two independent functions
$\xi_0(t)=\frac{\ddot a}{a}$ and
$\xi_1(t)=\frac{{\dot a}^2}{a^2}+\frac{k}{a^2}$,
such that $R^{00}=3\xi_0$ and $R=6(\xi_0+\xi_1)$.
The field equations reduce to Friedmann equations for
scale factor $a(t)$ and Hubble function $h(t)=\frac{{\dot a}}{a}(t)$.
\par If the scalar field required by Higgs symmetry-breaking has
conformal symmetry, its action integral $I_\Phi$ must depend on the
Ricci scalar, implying a gravitational effect.
Because conformal gravitational action integral $I_g$ vanishes
identically in RW geometry\cite{MAN06}, it is consistent to assume that
uniform cosmological gravity is determined by this scalar field.
\par Including term $(w^2-\frac{1}{6} R)\Phi^\dag\Phi$ in
${\cal L}_\Phi$\cite{NES11},
the field equation for scalar $\Phi$ is
$\partial_\mu\partial^\mu\Phi=
(w^2-\frac{1}{6} R-2\lambda\Phi^\dag\Phi)\Phi$. \\
Generalizing the Higgs construction, and neglecting the cosmological
time derivative of $R$, constant $\Phi=\phi_0$ is a global solution if
$\phi_0^2=\frac{1}{2\lambda}(w^2-\frac{1}{6} R)$. Evaluated for
this field solution,
${\cal L}_\Phi=\phi_0^2(w^2-\frac{1}{6} R-\lambda\phi_0^2)
=\frac{1}{2}\phi_0^2(w^2-\frac{1}{6} R)$.
\par Variational formalism of classical field theory\cite{NES03}
is easily extended to the context of general relativity\cite{MAN06}.
The metric functional derivative
$\frac{1}{\sqrt{-g}}\frac{\delta I}{\delta g_{\mu\nu}}$
of generic action integral $I=\int d^4x\sqrt{-g}{\cal L}$
is $X^{\mu\nu}=x^{\mu\nu}+\frac{1}{2} g^{\mu\nu}{\cal L}$,
if $\delta{\cal L}=x^{\mu\nu}\delta g_{\mu\nu}$.
The energy-momentum tensor is $\Theta^{\mu\nu}=-2X^{\mu\nu}$.
Varying $g_{\mu\nu}$ for fixed scalar field solution $\Phi$,
metric functional derivative
\begin{eqnarray}
X_\Phi^{\mu\nu}=
\frac{1}{6} R^{\mu\nu}\Phi^\dag\Phi+\frac{1}{2} g^{\mu\nu}{\cal L}_\Phi
\nonumber \\
=\frac{1}{6}\phi_0^2(R^{\mu\nu}-\frac{1}{4} Rg^{\mu\nu}+\frac{3}{2}w^2g^{\mu\nu})
\end{eqnarray}
implies modified Einstein and Friedmann equations\cite{NES11}.
\par The gravitational field equation driven by energy-momentum tensor
$\Theta_m^{\mu\nu}=-2X_m^{\mu\nu}$ for uniform matter and radiation is
$X_\Phi^{\mu\nu}=\frac{1}{2}\Theta_m^{\mu\nu}$.
Since $\Theta_m^{\mu\nu}$ is finite, determined by fields independent of
$\Phi$, $X_\Phi^{\mu\nu}$ must be finite, regardless of any parameters
of the theory. This precludes spontaneous destabilization of the
conformal Higgs model.
\par Defining ${\bar\kappa}=-3/\phi_0^2$ and
${\bar\Lambda}=\frac{3}{2}w^2$, the modified Einstein equation is
\begin{eqnarray}
R^{\mu\nu}-\frac{1}{4} Rg^{\mu\nu}+{\bar\Lambda}g^{\mu\nu}
=-{\bar\kappa}\Theta_m^{\mu\nu}.
\end{eqnarray}
Traceless conformal tensor $R^{\mu\nu}-\frac{1}{4} Rg^{\mu\nu}$ here replaces
the Einstein tensor of standard theory\cite{NES11}.
Cosmological constant ${\bar\Lambda}$ is determined by Higgs parameter
$w^2$. Nonstandard parameter ${\bar\kappa}<0$ is
determined by the scalar field\cite{MAN06,NES11}.
For energy density $\rho=\Theta_m^{00}$ this implies
$-\frac{2}{3}(R^{00}-\frac{1}{4} R)= \xi_1(t)-\xi_0(t)
=\frac{2}{3}({\bar\kappa}\rho+{\bar\Lambda})$.
Hence uniform, isotropic matter and radiation determine the
modified Friedmann cosmic evolution equation\cite{NES11}
\begin{eqnarray}
\xi_1(t)-\xi_0(t)=
\frac{{\dot a}^2}{a^2}+\frac{k}{a^2}-\frac{\ddot a}{a}=
\frac{2}{3}({\bar\kappa}\rho+{\bar\Lambda}).
\end{eqnarray}
\par Because the trace of $R^{\mu\nu}-\frac{1}{4} Rg^{\mu\nu}$ is identically
zero, a consistent theory must satisfy the trace condition
$g_{\mu\nu}{\bar\Lambda} g^{\mu\nu}=
4{\bar\Lambda}=-{\bar\kappa}g_{\mu\nu}\Theta_m^{\mu\nu}$.
From the definition of an energy-momentum tensor, this is just
the trace condition satisfied in conformal theory\cite{MAN09},
$g_{\mu\nu}(X_\Phi^{\mu\nu}+X_m^{\mu\nu})=0$. Vanishing trace
eliminates the second Friedmann equation derived in standard theory.
Although the $w^2$ term in $\Delta{\cal L}_\Phi$ breaks conformal
symmetry, a detailed argument shows that the trace
condition is preserved\cite{NES10}.
\section{Fits to cosmological data}
\par The modified Friedmann equation determines dimensionless
scale parameter $a(t)=1/(1+z(t))$, for redshift $z(t)$, and
function $h(t)=\frac{{\dot a}}{a}(t)$ in units of current
Hubble constant $H_0=$70.5 km/s/Mpc\cite{KOM09}, such that
$z=0, a=1, h=1$ at present time $t_0$.
Distances here are in Hubble units $c/H_0$.
\par The modified Friedmann equation depends on nominally constant
parameters, fitted to cosmological data for $z\leq z_*$:
$\alpha=\frac{2}{3}{\bar\Lambda}=w^2>0$,
$k\simeq0$, $\beta=-\frac{2}{3}{\bar\kappa}\rho_m a^3>0$, and
$\gamma=3\beta/4R_b(t_0)$.
$z_*=1090$ here characterizes the cosmic microwave background, at $t_*$,
when radiation became decoupled from matter.
$\frac{4}{3}R_b(t)$ is the
ratio of baryon to radiation energy densities.
Empirical value $R_b(t_0)=688.6$\cite{KOM09} is assumed.
Scaled energy densities $\rho_m a^3$ and $\rho_r a^4$, for matter
and radiation respectively, are constant.
In the absence of dark matter, $\rho_m\simeq\rho_b$, the baryon density.
\par The parametrized modified Friedmann equation is
\begin{eqnarray}
\frac{{\dot a}^2}{a^2}-\frac{{\ddot a}}{a}=
-\frac{d}{dt}\frac{{\dot a}}{a}={\hat\alpha}=
\alpha-\frac{k}{a^2}-\frac{\beta}{a^3}-\frac{\gamma}{a^4}.
\end{eqnarray}
Dividing this equation by $h^2(t)$ implies the dimensionless sum rule
\begin{eqnarray}
\Omega_m(t)+\Omega_r(t)+\Omega_\Lambda(t)+\Omega_k(t)+\Omega_q(t)=1,
\end{eqnarray}
where
$\Omega_m(t)= \frac{2}{3}\frac{{\bar\kappa}\rho_m(t)}{h^2(t)}<0$,
$\Omega_r(t)= \frac{2}{3}\frac{{\bar\kappa}\rho_r(t)}{h^2(t)}<0$,
$\Omega_\Lambda(t)=\frac{w^2}{h^2(t)}>0$,
$\Omega_k(t)=-\frac{k}{a^2(t)h^2(t)}$, and
$\Omega_q(t)=\frac{{\ddot a}a}{{\dot a}^2}=-q(t)$.
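Term by term, the division by $h^2(t)$ gives $1-\Omega_q(t)$ on the left,
while $\alpha/h^2=\Omega_\Lambda$, $-k/(a^2h^2)=\Omega_k$,
$-\beta/(a^3h^2)=\Omega_m$ and $-\gamma/(a^4h^2)=\Omega_r$ on the right,
so that
\begin{equation}
1-\Omega_q(t)=\Omega_\Lambda(t)+\Omega_k(t)+\Omega_m(t)+\Omega_r(t) ,
\end{equation}
which rearranges to the sum rule.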
In contrast to the standard sum rule, $\Omega_m$ and $\Omega_r$ are
negative, while the acceleration parameter $\Omega_q(t)$ appears explicitly.
\par Hubble expansion is characterized for type Ia supernovae by scaled
luminosity distance $d_L$ as a function of redshift $z$.
Here $d_L(z)=(1+z)d_z$, for geodesic distance $d_z$ corresponding to
$r_z =\int cdt/a(t)$, integrated from $t_z$ to $t_0$. In curved space
(for $k<0$), $d_z =\frac{\sinh(\sqrt{-k}r_z)}{\sqrt{-k}}$.
In the standard $\Lambda CDM$ model\cite{DOD03}, radiation density and
curvature $\Omega_k$ can be neglected in the current epoch ($z\leq 1$).
This reduces the sum rule to $\Omega_\Lambda+\Omega_m=1$. Empirical
value $\Omega_\Lambda=0.726$ forces $\Omega_m$ to be much larger than
can be accounted for by observed matter, providing a strong argument for
dark matter. Mannheim\cite{MAN03,MAN06} questioned this implication,
and showed that observed luminosities could be fitted equally well
for $z\leq 1$ with $\Omega_m=0$, using the standard Friedmann equation.
However, sum rule $\Omega_\Lambda+\Omega_k=1$ then requires an
empirically improbable large curvature parameter $\Omega_k$.
Empirical limits are $\Omega_k\simeq\pm0.01$\cite{KOM09}.
\par This issue was examined by solving the modified Friedmann equation
with parameters $k, \beta, \gamma$ set to zero\cite{NES11}. $\Omega_q$
is determined by the solution. The modified sum rule
$\Omega_\Lambda+\Omega_q=1$ then presents no problem. Computed
$d_L(z)$ agrees with Mannheim's empirical function for $z\leq 1$ to
graphical accuracy, using parameter $\alpha=\Omega_\Lambda(t_0)=0.732$
for $\Omega_k(t_0)=0$. This is consistent with current empirical values
$\Omega_\Lambda=0.726\pm0.015, \Omega_k=-0.005\pm 0.013$\cite{KOM09}.
$\Omega_m$ and $\Omega_r$ can apparently be neglected for $z\leq 1$.
\par $t=0$ is defined by $h(t)=0$ in the conformal model,
which describes an initial inflationary epoch\cite{NES11}.
The modified Friedmann equation was solved numerically for
$0\leq t\leq t_0$\cite{NES11}, with parameters fitted to $d_L(z)$
for $z\leq 1$, to shift parameter $R(z_*)$\cite{WAM07},
and to acoustic scale ratio $\ell_A(z_*)$\cite{WAM07}.
This determines model parameters $\alpha=0.7171, k=-0.01249,
\beta=0.3650\times 10^{-5}$.
With $\gamma$ fixed at $3\beta/4R_b(t_0)$, which neglects dark matter,
$\gamma=0.3976\times 10^{-8}$. There is no significant
inconsistency with model-independent empirical data\cite{KOM09}.
\par Defining $\zeta=\frac{1}{6} R-w^2$, the dimensionless sum rule
determines $\zeta=\xi_0+\xi_1-w^2=h(t)^2(2\Omega_q+\Omega_m+\Omega_r)$.
For $a\to 0$, when both $\alpha$ and $k$ can be neglected, the
sum rule implies $\zeta=h(t)^2(\Omega_q+1)$. For large $a$,
$\zeta= h(t)^2(2\Omega_q)$. $\zeta>0$ in both limits, regardless of
numerical values, since $\Omega_q>0$. The present empirical parameters
imply that $\zeta$ is positive for all $z$\cite{NES11}.
\par Conformal symmetry is consistent with any real value of parameter
$\lambda$. However, in electroweak theory Higgs symmetry-breaking
requires nonvanishing conformal scalar field $\Phi$\cite{PAS95}.
A positive value of $\zeta$ implies
\begin{eqnarray}
\lambda\phi_0^2=\frac{1}{2}(w^2-\frac{1}{6} R)=-\frac{1}{2}\zeta<0.
\end{eqnarray}
As argued above, for $\phi_0^2>0$ this conflicts with existence of the
hypothetical massive Higgs boson.
\section{Dynamical estimate of parameter $w^2$}
\par Since term $w^2\Phi^\dag\Phi$ in standard parametrized
$\Delta{\cal L}$ breaks conformal symmetry, it must be generated
dynamically in a consistent theory\cite{MAN09}. As shown above, this
term accounts for dark energy. Dynamically induced $w^2$ preserves the
conformal trace condition\cite{NES10}.
\par The Higgs model deduces gauge boson mass from an exact solution
of the parametrized scalar field equation\cite{CAG98}. For interacting
fields, this logic can be extended to deduce nominally constant field
parameters from a solution of the coupled field equations. Such a
solution of nonlinear equations does not depend on linearization or on
perturbation theory.
\par Interaction of scalar and gauge boson fields defines a
quasiparticle scalar field in Landau's sense:
$\Phi$ is dressed via virtual excitation of accompanying gauge fields.
The derivation summarized here considers gravitational field
$g_{\mu\nu}$ interacting with scalar field $\Phi$ and $U(1)$ gauge
field $B_\mu$. Solution of the coupled semiclassical field
equations\cite{NES10} gives an order-of-magnitude estimate of parameter
$w^2$, in agreement with the empirical cosmological constant, while
confirming the Higgs formula for gauge boson mass\cite{PAS95,CAG98}.
\par The conformal Higgs model assumes incremental Lagrangian density
$\Delta{\cal L}_\Phi=w^2\Phi^\dag\Phi-\lambda(\Phi^\dag\Phi)^2$,
with undetermined numerical parameters $w^2$ and $\lambda$.
The implied scalar field equation is
$\partial_\mu\partial^\mu\Phi+\frac{1}{6} R\Phi=
\frac{1}{\sqrt{-g}}\frac{\delta\Delta I}{\delta\Phi^\dag}=
(w^2-2\lambda\Phi^\dag\Phi)\Phi$.
If $R,w^2,\lambda$ are constant, this has an exact solution $\Phi^\dag\Phi=\phi_0^2=(w^2-\frac{1}{6} R)/2\lambda$,
provided that this ratio is positive.
For massive complex vector field $B_\mu$, parametrized
$\Delta{\cal L}_B$ implies field equation
$\partial_\nu B^{\mu\nu}=
2\frac{1}{\sqrt{-g}}\frac{\delta\Delta I}{\delta B_\mu^*}=
m_B^2 B^\mu-J_B^\mu$.
\par For interacting fields, both $\Delta{\cal L}_\Phi$ and
$\Delta{\cal L}_B$ can be identified with incremental
Lagrangian density $\Delta{\cal L}=$
\begin{eqnarray}
\frac{i}{2}g_b B^\mu(\partial_\mu\Phi)^\dag\Phi
-\frac{i}{2}g_b B_\mu^\dag\Phi^\dag\partial^\mu\Phi
+\frac{1}{4} g_b^2\Phi^\dag B_\mu^\dag B^\mu\Phi,
\end{eqnarray}
due to covariant derivatives, with coupling constant $g_b$.
Evaluated for solutions of the coupled field equations,
\begin{eqnarray}
2\frac{1}{\sqrt{-g}}\frac{\delta\Delta I}{\delta B_\mu^*}=
\frac{1}{2} g_b^2\Phi^\dag\Phi B^\mu-ig_b\Phi^\dag\partial^\mu\Phi
\end{eqnarray}
implies Higgs mass formula $m_B^2=\frac{1}{2} g_b^2 \phi_0^2$.
The fields are coupled by current density
$J^\mu_B=ig_b\Phi^\dag\partial^\mu\Phi$.
For the scalar field, neglecting derivatives of $B_\mu$,
\begin{eqnarray}
\frac{1}{\sqrt{-g}}\frac{\delta\Delta I}{\delta \Phi^\dag}=
\frac{1}{4} g_b^2B_\mu^*B^\mu \Phi
-\frac{i}{2}g_b(B_\mu^*+B_\mu)\partial^\mu\Phi
\end{eqnarray}
implies $w^2=\frac{1}{4} g_b^2 B_\mu^*B^\mu$.
\par For $\zeta=\frac{1}{6} R-w^2>0$,
$\Phi^\dag\Phi=\phi_0^2=-\zeta/2\lambda$
solves the scalar field equation if $\lambda<0$.
Ricci scalar $R(t)$ varies on a cosmological time scale, so that
$\frac{{\dot\phi}_0}{\phi_0}=\frac{1}{2}\frac{{\dot R}}{R-6w^2}\neq0$, for
constant $w^2$ and $\lambda$. This implies small but nonvanishing
real $\frac{{\dot\phi}_0}{\phi_0}$,
hence nonzero pure imaginary source current density
$J^0_B=ig_b\phi_0^*\partial^0\phi_0
=ig_b\frac{{\dot\phi}_0}{\phi_0}\phi_0^2$.
\par Derivatives due to cosmological
time dependence act as a weak perturbation of SU(2) scalar field
solution $\Phi=(\Phi_+,\Phi_0)\to (0,\phi_0)$.
Neglecting extremely small derivatives of the induced gauge fields
(but not of $\Phi$), the gauge field equation reduces to
$m_B^2 B^\mu= J_B^\mu$.
Implied pure imaginary $B^\mu$ does not affect parameter $\lambda$.
The coupled field equations imply $w_B^2=\frac{1}{4} g_b^2|B|^2$,
proportional to $(\frac{{\dot\phi}_0}{\phi_0})^2$.
Since observable properties depend only on $|B|^2$, a pure imaginary
virtual field implies no obvious physical inconsistency.
Gauge symmetry is broken in any case by a fixed field solution.
The scalar field is dressed by the induced gauge field.
\par Numerical solution of the modified Friedmann
equation\cite{NES11,NES10} implies
$\zeta(t_0)=1.224\times 10^{-66}eV^2$, at present time $t_0$.
Given $\phi_0=180GeV$\cite{CAG98},
$\lambda=-\frac{1}{2}\zeta/\phi_0^2=-0.189\times 10^{-88}$.
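Spelled out, with $\phi_0=180GeV=1.80\times 10^{11}eV$,
\begin{equation}
\lambda=-\frac{\zeta(t_0)}{2\phi_0^2}
=-\frac{1.224\times 10^{-66}eV^2}{2\times\left(1.80\times 10^{11}eV\right)^2}
=-0.189\times 10^{-88} .
\end{equation}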
\par U(1) gauge field $B_\mu$ does not affect $\lambda$.
Using $|B|^2=|J_B|^2/m_B^4,
|J_B|^2=g_b^2(\frac{{\dot\phi}_0}{\phi_0})^2\phi_0^4$ and
$m_B^2=\frac{1}{2} g_b^2\phi_0^2$,
the dynamical value of $w^2$ due to $B_\mu$
is $w_B^2=\frac{1}{4} g_b^2|B|^2=(\frac{{\dot\phi}_0}{\phi_0})^2$.
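Spelled out, the three relations combine into
\begin{equation}
w_B^2=\frac{1}{4} g_b^2\,\frac{|J_B|^2}{m_B^4}
=\frac{1}{4} g_b^2\,
\frac{g_b^2\left({\dot\phi}_0/\phi_0\right)^2\phi_0^4}
{\left(\frac{1}{2} g_b^2\phi_0^2\right)^2}
=\left(\frac{{\dot\phi}_0}{\phi_0}\right)^2 ,
\end{equation}
independent of the coupling constant $g_b$ and of the amplitude $\phi_0$.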
\par From the solution of the modified Friedmann equation\cite{NES10},
$\frac{{\dot\phi}_0}{\phi_0}(t_0)=-2.651$ and $w_B^2=7.027$,
in Hubble units, so that
$w_B=2.651\hbar H_0=3.984\times 10^{-33}eV$ in energy units.
This can be considered only an order-of-magnitude estimate, since
time dependence of the assumed constants, implied by the present theory,
was not considered in fitting empirical cosmological data\cite{NES11}.
Moreover, the SU(2) gauge field has been omitted.
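For the conversion to energy units: with $H_0=$ 70.5 km/s/Mpc and
1 Mpc $=3.086\times 10^{19}$ km,
\begin{equation}
\hbar H_0=\left(6.58\times 10^{-16}eV\,s\right)
\left(2.28\times 10^{-18}s^{-1}\right)\simeq 1.50\times 10^{-33}eV ,
\end{equation}
so that $w_B=2.651\,\hbar H_0\simeq 3.98\times 10^{-33}eV$, as quoted above.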
\section{Note on dark matter}
As stated in \cite{NES11}, interpretation of parameter $\Omega_m$
may require substantial revision of the standard cosmological model.
Directly observed inadequacy of Newton-Einstein gravitation may imply
the need for a modified theory rather than for inherently
unobservable dark matter.
\par Mannheim has applied
conformal gravity to anomalous galactic rotation\cite{MAN06},
fitting observed data for a set of galaxies covering a large range
of structure and luminosity. The role played in standard
$\Lambda$CDM by dark matter, separately parametrized for each galaxy,
is taken over in conformal theory for Schwarzschild geometry
by an external linear radial potential. The remarkable fit to observed
data shown in \cite{MAN06}[Sect.6.1,Fig.1] requires only two universal
parameters for the whole set of galaxies.
\par As discussed by Mannheim\cite{MAN06}[Sects.6.3,9.3],
a significant conformal contribution to centripetal acceleration is
independent of total galactic luminous mass. This implies an external
cosmological source. Such an isotropic source would determine
an inherently spherical halo of gravitational field surrounding
any galaxy. Quantitative results for lensing and for galactic
clusters should be worked out before assuming dark matter.
\section{Conclusions}
This paper is concerned with determining parameters $w^2$ and $\lambda$
in the incremental Lagrangian density of the Higgs model,
$\Delta{\cal L}_\Phi=(w^2-\lambda\Phi^\dag\Phi)\Phi^\dag\Phi$.
Fitting the modified Friedmann equation to cosmological
data\cite{NES11} implies dark energy parameter
$\Omega_\Lambda=w^2=0.717$, so that empirical
$w=\sqrt{0.717}\hbar H_0=1.273\times 10^{-33}eV$.
\par The modified Friedmann equation determines the time derivative of
the cosmological Ricci scalar, which implies nonvanishing source current
density for induced U(1) gauge field $B_\mu$, treated here
as a classical field in semiclassical coupled field equations.
The resulting gauge field intensity estimates the U(1) contribution
to $w^2$ such that $w_B=2.651\hbar H_0=3.984\times 10^{-33}eV$.
This order-of-magnitude agreement between computed $w_B$ and empirical
$w$ supports the conclusion that conformal theory explains both the
existence and magnitude of dark energy\cite{NES10}.
\par The present argument obtains an accurate empirical value of
parameter $\lambda$ from the known dark energy parameter\cite{KOM09},
from the implied current value of Ricci scalar $R$\cite{NES11},
and from scalar field amplitude $\phi_0$ determined by gauge
boson masses\cite{CAG98}.
The mass parameter for a fluctuation of the
conformal Higgs scalar field satisfies $m_H^2=4\lambda\phi_0^2$.
Empirical value $\lambda=-0.189\times 10^{-88}$ is negative,
implying finite pure imaginary parameter $m_H$. If such a particle or
field could exist or be detected, this would define a
tachyon\cite{FEI67}, the quantum version of a classical particle that
moves more rapidly than light. Experimental data rule out a standard
massive Higgs boson with mass $0\leq m_H\leq108$GeV\cite{DGH89,OPA10}.
However, a Higgs tachyon\cite{FEI67} might either not exist at all,
or elude detection in experiments designed for a classical massive
Higgs boson. The present results would only be inconsistent if
experimental Higgs searches to date were capable of detecting a Higgs
tachyon and failed to do so. Conformal theory clearly rules out a
standard Higgs boson in the multi-GeV range.
\section{Introduction} \label{SECT:intro}
In the Wilf-Zeilberger theory, telescopers usually refer to the operators in the output of the method of creative telescoping, which are linear differential (resp. difference) operators that annihilate the definite integrals (resp. the definite sums) of the input functions. Telescopers have emerged at least from the work of Euler \cite{euler} and have found many applications in various areas of mathematics such as combinatorics, number theory, knot theory and so on (see Section 7 of \cite{koutschan} for details). In particular, telescopers for a function are often used to prove identities involving this function or even to obtain a simpler expression for the definite integral or sum of this function. As a clever and algorithmic process for constructing telescopers, creative telescoping first appeared as a term in the essay of van der Poorten on Ap\'ery's proof of the irrationality of $\zeta(3)$ \cite{vanderpoorten}. However, it was Zeilberger and his collaborators \cite{almkvist-zeilberger,petkovsek-wilf-zeilberger,wilf-zeilberger1,wilf-zeilberger2,zeilberger} in the early 1990s who equipped creative telescoping with a concrete meaning and formulated it as an algorithmic tool. Since then, algorithms for creative telescoping have been extensively studied. Based on the techniques used in the algorithms, the existing algorithms are divided into four generations; see \cite{chen-kauers} for the details. The most recent algorithms are called reduction-based algorithms, which were first introduced by Bostan et al.\ in \cite{bostan-chen-chyzak} and further developed in \cite{bostan-chen-chyzak-li-xin,chen-kauers-koutschan,chen-vanhoeij-kauers-koutschan,bostan-chyzak-lairez-salvy} etc. The termination of these algorithms relies on the existence of telescopers. The question for which input functions the algorithms terminate has been answered in \cite{wilf-zeilberger3,abramov2,abramov-le,chen-hou-mu,chen-chyzak-feng-fu-li} etc.\ for several classes of functions such as rational functions and hypergeometric functions. The algorithmic framework for creative telescoping is now called the Wilf-Zeilberger theory.
Most of the algorithms for creative telescoping focus on the case of one bivariate function as input. There are only a few algorithms which deal with the multivariate case (see \cite{chen-feng-li-singer,bostan-lairez-salvy,lairez,chen-hou-labahn-wang} etc). It is still a challenge to develop the multivariate analogue of the existing algorithms (see Section 5 of \cite{chen-kauers}). In the language of differential forms (with $m$ variables and one parameter), the results in \cite{chen-feng-li-singer} and \cite{lairez} dealt with the cases of differential 1-forms and differential $m$-forms respectively. On the other hand, in the applications to other domains such as mirror symmetry (see \cite{li-lian-yau,morrison-walcher,muller-weinzierl-zayadeh}), one needs to deal with the case of differential $p$-forms with $1\leq p \leq m$. Below is an example.
\begin{example}
\label{EX:calabi-yau}
Consider the following one-parameter family of quintic polynomials
$$
W(t)=\frac{1}{5}(x_1^5+x_2^5+x_3^5+x_4^5+x_5^5) -t x_1x_2x_3x_4x_5
$$
where $t$ is a parameter. Set
$$
\omega=\sum_{i=1}^5 \frac{(-1)^{i-1} x_i}{W(t)} {\rm d} x_1\wedge \cdots \wedge \widehat{{\rm d} x_i} \wedge \cdots \wedge {\rm d} x_5.
$$
To obtain the Picard-Fuchs equation for the mirror quintic, geometers want to compute a fourth-order linear differential operator $L$ in $t$ and ${\partial}_t$ such that $L(\omega)={\rm d} \eta$ for some differential 3-form $\eta$. Here one has that
$$
L=(1-t^5)\frac{\partial^4}{\partial t^4}-10t^4\frac{\partial^3}{\partial t^3}-25t^3\frac{\partial^2}{\partial t^2}-15t^2\frac{\partial}{\partial t}-1.
$$
Set $\theta_t=t\partial/\partial t$. Then
$$
\tilde{L}=-\frac{1}{5^4}L\frac{1}{t}=\theta_t^4-5t(5\theta_t+1)(5\theta_t+2)(5\theta_t+3)(5\theta_t+4)
$$
and the equation $\tilde{L}(y)=0$ is the required Picard-Fuchs equation.
\end{example}
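One can check the operator $\tilde{L}$ directly on power series: applying $\tilde{L}$ to $y=\sum_{n\ge 0}a_n t^n$ and comparing the coefficients of $t^n$ yields the recurrence
$$
(n+1)^4\, a_{n+1}=5(5n+1)(5n+2)(5n+3)(5n+4)\, a_n ,
$$
which is satisfied by $a_n=(5n)!/(n!)^5$. Hence $\tilde{L}(y)=0$ for $y=\sum_{n\ge 0}\frac{(5n)!}{(n!)^5}\,t^n$, which is (in a suitable local coordinate) the fundamental holomorphic period of the mirror quintic.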
We call the operator $L$ appearing in the above example a telescoper for the differential form $\omega$ (see Definition~\ref{DEF:telescopers}). In this paper, we study the telescopers for differential forms with $D$-finite function coefficients. Instead of the geometric method used in \cite{li-lian-yau,morrison-walcher,muller-weinzierl-zayadeh}, we provide an algebraic treatment. We give a sufficient and necessary condition guaranteeing the existence of telescopers and describe a method to compute them if they exist. Meanwhile, we also present algorithms to verify this condition.
The rest of this paper is organized as follows. In Section 2, we recall differential forms with $D$-finite function coefficients and introduce the notion of telescopers for differential forms. In Section 3, we give a sufficient and necessary condition for the existence of telescopers, which can be considered as a parametrized version of the Poincar\'{e} lemma on differential manifolds. In Section 4, we give two algorithms for verifying the condition presented in Section 3.
{\bf Notations}: The following notations will be frequently used throughout this paper.
\begin{longtable}{rl}
${\partial}_t$: &the usual derivation ${\partial}/{\partial} t$ with respect to $t$,\\
${\partial}_{x_i}$:& the usual derivation ${\partial}/{\partial} x_i$ with respect to $x_i$,\\
$\vx$:& $\{x_1,\cdots,x_n\}$,\\
${\bm \partial}_\vx$:& $\{{\partial}_{x_1},\cdots,{\partial}_{x_n}\}$.\\
\end{longtable}
The following formulas will also be frequently used:
\begin{align}
{\partial}_x^{\mu} x^{\nu} &= \begin{cases} \nu(\nu-1)\cdots(\nu-\mu+1)x^{\nu-\mu} + *{\partial}_x, & \nu \geq \mu \\
* {\partial}_x, & \nu<\mu \end{cases} \label{EQ:formula1} \\
x^{\mu}{\partial}_x^{\nu} &=\begin{cases} (-1)^{\nu}\mu(\mu-1)\cdots(\mu-\nu+1)x^{\mu-\nu} +{\partial}_x *, & \mu\geq \nu\\
{\partial}_x *, &\mu<\nu \end{cases} \label{EQ:formula2}
\end{align}
where $*\in k\langle x,{\partial}_x\rangle$.
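For instance, taking $\mu=2$ and $\nu=3$ in (\ref{EQ:formula1}), a direct computation in the Weyl algebra gives
$$
{\partial}_x^2\, x^3 = 6x + \left(x^3{\partial}_x + 6x^2\right){\partial}_x ,
$$
so here $*=x^3{\partial}_x+6x^2$, and the first term $6x=3\cdot 2\cdot x^{3-2}$ agrees with $\nu(\nu-1)\cdots(\nu-\mu+1)x^{\nu-\mu}$.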
\section{$D$-finite elements and differential forms}
Throughout this paper, let $k$ be an algebraically closed field of characteristic zero and let $K$ be the differential field $k(t,x_1,\cdots,x_n)$ with the derivations ${\partial}_t, {\partial}_{x_1}, \cdots,{\partial}_{x_n}$. Let ${\mathfrak D}=K\langle {\partial}_t, {\bm \partial}_\vx\rangle$ be the ring of linear differential operators with coefficients in $K$. For $S\subset \{t, \vx, {\partial}_t, {\bm \partial}_\vx\}$, denote by $k\langle S \rangle$ the subalgebra over $k$ of ${\mathfrak D}$ generated by $S$. For brevity, we denote $k\langle t, \vx, {\partial}_t, {\bm \partial}_\vx\rangle$ by ${\mathfrak W}$. Let ${\cal U}$ be the universal differential extension of $K$, in which every algebraic differential equation having a solution in an extension of ${\cal U}$ has a solution (see page 133 of \cite{kolchin} for a more precise description).
\begin{definition}
An element $f\in {\cal U}$ is said to be $D$-finite over $K$ if for every $\delta\in \{{\partial}_t, {\partial}_{x_1}, \cdots, {\partial}_{x_n}\}$, there is a nonzero operator $L_{\delta}\in K\langle \delta \rangle$ such that $L_{\delta}(f)=0$.
\end{definition}
Denote by $R$ the ring of $D$-finite elements over $K$, and by ${\cal M}$ a free $R$-module of rank $m$ with base $\{{\mathfrak a}_1,\cdots,{\mathfrak a}_m\}$. Define a map ${\mathfrak D}\times {\cal M}\rightarrow {\cal M}$ given by
$$\left(L,\sum_{i=1}^mf_i{\mathfrak a}_i\right)\rightarrow L\left(\sum_{i=1}^mf_i{\mathfrak a}_i\right):=\sum_{i=1}^mL(f_i){\mathfrak a}_i.$$
This map endows ${\cal M}$ with a left ${\mathfrak D}$-module structure.
Let
\[
\bigwedge({\cal M})=\bigoplus_{i=0}^m \bigwedge \nolimits^i({\cal M})
\]
be the exterior algebra of ${\cal M}$, where $\bigwedge^i({\cal M})$ denotes the $i$-th homogeneous part of $\bigwedge({\cal M})$ as a graded $R$-algebra. We call an element in $\bigwedge^i({\cal M})$ an $i$-form. $\bigwedge({\cal M})$ is also a left ${\mathfrak D}$-module.
Let ${\rm d}: R\rightarrow {\cal M}$ be a map defined as
\[
{\rm d} f={\partial}_{x_1}(f){\mathfrak a}_1+\cdots+{\partial}_{x_m}(f){\mathfrak a}_m
\]
for any $f\in R$. Then ${\rm d}$ is a derivation over $k$. Note that for each $i=1,\cdots,m$, ${\rm d} x_i={\mathfrak a}_i$. Hence in the rest of this paper we shall use $\{{\rm d} x_1,\cdots,{\rm d} x_m\}$ instead of $\{{\mathfrak a}_1,\cdots,{\mathfrak a}_m\}$. The map ${\rm d}$ can be extended to a derivation on $\bigwedge({\cal M})$ which is defined recursively as
$$
{\rm d}(\omega_1\wedge\omega_2)={\rm d}\omega_1\wedge\omega_2+(-1)^{i}\omega_1\wedge {\rm d}\omega_2
$$
for any $\omega_1\in \bigwedge^i({\cal M})$ and $\omega_2\in \bigwedge^j({\cal M})$. For detailed definitions on exterior algebra and differential forms, we refer the readers to Chapter 19 of \cite{lang} and Chapter 1 of \cite{weinstraub} respectively.
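For instance, for the $1$-form $\omega=f\,{\rm d} x_1$ with $f\in R$, the recursion (together with the natural convention ${\rm d}({\rm d} x_i)=0$) gives
$$
{\rm d}\omega={\rm d} f\wedge {\rm d} x_1=\sum_{j=2}^m {\partial}_{x_j}(f)\,{\rm d} x_j\wedge{\rm d} x_1 ,
$$
the $j=1$ term vanishing because ${\rm d} x_1\wedge{\rm d} x_1=0$. Likewise ${\rm d}({\rm d} f)=\sum_{i,j}{\partial}_{x_j}{\partial}_{x_i}(f)\,{\rm d} x_j\wedge{\rm d} x_i=0$, since the derivations ${\partial}_{x_i}$ commute while ${\rm d} x_j\wedge{\rm d} x_i$ is anti-symmetric; thus ${\rm d}\circ{\rm d}=0$, as for usual differential forms.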
In analogy with the usual differential forms, we introduce the following definition.
\begin{definition} Let $\omega\in \bigwedge({\cal M})$ be a form.
\begin{itemize}
\item [$(1)$]
$\omega$ is said to be closed if ${\rm d}\omega=0$, and exact if there is $\eta\in \bigwedge({\cal M})$ such that $\omega={\rm d}\eta$.
\item [$(2)$]
$\omega$ is said to be ${\partial}_t$-closed (${\partial}_t$-exact) if there is a nonzero $L\in k(t)\langle {\partial}_t\rangle$ such that $L(\omega)$ is closed (exact).
\end{itemize}
\end{definition}
\begin{definition}
\label{DEF:telescopers}
Assume that $\omega\in \bigwedge({\cal M})$. A nonzero $L\in k(t)\langle {\partial}_t\rangle$ is called a telescoper for $\omega$ if $L(\omega)$ is exact.
\end{definition}
\section{Parametrized Poincar\'{e} lemma}
The famous Poincar\'e lemma states that if $B$ is an open ball in $\mathbb{R}^n$, then any smooth closed $i$-form $\omega$ defined on $B$ is exact, for any integer $i$ with $1\leq i \leq n$. In this section, we shall prove the following lemma which can be viewed as a parametrized analogue of the Poincar\'e lemma for $\bigwedge({\cal M})$.
\begin{lem}[Parameterized Poincar\'{e} lemma]
\label{LM:ppl}
Let~$\omega \in \bigwedge^p({\cal M})$. If $\omega$ is ${\partial}_t$-closed then it is ${\partial}_t$-exact.
\end{lem}
To prove the above lemma, we need some preparatory lemmas.
\begin{lem}[Lipshitz's lemma (Lemma 3 of \cite{lipshitz})]
Assume that $f$ is a $D$-finite element over $k(\vx)$. For each pair $1\leq i <j \leq n$, there is a nonzero operator $L\in k(x_1,\cdots, x_n)\langle {\partial}_{x_i}, {\partial}_{x_j}\rangle$ such that $L(f)=0$.
\end{lem}
The following lemma is a generalization of Lipshitz's lemma.
\begin{lem}
\label{LM:modifiedlipshitz}
Assume that $f_1,\cdots, f_m$ are $D$-finite elements over $k(\vx,t)$ and
$$S\subset \{t,x_1,\cdots,x_n,{\partial}_t,{\partial}_{x_1},\cdots,{\partial}_{x_n}\}$$
with $|S|>n+1$. Then one can compute a nonzero operator $T$ in $k\langle S\rangle$ such that $T(f_i)=0$ for all $i=1,\cdots,m$.
\end{lem}
\begin{proof}
For each $\delta\in \{{\partial}_t,{\partial}_{x_1},\cdots,{\partial}_{x_n}\}$ and $i=1,\cdots,m$, let $T_{\delta,i}$ be a nonzero operator in $K\langle \delta \rangle$ such that $T_{\delta,i}(f_i)=0$, and set $T_\delta$ to be the least common left multiple of $T_{\delta,1},\dots,T_{\delta,m}$. Then $T_\delta\in K\langle\delta\rangle$ is nonzero and $T_\delta(f_i)=0$ for all $i=1,\cdots,m$. The lemma then follows from an argument similar to that in the proof of Lipshitz's lemma.
\end{proof}
\begin{lem}
\label{LM:basecase}
Assume that $f_1,\cdots,f_m$ are $D$-finite over $k(\vx,t)$, $I,J\subset \{1,\cdots,n\}$ and $I\cap J=\emptyset$. Assume further that $V\subset \{x_i,{\partial}_{x_i} | i\in \{1,\cdots,n\}\setminus (I\cup J)\}$ with $|V|=n-|I|-|J|$. Then one can compute an operator $P$ of the form
\[
L+\sum_{i\in I} {\partial}_{x_i} M_i +\sum_{j\in J} N_j {\partial}_{x_j}
\]
such that $P(f_l)=0$ for all $l=1,\cdots,m$, where $L$ is a nonzero operator in $k\langle \{t, {\partial}_t\}\cup V\rangle$, $M_i,N_j\in {\mathfrak W}$
and $N_j$ is free of $x_i$ for all $i\in I$ and $j\in J$.
\end{lem}
\begin{proof}
Without loss of generality, we assume that $I=\{1,\cdots,r\}$ and $J=\{r+1,\cdots,r+s\}$ where $r=|I|$ and $s=|J|$.
Let $$S=\{t,{\partial}_t\}\cup \{{\partial}_{x_i} | i\in I\}\cup \{x_j | j=r+1,\cdots,r+s\}\cup V.$$
Then $|S|=n+2>n+1$. By Lemma~\ref{LM:modifiedlipshitz}, one can compute a $T\in k\langle S\rangle \setminus \{0\}$ such that $T(f_l)=0$ for all $l=1,\cdots,m$. Write
$$
T=\sum_{{\mathbf d}=(d_1,\cdots,d_r)\in \Gamma_1} {\partial}_{x_1}^{d_1}\cdots {\partial}_{x_r}^{d_r} T_{{\mathbf d}}
$$
where $T_{\mathbf d} \in k\langle \{t,{\partial}_t, x_{r+1},\cdots,x_{r+s}\}\cup V\rangle \setminus\{0\}$ and $\Gamma_1$ is a finite subset of ${\mathbb Z}^r$. Let $\bar{{\mathbf d}}=(\bar{d}_1,\cdots,\bar{d}_r)$ be the minimal element of $\Gamma_1$ with respect to the lex order on ${\mathbb Z}^r$. Multiplying $T$ by $\prod_{i=1}^r x_i^{\bar{d}_i}$ on the left and using the formula (\ref{EQ:formula2}) yield that
\begin{equation}
\label{EQ:part1}
\left(\prod_{i=1}^r x_i^{\bar{d}_i}\right)T=\alpha T_{\bar{{\mathbf d}}} +\sum_{i=1}^{r} {\partial}_{x_i} \tilde{T}_i
\end{equation}
where $\alpha$ is a nonzero integer and $\tilde{T}_i\in k\langle S\cup\{x_i |i\in I\} \rangle$. Write
$$
T_{\bar{{\mathbf d}}}=\sum_{{\mathbf e}=(e_1,\cdots,e_s)\in \Gamma_2} L_{{\mathbf e}} x_{r+1}^{e_1}\cdots x_{r+s}^{e_s}
$$
where $L_{{\mathbf e}}\in k\langle \{t,{\partial}_t\}\cup V \rangle\setminus\{0\}$ and $\Gamma_2$ is a finite subset of ${\mathbb Z}^s$. Let $\bar{{\mathbf e}}=(\bar{e}_1,\cdots,\bar{e}_s)$ be the maximal element of $\Gamma_2$ with respect to the lex order on ${\mathbb Z}^s$. Multiplying $T_{\bar{{\mathbf d}}}$ by $\prod_{i=1}^s {\partial}_{x_{r+i}}^{\bar{e}_i}$ on the left and using the formula (\ref{EQ:formula1}) yield that
\begin{equation}
\label{EQ:part2}
\left(\prod_{i=1}^s {\partial}_{x_{r+i}}^{\bar{e}_i}\right)T_{\bar{{\mathbf d}}}=\beta L_{\bar{{\mathbf e}}}+\sum_{j\in J} \tilde{L}_j {\partial}_{x_j}
\end{equation}
where $\tilde{L}_j\in k\langle \{t,{\partial}_t,x_{r+1},\cdots,x_{r+s},{\partial}_{x_{r+1}},\cdots,{\partial}_{x_{r+s}}\}\cup V\rangle$ and $\beta$ is a nonzero integer. Combining (\ref{EQ:part1}) with (\ref{EQ:part2}) yields the required operator $P$.
\end{proof}
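The nonzero integer constants $\alpha$ and $\beta$ in the proof come from iterating the commutation rule ${\partial}_{x}x=x{\partial}_{x}+1$. The following toy Python sketch (our own code, independent of any Ore-algebra package) multiplies elements of the first Weyl algebra in the normal form $\sum c_{ij}x^i{\partial}_x^j$ and exhibits the constant term $d!$ of ${\partial}_x^d x^d$:
\begin{verbatim}
from math import comb, perm   # perm(n, k) = n!/(n-k)!

def weyl_mul(A, B):
    """Product in k<x, dx>; elements are dicts {(i, j): coeff}
    encoding sum coeff * x^i * dx^j.  The rule
    dx^j * x^i = sum_k comb(j,k)*perm(i,k)*x^(i-k)*dx^(j-k)
    follows from dx*x = x*dx + 1."""
    C = {}
    for (i1, j1), c1 in A.items():
        for (i2, j2), c2 in B.items():
            for k in range(min(j1, i2) + 1):
                key = (i1 + i2 - k, j1 + j2 - k)
                C[key] = C.get(key, 0) + c1 * c2 * comb(j1, k) * perm(i2, k)
    return {m: c for m, c in C.items() if c != 0}

d = 3
print(weyl_mul({(0, d): 1}, {(d, 0): 1}))
# {(3, 3): 1, (2, 2): 9, (1, 1): 18, (0, 0): 6}: constant term d! = 6
\end{verbatim}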
\begin{cor}
\label{COR:compatible}
Assume that $f_1,\cdots,f_m$ are $D$-finite over $k(\vx,t)$, $J$ is a subset of $\{1,\cdots,n\}$ and $V\subset \{x_i,{\partial}_{x_i} | i\in \{1,\cdots,n\}\setminus J\}$ with $|V|=n-|J|$. Assume further that ${\partial}_{x_j}(f_l)=0$ for all $j\in J$ and $l=1,\cdots,m$. Then one can compute a nonzero $L\in k\langle \{t,{\partial}_t\}\cup V \rangle$ such that $L(f_l)=0$ for all $l=1,\cdots,m$.
\end{cor}
\begin{proof}
In Lemma~\ref{LM:basecase}, set $I=\emptyset$.
\end{proof}
The main result of this section is the following theorem which can be viewed as a generalization of Corollary~\ref{COR:compatible} to differential forms. To describe and prove this theorem, let us recall some notation from the first chapter of \cite{weinstraub}. For any $f\in R$, we define ${\rm d}_0(f)=0$ and
\[
{\rm d}_s(f) = \partial_{x_1}(f){\rm d} x_1 + \cdots + \partial_{x_s}(f){\rm d} x_s
\]
for~$s\in \{1, 2, \ldots, n\}$.
We can extend~${\rm d}_s$ to the module $\bigwedge({\cal M})$ in a natural way. Precisely, let $\omega=\sum_{i=1}^m f_i {\mathfrak m}_i$ where ${\mathfrak m}_i$ is a monomial in ${\rm d} x_1,\cdots, {\rm d} x_n$. Then ${\rm d}_0(\omega)=0$ and
$$
{\rm d}_s(\omega)=\sum_{i=1}^m \sum_{j=1}^s {\partial}_{x_j}(f_i){\rm d} x_j\wedge {\mathfrak m}_i=\sum_{j=1}^s {\rm d} x_j \wedge {\partial}_{x_j}(\omega).
$$
By definition, one sees that
$${\rm d}_s(u\wedge {\rm d} x_s)={\rm d}_{s-1}(u)\wedge {\rm d} x_s\,\,\mbox{and}\,\,{\rm d}_s(u)={\rm d}_{s-1}(u)+{\rm d} x_s\wedge {\partial}_{x_s}(u).$$
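These identities, together with the fact that ${\rm d}_s\circ{\rm d}_s=0$, can be checked mechanically. The sketch below (our own encoding: a form is a dictionary mapping strictly increasing index tuples to SymPy coefficients) implements ${\rm d}_s$ and verifies ${\rm d}_2(u\wedge{\rm d} x_2)={\rm d}_1(u)\wedge{\rm d} x_2$ on a sample $1$-form:
\begin{verbatim}
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t')
xs = (x1, x2, x3)

def wedge_left_dx(j, mono):
    """dx_j ^ dx_mono; returns (sign, merged tuple)."""
    if j in mono:
        return 0, None
    sign = (-1) ** sum(1 for i in mono if i < j)
    return sign, tuple(sorted(mono + (j,)))

def d_s(omega, s):
    """d_s(omega) = sum_{j <= s} dx_j ^ diff(coefficient, x_j)."""
    res = {}
    for mono, f in omega.items():
        for j in range(1, s + 1):
            sign, new = wedge_left_dx(j, mono)
            if sign:
                res[new] = sp.expand(res.get(new, 0)
                                     + sign * sp.diff(f, xs[j - 1]))
    return {m: c for m, c in res.items() if c != 0}

def wedge_right_dx(omega, j):
    """omega ^ dx_j."""
    return {tuple(sorted(m + (j,))): (-1) ** sum(1 for i in m if i > j) * f
            for m, f in omega.items() if j not in m}

u = {(1,): x1 * x2 * x3 + t * x2**2, (3,): x1**2 * x3}   # a 1-form
print(d_s(wedge_right_dx(u, 2), 2) == wedge_right_dx(d_s(u, 1), 2))  # True
print(d_s(d_s({(): x1 * x2 * t}, 2), 2))                 # {}: d_s o d_s = 0
\end{verbatim}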
\begin{thm}\label{THM:ppl}
Assume that~$0\leq s \leq n$, $V\subset \{x_{s+1},\cdots,x_n, {\partial}_{x_{s+1}},\cdots,{\partial}_{x_n}\}$ with $|V|=n-s$ and $\omega \in \bigwedge^p({\cal M})$. If~${\rm d}_s \omega =0$, then one can compute a nonzero $L\in k\langle \{t, \partial_t\}\cup V\rangle$ and~$\mu \in \bigwedge^{p-1}({\cal M})$ such that~
$
L(\omega) = {\rm d}_s \mu.
$
\end{thm}
\begin{remark}
\begin{enumerate}
\item
\label{rem:form} If $p=0$, then $\omega=f\in R$, and ${\rm d}_s f=0$ if and only if either $s=0$, or $s>0$ and ${\partial}_{x_i}(f)=0$ for all $1\leq i \leq s$. Therefore Corollary~\ref{COR:compatible} is a special case of Theorem~\ref{THM:ppl}.
\item Note that the parametrized Poincar\'{e} lemma follows from the case $s=n$ of Theorem~\ref{THM:ppl}: if $L(\omega)$ is closed for some nonzero $L\in k(t)\langle {\partial}_t\rangle$, then applying the theorem to $L(\omega)$ with $s=n$ yields a nonzero $L'$ with $L'(L(\omega))$ exact, so that $\omega$ is ${\partial}_t$-exact.
\end{enumerate}
\end{remark}
\begin{proof}
We proceed by induction on~$s$. Assume that $s=0$ and write
$$
\omega=\sum_{i=1}^m f_i {\mathfrak m}_i
$$
where ${\mathfrak m}_i$ is a monomial in ${\rm d} x_1, {\rm d} x_2,\cdots, {\rm d} x_n$ and $f_i\in R$. By Corollary~\ref{COR:compatible} with $J=\emptyset$, one can compute a nonzero $L\in k\langle \{t,{\partial}_t\}\cup V\rangle$ such that
$
L(f_i)=0
$ for all $i=1,\cdots,m$.
Then one has that
$$
L(\omega)=\sum_{i=1}^m L(f_i){\mathfrak m}_i=0.
$$
This proves the base case. Now assume that the theorem holds for $s<\ell$ and consider the case $s=\ell$. Write
$$
\omega=u\wedge {\rm d} x_\ell + v
$$
where both $u$ and $v$ do not involve ${\rm d} x_\ell$. Then the assumption ${\rm d}_\ell \omega=0$ implies that
$$
{\rm d}_{\ell-1}u\wedge {\rm d} x_\ell+{\rm d}_\ell v={\rm d}_{\ell-1}u\wedge {\rm d} x_\ell+{\rm d}_{\ell-1} v+ {\rm d} x_\ell \wedge {\partial}_{x_\ell}(v)=0.
$$
Since all of ${\rm d}_{\ell-1}u, {\rm d}_{\ell-1}v, {\partial}_{x_\ell}(v)$ do not involve ${\rm d} x_\ell$, one has that
${\rm d}_{\ell-1} v=0$ and ${\rm d}_{\ell-1}(u)-{\partial}_{x_\ell}(v)=0$. By the induction hypothesis, one can compute a nonzero $\tilde{L}\in k\langle \{t, x_\ell, {\partial}_t\}\cup V\rangle$ and $\tilde{\mu}\in \bigwedge^{p-1}({\cal M})$ such that
\begin{equation}
\label{EQ:reduction1}
\tilde{L}(v)= {\rm d}_{\ell-1}(\tilde{\mu}).
\end{equation}
We claim that $\tilde{L}$ can be chosen to be free of $x_\ell$.
Write
$$\tilde{L}=\sum_{j=0}^d N_j x_\ell^j$$
where $N_j\in k\langle \{t,{\partial}_t\}\cup V\rangle$ and $N_d\neq 0$.
Multiplying $\tilde{L}$ by ${\partial}_{x_\ell}^d$ on the left and using the formula (\ref{EQ:formula2}) yield that
\begin{equation}
\label{EQ:reduction2}
{\partial}_{x_\ell}^d \tilde{L}=\sum_{j=0}^d N_j{\partial}_{x_\ell}^d x_\ell^j=\alpha N_d+\tilde{N}{\partial}_{x_\ell}
\end{equation}
where $\alpha$ is a nonzero integer and $\tilde{N}\in k\langle \{t, x_\ell,{\partial}_t,{\partial}_{x_\ell}\}\cup V\rangle$. The equalities (\ref{EQ:reduction1}) and (\ref{EQ:reduction2}) together with ${\partial}_{x_\ell}(v)={\rm d}_{\ell-1}(u)$ yield that
$
N_d(v)={\rm d}_{\ell-1}(\pi)
$ for some $\pi\in \bigwedge^{p-1}({\cal M})$.
This proves the claim. Now one has that
$$
\tilde{L}(\omega)=\tilde{L}(u)\wedge {\rm d} x_\ell + {\rm d}_{\ell-1}(\tilde{\mu})=\tilde{L}(u)\wedge {\rm d} x_\ell+{\rm d} x_\ell\wedge {\partial}_{x_\ell}(\tilde{\mu})+{\rm d}_\ell(\tilde{\mu}).
$$
Since $\tilde{L}$ is free of $x_1,\cdots,x_\ell$, $\tilde{L}{\rm d}_\ell={\rm d}_\ell \tilde{L}$. This implies that
\begin{align*}
0=\tilde{L}({\rm d}_\ell (\omega))={\rm d}_\ell(\tilde{L}(\omega))&={\rm d}_{\ell-1}(\tilde{L}(u))\wedge {\rm d} x_\ell+ {\rm d} x_\ell \wedge {\rm d}_{\ell-1}({\partial}_{x_\ell}(\tilde{\mu}))\\
&={\rm d}_{\ell-1}\left(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu})\right)\wedge {\rm d} x_\ell.
\end{align*}
Note that $\tilde{\mu}$ can always be chosen to be free of ${\rm d} x_\ell$. Hence one has that ${\rm d}_{\ell-1}(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu}))=0$. By the induction hypothesis, one can compute a nonzero $\bar{L}\in k\langle \{t, {\partial}_{x_\ell},{\partial}_t\}\cup V\rangle$ and $\bar{\mu}\in \bigwedge^{p-2}({\cal M})$ such that
\begin{equation}
\label{EQ:reduction3}
\bar{L}\left(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu})\right)={\rm d}_{\ell-1}(\bar{\mu}).
\end{equation}
Write
$$
\bar{L}=\sum_{j=e_1}^{e_2} {\partial}_{x_\ell}^j M_j
$$
where $M_j\in k\langle \{t, {\partial}_t\}\cup V\rangle$ and $M_{e_1}\neq 0$. Multiplying $\bar{L}$ by $x_\ell^{e_1}$ on the left and using the formula (\ref{EQ:formula2}) yield that
$$
x_\ell^{e_1} \bar{L}=\beta M_{e_1}+ {\partial}_{x_\ell}\tilde{M}
$$
where $\beta$ is a nonzero integer and $\tilde{M}\in k\langle \{t,{\partial}_t,{\partial}_{x_\ell}, x_\ell\}\cup V\rangle$. Hence applying $x_\ell^{e_1}$ to the equality (\ref{EQ:reduction3}), one gets that
$$
\beta M_{e_1}\left(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu})\right)={\rm d}_{\ell-1}(x_\ell^{e_1}\bar{\mu})+{\partial}_{x_\ell}\left(\tilde{M}\left(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu})\right)\right).
$$
Set $L=\beta M_{e_1}\tilde{L}$.
Then one has that
\begin{align*}
L(\omega)&=\beta M_{e_1}\left((\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu}))\wedge {\rm d} x_\ell+{\rm d}_\ell (\tilde{\mu})\right)\\
&=\left(\beta M_{e_1}\left(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu})\right)\right)\wedge {\rm d} x_\ell +{\rm d}_\ell(\beta M_{e_1}(\tilde{\mu}))\\
&={\rm d}_{\ell-1}(x_\ell^{e_1}\bar{\mu})\wedge {\rm d} x_\ell +{\partial}_{x_\ell}\tilde{M}\left(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu})\right)\wedge {\rm d} x_\ell + {\rm d}_\ell(\beta M_{e_1}(\tilde{\mu}))\\
&={\rm d}_\ell\left(x_\ell^{e_1}\bar{\mu}\wedge {\rm d} x_\ell+\tilde{M}\left(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu})\right)+\beta M_{e_1}(\tilde{\mu})\right).
\end{align*}
The last equality holds because
$${\rm d}_{\ell-1}\left(\tilde{M}\left(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu})\right)\right)=\tilde{M}{\rm d}_{\ell-1}\left(\tilde{L}(u)-{\partial}_{x_\ell}(\tilde{\mu})\right)=0.$$
\end{proof}
\begin{remark}
Lemma~\ref{LM:ppl} can be derived from the finiteness of the de Rham cohomology groups of $D$-modules in the Bernstein class. To see this, let $\omega$ be a differential $s$-form with coefficients in $R$ and let $M$ be the $D$-module generated by all coefficients of $\omega$ and all derivatives of these coefficients with respect to ${\partial}_t$. By Proposition 5.2 on page 12 of \cite{bjork}, $M$ is a $D$-module in the Bernstein class. Assume that $\omega$ is closed. Then ${\partial}_t^j(\omega)\in H_{DR}^s(M)$, the $s$-th de Rham cohomology group of $M$, for all nonnegative integers $j$. By Theorem 6.1 on page 16 of \cite{bjork}, $H_{DR}^s(M)$ is of finite dimension over $k(t)$. This implies that there are $a_0,\cdots, a_m\in k(t)$, not all zero, such that $\sum_{j=0}^m a_j {\partial}_t^j(\omega)=0$ in $H_{DR}^s(M)$, i.e. $\sum_{j=0}^m a_j {\partial}_t^j(\omega)$ is exact. This proves the existence of telescopers for ${\partial}_t$-closed differential forms. However, the proof of Theorem~\ref{THM:ppl} is constructive and provides a method to compute a telescoper whenever one exists.
\end{remark}
The proof of Theorem~\ref{THM:ppl} can be summarized as the following algorithm.
\begin{algorithm}
\label{ALG:telescopers}
Input: $\omega\in \bigwedge^{p}({\cal M})$ and $V\subset \{x_i,{\partial}_{x_i}|i=s+1,\cdots,n\}$ satisfying that ${\rm d}_s(\omega)=0$ and $|V|=n-s$ \\
Output: a nonzero $L\in k\langle \{t,{\partial}_t\}\cup V\rangle$ such that $L(\omega)={\rm d}_s(\mu)$ for some $\mu\in \bigwedge^{p-1}({\cal M})$.
\begin{enumerate}
\item If $\omega\in R$, then by Corollary~\ref{COR:compatible}, compute a nonzero $L\in k\langle \{t, {\partial}_t\}\cup V \rangle$ such that $L(\omega)=0$. Return $L$.
\item Write $\omega=u\wedge {\rm d} x_s + v$ with $u,v$ not involving ${\rm d} x_s$.
\item Call Algorithm~\ref{ALG:telescopers} with $v$ and $V\cup \{x_s\}$ as inputs and let $\tilde{L}$ be the output.
\begin{enumerate}
\item Write $\tilde{L}=\sum_{j=0}^d N_j x_s^j $ with $N_j\in k\langle \{t,{\partial}_t\}\cup V\rangle$ and $N_d\neq 0$.
\item Compute a $\tilde{\mu}\in \bigwedge^{p-1}({\cal M})$ such that $N_d(v)={\rm d}_{s-1}(\tilde{\mu})$.
\end{enumerate}
\item Write $N_d(\omega)=(N_d(u)-{\partial}_{x_s}(\tilde{\mu}))\wedge {\rm d} x_s+ {\rm d}_s(\tilde{\mu})$.
\item Call Algorithm~\ref{ALG:telescopers} with $N_d(u)-{\partial}_{x_s}(\tilde{\mu})$ and $V\cup \{{\partial}_{x_s}\}$ as inputs and let $\bar{L}$ be the output.
\item Write $\bar{L}=\sum_{j=e_1}^{e_2} {\partial}_{x_s}^j M_j$ with $M_j\in k\langle \{t,{\partial}_t\}\cup V\rangle$ and $M_{e_1}\neq 0$.
\item Return $M_{e_1}N_d$.
\end{enumerate}
\end{algorithm}
\section{The existence of telescopers}
\label{sec:existence}
It is easy to see that if a differential form is ${\partial}_t$-exact then it is ${\partial}_t$-closed.
Therefore Lemma~\ref{LM:ppl} implies that, given $\omega\in \bigwedge^p({\cal M})$, to decide whether it has a telescoper, it suffices to decide whether there is a nonzero $L\in k\langle t, {\partial}_t \rangle$ such that $L({\rm d}\omega)=0$. Suppose that
$${\rm d} \omega=\sum_{1\leq i_1<\dots<i_{p+1}\leq n} a_{i_1,\dots,i_{p+1}}{\rm d} x_{i_1}\wedge\cdots\wedge{\rm d} x_{i_{p+1}}, \,\,a_{i_1,\dots,i_{p+1}}\in {\cal U}.$$
Then $L({\rm d} \omega)=0$ if and only if $L(a_{i_1,\dots,i_{p+1}})=0$ for all $1\leq i_1<\cdots<i_{p+1}\leq n$.
So the existence problem of telescopers can be reduced to the following problem.
\begin{problem}
\label{prob1}
Given an element $f\in {\cal U}$, decide whether there exists a nonzero $L\in k\langle t,{\partial}_t\rangle$ such that $L(f)=0$.
\end{problem}
Let $P\in K\langle {\partial}_t \rangle\setminus\{0\}$ be the monic operator of minimal order such that $P(f)=0$. Then $f$ is annihilated by a nonzero $L\in k(t)\langle {\partial}_t \rangle$ if and only if $P$ is a right-hand factor of $L$, i.e. $L=QP$ for some $Q\in K\langle {\partial}_t\rangle$. A monic operator $P$ that is a right-hand factor of some nonzero operator in $k(t)\langle {\partial}_t \rangle$ will be called an $(\mathbf x,t)$-separable operator.
Problem~\ref{prob1} then is equivalent to the following one.
\begin{problem}
\label{prob2}
Given a $P\in K\langle {\partial}_t \rangle\setminus\{0\}$, decide whether $P$ is $(\mathbf x,t)$-separable.
\end{problem}
The rest of this paper is aimed at developing an algorithm to solve the above problem. Let us first investigate the solutions of $(\mathbf x,t)$-separable operators.
\begin{notation}
$$C_t:=\left\{ c\in {\cal U} \mid {\partial}_t(c)=0\right\},\,\,C_\vx:=\left\{ c\in {\cal U} \mid \forall\, x\in \vx, {\partial}_x(c)=0\right\}.$$
\end{notation}
Assume that $L\in k(t)\langle {\partial}_t \rangle\setminus\{0\}$. By Corollary 1.2.12 of \cite{singer}, the solution space of $L=0$ in ${\cal U}$ is a $C_t$-vector space of dimension ${\rm ord}(L)$. Moreover we have the following lemma.
\begin{lem}
\label{LM:solutions}
If $L\in k(t)\langle {\partial}_t \rangle\setminus\{0\}$, then the solution space of $L=0$ in ${\cal U}$ has a basis in $C_\vx$.
\end{lem}
\begin{proof}
Let $d={\rm ord}(L)>0$ and $\{v_1,\cdots,v_d\}$ be a basis of the solution space of $L=0$ in ${\cal U}$. For all $1\leq i \leq d$ and all $1\leq l \leq m$,
$$
L({\partial}_{x_l}(v_i))={\partial}_{x_l}(L(v_i))=0.
$$
Set ${\mathbf v}=(v_1,\cdots,v_d)^t$. Then for each $l=1,\cdots,m$,
$$
{\partial}_{x_l}({\mathbf v})=A_l {\mathbf v}, \,\, A_l\in \Mat_d(C_t).
$$
Since ${\partial}_{x_i}{\partial}_{x_j}({\mathbf v})={\partial}_{x_j}{\partial}_{x_i}({\mathbf v})$ and $v_1,\cdots,v_d$ are linearly independent over $C_t$, for all $1\leq i < j \leq m$,
\begin{equation}
\label{EQ:compatible}
{\partial}_{x_i}(A_j)-{\partial}_{x_j}(A_i)=A_iA_j-A_jA_i
\end{equation}
On the other hand, ${\partial}_t(A_i)=0$ for all $1\leq i \leq m$. These together with (\ref{EQ:compatible}) imply that
the system
$${\partial}_{x_1}(Y)=A_1Y, \cdots, {\partial}_{x_m}(Y)=A_mY, {\partial}_t(Y)=0$$
is integrable. Then there is an invertible matrix $G$ with entries in ${\cal U}$ satisfying this system. Let $\bar{{\mathbf v}}=G^{-1}{\mathbf v}$. As ${\partial}_t(G^{-1})=0$, $\bar{{\mathbf v}}$ is still a basis of the solution space of $L=0$ in ${\cal U}$. Furthermore, for each $i=1,\cdots,m$, we have
$$
{\partial}_{x_i}(\bar{{\mathbf v}})={\partial}_{x_i}(G^{-1}{\mathbf v})={\partial}_{x_i}(G^{-1}){\mathbf v}+G^{-1}A_i{\mathbf v}=-G^{-1}A_i{\mathbf v}+G^{-1}A_i{\mathbf v}=0.
$$
Thus $\bar{{\mathbf v}}\in C_\vx^d$.
\end{proof}
As a consequence, we have the following corollary.
\begin{cor}
\label{COR:solutions}
Assume that $P\in K\langle {\partial}_t \rangle\setminus\{0\}$. Then $P$ is $(\mathbf x,t)$-separable if and only if the solutions of $P(y)=0$ in ${\cal U}$ are of the form
\begin{equation}
\label{EQ:removableform}
\sum_{i=1}^s g_i h_i,\,\, g_i\in C_t, h_i\in C_\vx\cap \{f\in {\cal U} \mid P(f)=0\}.
\end{equation}
\end{cor}
\begin{proof}
The ``only if" part is a direct consequence of Lemma~\ref{LM:solutions}. For the ``if" part, one only needs to prove that if $h\in C_\vx\cap \{f\in {\cal U} \mid P(f)=0\}$ then $h$ is annihilated by a nonzero operator in $k(t)\langle {\partial}_t \rangle$. Suppose that $h\in C_\vx\cap\{f\in {\cal U} \mid P(f)=0\}$. Let $L$ be the monic operator in $K\langle {\partial}_t \rangle\setminus \{0\}$ which annihilates $h$ and is of minimal order. Write
$$
L={\partial}_t^\ell+\sum_{i=0}^{\ell-1} a_i {\partial}_t^i, a_i\in K.
$$
Then for every $j\in \{1,\dots,m\}$
$$
0={\partial}_{x_j}(L(h))=\sum_{i=0}^{\ell-1} {\partial}_{x_j}(a_i) {\partial}_t^i(h)+L({\partial}_{x_j}(h))=\sum_{i=0}^{\ell-1} {\partial}_{x_j}(a_i) {\partial}_t^i(h).
$$
The last equality holds because $h\in C_\mathbf x$. By the minimality of $L$, one sees that ${\partial}_{x_j}(a_i)=0$ for all $i=0,\dots,\ell-1$ and all $j=1,\dots,m$. Hence $a_i\in k(t)$ for all $i$. In other words, $L\in k(t)\langle {\partial}_t \rangle$.
\end{proof}
For convenience, we introduce the following definition.
\begin{definition}
\begin{itemize}
\item [$(1)$]
We say $f\in {\cal U}$ is split if it can be written in the form $f=gh$ where $g\in C_t$ and $h\in C_\vx$, and say $f$ is semisplit if it is a sum of finitely many split elements.
\item [$(2)$] We say a nonzero operator $P\in K\langle {\partial}_t \rangle$ is semisplit if it is monic and all its coefficients are semisplit.
\end{itemize}
\end{definition}
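For a polynomial $q\in k[\mathbf x,t]$, being split amounts to $q=u\,v$ with $u\in k[\mathbf x]$ and $v\in k[t]$ (up to constants); by unique factorization, this holds if and only if every irreducible factor of $q$ lies in $k[t]$ or in $k[\mathbf x]$. This test, which will be used below for denominators of rational functions, is easy to implement; a SymPy sketch (function name ours):
\begin{verbatim}
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')

def is_split_poly(q):
    """q in k[x][t] is split iff every irreducible factor
    involves only t or only the x-variables."""
    _, factors = sp.factor_list(q)
    return all(f.free_symbols <= {t} or t not in f.free_symbols
               for f, _ in factors)

print(is_split_poly((t**2 + 1) * (x1 + x2)))   # True
print(is_split_poly(t + x1))                   # False
\end{verbatim}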
The semisplit operators have the following property.
\begin{lem}
\label{LM:samecoefficients}
Assume that $P=Q_1Q_2$ where $P,Q_1,Q_2$ are monic operators in $K\langle {\partial}_t \rangle$. Assume further that $Q_2\in k(t)[\mathbf x,1/r]\langle {\partial}_t \rangle$ where $r\in k[\mathbf x,t]$. Then $P\in k(t)[\mathbf x,1/r]\langle {\partial}_t \rangle$ if and only if so is $Q_1$.
\end{lem}
\begin{proof}
Comparing the coefficients on both sides of $P=Q_1Q_2$ concludes the lemma.
\end{proof}
As a direct consequence, we have the following corollary.
\begin{cor}
\label{cor:semisplitoperators}
Assume that $P=Q_1Q_2$ where $P,Q_1,Q_2$ are monic operators in $K\langle {\partial}_t \rangle$. Assume further that $Q_2$ is semisplit. Then $P$ is semisplit if and only if so is $Q_1$.
\end{cor}
\subsection{The completely reducible case}
In Proposition 10 of \cite{chen-feng-li-singer}, we show that given a hyperexponential function $h$ over $K$, ${\rm ann}(h)\cap k(t)\langle {\partial}_t \rangle\neq \{0\}$ if and only if there is a nonzero $p\in k(\mathbf x)[t]$ and $r\in k(t)$ such that
$$
a=\frac{{\partial}_t(p)}{p}+r,
$$
where $a={\partial}_t(h)/h$. Remark that $a,p, r$ with $p\neq 0$ satisfy the above equality if and only if $\frac{1}{p}({\partial}_t-a)=({\partial}_t-r)\frac{1}{p}$. Under the notion of $(\mathbf x,t)$-separable and the language of differential operators, Proposition 10 of \cite{chen-feng-li-singer} states that ${\partial}_t-a$ is $(\mathbf x,t)$-separable if and only if it is similar to a first order operator in $k(t)\langle {\partial}_t\rangle$ by some $1/p$ with $p$ being nonzero polynomial in $t$. In this section, we shall generalize Proposition 10 of \cite{chen-feng-li-singer} to the case of completely reducible operators.
We shall use ${\rm lclm}(Q_1,Q_2)$ to denote the monic operator of minimal order which is divisible by both $Q_1$ and $Q_2$ on the right.
We shall prove that if $P$ is $(\mathbf x,t)$-separable and completely reducible then there is a nonzero $L\in k(t)\langle {\partial}_t\rangle$ such that $P$ is the transformation of $L$ by some $Q$ with semisplit coefficients.
To this end, we need to introduce some notations from \cite{ore}.
\begin{definition}
Assume that $P, Q\in K\langle {\partial}_t \rangle \setminus\{0\}$.
\begin{enumerate}
\item We say $\tilde{P}$ is the transformation of $P$ by $Q$ if $\tilde{P}$ is the monic operator satisfying that $\tilde{P}Q=\lambda {\rm lclm}(P,Q)$ for some $\lambda\in K$.
\item We say $\tilde{P}$ is similar to $P$ (by $Q$) if there is an operator $Q$ with ${\rm gcrd}(P,Q)=1$ such that $\tilde{P}$ is the transformation of $P$ by $Q$, where ${\rm gcrd}(P,Q)$ denotes the greatest common right-hand factor of $P$ and $Q$.
\end{enumerate}
\end{definition}
\begin{definition}
\begin{enumerate}
\item
We say $P\in K\langle {\partial}_t \rangle$ is completely reducible if it is the lclm of a family of irreducible operators in $K\langle {\partial}_t \rangle$.
\item
We say $Q\in K\langle {\partial}_t \rangle$ is the maximal completely reducible right-hand factor of $P\in K\langle {\partial}_t\rangle$ if $Q$ is the lclm of all irreducible right-hand factors of $P$.
\end{enumerate}
\end{definition}
Given a $P\in K\langle {\partial}_t \rangle$, Theorem 7 of \cite{ore} implies that $P$ has the following unique decomposition called the maximal completely reducible decomposition or the m.c.r. decomposition for short,
$$
P=\lambda H_r H_{r-1} \dots H_1
$$
where $\lambda\in K$ and $H_i$ is the maximal completely reducible right-hand factor of $H_r \dots H_i$. An $L\in k(t)\langle {\partial}_t \rangle$ has two m.c.r. decompositions, obtained by viewing it as an operator in $k(t)\langle {\partial}_t \rangle$ and as an operator in $K\langle {\partial}_t \rangle$ respectively. In the following, we shall prove that these two decompositions coincide.
For convenience, we shall denote by $P_{x_i=c_i}$ the operator obtained by replacing $x_i$ by $c_i\in k$ in $P$.
\begin{lem}
\label{LM:gcrd}
Assume that $P, L$ are two monic operators in $K\langle {\partial}_t \rangle$. Assume further that $P\in k(t)[\mathbf x,1/r]\langle {\partial}_t \rangle$ with $r\in k[\mathbf x,t]$, and $L\in k(t)\langle {\partial}_t \rangle$. Let ${\mathbf c}\in k^m$ be such that $r({\mathbf c})\neq 0$.
\begin{enumerate}
\item
If ${\rm gcrd}(P_{\mathbf x=\mathbf c}, L)=1$ then ${\rm gcrd}(P,L)=1$.
\item If ${\rm gcrd}(P,L)=1$ then there is ${\mathbf a}\in k^m$ such that $r({\mathbf a})\neq 0$ and ${\rm gcrd}(P_{\mathbf x={\mathbf a}},L)=1$.
\end{enumerate}
\end{lem}
\begin{proof}
1. We shall prove the lemma by induction on $m=|\mathbf x|$. Assume that $m=1$, and ${\rm gcrd}(P,L)\neq 1$. Then there are $M,N\in k(t)[x_1]\langle {\partial}_t \rangle$ with ${\rm ord}(M)<{\rm ord}(L)$ such that
$
MP+NL=0.
$
Write $$M=\sum_{i=0}^{n-1} a_i{\partial}_t^i, \quad N=\sum_{i=0}^s b_i{\partial}_t^i$$
where $n={\rm ord}(L)$. If the $a_i$'s have a common factor $c$ in $k(t)[x_1]$, then one sees that $c$ is a common factor of the $b_i$'s. Thus we can cancel this factor $c$. So without loss of generality, we may assume that the $a_i$'s have no common factor. This implies that $M_{x_1=c_1}\neq 0$ and $M_{x_1=c_1}P_{x_1=c_1}+N_{x_1=c_1}L=0$. Since ${\rm ord}(M_{x_1=c_1})<{\rm ord}(L)$, ${\rm gcrd}(P_{x_1=c_1}, L)\neq 1$, a contradiction. For the general case, set $Q=P_{x_1=c_1}$. Then $Q_{x_2=c_2,\dots,x_m=c_m}=P_{\mathbf x={\mathbf c}}$. This implies that ${\rm gcrd}(Q_{x_2=c_2,\dots,x_m=c_m},L)=1$. By the induction hypothesis, ${\rm gcrd}(Q,L)=1$. Finally, regarding $P$ and $L$ as operators with coefficients in $k(t,x_2,\dots,x_m)[x_1,1/r]$ and by the induction hypothesis again, we get ${\rm gcrd}(P,L)=1$.
2. Since ${\rm gcrd}(P,L)=1$, there are $M,N\in K\langle {\partial}_t \rangle$ such that $MP+NL=1$. Let ${\mathbf a}\in k^m$ be such that $r({\mathbf a})\neq 0$ and both $M_{\mathbf x={\mathbf a}}$ and $N_{\mathbf x={\mathbf a}}$ are well-defined. For such ${\mathbf a}$, one has that $M_{\mathbf x={\mathbf a}}P_{\mathbf x={\mathbf a}}+N_{\mathbf x={\mathbf a}}L=1$ and then ${\rm gcrd}(P_{\mathbf x={\mathbf a}},L)=1$.
\end{proof}
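All gcrd computations in this section can be carried out by the right Euclidean algorithm in $K\langle {\partial}_t \rangle$. The following self-contained SymPy sketch (a toy implementation of ours; computer algebra systems offer optimized versions) stores an operator as its list of rational-function coefficients $[a_0,\dots,a_d]$ of ${\partial}_t^0,\dots,{\partial}_t^d$, multiplies via ${\partial}_t\,a=a\,{\partial}_t+{\partial}_t(a)$, and computes a monic gcrd by repeated right division:
\begin{verbatim}
import sympy as sp

t, x1 = sp.symbols('t x1')

def trim(A):
    """Drop leading (highest-order) zero coefficients."""
    A = [sp.cancel(c) for c in A]
    while len(A) > 1 and A[-1] == 0:
        A.pop()
    return A

def op_mul(A, B):
    """Product in K<dt>, via dt^i*b = sum_k binom(i,k)*b^(k)*dt^(i-k)."""
    res = [sp.Integer(0)] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            der = b
            for k in range(i + 1):
                res[i + j - k] += a * sp.binomial(i, k) * der
                der = sp.diff(der, t)
    return trim(res)

def op_sub(A, B):
    n = max(len(A), len(B))
    A = A + [sp.Integer(0)] * (n - len(A))
    B = B + [sp.Integer(0)] * (n - len(B))
    return trim([a - b for a, b in zip(A, B)])

def right_rem(A, B):
    """Remainder R of the right division A = Q*B + R."""
    R = trim(A)
    while len(R) >= len(B) and R != [0]:
        d = len(R) - len(B)
        c = sp.cancel(R[-1] / B[-1])
        R = op_sub(R, op_mul([sp.Integer(0)] * d + [c], B))
    return R

def gcrd(A, B):
    """Monic greatest common right-hand factor."""
    A, B = trim(A), trim(B)
    while B != [0]:
        A, B = B, right_rem(A, B)
    return [sp.cancel(c / A[-1]) for c in A]

B0 = [-1 / t, sp.Integer(1)]              # dt - 1/t, annihilates t
A = op_mul([x1, sp.Integer(1)], B0)       # (dt + x1) * (dt - 1/t)
B = op_mul([t, sp.Integer(1)], B0)        # (dt + t)  * (dt - 1/t)
print(gcrd(A, B))                         # [-1/t, 1], i.e. dt - 1/t
\end{verbatim}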
\begin{lem}
\label{LM:mcrdecomposition}
Let $L\in k(t)\langle {\partial}_t \rangle$. The m.c.r. decompositions of $L$ viewed as an operator in $k(t)\langle {\partial}_t \rangle$ and an operator in $K\langle {\partial}_t \rangle$ respectively coincide.
\end{lem}
\begin{proof}
We first claim that an irreducible operator of $k(t)\langle {\partial}_t \rangle$ is irreducible in $K\langle {\partial}_t \rangle$. Let $P$ be a monic irreducible operator in $k(t)\langle {\partial}_t \rangle$ and assume that $Q$ is a monic right-hand factor of $P$ in $K\langle {\partial}_t \rangle$ with $1\leq {\rm ord}(Q)<{\rm ord}(P)$. Then $P=\tilde{Q}Q$ for some $\tilde{Q}\in K\langle {\partial}_t \rangle$. Suppose that $Q\in k(t)[\mathbf x,1/r]\langle {\partial}_t \rangle$. By Lemma~\ref{LM:samecoefficients}, $\tilde{Q}$ belongs to $k(t)[\mathbf x,1/r]\langle {\partial}_t \rangle$. Let ${\mathbf c}\in k^m$ be such that $r({\mathbf c})\neq 0$. Then $P=\tilde{Q}_{\mathbf x={\mathbf c}}Q_{\mathbf x={\mathbf c}}$ and $1\leq {\rm ord}(Q_{\mathbf x={\mathbf c}})\leq {\rm ord}(P)$. These imply that $P$ is reducible, a contradiction. So $P$ is irreducible and thus the claim holds.
Let $L=\lambda H_r H_{r-1}\dots H_1$ be the m.c.r. decomposition in $k(t)\langle {\partial}_t \rangle$. The above claim implies that $H_1$ viewed as an operator in $K\langle {\partial}_t \rangle$ is completely reducible. Assume that $H_1$ is not the maximal completely reducible right-hand factor of $L$ in $K\langle {\partial}_t \rangle$. Let $M\in K\langle {\partial}_t \rangle\setminus K$ be a monic irreducible right-hand factor of $L$ satisfying that ${\rm gcrd}(M,H_1)=1$. Due to Lemma~\ref{LM:gcrd}, there is ${\mathbf a}\in k^m$ satisfying that ${\rm gcrd}(M_{\mathbf x={\mathbf a}},H_1)=1$. Note that $M_{\mathbf x={\mathbf a}}$ is a right-hand factor of $L$. Therefore $M_{\mathbf x={\mathbf a}}$ has some irreducible right-hand factor of $L$ as a right-hand factor. Such an irreducible factor must be a right-hand factor of $H_1$ and thus ${\rm gcrd}(M_{\mathbf x={\mathbf a}}, H_1)\neq 1$, a contradiction. Therefore $H_1$ is the maximal completely reducible right-hand factor of $L$ in $K\langle {\partial}_t \rangle$. Using induction on the order, one sees that $\lambda H_r H_{r-1}\dots H_1$ is the m.c.r. decomposition of $L$ in $K\langle {\partial}_t \rangle$.
\end{proof}
\begin{lem}
\label{LM:similarity}
Assume that $P$ is monic, $(\mathbf x,t)$-separable and completely reducible. Assume further that $P\in k(t)[\mathbf x,1/r]\langle {\partial}_t \rangle$ with $r\in k[\mathbf x,t]$. Let ${\mathbf c}\in k^m$ be such that $r({\mathbf c})\neq 0$. Then $P_{\mathbf x=\mathbf c}$ is similar to $P$.
\end{lem}
\begin{proof}
Let $\tilde{L}$ be a nonzero monic operator in $k(t)\langle {\partial}_t \rangle$ with $P$ as a right-hand factor. Since $P$ is completely reducible, by Theorem 8 of \cite{ore}, $P$ is a right-hand factor of the maximal completely reducible right-hand factor of $\tilde{L}$. By Lemma~\ref{LM:mcrdecomposition}, the maximal completely reducible right-hand factor of $\tilde{L}$ is in $k(t)\langle {\partial}_t \rangle$. Hence we may assume that $\tilde{L}$ is completely reducible after replacing $\tilde{L}$ by its maximal completely reducible right-hand factor. Assume that $\tilde{L}=QP$ for some $Q\in K\langle {\partial}_t \rangle$. By Lemma~\ref{LM:samecoefficients}, $Q\in k(t)[\mathbf x,1/r]\langle {\partial}_t \rangle$. Then $\tilde{L}=Q_{\mathbf x=\mathbf c}P_{\mathbf x=\mathbf c}$, i.e. $P_{\mathbf x=\mathbf c}$ is a right-hand factor of $\tilde{L}$. We claim that for a right-hand factor $T$ of $\tilde{L}$, there is a right-hand factor $L$ of $\tilde{L}$ satisfying that ${\rm gcrd}(T,L)=1$ and ${\rm lclm}(T,L)=\tilde{L}$. We prove this claim by induction on $s={\rm ord}(\tilde{L})-{\rm ord}(T)$. When $s=0$, there is nothing to prove. Assume that $s>0$. Then since $\tilde{L}$ is completely reducible, there is an irreducible right-hand factor $L_1$ of $\tilde{L}$ such that ${\rm gcrd}(T,L_1)=1$. Let $N={\rm lclm}(T,L_1)$. We have that ${\rm ord}(N)={\rm ord}(T)+{\rm ord}(L_1)$. Therefore ${\rm ord}(\tilde{L})-{\rm ord}(N)<s$. By induction hypothesis, there is a right-hand factor $L_2$ of $\tilde{L}$ such that ${\rm gcrd}(N,L_2)=1$ and ${\rm lclm}(N,L_2)=\tilde{L}$. Let $L={\rm lclm}(L_1,L_2)$. Then
$$
\tilde{L}={\rm lclm}(N,L_2)={\rm lclm}(T, L_1,L_2)={\rm lclm}(T,L).
$$
Taking the order of the operators in the above equality yields that
\begin{align*}
{\rm ord}({\rm lclm}(T,L))&={\rm ord}({\rm lclm}(N,L_2))={\rm ord}(N)+{\rm ord}(L_2)\\
&={\rm ord}(T)+{\rm ord}(L_1)+{\rm ord}(L_2).
\end{align*}
On the other hand, we have
$$
{\rm ord}({\rm lclm}(T,L))\leq {\rm ord}(T)+{\rm ord}(L) \leq {\rm ord}(T)+{\rm ord}(L_1)+{\rm ord}(L_2).
$$
This implies that
$$
{\rm ord}({\rm lclm}(T,L))= {\rm ord}(T)+{\rm ord}(L).
$$
So ${\rm gcrd}(T,L)=1$ and then $L$ is a required operator. This proves the claim. Now let $L_{{\mathbf c}}$ be a right-hand factor of $\tilde{L}$ satisfying that ${\rm gcrd}(P_{\mathbf x={\mathbf c}}, L_{\mathbf c})=1$ and ${\rm lclm}(P_{\mathbf x={\mathbf c}}, L_{\mathbf c})=\tilde{L}$. Let $M\in k(t)\langle {\partial}_t \rangle$ be such that $\tilde{L}=ML_{\mathbf c}$. Then $P_{\mathbf x=\mathbf c}$ is similar to $M$. It remains to show that $P$ is also similar to $M$. Due to Lemma~\ref{LM:gcrd}, ${\rm gcrd}(P,L_{\mathbf c})=1$. Then
$${\rm ord}({\rm lclm}(P,L_{\mathbf c}))={\rm ord}(P)+{\rm ord}(L_{\mathbf c})={\rm ord}(P_{\mathbf x=\mathbf c})+{\rm ord}(L_{\mathbf c})={\rm ord}(\tilde{L}).$$
Note that ${\rm lclm}(P,L_{\mathbf c})$ is a right-hand factor of $\tilde{L}$. Hence ${\rm lclm}(P,L_{\mathbf c})=\tilde{L}$ and thus $P$ is similar to $M$.
\end{proof}
For the general case, the above lemma is not true anymore as shown in the following example.
\begin{example}
Let $y=x_1\log(t+1)+x_2\log(t-1)$ and
$$P={\partial}_t^2+\frac{(t-1)^2x_1+(t+1)^2x_2}{(t^2-1)((t-1)x_1+(t+1)x_2)}{\partial}_t.$$
Then $P$ is $(\mathbf x,t)$-separable since $\{1,y\}$ is a basis of the solution space of $P=0$ in ${\cal U}$. We claim that $P$ is not similar to $P_{\mathbf x=\mathbf c}$ for any $\mathbf c\in k^2\setminus\{(0,0)\}$. Suppose on the contrary that $P$ is similar to $P_{\mathbf x=\mathbf c}$ for some $\mathbf c=(c_1,c_2)\in k^2\setminus\{(0,0)\}$, i.e. there are $a,b\in k(\mathbf x,t)$, not both zero, such that ${\rm gcrd}(a{\partial}_t+b, P_{\mathbf x=\mathbf c})=1$ and $P$ is the transformation of $P_{\mathbf x=\mathbf c}$ by $a{\partial}_t+b$. Denote $Q=a{\partial}_t+b$. As $\{1, y_{\mathbf x=\mathbf c}\}$ is a basis of the solution space of $P_{\mathbf x=\mathbf c}=0$, $\{Q(1), Q(y_{\mathbf x=\mathbf c})\}$ is a basis of the solution space of $P=0$. In other words, there is $C\in {\rm GL}_2(C_t)$ such that
$$
\left(b, a\left(\frac{c_1}{t+1}+\frac{c_2}{t-1}\right)+by_{\mathbf x=\mathbf c}\right)=(1,y)C.
$$
Note that $\log(t+1),\log(t-1),1$ are linearly independent over $k(x_1,x_2,t)$. We have that $b\in C_t\setminus\{0\}$ and $bc_1=\tilde{c}x_1, bc_2=\tilde{c}x_2$ for some $\tilde{c}\in C_t$. This implies that $x_1/x_2=c_1/c_2\in k$, a contradiction.
\end{example}
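The two assertions in the example, $P(y)=0$ and $P(1)=0$, are easily verified mechanically, e.g. with SymPy:
\begin{verbatim}
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
y = x1 * sp.log(t + 1) + x2 * sp.log(t - 1)
a = ((t - 1)**2 * x1 + (t + 1)**2 * x2) / \
    ((t**2 - 1) * ((t - 1) * x1 + (t + 1) * x2))

def P(f):   # P = dt^2 + a*dt
    return sp.diff(f, t, 2) + a * sp.diff(f, t)

print(sp.simplify(P(y)), sp.simplify(P(sp.Integer(1))))   # 0 0
\end{verbatim}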
When the given two operators are of length two, i.e. they are the products of two irreducible operators, a criterion for the similarity is presented in \cite{li-wang}. For the general case, suppose that $P$ is similar to $P_{\mathbf x={\mathbf c}}$ by $Q$. Then the operator $Q$ is a solution of the following mixed differential equation
\begin{equation}\label{EQ:mixedequation}
Pz\equiv 0\mod P_{\mathbf x={\mathbf c}}.
\end{equation}
An algorithm for computing all solutions of the above mixed differential equation is developed in \cite{vanhoeij1}. In the following, we shall show that if $P$ is $(\mathbf x,t)$-separable then $Q$ is an operator with semisplit coefficients. Note that $Q$ can be chosen to be of order less than ${\rm ord}(P_{\mathbf x={\mathbf c}})$ and all solutions of the mixed differential equation with order less than ${\rm ord}(P_{\mathbf x={\mathbf c}})$ form a vector space over $k(\mathbf x)$ of finite dimension. Furthermore $Q$ induces an isomorphism from the solution space of $P_{\mathbf x={\mathbf c}}(y)=0$ to that of $P(y)=0$.
\begin{prop}
\label{PROP:criterion}
Assume that $P$ is monic and completely reducible. Assume further that $P\in k(t)[\mathbf x,1/r]\langle {\partial}_t \rangle$ with $r\in k[\mathbf x,t]$. Let ${\mathbf c}\in k^m$ be such that $r({\mathbf c})\neq 0$. Then $P$ is $(\mathbf x,t)$-separable if and only if $P$ is similar to $P_{\mathbf x={\mathbf c}}$ by an operator $Q$ with semisplit coefficients.
\end{prop}
\begin{proof}
Denote $n={\rm ord}(P_{\mathbf x={\mathbf c}})={\rm ord}(P)$.
Assume that $\{\alpha_1,\cdots,\alpha_n\}$ is a basis of the solution space of $P_{\mathbf x=\mathbf c}(y)=0$ in $C_\mathbf x$ and $P$ is similar to $P_{\mathbf x={\mathbf c}}$ by $Q$. Write $Q=\sum_{i=0}^{n-1} a_i {\partial}_t^i$ where $a_i\in K$. Then
$$
\left(Q(\alpha_1),\dots,Q(\alpha_n)\right)=(a_0,\dots,a_{n-1})\begin{pmatrix}\alpha_1 & \alpha_2 & \dots & \alpha_n \\
\alpha_1' & \alpha_2' & \dots & \alpha_n' \\
\vdots & \vdots & & \vdots\\
\alpha_1^{(n-1)} & \alpha_2^{(n-1)} & \dots & \alpha_n^{(n-1)}
\end{pmatrix}
$$
and $Q(\alpha_1),\dots,Q(\alpha_n)$ form a basis of the solution space of $P(y)=0$.
Now suppose that $P$ is $(\mathbf x,t)$-separable. Due to Lemma~\ref{LM:similarity}, $P$ is similar to $P_{\mathbf x={\mathbf c}}$ by $Q$.
By Corollary~\ref{COR:solutions}, the $Q(\alpha_i)$ are semisplit. The above equalities then imply that the $a_i$ are semisplit. Conversely, assume that $P$ is similar to $P_{\mathbf x={\mathbf c}}$ by $Q$ and the $a_i$ are semisplit.
It is easy to see the $Q(\alpha_i)$ are semisplit. By Corollary~\ref{COR:solutions} again, $P$ is $(\mathbf x,t)$-separable.
\end{proof}
Using the algorithm developed in \cite{vanhoeij1}, we can compute a basis of the solution space over $k(\mathbf x)$ of the equation (\ref{EQ:mixedequation}). It is clear that the solutions with semisplit entries form a subspace. We can compute a basis for this subspace as follows. Suppose that $\{Q_1,\dots,Q_\ell\}$ is a basis of the solution space of the equation (\ref{EQ:mixedequation}) consisting of solutions with order less than ${\rm ord}(P_{\mathbf x={\mathbf c}})$. We may identify $Q_i$ with a vector ${\mathbf g}_i\in K^n$ under the basis $1,{\partial}_t,\dots,{\partial}_t^{n-1}$. Let $q\in k(\mathbf x)[t]$ be a common denominator of all entries of the ${\mathbf g}_i$. Write ${\mathbf g}_i={\mathbf p}_i/q$ for each $i=1,\dots,\ell$, where ${\mathbf p}_i\in k(\mathbf x)[t]^n$. Write $q=q_1 q_2$ where $q_2$ is the product of the split irreducible factors of $q$ and $q_1$ is the product of the non-split ones. Note that a rational function in $t$ with coefficients in $k(\mathbf x)$ is semisplit if and only if its denominator is split. For $c_1,\dots, c_\ell\in k(\mathbf x)$, $\sum_{i=1}^\ell c_i {\mathbf g}_i$ is semisplit if and only if all entries of $\sum_{i=1}^\ell c_i {\mathbf p}_i$ are divisible by $q_1$. For $i=1,\dots, \ell$, let ${\mathbf h}_i$ be the vector whose entries are the remainders of the corresponding entries of ${\mathbf p}_i$ by $q_1$. Then all entries of $\sum_{i=1}^\ell c_i {\mathbf p}_i$ are divisible by $q_1$ if and only if $\sum_{i=1}^\ell c_i{\mathbf h}_i=0$. Let ${\mathbf c}_1,\dots,{\mathbf c}_s$ be a basis of the solution space of $\sum_{i=1}^\ell z_i {\mathbf h}_i=0$. Then $\{(Q_1,\dots,Q_\ell) {\mathbf c}_i\mid i=1,\dots,s\}$ is the required basis. Consequently, the required basis can be computed by solving the system of linear equations $\sum_{i=1}^\ell z_i{\mathbf h}_i=0$.
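The following toy SymPy computation illustrates this linear-algebra step (the instance and all names are ours): we take $\ell=2$ candidates with one-dimensional coefficient vectors, common denominator $q=q_1q_2$ with non-split part $q_1=t+x$ and split part $q_2=t+1$, and numerators $p_1=1$ and $p_2=t+x$, so that only the second candidate is semisplit:
\begin{verbatim}
import sympy as sp

t, x = sp.symbols('t x')

q1 = t + x                       # non-split part of the denominator
p = [sp.Integer(1), t + x]       # numerators p_i of g_i = p_i/(q1*(t+1))

# reduce each numerator modulo q1 and collect t-coefficients
rows = [sp.Poly(sp.rem(pi, q1, t), t).all_coeffs() for pi in p]
m = max(len(r) for r in rows)
M = sp.Matrix([[0] * (m - len(r)) + r for r in rows]).T
print(M.nullspace())   # [Matrix([[0], [1]])]: only g_2 is semisplit
\end{verbatim}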
In the following, for simplicity of notation, we assume that $\{Q_1,\dots,Q_\ell\}$ is a basis of the solution space of the equation (\ref{EQ:mixedequation}) consisting of solutions with semisplit coefficients. By Proposition~\ref{PROP:criterion} and the definition of similarity, $P$ is $(\mathbf x,t)$-separable if and only if there is a nonzero $\tilde{Q}$ in the space spanned by $Q_1,\dots,Q_\ell$ such that ${\rm gcrd}(P_{\mathbf x={\mathbf c}}, \tilde{Q})=1$. Note that $\tilde{Q}$ induces a homomorphism from the solution space of $P_{\mathbf x={\mathbf c}}(y)=0$ to that of $P(y)=0$. Moreover, one can easily see that ${\rm gcrd}(P_{\mathbf x={\mathbf c}},\tilde{Q})=1$ if and only if $\tilde{Q}$ is an isomorphism, i.e., $\tilde{Q}(\alpha_1),\dots,\tilde{Q}(\alpha_n)$ form a basis of the solution space of $P(y)=0$, where $\{\alpha_1,\dots,\alpha_n\}$ is a basis of the solution space of $P_{\mathbf x={\mathbf c}}(y)=0$. Assume that $\tilde{Q}=\sum_{i=0}^{n-1} a_{0,i} {\partial}_t^i$ with $a_{0,i}\in K$. Using the relation $P_{\mathbf x={\mathbf c}}(\alpha_j)=0$ with $j=1,\dots,n$, one has that for all $j=1,\dots,n$
$$
\tilde{Q}(\alpha_j)'=\left(\sum_{i=0}^{n-1} a_{0,i} \alpha_j^{(i)}\right)'=\sum_{i=0}^{n-1} a_{1,i} \alpha_j^{(i)}
$$
for some $a_{1,i}\in K$. Repeating this process, we can compute $a_{l,i}\in K$ such that for all $j=1,\dots,n$ and $l=1,\dots, n-1$,
$$
\tilde{Q}(\alpha_j)^{(l)}=\sum_{i=0}^{n-1} a_{l,i} \alpha_j^{(i)}.
$$
Now suppose that $\tilde{Q}=\sum_{i=1}^\ell z_i Q_i$ with $z_i\in k(\mathbf x)$. One sees that the $a_{l,i}$ are linear in $z_1,\dots,z_\ell$. Set $A({\mathbf z})=(a_{i,j})_{0\leq i,j\leq n-1}$ with ${\mathbf z}=(z_1,\dots,z_\ell)$. Then one has that
\begin{equation}
\label{EQ:transformation}
A({\mathbf z})\begin{pmatrix}
\alpha_1 & \dots & \alpha_n \\
\vdots & & \vdots \\
\alpha_1^{(n-1)}& \dots & \alpha_n^{(n-1)}
\end{pmatrix}=\begin{pmatrix}
\tilde{Q}(\alpha_1) & \dots & \tilde{Q}(\alpha_n) \\
\vdots & & \vdots \\
\tilde{Q}(\alpha_1)^{(n-1)} & \dots & \tilde{Q}(\alpha_n)^{(n-1)}
\end{pmatrix}.
\end{equation}
It is well-known that $\tilde{Q}(\alpha_1),\dots,\tilde{Q}(\alpha_n)$ form a basis if and only if the right-hand side of the above equality is a nonsingular matrix, and thus if and only if $A({\mathbf z})$ is nonsingular. Consequently, one can reduce the problem of the existence of $\tilde{Q}$ satisfying ${\rm gcrd}(\tilde{Q},P_{\mathbf x={\mathbf c}})=1$ to the problem of the existence of ${\mathbf a}\in k^\ell$ such that $\det(A({\mathbf a}))\neq 0$.
Suppose now that we have an operator $Q$ with semisplit coefficients such that $P$ is similar to $P_{\mathbf x={\mathbf c}}$ by $Q$. Write $Q=\sum_{i=0}^{n-1} b_i {\partial}_t^i$ where $b_i\in K$ is semisplit. Write further $b_i=\sum_{j=1}^s h_{i,j}\beta_j$ where $h_{i,j}\in k(\mathbf x)$ and $\beta_j\in k(t)\setminus \{0\}$. Let $L_0=P_{\mathbf x=\mathbf c}$ and let $L_i$ be the transformation of $L_{i-1}$ by ${\partial}_t$ for $i=1,\cdots,n-1$. Then $L_i$ annihilates $\alpha_j^{(i)}$ for all $j=1,\cdots,n$ and $L_i \frac{1}{\beta_l}$ annihilates $\beta_l \alpha_j^{(i)}$ for all $l=1,\dots,s$ and $j=1,\dots,n$. Set
$$L={\rm lclm}\left(\left\{L_i \frac{1}{\beta_l}\mid i=0,\dots,n-1, l=1,\dots,s\right\}\right).$$
Then $L$ annihilates all $Q(\alpha_i)$ and thus has $P$ as a right-hand factor. We summarize the previous discussion as the following algorithm.
\begin{algorithm}
\label{ALG:completelyreducible}
Input: $P\in K\langle {\partial}_t \rangle$ that is monic and completely reducible.\\[2mm]
Output: a nonzero $L\in k(t)\langle {\partial}_t \rangle$ which is divisible by $P$ on the right if such an operator exists, and otherwise 0.
\begin{enumerate}
\item Write
$$P={\partial}_t^n + \sum_{i=0}^{n-1}\frac{a_i}{r}{\partial}_t^i$$
where $a_i\in k(t)[\mathbf x], r\in k[\mathbf x,t]$.
\item Pick $\mathbf c\in k^m$ such that $r(\mathbf c)\neq 0$. By the algorithm in \cite{vanhoeij1}, compute a basis of the solution space $V$ of the equation (\ref{EQ:mixedequation}).
\item Compute a basis of the subspace of $V$ consisting of operators with semisplit coefficients, say $Q_1,\cdots,Q_\ell$.
\item Set $\tilde{Q}=\sum_{i=1}^\ell z_i Q_i$ and using $\tilde{Q}$, compute the matrix $A({\mathbf z})$ as in (\ref{EQ:transformation}).
\item If $\det(A({\mathbf z}))=0$ then return 0 and the algorithm terminates. Otherwise compute ${\mathbf a}=(a_1,\dots,a_\ell)\in k^\ell$ such that $\det(A({\mathbf a}))\neq 0$.
\item Set $b_i$ to be the coefficient of ${\partial}_t^i$ in $\sum_{j=1}^\ell a_j Q_j$ and write $b_i=\sum_{j=1}^s h_{i,j} \beta_j$ where $h_{i,j}\in k(\mathbf x)$ and $\beta_j\in k(t)$. Let $L_0=P_{\mathbf x=\mathbf c}$ and for each $i=1,\cdots,n-1$ compute $L_i$, the transformation of $L_{i-1}$ by ${\partial}_t$.
\item Return ${\rm lclm}\left(\left\{ L_i \frac{1}{\beta_j} \mid i=0,\dots,n-1, j=1,\dots,s\right\}\right)$.
\end{enumerate}
\end{algorithm}
\subsection{The general case}
Assume that $P$ is $(\mathbf x,t)$-separable and $P=Q_1Q_2$ where $Q_1,Q_2\in K\langle {\partial}_t \rangle$. It is clear that $Q_2$ is also $(\mathbf x,t)$-separable. One may wonder whether $Q_1$ is also $(\mathbf x,t)$-separable. The following example shows that $Q_1$ may not be $(\mathbf x,t)$-separable.
\begin{example} Let $K=k(x,t)$ and let $P={\partial}_t^2$. Then $P$ is $(\mathbf x,t)$-separable and
$${\partial}_t^2=\left({\partial}_t+\frac{x}{xt+1}\right)\left({\partial}_t-\frac{x}{xt+1}\right).$$
The operator ${\partial}_t+x/(xt+1)$ is not $(\mathbf x,t)$-separable, because $1/(xt+1)$ is one of its solutions and it is not semisplit.
\end{example}
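This factorization and the claimed solution can be verified directly by composing the two first-order factors via ${\partial}_t\,a=a\,{\partial}_t+{\partial}_t(a)$; a quick SymPy check (throwaway code of ours):
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')
a = x / (x * t + 1)

def compose(p, q):
    """(dt + p) o (dt + q) = dt^2 + (p + q)*dt + (p*q + q')."""
    return [sp.simplify(p * q + sp.diff(q, t)),
            sp.simplify(p + q), sp.Integer(1)]

print(compose(a, -a))    # [0, 0, 1], i.e. the product is dt^2

# the left factor dt + a annihilates 1/(x*t+1), which is not semisplit:
y = 1 / (x * t + 1)
print(sp.simplify(sp.diff(y, t) + a * y))    # 0
\end{verbatim}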
However, the lemma below shows that if $Q_2$ is semisplit then $Q_1$ is also $(\mathbf x,t)$-separable.
\begin{lem}
\label{LM:composition}
\begin{itemize}
\item [$(1)$]
Assume that $Q_1,Q_2\in K\langle {\partial}_t \rangle\setminus\{0\}$, and $Q_2$ is semisplit. Then $Q_1Q_2$ is $(\mathbf x,t)$-separable if and only if both $Q_1$ and $Q_2$ are $(\mathbf x,t)$-separable.
\item [$(2)$]
Assume that $P\in K\langle {\partial}_t\rangle\setminus\{0\}$ and $L$ is a nonzero monic operator in $k(t)\langle {\partial}_t \rangle$. Then $P$ is $(\mathbf x,t)$-separable if and only if so is the transformation of $P$ by $L$.
\end{itemize}
\end{lem}
\begin{proof}
Note that the solution space of ${\rm lclm}(P_1,P_2)=0$ is spanned by those of $P_1=0$ and $P_2=0$. Hence ${\rm lclm}(P_1,P_2)$ is $(\mathbf x,t)$-separable if and only if so are both $P_1$ and $P_2$.
$(1)$ For the ``only if" part, one only needs to prove that $Q_1$ is $(\mathbf x,t)$-separable.
Assume that $g$ is a solution of $Q_1=0$ in ${\cal U}$. Let $f$ be a solution of $Q_2(y)=g$ in ${\cal U}$. Such $f$ exists because ${\cal U}$ is the universal differential extension of $K$. Then $f$ is a solution of $Q_1Q_2=0$ in ${\cal U}$. By Corollary~\ref{COR:solutions}, $f$ is semisplit. Since $Q_2$ is semisplit, one sees that $g=Q_2(f)$ is semisplit. By Corollary~\ref{COR:solutions} again, $Q_1$ is $(\mathbf x,t)$-separable.
Now assume that both $Q_1$ and $Q_2$ are $(\mathbf x,t)$-separable. Let $\tilde{Q}\in K\langle {\partial}_t \rangle$ be such that $\tilde{Q}Q_2=L$ where $L\in k(t)\langle {\partial}_t \rangle$ is monic. By Corollary~\ref{cor:semisplitoperators} and the ``only if" part, $\tilde{Q}$ is semisplit and $(\mathbf x,t)$-separable. Thus ${\rm lclm}(Q_1,\tilde{Q})$ is $(\mathbf x,t)$-separable. Assume that ${\rm lclm}(Q_1,\tilde{Q})=N\tilde{Q}$ with $N\in K\langle {\partial}_t \rangle$. Since $\tilde{Q}$ is semisplit, by the ``only if" part again, $N$ is $(\mathbf x,t)$-separable. Let $M\in K\langle {\partial}_t \rangle$ be such that $MN$ is a nonzero operator in $k(t)\langle {\partial}_t \rangle$. We have that
$$
M{\rm lclm}(Q_1,\tilde{Q})Q_2=MN\tilde{Q}Q_2=MNL\in k(t)\langle {\partial}_t \rangle.
$$
On the other hand, $M{\rm lclm}(Q_1,\tilde{Q})Q_2=M\tilde{M}Q_1Q_2$ for some $\tilde{M}\in K\langle {\partial}_t \rangle$. Hence $P=Q_1Q_2$ is $(\mathbf x,t)$-separable.
$(2)$ Since $L$ is $(\mathbf x,t)$-separable, we have that $P$ is $(\mathbf x,t)$-separable if and only if so is ${\rm lclm}(P,L)$. Let $\tilde{P}$ be the transformation of $P$ by $L$. Then $\tilde{P}L={\rm lclm}(P,L)$. As $L$ is semisplit, the assertion then follows from $(1)$.
\end{proof}
Assume that $P$ is a nonzero operator in $K\langle {\partial}_t\rangle$. Let $P_0$ be an irreducible right-hand factor of $P$. By Algorithm~\ref{ALG:completelyreducible}, we can decide whether $P_0$ is $(\mathbf x,t)$-separable or not. Now assume that $P_0$ is $(\mathbf x,t)$-separable. Then we can compute a nonzero monic operator $L_0\in k(t)\langle {\partial}_t \rangle$ having $P_0$ as a right-hand factor. Let $P_1$ be the transformation of $P$ by $L_0$. Lemma~\ref{LM:composition} implies that $P$ is $(\mathbf x,t)$-separable if and only if so is $P_1$. Note that
\begin{align*}
{\rm ord}(P_1)&={\rm ord}({\rm lclm}(P,L_0))-{\rm ord}(L_0)\\
&\leq {\rm ord}(P)+{\rm ord}(L_0)-{\rm ord}(P_0)-{\rm ord}(L_0)={\rm ord}(P)-{\rm ord}(P_0).
\end{align*}
In other words, ${\rm ord}(P_1)<{\rm ord}(P)$. Replacing $P$ by $P_1$ and repeating the above process yield an algorithm to decide whether $P$ is $(\mathbf x,t)$-separable.
\begin{algorithm}
\label{ALG:generalcase}
Input: a nonzero monic $P\in K\langle {\partial}_t \rangle$.\\[2mm]
Output: a nonzero $L\in k(t)\langle {\partial}_t \rangle$ which is divisible by $P$ on the right if such an operator exists, and otherwise 0.
\begin{enumerate}
\item If $P=1$ then return 1 and the algorithm terminates.
\item Compute an irreducible right-hand factor $P_0$ of $P$ by algorithms developed in \cite{beke,vanderput-singer,vanhoeij2}.
\item Apply Algorithm~\ref{ALG:completelyreducible} to $P_0$ and let $L_0$ be the output.
\item If $L_0=0$ then return 0 and the algorithm terminates. Otherwise compute the transformation of $P$ by $L_0$, denoted by $P_1$.
\item Apply Algorithm~\ref{ALG:generalcase} to $P_1$ and let $L_1$ be the output.
\item Return $L_1L_0$.
\end{enumerate}
\end{algorithm}
The termination of the algorithm is obvious. Assume that $L_1\neq 0$. Then $L_1=Q_1P_1$ for some $Q_1\in K\langle {\partial}_t \rangle$. We have that
$P_1L_0={\rm lclm}(P,L_0)$. Therefore $$L_1L_0=Q_1P_1L_0=Q_1{\rm lclm}(P,L_0)=Q_1Q_0P$$
for some $Q_0\in K\langle {\partial}_t \rangle$. This proves the correctness of the algorithm.
\bibliographystyle{abbrv}
Toroidal dipole is an elementary electromagnetic excitation that can
be visualized as currents flowing along minor loops of an infinitesimal
torus (poloidal currents) \cite{Afa95}. The radiation pattern of
a dynamic toroidal dipole is identical to that of a dynamic electric
dipole - a pair of opposite (oscillating) charges \cite{Afa95}.
The experimental exploration of the physical properties of toroidal
excitations became possible only recently, with the advent of electromagnetic
metamaterials, and is now attracting considerable attention\cite{Papasimakis16}.
The aim of this short note is to demonstrate that whilst the radiation
pattern of electric and toroidal dipoles may be identical, the power
emitted by these two excitations does scale differently with the ambient
refractive index. This question has already been explored in the context
of spontaneous decay rates of mesoscopic sources \cite{Tkalya02}.
Here we offer a much simpler exposition for point-like classical sources.
\section{Power radiated by a point-like electric dipole embedded into an isotropic
dielectric medium}
The current density of a point-like electric dipole with moment $\vec{p}$,
located at the origin, is given by \cite{Afa95}:
\[
\vec{J}_{p}=\frac{d\vec{p}}{dt}\delta^{\left(3\right)}\left(\vec{r}\right)=\dot{\vec{p}}\delta^{\left(3\right)}\left(\vec{r}\right)
\]
In the case of a time-harmonic current (angular frequency $\omega$; assumed
time-dependence $\exp\left(i\omega t\right)$), the vector potential
due to $\vec{J}_{p}=i\omega\vec{p}\delta^{\left(3\right)}\left(\vec{r}\right)$
is\footnote{Hereinafter we shall only consider the fields away from the source.
The source term, i.e. a delta-function, that appears in the complete
solution for the vector potential is therefore omitted.} \cite{jackson}:
\[
\vec{A}_{p}=\frac{\mu_{0}}{4\pi}\int d^{3}r'\frac{\exp\left(-ik_{0}n\left|\vec{r}-\vec{r}'\right|\right)}{\left|\vec{r}-\vec{r}'\right|}\vec{J}_{p}=\frac{i\omega\mu_{0}}{4\pi}\,\frac{\exp\left(-ik_{0}nr\right)}{r}\,\vec{p}
\]
Where $k_{0}=\omega/c$ is the free-space wave-number, $n$ is the
refractive index of the medium, $c$ is the speed of light, and $\mu_{0}$
is the vacuum permeability. The far-field electric ($\vec{E}_{p}$),
and magnetic ($\vec{B}_{p}$) fields are given by:
\begin{flalign*}
\lim_{r\to\infty}\vec{B}_{p}= & \lim_{r\to\infty}\vec{\nabla}\times\vec{A}_{p}=\frac{i\omega\mu_{0}}{4\pi}\lim_{r\to\infty}\vec{\nabla}\times\frac{\exp\left(-ik_{0}nr\right)}{r}\,\vec{p}\\
= & \frac{i\omega\mu_{0}}{4\pi}\left(-ik_{0}n\vec{\hat{r}}\right)\times\frac{\exp\left(-ik_{0}nr\right)}{r}\,\vec{p}\\
= & \frac{\omega^{2}\mu_{0}n}{4\pi c}\,\left(\vec{\hat{r}}\times\vec{p}\right)\,\frac{\exp\left(-ik_{0}nr\right)}{r}\\
\lim_{r\to\infty}\vec{E}_{p}= & \lim_{r\to\infty}\frac{c^{2}}{i\omega n^{2}}\vec{\nabla}\times\vec{B}_{p}=\frac{c^{2}}{i\omega n^{2}}\lim_{r\to\infty}\vec{\nabla}\times\vec{\nabla}\times\vec{A}_{p}\\
= & \frac{c^{2}\mu_{0}}{4\pi n^{2}}\lim_{r\to\infty}\vec{\nabla}\times\vec{\nabla}\times\frac{\exp\left(-ik_{0}nr\right)}{r}\,\vec{p}\\
= & \frac{c^{2}\mu_{0}}{4\pi n^{2}}\left(-ik_{0}n\vec{\hat{r}}\right)\times\left(-ik_{0}n\vec{\hat{r}}\right)\times\frac{\exp\left(-ik_{0}nr\right)}{r}\,\vec{p}\\
= & \frac{-\mu_{0}\omega^{2}}{4\pi}\left(\vec{\hat{r}}\times\vec{\hat{r}}\times\vec{p}\right)\,\frac{\exp\left(-ik_{0}nr\right)}{r}
\end{flalign*}
The radiated power is given by the integral of the radial component
of the time-averaged Poynting vector ($\langle\vec{S}\rangle=\frac{1}{2\mu_{0}}\Re\left(\vec{E}\times\vec{B}^{\dagger}\right)$)
over the surface of an origin-centered sphere with radius $R\to\infty$:
\begin{flalign*}
P_{p}= & \lim_{R\to\infty}\oint_{R}d^{2}r\,\vec{\hat{r}}.\left(\frac{1}{2\mu_{0}}\Re\left(\vec{E}_{p}\times\vec{B}_{p}^{\dagger}\right)\right)=\frac{1}{2\mu_{0}}\,\frac{-\mu_{0}\omega^{2}}{4\pi}\,\frac{\omega^{2}\mu_{0}n}{4\pi c}\int d^{2}\Omega\,\vec{\hat{r}}.\left(\left(\vec{\hat{r}}\times\vec{\hat{r}}\times\vec{p}\right)\times\left(\vec{\hat{r}}\times\vec{p}^{\dagger}\right)\right)\\
P_{p}= & \frac{-\omega^{4}\mu_{0}n}{32\pi^{2}c}\,\left(\frac{-8\pi}{3}\left|\vec{p}\right|^{2}\right)=\frac{\mu_{0}\omega^{4}}{12\pi c}\cdot n\cdot\left|\vec{p}\right|^{2}
\end{flalign*}
Thus the power radiated by the electric dipole is linearly proportional
to the refractive index of the medium ($n$).
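The angular integral used above follows from the pointwise identity $\vec{\hat{r}}.\left(\left(\vec{\hat{r}}\times\vec{\hat{r}}\times\vec{p}\right)\times\left(\vec{\hat{r}}\times\vec{p}^{\dagger}\right)\right)=\left(\vec{\hat{r}}.\vec{p}\right)\left(\vec{\hat{r}}.\vec{p}^{\dagger}\right)-\left|\vec{p}\right|^{2}$ together with $\int d^{2}\Omega\,\left(\vec{\hat{r}}.\vec{\hat{p}}\right)^{2}=4\pi/3$. A short Monte-Carlo sanity check (a NumPy script of ours, for a real-valued $\vec{p}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.3, -1.2, 0.7])           # arbitrary real dipole moment

# uniformly distributed unit vectors r
r = rng.normal(size=(200_000, 3))
r /= np.linalg.norm(r, axis=1, keepdims=True)

A = np.cross(r, np.cross(r, p))          # r x (r x p)
B = np.cross(r, p)                       # r x p
integrand = np.einsum('ij,ij->i', r, np.cross(A, B))

print(4 * np.pi * integrand.mean())      # ~ -16.92 (Monte Carlo)
print(-8 * np.pi / 3 * (p @ p))          # exact: -16.922...
\end{verbatim}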
\section{Power radiated by a point-like toroidal dipole embedded into an isotropic
dielectric medium}
The current density of a point-like toroidal dipole with moment $\vec{T}$,
located at the origin, is given by \cite{Afa95}:
\[
\vec{J}_{T}=\vec{\nabla}\times\vec{\nabla}\times c\vec{T}\delta^{\left(3\right)}\left(\vec{r}\right)
\]
The corresponding vector potential is\footnote{One can invoke integration by parts to transfer the derivatives from
the delta function to the Green function, e.g. $\int d^{3}r'\,G\left(\vec{r}-\vec{r}'\right)\vec{\nabla}'\times\vec{M}\left(\vec{r}'\right)=-\int d^{3}r'\,\vec{\nabla'}\times G\left(\vec{r}-\vec{r}'\right)\vec{M}\left(\vec{r}'\right)=\vec{\nabla}\times\int d^{3}r'\,G\left(\vec{r}-\vec{r}'\right)\vec{M}\left(\vec{r}'\right)$
if $\vec{M}$ vanishes at the boundaries.}:
\[
\vec{A}_{T}=\frac{\mu_{0}}{4\pi}\int d^{3}r'\frac{\exp\left(-ik_{0}n\left|\vec{r}-\vec{r}'\right|\right)}{\left|\vec{r}-\vec{r}'\right|}\vec{J}_{T}=\frac{\mu_{0}}{4\pi}\,\vec{\nabla}\times\vec{\nabla}\times c\vec{T}\frac{\exp\left(-ik_{0}nr\right)}{r}
\]
The far-field electric ($\vec{E}_{T}$), and magnetic ($\vec{B}_{T}$)
fields are given by\footnote{Since for toroidal dipole the charge density is $\rho_{T}=\vec{\nabla}.\vec{J}_{T}/\left(-i\omega\right)=0$,
one can use $\vec{E}_{T}=-i\omega\vec{A}_{T}$}:
\begin{flalign*}
\lim_{r\to\infty}\vec{B}_{T}= & \lim_{r\to\infty}\vec{\nabla}\times\vec{A}_{T}=\frac{\mu_{0}}{4\pi}\lim_{r\to\infty}\vec{\nabla}\times\vec{\nabla}\times\vec{\nabla}\times\frac{\exp\left(-ik_{0}nr\right)}{r}\,c\vec{T}\\
= & \frac{\mu_{0}}{4\pi}\left(-ik_{0}n\vec{\hat{r}}\right)\times\left(-ik_{0}n\vec{\hat{r}}\right)\times\left(-ik_{0}n\vec{\hat{r}}\right)\times\frac{\exp\left(-ik_{0}nr\right)}{r}\,c\vec{T}\\
= & i\frac{\omega^{3}\mu_{0}n^{3}}{4\pi c^{3}}\,\left(\vec{\hat{r}}\times\vec{\hat{r}}\times\vec{\hat{r}}\times c\vec{T}\right)\,\frac{\exp\left(-ik_{0}nr\right)}{r}\\
\lim_{r\to\infty}\vec{E}_{T}= & \lim_{r\to\infty}-i\omega\vec{A}_{T}=\frac{-i\omega\mu_{0}}{4\pi}\lim_{r\to\infty}\vec{\nabla}\times\vec{\nabla}\times\frac{\exp\left(-ik_{0}nr\right)}{r}\,c\vec{T}\\
= & \frac{-i\omega\mu_{0}}{4\pi}\left(-ik_{0}n\vec{\hat{r}}\right)\times\left(-ik_{0}n\vec{\hat{r}}\right)\times\frac{\exp\left(-ik_{0}nr\right)}{r}\,c\vec{T}\\
= & i\frac{\mu_{0}\omega^{3}n^{2}}{4\pi c^{2}}\left(\vec{\hat{r}}\times\vec{\hat{r}}\times c\vec{T}\right)\,\frac{\exp\left(-ik_{0}nr\right)}{r}
\end{flalign*}
The radiated power:
\begin{flalign*}
P_{T}= & \lim_{R\to\infty}\oint_{R}d^{2}r\,\vec{\hat{r}}.\left(\frac{1}{2\mu_{0}}\Re\left(\vec{E}_{T}\times\vec{B}_{T}^{\dagger}\right)\right)\\
= & \frac{1}{2\mu_{0}}\,\frac{\mu_{0}\omega^{3}n^{2}}{4\pi c^{2}}\,\frac{\omega^{3}\mu_{0}n^{3}}{4\pi c^{3}}\int d^{2}\Omega\,\vec{\hat{r}}.\left(\left(\vec{\hat{r}}\times\vec{\hat{r}}\times c\vec{T}\right)\times\left(\vec{\hat{r}}\times\vec{\hat{r}}\times\vec{\hat{r}}\times c\vec{T}^{\dagger}\right)\right)\\
P_{T}= & \frac{\omega^{6}\mu_{0}n^{5}}{32\pi^{2}c^{5}}\,\left(\frac{8\pi}{3}\left|c\vec{T}\right|^{2}\right)=\frac{\mu_{0}\omega^{6}}{12\pi c^{3}}\cdot n^{5}\cdot\left|\vec{T}\right|^{2}
\end{flalign*}
Thus the power radiated by the toroidal dipole is proportional to
the fifth power of the refractive index of the medium.
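Analogously, the toroidal angular integral reduces, via the identity $\vec{\hat{r}}\times\vec{\hat{r}}\times\vec{\hat{r}}\times\vec{T}=-\vec{\hat{r}}\times\vec{T}$, to $+\frac{8\pi}{3}\left|\vec{T}\right|^{2}$. The companion NumPy check (again our own script, real-valued $\vec{T}$) confirms both the identity and the value of the integral:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T = np.array([0.5, 0.1, -0.9])           # arbitrary real toroidal moment

r = rng.normal(size=(200_000, 3))
r /= np.linalg.norm(r, axis=1, keepdims=True)

triple = np.cross(r, np.cross(r, np.cross(r, T)))
print(np.abs(triple + np.cross(r, T)).max())   # ~ 1e-16

A = np.cross(r, np.cross(r, T))          # r x (r x T)
integrand = np.einsum('ij,ij->i', r, np.cross(A, triple))
print(4 * np.pi * integrand.mean())      # ~ +8.96 (Monte Carlo)
print(8 * np.pi / 3 * (T @ T))           # exact: 8.964...
\end{verbatim}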
\section{Conclusion}
It has been demonstrated that the power emitted by a point-like electric
dipole scales linearly with the refractive index of the ambient environment,
whereas the emission of a point-like toroidal dipole scales as the
fifth power of the ambient refractive index.
Machine Learning with \acp{DNN} is a popular tool and highly successful in solving tasks in many fields of application~\cite{Szegedy2015}.
However, the associated increase of \ac{DNN} complexity in terms of the number of network layers or of neurons in each layer severely complicates the understanding of how \acp{DNN} solve their learned task \cite{Yosinski2015}.
To improve the explainability of \acp{DNN}, we transfer methods for analyzing complex and opaque systems from the field of neuroscience, which has been studying the brain over decades.
In this work, we focus on adapting how brain activity is typically visualized as a topographic map.
For example, brain activity recorded through \ac{EEG} measurements \cite{Makeig2009} is represented as a top view of the head with a superimposed topographic map of neural activity~\cite{maurer2012atlas}.
Adapting this approach to be applicable to \acp{DNN} can help to visualize and understand their internal processes more intuitively, too.
However, in contrast to the brain, \acp{DNN} typically do not have an informative order of neurons because there are no connections between neurons within the same layer.
Therefore, to be able to visualize activations of neurons in \acp{DNN} as topographic maps, we research techniques to layout the neurons in a two-dimensional space in which neurons of similar activity are in the vicinity of each other.
The idea of introducing a topographic neuron layout in artificial neural networks is not novel.
\acp{SOM} \cite{Kohonen1988} follow a similar motivation and already constrain the neurons to form a topographical layout during training.
However, traditional \acp{SOM} are only shallow neural networks, and more recent approaches to training deep \acp{SOM} \cite{liu2015deep} have not gained popularity yet.
Most \ac{DL} models that are used in practice are implemented without a topographical layout of the neurons.
Our aim is to provide the possibility to create a topographic layout visualization for any \ac{DNN}, particularly those that are already trained and potentially deployed in the real world.
In this work, we introduce and compare different methods to obtain a topographic layout of neurons in the same layer of a \ac{DNN}.
Moreover, we demonstrate how to use the resulting visualization as topographic maps to analyze \acp{DNN}.
To this end, we show how to visually compare the activations of \ac{DNN}-based classifiers between different classes and how to derive potential reasons for erroneous predictions from this comparison.
In addition, we demonstrate how to visualize biases that are encoded in the representations of pre-trained \ac{DNN} models.
Our novel visualization technique improves the transparency of algorithmic decision-making systems that use \acp{DNN}.
In addition, our visual representation of complex activation patterns of \acp{DNN} is interpretable without expert-knowledge in Machine Learning and is therefore accessible to a broad audience.
In this work, we particularly focus on the visualization of internal representations of \acp{DNN}.
Fixing models or mitigating biases is out of the scope of this work, but detecting errors with our technique allows existing strategies for improving the model to be applied in a more targeted manner.
\section{Related Work}
\subsection{Deep Neural Network Analysis}
Getting insight into the internal structures and processes of trained \acp{DNN} is crucial because such models work as black-boxes \cite{Yosinski2015}.
Consequently, researchers proposed various methods for visualizing and analyzing \acp{DNN}.
These approaches are often best suitable for image data \cite{Zeiler2014,Selvaraju2017,Bach2015} because input images and the corresponding model explanations are visually interpretable by a human.
Popular \ac{DNN} explanation techniques aim to visualize the learned features of the model \cite{Yosinski2015,Erhan2009,Mordvintsev2015}, highlighting prediction-relevant values in the input \cite{Erhan2009,Simonyan2013, Zeiler2014} or analyzing the learned representations \cite{Alain2017,Kim2018,Morcos2018a,krug2021snaps}.
\subsubsection{Feature Visualization}
Feature visualization aims to explain the internal structures of a trained \ac{DL} model.
The connection weights between the neurons of a \ac{DNN} are the trained parameters of the model.
Therefore, inspecting these weights can help to understand which patterns the trained model responds to.
In \acp{MLP}, the weight values of connections from the input to a single neuron in the first hidden layer can be directly visualized as a heat map \cite{Osindero2008}.
For two-dimensional data like images, the resulting heat maps are two-dimensional images, too.
For \acp{CNN}, plotting the weights is not a useful visualization because these models apply weights as convolution operations with filters that are typically very small, for example, $3\times3$.
Hence, the visualization of the weights is a tiny image and difficult to interpret.
Alternatively, \ac{CNN} features can be visualized by plotting the feature maps, that is, all activation values of the same convolutional filter.
However, feature maps only show which parts of the input the filter responds to but do not visualize the detected pattern \cite{Yosinski2015}.
To further investigate which pattern is detected by the convolutional filter, an artificial input which maximally activates this filter can be created through optimization \cite{Yosinski2015,Erhan2009,Mordvintsev2015}.
Using a data example or random values as initial input, its values are updated by an optimization algorithm to maximize the activation values of the feature map of interest.
This approach for \ac{CNN} feature visualization is also referred to as \ac{AM}.
If inputs are optimized without constraints, they can appear unrealistic to a human and hence be hard to interpret.
Therefore, \ac{AM} is typically performed with regularization techniques that penalize unnatural inputs \cite{Mordvintsev2015}.
While regularization makes optimized inputs more realistic, it can decrease the faithfulness of the pattern visualization if the optimization overemphasizes the similarity of the optimized input to natural data.
\subsubsection{Saliency Maps}
To explain the output of a \ac{DL} model for an individual input example, the influence of input values on the output can be quantified.
Attribution techniques aim to compute the relevance of each input value for the output of the \ac{DNN} \cite{Erhan2009,Simonyan2013, Zeiler2014,Springenberg2015, Selvaraju2017, Kindermans2018,Schulz2019}.
The relevance values are then visualized as a superimposed heat map on the input.
Commonly, the relevance heat maps are referred to as saliency maps \cite{Simonyan2013}.
Due to the complexity of \acp{DNN}, it is infeasible to compute the relevance values exactly.
Therefore, various attribution techniques have been suggested that, for example, compute gradients of the output with respect to the input values \cite{Erhan2009}, combine gradient information with activation values \cite{Selvaraju2017} or decompose the output \cite{Bach2015}.
Saliency maps are only interpretable if the input data are visually interpretable themselves because the saliency values are superimposed on the input.
Hence, saliency maps are particularly suitable for image data and data that can be interpreted as image, for example, audio data as spectrograms \cite{Becker2018,Thuillier2018,Perotin2019}.
Attribution techniques are convenient due to the ease of visually interpreting them but have disadvantages, as well.
Because saliency maps are computed for individual examples only, it is difficult to draw conclusions about how the model performs its task in general.
Moreover, some attribution techniques can be misleading because their relevance computation is not strongly enough related to the output \cite{Adebayo2018, Nie2018, Sixt2020}.
\subsubsection{Data Representation Analysis}
To investigate how the \ac{DNN} processes the input data in general, the model-internal representations of the data can be analyzed.
To this end, the hidden layer activations for the complete data set or a subset of it are computed and further evaluated in-depth.
For example, training linear models to classify the representations in a hidden layer indicates how well particular properties are encoded in this layer.
The classification targets of the linear models can be the targets of the \ac{DNN} itself \cite{Alain2017} or any user-defined property by providing groups of inputs \cite{Kim2018}.
A more general approach than investigating linear separability is to analyze the representational similarity between groups of related examples.
Previous work used \ac{PCA} \cite{Fiacco2019}, \ac{CCA} \cite{Morcos2018a} or clustering techniques \cite{Nagamine2015, krug2021snaps} to find co-activated neurons or to compare representations of different groups of examples.
Our technique analyzes activations as well, but in contrast to existing approaches, it allows comparing high-dimensional representations by visual inspection.
Explaining \acp{DNN} by investigating their activations only gives insight into model behavior for groups that are contained in the used test data.
\subsection{Bias detection in DNNs}
\acp{DNN} are prone to reproducing or emphasizing biases that are contained in the data used to train them.
Different approaches to detect and mitigate bias have been introduced.
For example, \citet{sweeney2013discrimination} shows racial discrimination in the online ad delivery by Google and \citet{bolukbasi2016man} debias commonly used word embeddings.
More recently, researchers investigate biases in modern transformer-based models like BERT \cite{devlin2019bert}, for example focusing on gender bias \cite{li2021detecting} or ethnic bias \cite{ahn-oh-2021-mitigating}.
Discrimination in facial recognition algorithms is evaluated by \citet{pmlr-v81-buolamwini18a} and they introduced the Gender Shades data set with a balanced distribution of gender and skin type.
Similarly, \citet{karkkainenfairface} introduced the Fair Face data set which is balanced for gender, race and age categories.
In this work, we perform racial bias detection as an exemplary use case of our novel visualization technique.
Different to \citet{pmlr-v81-buolamwini18a} or \citet{karkkainenfairface}, who investigate bias in the output and the performance of algorithmic decision systems, we focus on bias in the representations of the individual layers of a \ac{DNN}.
\section{Methods}
In this section, we describe our proposed pipeline to compute the topographic activation maps.
This includes obtaining the hidden layer representations of groups of examples, computing the layout of the topographic maps and visualizing the activations according to the layout.
A visual abstract of the pipeline is shown in \autoref{fig:methodsummary}.
The implementation is publicly available on \url{https://github.com/andreaskrug/ANN-topomaps}.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{figures/method_overview.pdf}
\caption{Visual summary of the computation of topographic activation maps. A: obtain NAPs to characterize the DNN activations for the groups of interest. B: compute a layout in which similarly activated neurons are in the vicinity of each other. C: scale the coordinates in both dimensions to a range from 0 to 1. D: apply interpolation to achieve a continuous coloring.}
\label{fig:methodsummary}
\end{figure}
\subsection{Hidden Layer Representations}
\label{sec:naps}
The topographic layout computation is based on neuron activation similarity.
Hence, the initial step is to obtain values that describe the \ac{DNN} activity for the groups of interest.
A straight-forward strategy is to use the stacked \ac{DNN} activations for multiple data examples.
However, the computational effort is high because the dimensionality of the stacked activation values increases with the number of provided examples and the activations do not specifically represent the investigated groups.
Therefore, we use an averaging approach with normalization, adapted from \citet{krug2021snaps}.
For each group, we compute the average activations in the layer of interest and normalize the result by subtracting the average activation over all groups (\autoref{fig:methodsummary}A).
For the layout computation, we stack the obtained values (\autoref{fig:methodsummary}B) to obtain a $N \times G$ matrix, where $N$ and $G$ denote the number of neurons and groups, respectively.
Following the terminology by \citet{krug2021snaps}, we refer to the resulting matrix as the \ac{NAP} of the layer.
Different from the original \ac{NAP} approach, we only use random subsets of examples from each investigated group to be able to efficiently apply our method to larger data sets.
Moreover, we omit the authors' proposed alignment procedure because we only use image data sets in which the objects are already centered.
Neurons in \acp{CNN} are arranged as feature maps and neurons in the same feature map detect a common feature.
Therefore, we characterize the feature map activations instead of each individual neuron.
In \acp{CNN}, we average and normalize each feature map across the groups.
Then, we flatten each averaged feature map of size $w \times h$ to a $(w \cdot h)$-dimensional vector and stack the vectors of all groups.
The resulting $(w \cdot h \cdot G)$-dimensional vector characterizes the activations of one feature map.
Finally, the \ac{NAP} of the layer is obtained by stacking these vectors for all feature maps, resulting in an $N \times (w \cdot h \cdot G)$ matrix.
\subsection{Topographic Map Layout}
\label{sec:layouting}
To compute the layout of the topographic maps, we distribute the neurons of a hidden layer in a two-dimensional space.
In general, we aim to compute a layout in which neurons of similar activity are in the vicinity of each other (\autoref{fig:methodsummary}B).
While using the same layout for all groups, we aim for activation similarity of nearby neurons in each individual group.
\subsubsection{Self-Organizing Map (SOM)}
We use the MiniSom\footnote{https://github.com/JustGlowing/minisom} package for Python to compute a \ac{SOM} \cite{Kohonen1988} layout of the neurons based on their activations.
For a layer of $N$ neurons, we compute a square \ac{SOM} of shape $d \times d$ with $d=\lfloor \sqrt{N}+1 \rfloor$ such that there can potentially be one \ac{SOM} position per neuron.
We train the \ac{SOM} for 10 epochs with the \acp{NAP} as training data and using the default parameters of the MiniSom package.
Then, we assign each neuron the coordinate of the \ac{SOM} position whose weights are most similar to the \ac{NAP} values of the neuron.
However, multiple neurons can match best to the same position in the trained \ac{SOM} and hence will be indistinguishable in the layout.
Therefore, for each set of neurons that share the same coordinate, we distribute the neurons uniformly on a circle centered at the coordinate assigned to the neurons.
To ensure that the redistributed neurons are still closer to each other than to other neurons, considering that the \ac{SOM} coordinates are integer-valued, we choose a circle radius of $0.2$.
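A condensed sketch of this procedure is given below; it assumes the MiniSom API described above, and the circle-based redistribution is our own illustrative code.
\begin{verbatim}
import numpy as np
from minisom import MiniSom

def som_layout(nap, epochs=10, radius=0.2, seed=0):
    n = nap.shape[0]
    d = int(np.sqrt(n)) + 1              # square SOM with d*d >= n cells
    som = MiniSom(d, d, nap.shape[1], random_seed=seed)
    som.train(nap, epochs * n)           # roughly 10 passes over the rows
    coords = np.array([som.winner(row) for row in nap], dtype=float)
    # Spread neurons sharing a winning cell uniformly on a circle.
    for cell in np.unique(coords, axis=0):
        idx = np.where((coords == cell).all(axis=1))[0]
        if len(idx) > 1:
            angles = 2 * np.pi * np.arange(len(idx)) / len(idx)
            coords[idx, 0] += radius * np.cos(angles)
            coords[idx, 1] += radius * np.sin(angles)
    return coords
\end{verbatim}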
\subsubsection{Co-Activation Graph}
The co-activation graph approach follows the idea of layouting a graph structure in which nodes and edges represent neurons and their activation similarity, respectively.
We first compute the pairwise Cosine similarity of neurons according to their \ac{NAP} values.
We then create a graph with one node corresponding to each neuron.
For each of the $7.5\%$ most similar pairs of neurons, we draw an edge between the corresponding nodes in the graph.
We empirically choose the distance threshold based on the connectedness of the graphs for several \ac{MLP} models and layers.
The resulting graph can have separated subsets of nodes, which leads to large gaps in the layout.
To avoid these gaps, we further ensure that the entire graph is connected.
To this end, we first identify all maximal subsets of nodes where a path exists between all nodes, called connected components.
Then, we link each smaller connected component to the largest one by drawing an edge between the most similar pair of neurons out of the two components.
Finally, we layout the connected graph with the force-directed Fruchterman Reingold algorithm \cite{fruchterman1991graph} from the NetworkX package \cite{schult2008exploring}.
From the obtained graph, we use the node coordinates as the topographic layout of the neurons.
For brevity, we refer to the co-activation graph technique as the ``graph'' method.
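The construction can be sketched with SciPy and NetworkX as follows (threshold handling simplified; helper names are ours).
\begin{verbatim}
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

def graph_layout(nap, edge_fraction=0.075, seed=0):
    n = nap.shape[0]
    dist = squareform(pdist(nap, metric='cosine'))
    iu = np.triu_indices(n, k=1)              # all pairs (i < j)
    threshold = np.quantile(dist[iu], edge_fraction)
    keep = dist[iu] <= threshold              # 7.5% most similar pairs
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_edges_from(zip(iu[0][keep], iu[1][keep]))
    # Link each smaller connected component to the largest one via
    # the most similar cross-component pair of neurons.
    comps = sorted(nx.connected_components(g), key=len, reverse=True)
    main = list(comps[0])
    for comp in comps[1:]:
        comp = list(comp)
        sub = dist[np.ix_(comp, main)]
        i, j = np.unravel_index(sub.argmin(), sub.shape)
        g.add_edge(comp[i], main[j])
    pos = nx.spring_layout(g, seed=seed)      # Fruchterman-Reingold
    return np.array([pos[i] for i in range(n)])
\end{verbatim}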
\subsubsection{Dimensionality Reduction}
We test popular dimensionality reduction methods
to project the high-dimensional neuron activation data into a lower-dimensional space while preserving most information.
\acf{PCA} \cite{doi:10.1080/14786440109462720, Hotelling1933AnalysisOA, Jolliffe} is a traditional unsupervised technique for dimensionality reduction.
The method linearly transforms the data points into a new coordinate system in which the new coordinates are the principal components, which are typically obtained from a \ac{SVD} of the data matrix.
We use the first and second principal components for projecting the data into two dimensions because these components capture the highest explained variance.
We use the \ac{PCA} function from the decomposition module of Scikit-learn (sklearn) \cite{pedregosa2011scikit}.
\ac{tSNE} was first introduced by \citet{JMLR:v9:vandermaaten08a}.
Like \ac{PCA}, \ac{tSNE} is an unsupervised algorithm but the projection is non-linear.
\ac{tSNE} optimizes the pairwise similarities in the low-dimensional space to be similar to those in the original high-dimensional data using a cost function.
We use the \ac{tSNE} implementation from the sklearn manifold module.
For stability of the \ac{tSNE} algorithm, we initialize the learned embedding with \ac{PCA}.
\ac{UMAP} \cite{mcinnes2020umap} is a recent non-linear dimensionality reduction algorithm.
The authors claim that \ac{UMAP} preserves the global structure better than \ac{tSNE}.
However, there is also counter-evidence by \citet{Kobak2019.12.19.877522} who showed that, given the same initialization, \ac{UMAP} does not perform substantially better than \ac{tSNE}.
We create a two-dimensional \ac{UMAP} projection with the python module umap-learn\footnote{https://github.com/lmcinnes/umap}.
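All three projections reduce the \ac{NAP} rows (one per neuron) to two dimensions; a sketch using the packages named above:
\begin{verbatim}
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap

def project_2d(nap, method='umap', seed=0):
    if method == 'pca':
        return PCA(n_components=2).fit_transform(nap)
    if method == 'tsne':
        # PCA initialization stabilizes the stochastic t-SNE embedding.
        return TSNE(n_components=2, init='pca',
                    random_state=seed).fit_transform(nap)
    return umap.UMAP(n_components=2, random_state=seed).fit_transform(nap)
\end{verbatim}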
\subsubsection{\ac{PSO}}
PSO \cite{488968, 699146, 870279} is a biologically inspired metaheuristic algorithm used to search for optimal solutions.
It assigns each data point a particle in the solution space and searches for the optimal solution by moving the particles based on simple mathematical formulas.
Each individual particle follows simple local rules but many particles that influence each other create a complex structure.
For our approach, we initialize a swarm of $N$ particles in a two-dimensional solution space, where each particle corresponds to a neuron.
With the \ac{PSO} we aim to optimize two aspects that we encourage with designated update rules.
First, similarly active neurons shall be in the vicinity of each other.
Second, the neuron density shall be consistent in the layout, without gaps or clusters of neurons.
To achieve activation similarity of neighboring particles, we introduce a global force which is computed based on the actual neuron activation values.
The global force encourages particles of similar neurons to attract each other while activation dissimilarity repels the corresponding particles:
\begin{equation} \label{eq:global_force}
f_{glob} = attr - rep \quad\quad \textrm{with} \quad attr = a \cdot \left( 1 - \dfrac{dist}{\max(dist)^3} \right) \quad \textrm{and} \quad rep = b \cdot e^{-(dist/c)}
\end{equation}
In \autoref{eq:global_force}, $f_{glob}$ = global force, $attr$ = global attraction, $rep$ = global repulsion, $dist$ = Cosine distance matrix of the \acp{NAP}.
For the global force, we set the weight parameters to $a = 1.5, b = 0.5, c = 2$.
To obtain a well-distributed layout, we use a local force that only depends on the particle coordinates.
Like the global force, it consists of an attraction and a repulsion term.
However, in the local force, attraction closes gaps in the layout by penalizing large distances between pairs of particles and repulsion avoids that two particles occupy the same position:
\begin{equation} \label{eq:local_force}
f_{loc} = attr - rep \quad\quad \textrm{with} \quad attr = a \cdot \left( \dfrac{1}{(dist + 1)^3} \right) \quad \textrm{and} \quad rep = b \cdot e^{-(dist/c)}
\end{equation}
In \autoref{eq:local_force}, $f_{loc}$ = local force, $attr$ = local attraction, $rep$ = local repulsion, $dist$ = pairwise Euclidean distances between particles.
The values that we use for the weight parameters of the local force are $a = 1.5, b = 15, c = 2$.
We optimize the \ac{PSO} for $T=1000$ steps by updating the coordinates according to the weighted average of global and local force (\autoref{eq:combined_force}).
In early steps $t$, we use a high global force weight $w_g$ to encourage the activation similarity of neighboring particles and then gradually increase the local force weight $w_l$ to better distribute the particles in the space.
\begin{equation}
\label{eq:combined_force}
f=\frac{1}{2} \cdot \left( w_g \cdot f_{glob} + w_l \cdot f_{loc} \right) \quad\quad w_l(t) = \frac{1}{2} \cdot \left( \frac{e^{s(t)}-e^{-s(t)}}{e^{s(t)}+e^{-s(t)}}+1 \right) = 1-w_g(t) \quad\textrm{with } s(t) = \frac{9 \cdot t}{1000}-3
\end{equation}
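The following sketch illustrates one possible implementation of the update loop defined by \autoref{eq:global_force}--\autoref{eq:combined_force}; the step size and the conversion of pairwise forces into particle displacements are our own assumptions and are not prescribed by the equations.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pso_layout(nap, steps=1000, step_size=0.01, seed=0):
    a_g, b_g, c_g = 1.5, 0.5, 2.0    # global force weights (Eq. 1)
    a_l, b_l, c_l = 1.5, 15.0, 2.0   # local force weights (Eq. 2)
    n = nap.shape[0]
    coords = np.random.default_rng(seed).uniform(-1, 1, size=(n, 2))
    d_act = squareform(pdist(nap, metric='cosine'))
    f_glob = (a_g * (1 - d_act / d_act.max() ** 3)
              - b_g * np.exp(-d_act / c_g))
    for t in range(steps):
        d_xy = squareform(pdist(coords)) + np.eye(n)  # no self-forces
        f_loc = a_l / (d_xy + 1) ** 3 - b_l * np.exp(-d_xy / c_l)
        w_l = 0.5 * (np.tanh(9 * t / 1000 - 3) + 1)   # Eq. (3)
        force = 0.5 * ((1 - w_l) * f_glob + w_l * f_loc)
        # A positive net force pulls particle i toward particle j.
        diff = coords[None, :, :] - coords[:, None, :]
        coords += step_size * (force[:, :, None]
                               * diff / d_xy[:, :, None]).sum(axis=1)
    return coords
\end{verbatim}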
\subsubsection{PSO with non-random initialization}
The \ac{PSO} method with random initialization needs careful balancing of the weight parameters of global and local attraction and repulsion.
To require less fine-tuning of parameters, we investigate a variant of the \ac{PSO}.
Instead of optimizing the activation similarity with the global force, we compute an initial similarity-based layout with another method.
We then only use the local force to further optimize the resulting layout with \ac{PSO}.
As the local force is independent of the activation similarities, the \ac{PSO} is only used to equally distribute the neurons in the two-dimensional space.
We use the same parameters as for the \ac{PSO} method with random initialization except for setting $w_g=0$ (\autoref{eq:combined_force}) in all optimization steps.
Using either the UMAP, TSNE, graph, SOM or PCA method to initialize the PSO, we call the hybrid methods UMAP\_PSO, TSNE\_PSO, graph\_PSO, SOM\_PSO and PCA\_PSO.
\subsection{Visualization}
Finally, we use the \ac{NAP} values (Section \ref{sec:naps}) and the layout coordinates (Section \ref{sec:layouting}) to create topographic map images.
To be able to compare different layouts, we first scale the layout coordinates such that in both dimensions the minimum value is $0$ and the maximum value is $1$ (\autoref{fig:methodsummary}C).
Then, we assign each layout coordinate a color according to the \ac{NAP} value of the corresponding neuron for one group.
We choose this color by mapping the \ac{NAP} values to a symmetric continuous color scale, where blue represents $-\max(|NAP|)$, white represents $0$, and red represents $+\max(|NAP|)$.
Then, we linearly interpolate the colors with a resolution of $100 \times 100$~px (\autoref{fig:methodsummary}D).
We use the same interpolation resolution for all methods because of the applied coordinate scaling.
Equal colors in topographic maps of different groups represent the same \ac{NAP} value, but the colors can correspond to different values in each experiment or layer.
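The rendering step can be sketched as follows, using SciPy's \texttt{griddata} for the linear interpolation (our choice of routine; helper names are ours).
\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

def topographic_map(coords, nap_column, resolution=100):
    # Scale both layout dimensions to [0, 1].
    coords = (coords - coords.min(axis=0)) / np.ptp(coords, axis=0)
    xs, ys = np.meshgrid(np.linspace(0, 1, resolution),
                         np.linspace(0, 1, resolution))
    # Linearly interpolate the NAP values of one group onto the grid.
    return griddata(coords, nap_column, (xs, ys), method='linear')

# Plot with a symmetric blue-white-red scale, for example:
# vmax = np.abs(nap).max()
# plt.imshow(grid, cmap='bwr', vmin=-vmax, vmax=vmax)
\end{verbatim}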
\section{Experimental Setup}
This section describes the experiments that we perform to evaluate the quality of the topographic maps obtained with the proposed layouting methods.
To select the technique with the best quality, we use a simple data set and a shallow model.
In Section~\ref{sec:appl}, we demonstrate that the selected method is also applicable to more complex models and data sets.
\subsection{Data and Models}
\label{sec:datamodel}
We first test our method with MNIST \cite{lecun-mnisthandwrittendigit-2010}, a common benchmark data set for Machine Learning.
MNIST contains grayscale images of handwritten digits from $0$ to $9$, which are of size $28 \times 28$~px, centered and normalized in scale.
There are $60,000$ and $10,000$ training and test data examples, respectively.
We train a simple \ac{MLP} and \ac{CNN} on the MNIST data set.
The \ac{MLP} has one fully-connected hidden layer of 128 neurons and uses \ac{ReLU} \cite{nair2010rectified} activation.
The input images are flattened before providing them to the model.
The \ac{CNN} has two 2D-convolutional layers with kernel size 3 $\times$ 3, stride 2 and 128 filters, both using \ac{ReLU} activation.
The fully-connected classification layer takes the flattened feature maps of the second convolutional layer as input.
During training, we use dropout and spatial dropout for fully-connected and convolutional layers, respectively, with a dropout rate of 0.5.
An overview of the architectures is given in the Appendix \autoref{tab:models}.
Both models are trained for 20 epochs using the Adam optimizer \cite{Kingma2014} with default parameters and categorical cross-entropy as the loss function.
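For reference, the two architectures can be written in Keras roughly as follows; the exact placement of the dropout layers is our reading of the description above.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_mlp():
    return tf.keras.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(128, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(10, activation='softmax')])

def build_cnn():
    return tf.keras.Sequential([
        layers.Conv2D(128, 3, strides=2, activation='relu',
                      input_shape=(28, 28, 1)),
        layers.SpatialDropout2D(0.5),
        layers.Conv2D(128, 3, strides=2, activation='relu'),
        layers.SpatialDropout2D(0.5),
        layers.Flatten(),
        layers.Dense(10, activation='softmax')])

# model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(x_train, y_train, epochs=20)
\end{verbatim}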
\subsection{Evaluation measures}
\paragraph{Qualitative criteria}
\label{sec:eval:qual}
Our technique aims to provide a comparative visual overview of the representations of groups in a \ac{DNN}.
The topographic maps are supposed to be easy to visually compare and they shall be perceived as similar to topographic maps in neuroscience.
We consider the topographic maps to be visually similar to their neuroscientific inspiration if they have a round shape, contain no regions without neurons and show distinguishable regions of similar activity in all groups.
To achieve comparability between topographic maps, they shall have a similar shape and size and they need to be discriminable for dissimilar groups.
We qualitatively evaluate these expectations by manual inspection.
\paragraph{Quantitative evaluation metrics}
In addition to the manual inspection, we quantify the quality of the topographic maps.
To test whether each position in a topographic map is similar to its neighborhood, we apply a Gaussian blur to the topographic map image and compute the \ac{MSE} between the original and the blurred image.
However, this metric strongly penalizes boundaries between regions, although clear boundaries can be beneficial for distinguishing regions from each other.
Therefore, we use a second metric based on image resizing with bicubic interpolation because it preserves edges better than using Gaussian blur.
For this metric, we downscale the images, upscale them to the original size again and compute the \ac{MSE} between the upscaled and the original image.
To not bias the quality metric based on the choice of a specific parameter, we compute the quality with different parameters for the radius of the Gaussian blur and the downscaling size.
We use Gaussian blur with ten different radii, ranging from 2~px to 20~px in steps of 2~px and investigate ten different downscaling sizes for the original topomap image of size $300 \times 300$~px from $55 \times 55$~px down to $10 \times 10$~px in steps of $5$~px (see an example in \autoref{fig:quantexample}).
To aggregate the results for the different parameter choices while interpolating the values for parameters in between, we use an estimated \ac{AUC} value.
Using the parameters in the order of increasing effect of image alteration, we consider the resulting \ac{MSE} values as function values and apply the trapezoidal rule with width 1 to estimate the \ac{AUC}.
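Both measures and their AUC aggregation can be sketched with Pillow and NumPy (helper names are ours):
\begin{verbatim}
import numpy as np
from PIL import Image, ImageFilter

def quality_auc(topomap, method='blur'):
    """MSE-based quality of a 300 x 300 topomap image (lower is better)."""
    img = Image.fromarray(topomap)
    ref = np.asarray(img, dtype=float)
    mses = []
    if method == 'blur':
        for r in range(2, 21, 2):            # blur radii 2..20 px
            altered = img.filter(ImageFilter.GaussianBlur(radius=r))
            mses.append(np.mean((np.asarray(altered, float) - ref) ** 2))
    else:
        for s in range(55, 9, -5):           # downscale sizes 55..10 px
            altered = img.resize((s, s), Image.BICUBIC)
            altered = altered.resize(img.size, Image.BICUBIC)
            mses.append(np.mean((np.asarray(altered, float) - ref) ** 2))
    # Trapezoidal rule with unit width; parameters are ordered by
    # increasing strength of the image alteration.
    return np.trapz(mses, dx=1)
\end{verbatim}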
Furthermore, we investigate the robustness of the quality of the topographic maps.
To this end, we repeat each topographic map computation 100 times given the same input.
We also test whether the topographic map quality depends on the random choice of input examples for computing the \acp{NAP} by generating topographic maps for 100 resampled \acp{NAP}.
\label{sec:eval:quant}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/example_eval_quant.pdf}
\caption{Examples of applying the blur and resize technique for quantifying topographic map quality with different Gaussian blur radii (top) and downscale sizes (bottom).}
\label{fig:quantexample}
\end{figure}
\subsection{Experimental Plan}
\label{sec:exp}
For our simplest data set and model, MNIST and MLP, we compute \acp{NAP} in the first fully-connected layer, using the 10 classes as grouping and 200 random examples per group.
We then use the resulting \acp{NAP} to compute topographic maps with each of our 11 proposed layouting methods.
\paragraph{Pre-selecting layouting methods}
\label{sec:exp:qual}
First, we pre-select a subset of techniques that satisfy the qualitative expectations of the visualization described in Section~\ref{sec:eval:qual}.
For the exemplary class ``0'', we compare the methods with respect to the formation of regions of similar activations, the visual similarity to a topographic map in neuroscience and the ease of comparability.
\paragraph{Quantitative topographic map evaluation}
\label{sec:exp:mlps}
Next, for the pre-selected methods, we investigate the quality of the resulting topographic maps in further detail.
For each of the methods, we compute the quantitative evaluation measures described in Section \ref{sec:eval:quant} and compare the methods with respect to the visual quality of the topographic maps, the variation of the quality and the runtime.
\label{sec:exp:cnns}
For \acp{CNN}, we use the complete feature map \acp{NAP} to compute the topographic map layouts but we aggregate the activation values per feature map to obtain a color for the topographic map.
This difference to \acp{MLP} might affect which layouting method produces the best topographic maps.
Therefore, we perform the same quantitative evaluation for a \ac{CNN} model trained on MNIST.
Based on the evaluation results for the \ac{MLP} and the \ac{CNN}, we pick the best method for the following experiments.
In both evaluations, we include a random baseline layout to obtain an expected lower bound on the quality of the topographic maps.
This random layout is a local force-only PSO which we initialize with random uniform values.
\paragraph{Influence of NAP averaging}
\label{sec:exp:averaging}
Finally, we evaluate whether the layouting methods require the \ac{NAP} approach to perform well.
A disadvantage of using \acp{NAP} as input to compute the layout is that it introduces weak supervision.
In addition, the layout needs to be recomputed when changing the grouping.
Being independent of the group information can potentially generalize our approach.
Therefore, we investigate whether using a small random subset of the test data without averaging and normalization can approximate a layout of similar quality as with using \acp{NAP}.
Specifically, we test two alternative ways to provide the inputs.
As one alternative, we draw a uniformly random number of examples from each class, such that the total number of examples adds up to 1000.
This way, we simulate a data set with class imbalance and keep the number of examples the same as for computing the \ac{NAP}.
The second alternative input is to draw 200 random examples per class, but only stacking the activations instead of applying averaging and normalization across the groups.
\section{Results and Discussion}
\label{sec:results}
\subsection{Pre-selecting layouting methods}
\label{sec:res:qual}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/layout_method_comparison.png}
\caption{Topographic maps for one exemplary class for all proposed layouting methods. The scatter plots show the layouted neurons with \ac{NAP} value-based coloring. Below are the resulting interpolated topographic maps. Left-most are examples of an electrode layout (top) and a topographic map (bottom) in neuroscience which we use as inspiration and qualitative target for our visualization. All layouts and colorings use the same class-based \acp{NAP} for an MNIST \ac{MLP} model as input.
}
\label{fig:layouteval}
\end{figure}
We first pre-select layouting methods that produce topographic maps which satisfy the qualitative criteria of Section~\ref{sec:eval:qual}.
Topographic maps generated with each of the 11 methods for the MNIST \ac{MLP} and the exemplary class ``0'' are shown in \autoref{fig:layouteval}.
The Figure shows the topographic maps as scatter plots to see the positions of the neurons and potential gaps in the layout as well as the interpolated visualization as the final topographic maps.
All methods are capable of distributing the neurons to form regions of similar activations.
Only the \ac{SOM} technique splits up sets of co-activated neurons into multiple regions in the layout.
This can happen because a \ac{SOM} does not penalize the similarity of distant coordinates, for example, when initializing two distant positions with similar neurons.
Another criterion is that the neurons are well-distributed in the two-dimensional space.
One reason for this criterion is that there are no empty regions in a topographic map of a brain either.
In addition, a layout with varying neuron density leads to disproportionate regions in the interpolated topographic maps.
This effect can, for example, be observed in the TSNE topographic map.
The region of highly active neurons in the center is surrounded by areas without assigned neurons.
In the interpolated image, the gaps cause the red region to be enlarged, which wrongly suggests that the highly active neurons are in the majority for this class.
We observe the strongest neuron density variation for UMAP and TSNE, and the graph method leads to high density for groups of co-activated neurons.
With the SOM and PCA methods, the neurons are well-distributed in the shown example but a higher variation of the neuron density can happen for different data, models or parameters, too.
The best distribution is achieved with the PSO method, which is almost free of gaps in the layout and the density of neurons is similar across the whole layout.
This observation is not surprising because the PSO method optimizes this property of the layout with the local force component.
For the images to be visually similar to topographic maps in neuroscience, we expect them to have a round shape.
This property is particularly well satisfied with the \ac{PSO} method, regardless of the initialization, which is again achieved by the local force.
However, when initializing the \ac{PSO} randomly, the quality of the topographic map is unreliable.
In the shown example of a PSO topographic map, we observe several neurons of low activity within regions of high activity.
This effect is likely related to the simultaneous optimization of activation similarity and neuron distribution that can interfere with each other.
This supports our idea of first layouting the neurons by activation similarity, followed by distributing the neurons with a \ac{PSO} using the local force only.
Therefore, we conclude that the PSO methods with non-random initialization are the most promising techniques.
We will use these methods for the quantitative evaluation and keep the randomly initialized \ac{PSO} for comparison, as well.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/topomap_eval.pdf}
\caption{Quantification of the topographic map quality.
Line plots: average quality values at each individual parameter of the evaluation measure.
Violin plots: AUC value distributions across 100 trials, highlighting the mean AUC value and the extrema with markers.
For A and B, the rows of each violin plot are sorted by the respective mean AUC values.
Because AUC values are error values, the layouting methods are decreasing in quality from the top to the bottom row.
C: Runtimes are mean values over 10 repeated computations per method and number of neurons.
D: The averaging influence is investigated with UMAP\_PSO-based topographic maps using different strategies for computing activation distances between neurons.
``random'' and ``balanced'' pick 1000 random examples from the MNIST data set and use the concatenated activations of all examples.
``balanced'' additionally ensures that each group is represented equally often in the set.
The ``NAPs'' approach further averages and normalizes the activations across the groups.}
\label{fig:topoeval}
\end{figure}
\subsection{Quantitative Evaluation}
\label{sec:res:mlps}
For the methods that we pre-selected through qualitative evaluation, we further quantify their topographic map quality.
The results for the first fully-connected layer of the \ac{MLP} model trained on MNIST are shown in \autoref{fig:topoeval}A.
Additionally, \autoref{fig:topoeval}C shows the corresponding runtimes of each method for different numbers of neurons.
In \acp{MLP}, all methods obtain significantly better topographic maps than the random baseline layout.
As expected, the PSO method with random initialization has the worst quality of results among the techniques; however, it is comparable to the PCA\_PSO technique.
The quality of the PSO and PCA\_PSO methods mainly differ in their robustness across trials.
While both have similar variations when using different random subsets per group as input, the PCA\_PSO layout quality is reproducible using the same input because PCA is a deterministic algorithm.
SOM\_PSO and graph\_PSO are in the medium quality range.
UMAP\_PSO and TSNE\_PSO obtain the highest quality results for the \ac{MLP} according to both of the evaluation metrics.
Like PCA\_PSO, TSNE\_PSO has a very small variation of quality when using the same inputs for computing the layout.
This is because we use a TSNE algorithm that initializes the embedding positions with a PCA.
Therefore, the result of TSNE\_PSO is robust although it is a stochastic algorithm.
In the exemplary model layer with a small number of 128 neurons, UMAP\_PSO and TSNE\_PSO have a similar runtime.
However, in wider layers with more neurons UMAP\_PSO is faster than TSNE\_PSO, for example, it needs around one minute less for a layer of 4096 neurons.
Considering that we want to apply the same method to layers with any number of neurons, we conclude that UMAP\_PSO is the best layouting method.
\paragraph{Applicability to CNNs}
\label{sec:res:cnns}
We perform the same experiment as for the \ac{MLP} for an exemplary \ac{CNN} model, as well.
The results for the second convolutional layer of the \ac{CNN} model are visualized in \autoref{fig:topoeval}B.
We observe a major difference between \ac{MLP} and \ac{CNN} for the graph\_PSO method.
For the \ac{CNN}, the graph\_PSO is as low quality as the random baseline layout.
This is due to the difficulty of choosing suitable parameters for creating the initial co-activation graph.
The distance threshold used to create the graph in the \ac{MLP} model is not suitable for the \ac{CNN} model because of the change in the distribution of distance values.
While this can be counteracted by optimizing the threshold value, it indicates that the graph\_PSO does not generalize well to different models.
The findings for the remaining methods are comparable between \ac{MLP} and \ac{CNN}.
UMAP\_PSO and TSNE\_PSO still produce results of similarly high quality.
In the used \ac{CNN}, TSNE\_PSO performs on average slightly better than UMAP\_PSO.
However, the difference is negligible and we argue that UMAP\_PSO is still the best choice, taking the faster runtime for wider layers into account.
\subsection{Influence of NAP averaging}
\label{sec:res:averaging}
Finally, using the UMAP\_PSO method and the \ac{MLP} model, we investigate whether the averaging and normalizing of the \ac{NAP} computation is necessary to achieve high-quality topographic maps.
\autoref{fig:topoeval}D shows the quality value distributions for the different ways of providing inputs to the layouting technique.
``NAPs'' indicates the default input we use in the previous section.
Instead of applying averaging and normalization, we compare stacked random examples, either with an equal number of examples of each group (``balanced'') or with enforced class imbalance (``random'').
The resulting layout quality of all three approaches does not differ substantially.
Even with providing random examples without a balanced class distribution, the quality of the layout does not deteriorate much.
This indicates that our approach allows computing a topographic map layout without providing any information about the grouping.
We expected that using the \acp{NAP} as input leads to the best quality because the values for computing the layout are the same values as we used for the coloring.
Surprisingly, drawing random examples with balanced group distribution performs slightly better than with \acp{NAP} as inputs.
We suspect that the higher dimensionality when not averaging the activations provides more information for estimating the similarity of the neuron activations and therefore supports the layout computation.
However, the higher dimensionality of the ``balanced'' approach also comes with higher computational requirements.
This particularly limits its applicability for \acp{CNN} or fully-connected layers with a large number of neurons.
In conclusion of this experiment, all three tested approaches are useful.
Picking random examples without information about the labels is suitable for unsupervised data or when testing different groupings with the same layout.
A group-balanced random choice of examples produces the highest quality if it is computationally feasible.
Using \acp{NAP} is the most generally applicable technique because it leads to high-quality topographic maps and scales well for complex models.
\section{Exemplary Applications}
\label{sec:appl}
For UMAP\_PSO, which we identified as the best layouting method in Section \ref{sec:results}, we showcase exemplary use cases of our visualization.
We demonstrate two toy applications to introduce how to use the visualization followed by a fairness application to show the real-world applicability of our technique.
\subsection{Detecting systematic annotation errors in data sets}
Topographic map visualizations can be used to identify whether classification errors are caused by wrong target annotations in the data set.
Annotation errors can occur either in training data or test data which has different effects on the topographic maps.
We demonstrate how to use our visualization technique for two toy examples, where we deliberately introduce annotation errors in either the training or test data.
\subsubsection{Toy examples design}
For the first toy example, we use the Fashion MNIST data set \cite{Xiao2017} which is similar to MNIST.
Both data sets contain the same number of training and test images, share a common image size and are grayscale.
Fashion MNIST, in contrast, contains images of 10 different types of clothing or accessories, which are more difficult object categories than handwritten digits.
In the test data, we introduce a systematic error by changing
the target class of 90\% of the examples of class ``0'' (``T-shirt/Top'') to class ``1'' (``trouser'').
Using this altered test data, we create topographic maps for a \ac{MLP} model that is trained on the original Fashion MNIST training data.
In the second toy example, we use the MNIST data set and introduce a systematic error in its training data.
Specifically, we change 90\% of the class ``0'' examples to class ``1'' and train a \ac{MLP} model on this altered training data set.
For this model, we create topographic maps for the original MNIST test data.
Both toy examples only use the shallow \ac{MLP} architecture as described in Section~\ref{sec:datamodel} and we use an unrealistically high number of mislabeled examples to facilitate the understanding of the visualization.
Nevertheless, both examples represent real-world scenarios that we discuss in the corresponding sections.
The groups of interest in both toy examples are the classes according to the test data annotations.
Because we are further interested in the errors, we separate the examples of each class into whether they are correctly predicted by the model, resulting in 20 groups of interest.
We create topographic maps for the first fully-connected layer of the respective \ac{MLP} model using \ac{NAP} values computed for 200 examples per group.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/appl61annerrors.png}
\caption{Topographic maps for correctly and wrongly classified examples using data sets with annotation errors. Top: annotation errors in the test data. Bottom: training data with annotation errors. The activation similarity is shown as a dendrogram and used to sort the classes. The shown example images are randomly chosen from the respective group, while 200 examples per group are used to compute the \ac{NAP}. The green and purple annotations highlight the pairs of topographic maps that indicate the error in the respective example.}
\label{fig:annerrors}
\end{figure}
\subsubsection{Annotation errors in the test data}
\autoref{fig:annerrors} (top) shows the topographic maps of all Fashion MNIST classes, separated into correctly and wrongly classified groups.
Two characteristics of the topographic maps indicate potential errors in the annotation of the test data.
First, erroneous test data annotations lead to a high difference of the activity between the topographic maps of correctly and wrongly classified examples of the same label.
Second, the activity of the wrongly classified group is similar to the activity of another group, that is, the class which the examples are supposed to be annotated as.
In the shown example, the first criterion is met for several classes, for example, ``bag'', ``T-shirt/top'' and ``trouser''.
Only for the wrongly classified ``trouser'' group, we observe that the topographic map is similar to the activation of correctly classified ``T-shirt/top'' images (highlighted in green).
This matches the error that we injected in the test data, that is, changing 90\% of the ``T-shirt/top'' labels to ``trouser''.
In realistic models, a dissimilarity between the topographic maps of the correctly and wrongly classified groups of the same annotation indicates that there is a distribution difference in this class.
If no other topographic map is similar, this low similarity can be related to using non-representative training data for this class or using test data with a highly different distribution.
The model can potentially be improved by including a part of the out-of-distribution examples in the training data or by introducing a new class which represents them.
In the upper left of the topographic maps in \autoref{fig:annerrors} (top), we observe a white region in all groups.
This region is not empty because the PSO distributes the neurons in a round shape.
Instead, a white region that exists in all groups corresponds to a larger subset of neurons whose activity is highly similar across the groups.
This indicates that the model is not using its full capacity, for example, because it is too complex for the given task or due to training problems like \acp{ReLU} that never activate \cite{maas2013rectifier}.
\subsubsection{Annotation errors in the training data}
\label{sec:appl:trainerrors}
To investigate training data annotation errors, we train a model using an erroneous MNIST data set.
The resulting topographic maps of correctly and wrongly classified groups using the original MNIST data set are shown in \autoref{fig:annerrors} (bottom).
Like in the previous example, we visually compare the topographic maps to identify potential errors in the training data.
However, the criteria are different from the ones for the test data.
Annotation errors in the training data lead to a high similarity of topographic maps for wrongly and correctly classified examples of the same group.
This means that the model detects similar patterns in both groups but still categorizes the examples differently, which indicates an error in the classification decision that is based on the training data annotations.
In addition, there is typically no other topographic map that is highly similar to the potential error candidate.
We observe this pattern for class ``0'' (purple highlight), which again matches our injected annotation error of changing 90\% of the ``0'' labels.
However, using the binary split into correctly and wrongly classified examples does not show which class the training examples are mislabeled as.
To identify the class of the wrong annotations, we can extend the analysis by creating a confusion-matrix-like topographic map visualization.
There, we can find that the topographic map at the ``0''-classified-as-``1'' position is highly similar to the correctly classified ``0'' examples (see Appendix \autoref{fig:confusion_matrix}).
Observing similar topographic maps of the correctly and wrongly classified examples of the same class can also give insight in realistic models.
Commonly, this pattern occurs if the model cannot discriminate between two or more classes properly.
In this case, there are multiple classes that share similar topographic maps in the correctly and wrongly classified groups.
One example can be found in the classes ``sneaker'' and ``ankle boot'' of \autoref{fig:annerrors} (top).
There are no changes in the annotations of these two classes but the topographic maps of the corresponding groups look similar.
This indicates that the classes are too similar to each other for the model to discriminate between them.
Improvements of the model can be achieved by merging the classes that are similar to each other or by increasing the model capacity to strengthen its ability to distinguish between the similar classes.
\subsection{Visualization of bias in representations}
\label{sec:appl:bias}
As a real-world example, we investigate the bias in the representations in VGG16 \cite{DBLP:journals/corr/SimonyanZ14a}, which is a pre-trained \ac{CNN} model that is commonly used as feature extractor for downstream applications like image recognition \acp{DNN}.
As test data, we use FairFace \cite{karkkainenfairface}, a balanced data set of images of people from different age groups, races and binary genders.
Here, we choose the ``race'' variable as grouping to compute the topographic maps.
Moreover, as a random baseline to compare with, we add several groups of randomly picked examples.
We investigate the representations of a pre-trained VGG16 model, obtained from the TensorFlow Keras applications\footnote{https://github.com/keras-team/keras} module, using the second maxpooling layer as an example.
In the appendix, we provide the VGG16 architecture (\autoref{tab:models}) and topographic maps of multiple layers of the network (\autoref{fig:biaslayers}).
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/appl63bias_layer6.png}
\caption{Topographic maps of VGG16 activations in the second maxpooling layer for different FairFace ``race'' categories (top) and random groups (bottom). In each row, the groups are sorted by activation similarity.}
\label{fig:bias_l6}
\end{figure}
By comparing the topographic maps between the values of the sensitive variables from FairFace, we investigate whether it is possible to visually discriminate the groups based on the representations of the VGG16 model.
If the topographic maps of the bias variable categories are easier to discriminate than topographic maps of random groups, we consider the representations to be biased.
\autoref{fig:bias_l6} shows the topographic maps for the seven ``race'' categories of the FairFace data set and of seven random groups in the second maxpooling layer of the VGG16 model.
First, we observe that it is clearly easier to discriminate the ``race'' categories than the random groups.
The topographic maps of the ``race'' categories show distinct regions of highly and weakly active neurons and are of higher contrast and color intensity than those of the random groups.
This indicates that the representations are biased towards the ``race'' variable.
Only the ``Latino-Hispanic'' topographic map can be confused with a random group, either because the input images or the model representations for this category are too heterogeneous.
We further observe that ``Indian'' and ``Black'' are perceived as particularly similar by the model.
``East Asian'' and ``Southeast Asian'' are similar to each other and to the ``White'' category.
``Middle Eastern'' and ``Latino-Hispanic'' are dissimilar to the other groups.
Because the representations are visually discriminable, they will likely affect downstream applications using the pre-trained model.
For example, a classifier that uses these pre-trained representations can easily learn different decisions for the ``White'' and ``Black'' categories because their representations differ substantially.
We emphasize, however, that this observation does not imply that a downstream application must include a racial bias.
Instead, we suggest to use the findings from the visualization to formulate hypotheses about which bias to look for.
In this specific case, the topographic map visualization indicates that there is a higher risk for learning a biased decision between the ``White'' and ``Black'' category.
Further, the observations indicate that there might be unintended correlations of the decisions for the representationally similar categories ``Indian'' and ``Black'' or ``East Asian'', ``Southeast Asian'' and ``White''.
This allows testing for biases in a targeted way instead of running a computationally expensive test for all potential biases.
\section{Conclusion}
Topographic activation maps are a promising visualization tool to get insight into the internal representations of \acp{DNN}.
Our technique simplifies high-dimensional neural activity in a hidden layer of the \ac{DNN} model into a two-dimensional visualization.
This provides a graphical overview that can be used to visually compare \ac{DNN} representations between groups of interest without being restricted to using only the output classes of the model.
Our visualization technique alone does not explicitly provide explanations of the model representations or decisions.
It still requires a human to visually interpret the results or to perform further downstream analyses, like computing feature visualization or saliency maps, to explain what the regions are responsible for.
While topographic maps are easy to interpret by visual inspection, relating the visualization to useful insight into the model requires some practice, especially for highly-complex models that are used in practical applications.
We therefore recommend to first get familiar with the technique by using toy examples before applying it to real-world models.
In future research, we will investigate how to generate explanations of the regions in our topographic map visualization.
For example, we will automatically detect group-responsive regions and perform feature visualization for the corresponding filters of the model to understand which patterns it uses to detect the respective group.
Moreover, we will research extensions of our technique to create topographic map visualizations that span multiple layers.
\section*{Acknowledgments}
This research has been funded by the Federal Ministry of Education and Research of Germany (BMBF) as part of the project “CogXAI--Cognitive neuroscience inspired techniques for eXplainable AI”.
\bibliographystyle{plainnat}
\section{Introduction}
If $n$ is prime, in view of Fermat's little theorem, the congruence $$a^{n-1}\equiv 1 \mod n$$ holds for every $a$ with gcd($a,n$)=1. There are composite numbers also satisfying the congruence. Such an odd composite number $n$ is called a pseudoprime to base $a$ (psp($a$) for short). Moreover, for an odd prime $n$, let $n-1=2^{s}d$ with $d$ odd; then we have $$a^d\equiv 1 \mod n$$ or $$a^{2^kd}\equiv -1 \mod n$$ for some $k$ satisfying $0\le k< s$. If an odd composite number $n$ satisfies one of these two congruences, we call $n$ a strong pseudoprime to base $a$ (spsp($a$) for short). This is the basis of the Rabin-Miller test\cite{MR}.
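For concreteness, the strong pseudoprime test to a single base can be sketched as follows (illustrative code, not part of the original argument):
\begin{verbatim}
def is_spsp(n, a):
    """True if odd n passes the strong test to base a."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False
\end{verbatim}
For example, is\_spsp(2047, 2) returns True although $2047=23\cdot 89$ is composite, since $2^{11}\equiv 1\mod 2047$ and $11\mid 1023$.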
Define $\psi_m$ to be the smallest strong pseudoprime to all the first $m$ prime bases. If $n<\psi_m$, then only $m$ strong pseudoprime tests are needed to find out whether $n$ is prime or not. If we know the exact value of $\psi_m$, then for integers $n<\psi_m$ there is a deterministic primality testing algorithm which is easier to understand and also faster than other known tests. The exact value of $\psi_m$ for $1\le m\le 8$ is known\cite{J,PSS}.
\begin{eqnarray*}
\psi_1 &=& 2047\\
\psi_2 &=& 1373653\\
\psi_3 &=& 25326001\\
\psi_4 &=& 32150\,31751\\
\psi_5 &=& 215\,23028\,98747\\
\psi_6 &=& 347\,47496\,60383\\
\psi_7 &=& 34155\,00717\,28321\\
\psi_8 &=& 34155\,00717\,28321
\end{eqnarray*}
In paper \cite{J}, Jaeschke also gave upper bounds for $\psi_9,\ \psi_{10},\ \psi_{11}$. These bounds were improved several times by Z. Zhang,
and he finally conjectured that
\begin{eqnarray*}
\psi_9=\psi_{10}=\psi_{11}=Q_{11}&=&3825\,12305\,65464\,13051\\
&=&149491\cdot747451\cdot34233211
\end{eqnarray*}
Zhang also gave upper bounds and conjectures for $\psi_m$, with $12\le m\le 20$ (see \cite{ZH1,ZH2,ZHT}).
In this paper, we develop several algorithms to get the following conclusion.
\begin{Clm}
$\psi_9=\psi_{10}=\psi_{11}=Q_{11}=3825\,12305\,65464\,13051.$
\end{Clm}
This article is organized as follows. In \S 2 we give notations and basic facts needed for our algorithms. In \S 3 we derive properties of the primes up to $\sqrt{Q_{11}}$, which inform the design of our algorithms. Just as in \cite{J}, we consider the number of prime divisors of the tested number.
Let $n=p_1\cdot p_2\dots p_{t}$. In \S 4 we consider $t\ge5$ and $t=4$, \S 5 treats $t=3$ and \S 6 treats $t=2$. In \S 7 we state our conclusion and give the total time needed for our algorithms.
\section{Foundations of algorithms}
In this section, we give the foundations for our algorithm. Let $p$ be a prime and $a$ an integer with gcd($a, p$)=1; denote the smallest positive integer $e$
such that $a^e\equiv 1\mod p$ by $Ord_{p}(a)$. For example, we have $Ord_{5}(2)=4$. Moreover, for any integer $n$, if $n=p^en'$ with gcd($p, n'$)=1, we denote $e$ by $Val_p(n)$. In this article, we only use $Val_p(n)$ for $p=2$, and we abbreviate it as $Val(n)$. For $v\in \mathbb{Z}^n$, $v=(a_1, \dots, a_n)$ with all gcd($a_i, p$)=1
we define $$\sigma_{p}^{v}=(Val(Ord_p(a_1)), \dots, Val(Ord_p(a_n))).$$ If $n$ is a pseudoprime (or strong pseudoprime) for all the $a_i$s, we denote it by psp($v$) (or spsp($v$)).
We need to check all odd integers less than $Q_{11}$ to see if there are strong pseudoprimes to the first nine bases.
First we are going to exclude the integers having square divisors. If $n$ is a psp($a$) and $p^2|n$ for some prime $p$,
then we have $$a^{n-1}\equiv 1\mod p^2$$ and also $$a^{p(p-1)}\equiv 1\mod p^2.$$ Hence $Ord_{p^2}(a)$ divides both $n-1$ and $p(p-1)$; as gcd($p,n-1$)=1, it divides $p-1$, so we have $$a^{p-1}\equiv 1\mod p^2.$$
For $a=2$ and 3, this means $$2^{p-1}\equiv 1\mod p^2, \qquad 3^{p-1}\equiv 1\mod p^2.$$ These two congruences do not hold simultaneously for any prime $p$ less than $3\cdot10^9$\cite{PSS}, which is greater than $\sqrt{Q_{11}}\approx 1.9\cdot10^9$, so we only need to consider squarefree integers.
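This simultaneous condition is easy to test directly; a sketch:
\begin{verbatim}
def simultaneous_wieferich(p):
    """True if 2^(p-1) and 3^(p-1) are both 1 modulo p^2."""
    m = p * p
    return pow(2, p - 1, m) == 1 and pow(3, p - 1, m) == 1
\end{verbatim}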
Now we give the following important proposition(also see \cite{J}).
\begin{Prop}
Let $n=p_1\dots p_t$ with different primes $p_1, \dots, p_t$, $v=(a_1, \dots, a_m)$ with different integers such that gcd($a_i,p_j$)=1 for all $i=1,\dots,m,\ j=1,\dots,t$. Then $n$ is an spsp(v) iff $n$ is a psp(v) and $\sigma_{p_1}^{v}=\dots=\sigma_{p_t}^{v}$.
\end{Prop}
\begin{proof}
Let $n-1=2^sd$ with $d$ odd. By Chinese Remainder Theorem $$a^{2^kd}\equiv -1 \mod n\Longleftrightarrow a^{2^kd}\equiv -1\mod p_i$$ for all $1\le i\le t$, so $Val(Ord_{p_i}(a))=k+1$ for all $i$. And $$a^{d}\equiv 1 \mod n\Longleftrightarrow a^{d}\equiv 1\mod p_i$$ for all $1\le i\le t$, so $Val(Ord_{p_i}(a))=0$ for all $i$. The proposition is an immediate consequence of the above argument.
\end{proof}
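As a spot check of Proposition 1 (reusing \texttt{is\_spsp}, \texttt{V9} and \texttt{sigma} from the sketches above), the three prime factors of $Q_{11}$ indeed share one $\sigma$-vector:
\begin{verbatim}
Q11 = 3825123056546413051
P1, P2, P3 = 149491, 747451, 34233211
assert Q11 == P1 * P2 * P3                  # Q11 is composite
assert all(is_spsp(Q11, a) for a in V9)     # Q11 is an spsp(v)
assert sigma(P1) == sigma(P2) == sigma(P3)  # as Proposition 1 requires
\end{verbatim}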
This is the main necessary condition that we use to find strong pseudoprimes. In our algorithms, $v=(2,3,5,7,11,13,17,19,23)$. For a given prime $p$, we need to find primes $q$ satisfying $\sigma_{p}^v=\sigma_{q}^v$. A problem we have to face is that there are too many candidates for $q$, so we need another proposition (see also \cite{J}).
\begin{Prop}
For primes $p,q$, if $Val(p-1)=Val(q-1)$ and $\sigma_p^{(a)}=\sigma_q^{(a)}$, then the Legendre symbols satisfy $(\frac{a}{p})=(\frac{a}{q})$.
\end{Prop}
\begin{proof}
This follows from $$\sigma_p^{(a)}=Val(p-1)\Longleftrightarrow (\frac{a}{p})=-1.$$
\end{proof}
Notice that if $p\equiv q\equiv 3 \mod 4$ in the above proposition, the converse is also true. This is important, as it allows us to use the Chinese Remainder Theorem to reduce the number of candidates. We give details in the following sections.
\section{Primes up to $\sqrt{Q_{11}}$}
From now on, we fix $v=(2,3,5,7,11,13,17,19,23)$. If $n$ is a psp($v$) and the prime $p$ divides $n$, then since $a^{n-1}\equiv1\mod p$ we have $$Ord_p(a)\,|\,(n-1),\qquad a=2,3,5,7,11,13,17,19,23.$$ Define $\lambda_p$ to be the least common multiple of these nine orders; then $$\lambda_p|(n-1),\qquad \lambda_p|(p-1).$$ This observation is helpful when designing our algorithms. Let $\mu_p=(p-1)/\lambda_p$. We developed an algorithm to calculate $\mu_p$ for all $p$ up to $\sqrt{Q_{11}}$; it takes about 15 hours and finds that $\mu_p$ is always very small. We tabulate the results in the table below.
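Before describing the table, we sketch the $\mu_p$ computation (reusing \texttt{ord\_mod} and \texttt{V9} from \S 2; the 15-hour figure refers to the full Magma run over all primes up to $\sqrt{Q_{11}}$):
\begin{verbatim}
from math import lcm

def lambda_p(p, v=V9):
    # lambda_p: lcm of the orders of the nine bases modulo p.
    return lcm(*(ord_mod(a, p) for a in v))

def mu_p(p):
    return (p - 1) // lambda_p(p)

# Spot-check against the first entry of the mu_p = 2, p = 3 (mod 4) row:
assert mu_p(18191) == 2
\end{verbatim}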
In the table, for each value of $\mu_p$, we give the first and last several primes. There are two rows with $\mu_p=2$, one for $p\equiv 3\mod 4$ and the other for $p\equiv 1\mod 4$. The binary row is for primes $p$ with $$p\equiv 1\mod 4,\qquad\sigma_p^{v}\in\{0,1\}^9.$$ Since $(\frac{2}{p})=-1$ for $p\equiv 5\mod 8$, all $p$ in the second $\mu_p=2$ row lie in the residue class $1\mod 8$. For the same reason the primes in the binary row also satisfy $p\equiv 1\mod 8$; since there is no prime with $\mu_p\ge 8$, all primes in the binary row are in fact $9\mod 16$ and have $\mu_p=4$. In the last column, we give the total number of primes of each kind.
\begin{center}
\begin{tabular}{|c|l|c|}
\multicolumn{3}{c}{$\mu_p$ for $p$ up to $\sqrt{Q_{11}}$ }\\[5pt]
\hline
$\mu_p$ & \multicolumn{1}{c|}{primes} & total \\ \hline
\raisebox{-7pt}[0pt][0pt]{2} & 18191,\ 31391,\ 35279,\ 38639,\ 63839,\ 95471, & \\
& 104711,\ 147671,\dots,\ 1955593559,\ 1955627519, & 93878 \\
\raisebox{7pt}[0pt]{$p\equiv 3(4)$} & 1955645831,\ 1955687159,\ 1955728199 & \\ \hline
\raisebox{-7pt}[0pt][0pt]{2} & 87481,\ 185641,\ 336361,\ 394969,\ 483289, & \\
& 504001,\ 515761,\dots,\ 1955712529,\ 1955713369, & 91541 \\
\raisebox{7pt}[0pt]{$p\equiv 1(4)$} & 1955740609,\ 1955743729,\ 1955760361 & \\ \hline
& 4775569,\ 5057839,\ 5532619,\ 7340227,\ 7561207 & \\
3 & 8685379,\ 9734161,\dots,\ 1953162751, & 2226 \\
& 1953185551,\ 1954279519,\ 1955425393 & \\ \hline
& 25433521,\ 120543721,\ 129560209,\ 138156769, & \\
4 & 148405321,\ 174865681,\dots,\ 1838581369, & 111 \\
& 1867026001,\ 1892769649,\ 1918361041 & \\ \hline
& 650086271,\ 792798571,\ 858613901, & \\
\raisebox{6.5pt}[0pt]{5} & 1794251801,\ 1820572771,\ 1947963301 & \raisebox{6.5pt}[0pt]{6} \\ \hline
6 & 1785200041 & 1 \\ \hline
7 & 945552637 & 1 \\ \hline
& 120543721,\ 148405321,\ 200893081,\ 224683369, & \\
binary & 421725529,\ 481266361,\dots,\ 1717490329, & 45 \\
& 1810589881,\ 1828463641,\ 1838581369 & \\ \hline
\end{tabular}
\end{center}
\section{$t\ge5$ and $t=4$}
By the above, we only need to consider squarefree integers. We always write $n=p_1\dots p_t$ with $p_1<\dots<p_t$. In this section, we exclude the two cases $t\ge5$ and $t=4$.
\subsection{$t\ge5$}
For each $p$ up to $[\sqrt[5]{Q_{11}}]=5206$, let $S_p$ be the set of all primes $q$ with $\sigma_q^v=\sigma_p^v$, and denote the $k$th element of $S_p$, in ascending order, by $s_{p,k}$. Our algorithm outputs the first $l$ elements of $S_p$ whenever $l>5$ and $$\prod_{i=1}^5s_{p,i} \le Q_{11},\qquad\Bigl(\prod_{i=1}^4s_{p,i}\Bigr)s_{p,l}>Q_{11}.$$ It takes less than 22 seconds and outputs six sequences, listed in the following table.
\begin{center}
\begin{tabular}{|l|c|c|}
\multicolumn{3}{c}{sequence with equal $\sigma_p^v$}\\[5pt] \hline
& $\sigma_p^v$ & No.\\ \hline
167, 3167, 11087, 14423, 21383, 75407 & (0,0,1,0,0,1,1,0,1) & 1 \\ \hline
263, 1583, 8423, 9767, 12503, 18743, 50423, & & \\
54623, 106367, 127247 & \raisebox{7pt}[0pt]{(0,0,1,1,0,0,0,1,0)} & \raisebox{7pt}[0pt]{13} \\ \hline
443, 4547, 5483, 8243, 19163, 26987, 42683 & (1,0,1,1,1,0,0,1,1) & 2 \\ \hline
463, 1087, 13687, 17383, 25447, 37447 & (0,1,1,1,1,1,0,1,1) & 1 \\ \hline
479, 4919, 5519, 6599, 7559, 29399, 51719 & (0,0,0,0,0,1,1,1,0) & 4 \\ \hline
2503, 2767, 5167, 5623, 11887, 31543 & (0,1,1,1,0,1,0,0,0) & 1 \\ \hline
\end{tabular}
\end{center}
At first glance we see that $t>5$ is impossible. We then check whether these six sequences can produce an spsp($v$) with 5 prime divisors; the last column gives, for each sequence, the number of integers with $t=5$ that are less than $Q_{11}$. Our checking algorithm terminates in less than 0.1 seconds and finds no strong pseudoprime.
A few details of our algorithm deserve explanation. Suppose $p_1\equiv3 \mod4$ and we search for $q$ with $\sigma_{p_1}^v=\sigma_q^v$. Since the least binary prime is 120543721, in this range we only need to check $q\equiv 3\mod 4$. By Proposition 2, considering $$(\frac{2}{p_1})=(\frac{2}{q}),\qquad(\frac{3}{p_1})=(\frac{3}{q}),$$ we only need to check $q\equiv p_1\mod 24$. Also, in the case $p_1\equiv3\mod 4$ we compute the Legendre symbols $(\frac{\cdot}{p_1})$ instead of the valuations $Val(Ord_{p_1}(\cdot))$.
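This filtering step may be sketched as follows (reusing \texttt{sigma} and \texttt{is\_prime} from the earlier sketches; the function name is ours, for illustration only):
\begin{verbatim}
def matching_primes_3mod4(p1, limit):
    # For p1 = 3 (mod 4): by Proposition 2 (bases 2 and 3), a prime
    # q = 3 (mod 4) with sigma_q^v = sigma_{p1}^v must satisfy
    # q = p1 (mod 24), so we only walk that residue class.
    sp1 = sigma(p1)
    q, found = p1 + 24, []
    while q <= limit:
        if is_prime(q) and sigma(q) == sp1:
            found.append(q)
        q += 24
    return found
\end{verbatim}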
\subsection{$t=4$}
For $t=4$, we first define $(p_1, p_2, p_3)$ to be a feasible 3-tuple if it satisfies $$p_1<p_2<p_3,\quad \sigma_{p_1}^v=\sigma_{p_2}^v=\sigma_{p_3}^v,\quad p_1p_2p_3^2<Q_{11}.$$ Our algorithm proceeds as follows: for each $p_1$ up to $[\sqrt[4]{Q_{11}}]=44224$, find all feasible 3-tuples $(p_1, p_2, p_3)$. Since $\lambda_{p_i}|n-1$ for $i=1,2,3$, let $\lambda$ be the least common multiple of these three numbers and set $b=p_1p_2p_3$; then $$n=bp_4\equiv 1\mod \lambda.$$ If gcd$(b,\lambda)\ne 1$, no such $n$ exists. If gcd$(b,\lambda)=1$, we need to check all $p_4$ with
$$p_3<p_4\le Q_{11}/b,\qquad p_4\equiv b^{-1}\mod \lambda.$$ Our algorithm takes about 15 minutes, finding 88729 feasible 3-tuples and no spsp($v$) with $t=4$. As in the $t=5$ case, when $p_1\equiv 3\mod 4$ we use Legendre symbols and the restriction $q\equiv p_1\mod 24$ to shorten the running time.
\section{$t=3$}
As above, we define a feasible 2-tuple $(p_1, p_2)$ by $$p_1<p_2, \quad\sigma_{p_1}^v=\sigma_{p_2}^v,\quad p_1p_2^2<Q_{11}.$$
Our algorithm is just as in the $t=4$ case: for each $p_1$ up to $[\sqrt[3]{Q_{11}}]=1563922$, find all feasible 2-tuples $(p_1, p_2)$. Let $b=p_1p_2$ and $\lambda=\text{lcm}(\lambda_{p_1},\lambda_{p_2})$, so that $\lambda |n-1$. If gcd$(b,\lambda)=1$, we check all $p_3$ with
$$p_2<p_3\le Q_{11}/b,\qquad p_3\equiv b^{-1}\mod \lambda.$$ We divide the algorithm into three parts according to whether $p_1\equiv 3\mod 4$, $p_1\equiv5\mod 8$ or $p_1\equiv 1\mod 8$, and again use the Chinese Remainder Theorem to reduce the number of candidates.
\subsection{$p_1\equiv3\mod 4$}
For $p_1\equiv3 \mod 4$, we first assume $p_2\equiv 3\mod 4$; indeed, from \S 3 we know that if $p_2\equiv1 \mod 4$, then $p_2$ must be a binary prime, so $\mu_{p_2}=4$. There are only 111 primes with $\mu_p=4$ up to $\sqrt{Q_{11}}$, and we check these numbers later. By Proposition 2, using the first 5 prime bases and $$(\frac{a}{p_1})=(\frac{a}{p_2}),\qquad a=2,3,5,7,11,$$ we reduce to 30 residue classes modulo $9240=8\cdot3\cdot5\cdot7\cdot11$.
\noindent{\bf{Example 1}}. Take $p_1=31$, the first prime $\equiv 3\pmod 4$ in our search range. A feasible 2-tuple $(31,p_2)$ must satisfy $$p_2<[\sqrt{Q_{11}/31}]=351270645.$$ Without the results of \S 3, we would need to check all odd numbers greater than 31, about $1.7\cdot 10^8$ candidates. Proceeding as for $t=5$ and $t=4$, there are $1.4\cdot10^7$ candidates. With our method, there are only $30\cdot\frac{351270645}{9240}\approx1.1\cdot10^6$ candidates.
There is another trick we use. If $b=p_1p_2$ is less than $2\cdot10^6$, the corresponding $\lambda$ may be too small, so we do not search for $p_3$ as described above. Instead, since $$n=bp_3\equiv b\mod {p_3-1}$$ and therefore $$a^{n-1}\equiv a^{b-1}\equiv 1\mod p_3, \qquad a=2,3,$$ we calculate gcd$(2^{b-1}-1, 3^{b-1}-1)$ and factor it to obtain the prime divisors greater than $p_2$ and less than $Q_{11}/b$. Without this trick, our algorithm ran for more than 24 hours without terminating; with it, the algorithm takes less than 5 hours. It finds 10524046 feasible 2-tuples and the single spsp($v$) $$Q_{11}=3825\ 12305\ 65464\ 13051= 149491\cdot747451\cdot34233211.$$
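The gcd computation itself is cheap even though the intermediate integers have roughly $6\cdot10^5$ digits; a sketch follows (the subsequent factorization of $g$, which we did in Magma, is not shown):
\begin{verbatim}
from math import gcd

def third_factor_candidates_modulus(p1, p2):
    # For b = p1*p2 < 2*10^6: if n = b*p3 is a psp(2) and psp(3), then
    # n - 1 = b - 1 (mod p3 - 1), so p3 divides both 2^(b-1) - 1 and
    # 3^(b-1) - 1.  Every admissible p3 is a prime divisor of g lying
    # in (p2, Q11/b]; we return g and leave its factorization to Magma.
    b = p1 * p2
    return gcd(pow(2, b - 1) - 1, pow(3, b - 1) - 1)
\end{verbatim}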
The table at the end of \S 5.3 lists all 37 feasible 2-tuples with product $b<2\cdot10^6$, which explains why this first case takes so long.
\noindent{\bf{Example 2}}. Notice that for some $b$ the modulus $\lambda$ is small. For $b=43\cdot9283=399169$, we need to check all $p_3$ with
$$9283<p_3<Q_{11}/b\approx9.6\cdot10^{12},\qquad p_3\equiv7771\mod9282,$$
and for $b=571\cdot2851=1627921$, all $p_3$ with
$$p_2<p_3<Q_{11}/b\approx2.3\cdot10^{12},\qquad p_3\equiv2281\mod2850.$$
These are really time-consuming.
\subsection{$p_1\equiv5\mod 8$}
If $p_1\equiv 5\mod 8$, then $(\frac{2}{p_1})=-1$ and $Val(Ord_{p_1}(2))=2$, so every $p_2$ with $\sigma_{p_2}^v=\sigma_{p_1}^v$ must satisfy $p_2\equiv1\mod4$. If $p_2\equiv 5\mod 8$, then by Proposition 2, using the first 5 prime bases and $$(\frac{a}{p_2})=(\frac{a}{p_1}),\qquad a=2,3,5,7,11,$$ there are 30 residue classes modulo 9240.
If $p_2\equiv 1\mod 8$, we distinguish two cases. For $p_2\equiv 1\mod 16$ we must have $\mu_{p_2}=4$, and we check these numbers later. For $p_2\equiv 9\mod 16$,
we must have $$(\frac{a}{p_2})=1,\qquad a=2,3,5,7,11,$$ giving 30 residue classes modulo 18480. The total time for checking all $p_1$ up to 1563922 is about 10 hours; we find 522239 feasible 2-tuples and no spsp($v$).
\subsection{$p_1\equiv1\mod8$}
For $p_1\equiv1\mod8$, write $e=Val(p_1-1)$ and $f=Val(\lambda_{p_1})$; then $f\le e$, and any $p_2$ with $\sigma_{p_2}^v=\sigma_{p_1}^v$ satisfies $$p_1\equiv 1+2^{e}\mod{2^{e+1}},\qquad p_2\equiv 1\mod {2^f}.$$ If $f=e$, we consider two cases. For $p_2\equiv 1+2^{e}\mod{2^{e+1}}$, we have
$$(\frac{a}{p_2})=(\frac{a}{p_1}),\qquad a=2,3,5,7,11,$$
giving 30 residue classes modulo $2^{e+1}\cdot1155$. For $p_2\equiv1+2^{e+1}\mod{2^{e+2}}$, we have
$$(\frac{a}{p_2})=1,\qquad a=2,3,5,7,11,$$
giving 30 residue classes modulo $2^{e+2}\cdot1155$. The case $p_2\equiv1\mod{2^{e+2}}$ is left to the primes with $\mu_{p_2}=4$. If $f<e$, we only check $p_2\equiv p_1\mod 2^f$; in fact, by \S 3 this only happens when $f=e-1$ and $\mu_{p_1}=2$, and there are only 50 such primes up to 1563922. Our algorithm takes less than 100 minutes and finds 30728 feasible 2-tuples and no spsp($v$). The feasible 2-tuples with product $b<2\cdot10^6$ are listed in the following table.
\begin{center}
\begin{tabular}{|l|l|l|l|c|}
\multicolumn{5}{c}{feasible $(p_1,p_2)$ with $b<2\cdot10^6$}\\[5pt]\hline
b & $p_1$ & $p_2$ & $\lambda$ & $\sigma_{p_1}^v$ \\ \hline
685441 & 31 & 22111 & 22110 & ( 0, 1, 0, 0, 1, 1, 1, 0, 1 ) \\ \hline
919801 & 31 & 29671 & 29670 & ( 0, 1, 0, 0, 1, 1, 1, 0, 1 ) \\ \hline
1267249 & 31 & 40879 & 204390 & ( 0, 1, 0, 0, 1, 1, 1, 0, 1 ) \\ \hline
399169 & 43 & 9283 & 9282 & ( 1, 1, 1, 1, 0, 0, 0, 1, 0 ) \\ \hline
703609 & 43 & 16363 & 114534 & ( 1, 1, 1, 1, 0, 0, 0, 1, 0 ) \\ \hline
1379569 & 43 & 32083 & 224574 & ( 1, 1, 1, 1, 0, 0, 0, 1, 0 ) \\ \hline
1487929 & 43 & 34603 & 242214 & ( 1, 1, 1, 1, 0, 0, 0, 1, 0 ) \\ \hline
1772761 & 43 & 41227 & 288582 & ( 1, 1, 1, 1, 0, 0, 0, 1, 0 ) \\ \hline
741049 & 47 & 15767 & 362618 & ( 0, 0, 1, 0, 1, 1, 0, 1, 1 ) \\ \hline
1879201 & 47 & 39983 & 919586 & ( 0, 0, 1, 0, 1, 1, 0, 1, 1 ) \\ \hline
117049 & 67 & 1747 & 19206 & ( 1, 1, 1, 1, 1, 1, 0, 0, 0 ) \\ \hline
1578721 & 67 & 23563 & 23562 & ( 1, 1, 1, 1, 1, 1, 0, 0, 0 ) \\ \hline
1354609 & 71 & 19079 & 667730 & ( 0, 0, 0, 1, 1, 1, 1, 0, 1 ) \\ \hline
722929 & 79 & 9151 & 118950 & ( 0, 1, 0, 1, 0, 0, 1, 0, 0 ) \\ \hline
1272769 & 79 & 16111 & 209430 & ( 0, 1, 0, 1, 0, 0, 1, 0, 0 ) \\ \hline
457081 & 83 & 5507 & 225746 & ( 1, 0, 1, 0, 0, 1, 0, 1, 0 ) \\ \hline
1391329 & 83 & 16763 & 687242 & ( 1, 0, 1, 0, 0, 1, 0, 1, 0 ) \\ \hline
1739929 & 83 & 20963 & 859442 & ( 1, 0, 1, 0, 0, 1, 0, 1, 0 ) \\ \hline
1652401 & 107 & 15443 & 818426 & ( 1, 0, 1, 1, 0, 0, 1, 0, 0 ) \\ \hline
1730689 & 139 & 12451 & 286350 & ( 1, 1, 0, 0, 0, 0, 1, 1, 1 ) \\ \hline
1790881 & 163 & 10987 & 296622 & ( 1, 1, 1, 1, 1, 1, 1, 1, 1 ) \\ \hline
528889 & 167 & 3167 & 262778 & ( 0, 0, 1, 0, 0, 1, 1, 0, 1 ) \\ \hline
1851529 & 167 & 11087 & 920138 & ( 0, 0, 1, 0, 0, 1, 1, 0, 1 ) \\ \hline
1892881 & 211 & 8971 & 62790 & ( 1, 1, 0, 1, 0, 0, 1, 0, 1 ) \\ \hline
1552849 & 229 & 6781 & 128820 & ( 2, 0, 1, 2, 1, 2, 0, 0, 2 ) \\ \hline
416329 & 263 & 1583 & 207242 & ( 0, 0, 1, 1, 0, 0, 0, 1, 0 ) \\ \hline
223609 & 311 & 719 & 111290 & ( 0, 0, 0, 0, 1, 0, 1, 1, 1 ) \\ \hline
1912849 & 331 & 5779 & 317790 & ( 1, 1, 0, 1, 1, 1, 0, 0, 1 ) \\ \hline
825841 & 379 & 2179 & 45738 & ( 1, 1, 0, 1, 1, 1, 1, 0, 0 ) \\ \hline
540409 & 439 & 1231 & 89790 & ( 0, 1, 0, 0, 0, 0, 1, 0, 1 ) \\ \hline
503281 & 463 & 1087 & 83622 & ( 0, 1, 1, 1, 1, 1, 0, 1, 1 ) \\ \hline
929041 & 503 & 1847 & 463346 & ( 0, 0, 1, 0, 0, 0, 1, 1, 0 ) \\ \hline
1627921 & 571 & 2851 & 2850 & ( 1, 1, 0, 1, 0, 0, 1, 1, 0 ) \\ \hline
1280449 & 787 & 1627 & 213006 & ( 1, 1, 1, 0, 0, 1, 1, 0, 0 ) \\ \hline
1616521 & 919 & 1759 & 268974 & ( 0, 1, 0, 1, 0, 0, 0, 1, 0 ) \\ \hline
1538161 & 1063 & 1447 & 255942 & ( 0, 1, 1, 0, 0, 0, 0, 0, 0 ) \\ \hline
1772521 & 1103 & 1607 & 884906 & ( 0, 0, 1, 1, 1, 1, 0, 1, 0 ) \\ \hline
\end{tabular}
\end{center}
\subsection{$\mu_{p_2}=4$}
In the above three cases, we did not consider the case $\mu_{p_2}=4$. Now assume $\mu_{p_2}=4$. Since $$p_1\ge 29, \qquad p_1p_2^2\le Q_{11},$$ we get $p_2<363181490$. According to \S 3, there are only 12 primes with $\mu_p=4$ under this bound. We check all of them and find no feasible 2-tuples. This finishes the $t=3$ case, with only one spsp($v$) found, namely $Q_{11}$. The total time is less than 17 hours.
\section{$t=2$}
For $t=2$, there is no need to define feasible 1-tuples. As $\lambda_{p_1}|n-1$, we have $$p_1<p_2\le Q_{11}/p_1, \qquad p_2\equiv 1\mod\lambda_{p_1}.$$ Since $\lambda_{p_1}$ is close to $p_1-1$, there are about $Q_{11}/p_1^2$ candidates for each $p_1$; when $p_1$ is small, there are too many. According to the size of $p_1$, we divide the work into three parts.
\subsection{Small and large $p_1$}
If $p_1<10^6$, we use the same gcd trick as in the $t=3$ case with $p_1p_2<2\cdot10^6$: since $$a^{p_1-1}\equiv a^{n-1}\equiv1\mod p_2,\qquad a=2,3,$$ we calculate gcd$(2^{p_1-1}-1,3^{p_1-1}-1)$ and factor it to obtain the prime divisors $p_2$ with $$p_1<p_2\le Q_{11}/p_1.$$ This takes about 9 hours and finds no spsp($v$).
For $p_1>10^8$, there are fewer than about 380 candidates for $p_2$ for each $p_1$, so we simply run the algorithm described at the beginning of this section. It takes about 18 hours and finds no spsp($v$).
\subsection{$10^6<p_1<10^8$}
When $p_1$ lies in this interval, we split into three parts according to $p_1\equiv 3\mod4$, $p_1\equiv5\mod 8$ and $p_1\equiv 1\mod 8$. In each case, just as for $t=3$, we use the Chinese Remainder Theorem to reduce the number of candidates; this time we use the first 6 prime bases.
For $p_1\equiv 3\mod 4$, consider $p_2$ with $\sigma_{p_2}^v=\sigma_{p_1}^v$. If $p_2\equiv 3\mod4$, then we have $p_2\equiv1 \mod \lambda_{p_1}$ and $$(\frac{a}{p_1})=(\frac{a}{p_2}),\qquad a=2,3,5,7,11,13.$$ If $p_2\equiv1\mod 4$, then we have $p_2\equiv 1\mod \lambda_{p_1}$ and
$$(\frac{a}{p_2})=1,\qquad a=2,3,5,7,11,13.$$ This part takes about 15 hours and finds no spsp($v$).
For $p_1\equiv 5\mod 8$, we have $p_2\equiv 1\mod 4$. If $p_2\equiv 5\mod 8$, then $p_2\equiv 1\mod \lambda_{p_1}$ and
$$(\frac{a}{p_1})=(\frac{a}{p_2}),\qquad a=2,3,5,7,11,13.$$ If $p_2\equiv 1\mod 8$, then we have $p_2\equiv 1\mod \lambda_{p_1}$ and
$$(\frac{a}{p_2})=1,\qquad a=2,3,5,7,11,13.$$ Our algorithm takes about 15 hours and finds no spsp($v$).
For $p_1\equiv1 \mod 8$, write $e=Val(p_1-1)$ and $f=Val(\lambda_{p_1})$; then $f\le e$. If $f=e$, there are two cases. For $p_2\equiv 1+2^e\mod 2^{e+1}$, we have
$p_2\equiv 1\mod \lambda_{p_1}$ and $$(\frac{a}{p_1})=(\frac{a}{p_2}),\qquad a=2,3,5,7,11,13.$$ For $p_2\equiv1\mod 2^{e+1}$, we have $p_2\equiv 1\mod\lambda_{p_1}$ and
$$(\frac{a}{p_2})=1,\qquad a=2,3,5,7,11,13.$$
If $f<e$, we only use $p_2\equiv 1\mod \lambda_{p_1}$. This part takes about 16 hours and finds no spsp($v$).
We also ran an algorithm for these cases without using the Chinese Remainder Theorem; it ran for more than 10 days and did not halt, so the Chinese Remainder Theorem is really essential here. One needs to be careful when implementing the algorithm, because gcd$(a,\lambda_{p_1})\ne 1$ for some $p_1$ and $a=2,3,5,7,11,13$.
This finishes the $t=2$ case: we find no strong pseudoprime to the first 9 prime bases.
\section{Conclusion}
We have now checked all odd composite numbers up to $Q_{11}$ and found exactly one strong pseudoprime to the first 9 prime bases, namely $Q_{11}$ itself. As it is easy to check that $Q_{11}$ is also a strong pseudoprime to the bases 29 and 31, we obtain the claim of \S 1: $$\psi_{9}=\psi_{10}=\psi_{11}=Q_{11}.$$ So for an integer less than $Q_{11}$, only 9 strong pseudoprime tests are needed to decide whether it is prime or composite. We used the software Magma, and all algorithms were run on a PC (an Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz with 2Gb of RAM). The total time is about 105 hours.
\label{sec:intro}
Coalescing compact-object binary systems (binaries, for short) are
among the most promising sources of gravitational waves (GWs) for
detectors like the U.S. Laser Interferometer Gravitational-Wave
Observatory (LIGO), the British-German GEO, and the French-Italian
Virgo~\cite{Abbott:2007, Grote:2008zz, Acernese:2008}. LIGO and Virgo
are undergoing upgrades to Advanced
configurations~\cite{Shoemaker2009}, which will improve sensitivity by
about a factor of 10. A detailed and accurate understanding of the GWs
radiated as the bodies in a binary spiral towards each other is
crucial not only for the initial detection of such sources, but also
for maximizing the information that can be obtained from the GW
signals once they are observed.
The matched-filtering technique is the primary data-analysis tool used
to extract the GW signals from the detectors' noise. It requires
accurate waveform models of the expected GW signals. Analytical
templates based on the post-Newtonian (PN)
approximation~\cite{Sasaki:2003xr, Blanchet2006, Futamase:2007zz,
Goldberger:2004jt} of the Einstein field equations developed over
the past thirty years accurately describe the inspiraling stage of the
binary evolution. In 1999 a new approach to the two-body dynamics of
compact objects, the so-called effective-one-body (EOB) approach, was
proposed with the goal of extending the analytical templates
throughout the last stages of inspiral, plunge, merger, and
ringdown. The EOB approach uses the results of PN theory, black-hole
perturbation theory, and, more recently, the gravitational self-force
formalism. It does not, however, use the PN results in their original
Taylor-expanded form (i.e., as polynomials in $v/c$), but in a
resummed form.
The EOB formalism was first proposed in Refs.~\cite{Buonanno99,
Buonanno00} and subsequently improved in Refs.~\cite{DJS00,
Damour01c, Buonanno06}. Using physical intuition and results from
black-hole perturbation theory and the close-limit approximation,
Refs.~\cite{Buonanno00, Buonanno06} computed preliminary plunge,
merger, and ringdown signals of nonspinning and spinning black-hole
binaries. After breakthroughs in numerical relativity
(NR)~\cite{Pretorius2005a, Baker2006a, Campanelli2006a}, the EOB
inspiral-merger-ringdown waveforms were improved by calibrating the
model to progressively more accurate NR simulations, spanning larger
regions of the parameter space~\cite{Buonanno-Cook-Pretorius:2007,
Pan2007, Buonanno2007, Damour2007a, DN2007b, Boyle:2008, DN2008,
Buonanno:2009qa, Pan:2009wj, Damour2009a, Pan:2011gk}. More
recently, an EOB model for the dominant $(2,2)$ mode and four
subdominant modes was built for nonspinning binaries of
comparable masses~\cite{Pan:2011gk} and the small-mass-ratio
limit~\cite{Barausse:2011kb}. These results, at the interface between
numerical and analytical relativity, have already had an impact in
LIGO and Virgo searches. The first searches for high-mass and intermediate-mass
black-hole binaries in LIGO/Virgo data~\cite{Abadie:2011kd,Abadie:2012} used the
inspiral-merger-ringdown templates generated by the EOB model
calibrated in Ref.~\cite{Buonanno2007}, as well as the
phenomenological templates proposed in Ref.~\cite{Ajith:2008}.
Stellar-mass black holes are expected to carry spins, which
significantly increases the dimension of the binary parameter
space. The first EOB Hamiltonian with leading-order (1.5PN) spin-orbit
and (2PN) spin-spin couplings was developed in
Ref.~\cite{Damour01c}. Then, Ref.~\cite{Buonanno06} worked out the
radiation-reaction force in the EOB equations of motion in the
presence of spins and computed inspiral-merger-ringdown waveforms for
generic spinning binaries, capturing their main features, including
the so-called ``hang up''. Later, Ref.~\cite{Damour:024009}
incorporated the next-to-leading-order (2.5PN) spin-orbit couplings in
the EOB Hamiltonian. By construction, in the test-particle limit the
Hamiltonian of Ref.~\cite{Damour:024009} does not reduce to the
Hamiltonian of a spinning test particle in the Kerr spacetime.
Moreover, the Hamiltonian of Ref.~\cite{Damour:024009} rewrites the
EOB radial potential using Pad\'e summation, causing spurious poles in
some regions of parameter space. Nevertheless, the Hamiltonian of
Ref.~\cite{Damour:024009} was adopted in Ref.~\cite{Pan:2009wj} to
demonstrate the possibility of calibrating the EOB model for spinning
binaries.
Since then, substantial progress has been made towards improving the
spin EOB Hamiltonian. Ref.~\cite{Barausse:2009aa} worked out the
Hamiltonian for a spinning test-particle in a generic spacetime, which
was used in Ref.~\cite{Barausse:2009xi} to derive a spin EOB
Hamiltonian having the correct test-particle limit. Furthermore,
Ref.~\cite{Barausse:2009xi} rewrote the EOB radial potential in a way
that guarantees the absence of poles without employing the Pad\'e
summation. As a consequence, the EOB Hamiltonian of
Ref.~\cite{Barausse:2009xi} has desirable strong-field circular-orbit
features, such as the existence of an innermost-stable circular orbit
(ISCO), a photon circular orbit (or light-ring), and a maximum in the
orbital frequency during the plunge. Still preserving these
properties, the spin EOB Hamiltonian of Ref.~\cite{Barausse:2009xi}
was recently extended to include the next-to-next-to-leading-order
(3.5PN) spin-orbit couplings in Ref.~\cite{Barausse:2011ys}. The EOB
Hamiltonian of Ref.~\cite{Damour:024009} was also recently extended
through 3.5PN order in the spin-orbit sector in
Ref.~\cite{Nagar:2011fx}.
In the non-conservative sector of the EOB model, the
radiation-reaction force in the EOB equations of motion is built from
the GW energy flux, which, in turn, is computed from a decomposition
of the waveform into spherical harmonic $(\ell, m)$ modes. These
modes, instead of being used in their Taylor-expanded form, are
resummed (or factorized). This factorization was originally proposed
in Refs.~\cite{Damour2007, DIN} for nonspinning black-hole binaries,
and was then extended to include spin effects in Ref.~\cite{Pan2010hz}
and higher-order PN spinless terms in Refs.~\cite{Fujita:2010xj,
Fujita:2011zk}. In the test-particle limit, the factorized waveforms
are known at very high PN order---for example their sum generates the
GW energy flux for nonspinning binaries through
14PN~\cite{Fujita:2011zk} order and to 4PN order in terms involving
the black-hole spins. However, in the comparable-mass case the GW
modes are known only at a much lower PN order. Despite the fact that
the GW energy flux in the comparable-mass case is known through
3.5PN~\cite{Kidder2008, BFIS} and 3PN~\cite{Blanchet:2011zv} order in
the nonspinning and spin-orbit sectors, and 2PN order in the
spin-spin sector, the GW modes have been computed only through 1.5PN
order for spin-orbit couplings and 2PN order for spin-spin
couplings~\cite{Arun:2009, Pan2010hz}. Currently, this lack of
information in the GW modes is the main limitation of our spin EOB
model, and, as we will see, it affects the performance of the model
for prograde orbits and large spin values.
In this paper, we build upon the past success in analytically modeling
inspiral-merger-ringdown waveforms through the EOB formalism, and
develop a prototype EOB model for non-precessing spinning black-hole
binaries that covers a large region of the parameter space and can be
used for detection purposes and future calibrations. More
specifically, we adopt the EOB Hamiltonian derived in
Refs.~\cite{Barausse:2009xi, Barausse:2011ys}, the GW energy flux and
factorized waveforms derived in Refs.~\cite{DIN, Pan2010hz}, and
calibrate the EOB (2,2) dominant mode to seven NR waveforms: five
nonspinning waveforms with mass ratios $1,1/2,1/3,1/4$ and
$1/6$~\cite{Pan:2011gk} and two equal-mass non-precessing spinning
waveforms of spin magnitudes $0.44$~\cite{Chu2009}. We combine the
above results with recent small-mass-ratio results produced by the
Teukolsky equation~\cite{Barausse:2011kb} to build a prototype EOB
model for inspiral-merger-ringdown waveforms for non-precessing
spinning black-hole binaries with any mass ratio and individual
black-hole spins $-1 \leq \chi_i \lesssim 0.7$. For $\chi_i \gtrsim
0.7$, although the EOB dynamics can be evolved until the end of the
plunge, the EOB (2,2) mode peaks too early in the evolution, where the
motion is still quasicircular. As a consequence, we cannot correct the
EOB (2,2) mode to agree with the NR (2,2) mode peak using
non-quasicircular amplitude coefficients. This limitation, which also
affects the small-mass-ratio limit results~\cite{Barausse:2011kb}, is
caused by the poor knowledge of PN spin effects in the GW modes and
makes the prototype EOB waveforms unreliable for $\chi_i \gtrsim
0.7$. Two NR waveforms with nearly extremal spin
magnitudes~\cite{Lovelace2010, Lovelace:2011nu} became available to us
when we were finishing calibration of the spin EOB model. We use them
to examine the limitations of the spin prototype EOB model, and
extract from them useful information for future work.
The paper is organized as follows. In Sec.~\ref{sec:EOB-model}, we
describe the spin EOB model used in this work, its dynamics,
waveforms, and adjustable parameters. Section~\ref{sec:EOB-cal}
discusses the alignment procedure used to compare EOB and NR waveforms
at low frequency, and the statistics used to quantify the differences
between the waveforms. We then calibrate the EOB model to the NR
waveforms in Sec.~\ref{sec:calibration}. In
Sec.~\ref{sec:firstorder-model}, we combine the results of
Sec.~\ref{sec:EOB-cal} with those of Ref.~\cite{Barausse:2011kb} to
build a prototype EOB model that interpolates between the calibrated
EOB waveforms and extends them to a larger region of the parameter
space. We also investigate how this prototype EOB model performs with
respect to two NR waveforms with nearly extremal spin, which were not
used in the calibration. Finally, Sec.~\ref{sec:concl} summarizes our
main conclusions. In Appendix~\ref{sec:AppendixFactModes} we
explicitly write the factorized waveforms used in this work, including
spin effects.
\section{Effective-one-body dynamics and waveforms in the presence of
spin effects}
\label{sec:EOB-model}
In this section, we define the spin EOB model that we will later
calibrate using NR waveforms. Henceforth, we use geometric units
$G=c=1$.
In the spin EOB model~\cite{Damour01c, Damour:024009, Barausse:2009xi,
Nagar:2011fx, Barausse:2011ys} the dynamics of two black holes of
masses $m_1$ and $m_2$ and spins $\mbox{\boldmath${S}$}_1$ and $\mbox{\boldmath${S}$}_2$ is mapped into
the dynamics of an effective particle of mass $\mu =
m_1\,m_2/(m_1+m_2)$ and spin $\mbox{\boldmath${S}$}_*$ moving in a deformed Kerr metric
with mass $M =m_1+m_2$ and spin $\mbox{\boldmath${S}$}_\text{Kerr}$. The position and
momentum vectors of the effective particle are described by $\mbox{\boldmath${R}$}$ and
$\mbox{\boldmath${P}$}$, respectively. Here, for convenience, we use the reduced
variables
\begin{equation}
\mbox{\boldmath${r}$}\equiv\frac{\mbox{\boldmath${R}$}}{M}\,, \qquad\qquad \mbox{\boldmath${p}$}\equiv\frac{\mbox{\boldmath${P}$}}{\mu}\,.
\end{equation}
Since we will restrict the discussion to spins aligned or anti-aligned
with the orbital angular momentum, we define the (dimensionless) spin
variables $\chi_i$ as $\mbox{\boldmath${S}$}_i\equiv\chi_i\,m_i^2\,\mathbf{\hat{L}}$,
where $\mathbf{\hat{L}}$ is the unit vector along the direction of the
orbital angular momentum. We also write $\mbox{\boldmath${S}$}_\text{Kerr}\equiv
\chi_{\text{Kerr}} M^2 \mathbf{\hat{L}}$.
\subsection{The effective-one-body dynamics}
\label{sec:EOB-dyn}
In this paper we adopt the spin EOB Hamiltonian proposed in
Refs.~\cite{Barausse:2009aa, Barausse:2009xi, Barausse:2011ys}. The
real (or EOB) Hamiltonian is related to the effective Hamiltonian
$H_{\text{eff}}$ through the relation
\begin{equation}
\label{Hreal}
H_{\text{real}}\equiv\mu\hat{H}_{\text{real}}=M\sqrt{1+2\nu\left(\frac{H_{\text{eff}}}{\mu}-1\right)}-M\,,
\end{equation}
where $H_{\text{eff}}$ describes the conservative dynamics of an
effective spinning particle of mass $\mu$ and spin $\mbox{\boldmath${S}$}^*$ moving in a
deformed Kerr spacetime of mass $M$ and spin $\mbox{\boldmath${S}$}_{\text{Kerr}}$. The
symmetric mass ratio $\nu=\mu/M$ acts as the deformation parameter.
Through 3.5PN order in the spin-orbit coupling, the mapping between
the effective and real spin variables reads~\cite{Barausse:2009xi,
Barausse:2011ys}
\begin{subequations}
\begin{eqnarray}
\label{spinmapping1}
\mbox{\boldmath${S}$}_{\text{Kerr}} &=& \mbox{\boldmath${S}$}_1+\mbox{\boldmath${S}$}_2 \,, \\
\label{spinmapping2}
\mbox{\boldmath${S}$}^* &=& \frac{m_2}{m_1}\,\mbox{\boldmath${S}$}_1+\frac{m_1}{m_2}\,\mbox{\boldmath${S}$}_2 + \mbox{\boldmath${\Delta}$}_{\sigma^*}^{(1)} + \mbox{\boldmath${\Delta}$}_{\sigma^*}^{(2)}\,,
\end{eqnarray}
\end{subequations}
where $\mbox{\boldmath${\Delta}$}_{\sigma^*}^{(1)}$ and $\mbox{\boldmath${\Delta}$}_{\sigma^*}^{(2)}$ are
the 2.5PN and 3.5PN spin-orbit terms given explicitly in Eqs. (51) and
(52) of Ref.~\cite{Barausse:2011ys}. They depend on the dynamical
variables $\mbox{\boldmath${r}$}$ and $\mbox{\boldmath${p}$}$, the spin variables $\mbox{\boldmath${S}$}_i$, and on several
gauge parameters. These parameters are present because of the large
class of canonical transformations that can map between the real and
effective descriptions. Their physical effects would cancel out if
the PN dynamics were known at arbitrarily high orders; since this is
clearly not the case, the gauge parameters can have a noticeable
effect~\cite{Barausse:2011ys} and may in principle be used as spin EOB
adjustable parameters. In this paper however, we set all gauge
parameters to zero and introduce a spin EOB adjustable parameter at
4.5PN order in the spin-orbit sector by adding the following term to
Eq.~\eqref{spinmapping2}
\begin{equation}
\mbox{\boldmath${\Delta}$}_{\sigma^*}^{(3)}=\frac{d_{\text{SO}}\,\nu}{r^3}\,\left
(\frac{m_2}{m_1}\,\mbox{\boldmath${S}$}_1+\frac{m_1}{m_2}\,\mbox{\boldmath${S}$}_2\right )\,.
\end{equation}
Here $d_{\text{SO}}$ is the spin-orbit EOB adjustable parameter. The
effective Hamiltonian reads~\cite{Barausse:2009xi}
\begin{equation}\label{Heff}
\begin{split}
\frac{H_{\text{eff}}}{\mu} &= \beta^i p_i + \alpha \sqrt{1 +
\gamma^{ij} p_i p_j + \mathcal{Q}_4(\mbox{\boldmath${p}$})} +
\frac{H_{\text{SO}}}{\mu} + \frac{H_{\text{SS}}}{\mu} \\
&\quad -\frac{1}{2Mr^5}(r^2\delta^{ij}-3r^i r^j)S_i^*S_j^* \,,
\end{split}
\end{equation}
where the first two terms are the Hamiltonian of a nonspinning test
particle in the deformed Kerr spacetime, $\alpha$, $\beta^i$ and
$\gamma^{ij}$ are the lapse, shift and 3-dimensional metric of the
effective geometry and $\mathcal{Q}_4(\mbox{\boldmath${p}$})$ is a non-geodesic term
quartic in the linear momentum introduced in
Ref.~\cite{Damour00a}. The quantities $H_{\text{SO}}$ and
$H_{\text{SS}}$ in Eq.~\eqref{Heff} contain respectively spin-orbit
and spin-spin couplings that are \textit{linear} in the effective
particle's spin $\boldsymbol{S}^*$, while the term
$-1/(2Mr^5)(r^2\delta^{ij}-3r^i r^j)S_i^*S_j^*$ is the leading-order
coupling of the particle's spin to itself, with $\delta^{ij}$ being
the Kronecker delta. More explicitly, using
Ref.~\cite{Barausse:2009xi} we can obtain $H_{\text{SO}}$ and
$H_{\text{SS}}$ by inserting Eqs.~(5.31), (5.32),
Eqs.~(5.47a)--(5.47h), and Eqs.~(5.48)--(5.52) into Eqs.~(4.18) and
(4.19); $\alpha$, $\beta^i$ and $\gamma^{ij}$ are given by inserting
Eqs.~(5.36a)--(5.36e), Eqs.~(5.38)--(5.40) and Eqs.~(5.71)--(5.76)
into Eqs.~(5.44)--(5.46). We will elucidate our choice of the quartic
term $\mathcal{Q}_4(\mbox{\boldmath${p}$})$ at the end of this section, when introducing
the tortoise variables.
Following Ref.~\cite{Pan:2009wj}, we introduce another spin EOB
adjustable parameter in the spin-spin sector. Thus, we add to
Eq.~\eqref{Heff} the following 3PN term
\begin{equation}
\frac{d_{\text{SS}}\,\nu}{r^4}\,\left
(\frac{m_2}{m_1}\,\mbox{\boldmath${S}$}_1+\frac{m_1}{m_2}\,\mbox{\boldmath${S}$}_2\right )\cdot
(\mbox{\boldmath${S}$}_1+\mbox{\boldmath${S}$}_2)\,,
\end{equation}
with $d_{\text{SS}}$ the spin-spin EOB adjustable parameter. For what
concerns the nonspinning EOB sector, we adopt the following choice
for the EOB potentials $\Delta_t$ and $\Delta_r$ entering $\alpha$,
$\beta_i$ and $\gamma_{ij}$ (see Eq.~(5.36) in
Ref.~\cite{Barausse:2009xi}). The potential $\Delta_t$ is given
through 3PN order by
\begin{subequations}
\begin{eqnarray}
\label{deltatu}
\Delta_t (u) &=& \frac{1}{u^2}\, \Delta_u(u)\,, \\
\Delta_u(u) &=& A(u) + \chi^2_{\text{Kerr}}\,{u^2}\,,\\
A(u) &=& 1 - 2 u + 2 \nu\, u^3 + \nu\,\left (\frac{94}{3} - \frac{41}{32} \pi^2\right)\, u^4\,,
\label{deltauu}
\end{eqnarray}
\end{subequations}
where $u \equiv 1/r$. Reference~\cite{Barausse:2009xi} suggested
rewriting the quantity $\Delta_u(u) $ as
\begin{eqnarray}
\label{delta_t_1}
\Delta_u(u) &=& \bar{\Delta}_u(u)\, \left [1 + \nu\,\Delta_0 +
\nu \,\log \left (1 + \Delta_1 \,u + \Delta_2\,u^2 \right. \right.
\nonumber \\
&& \left. \left. + \Delta_3\,u^3 + \Delta_4\,u^4\right ) \right ]\,,
\end{eqnarray}
where $\Delta_i$ with $i = 1, 2, 3, 4$ are explicitly given in
Eqs.~(5.77)--(5.81) of Ref.~\cite{Barausse:2009xi}, and
\begin{subequations}
\begin{align}
\bar{\Delta}_u(u)=&\,\chi_{\text{Kerr}}^2\,\left(u -
\frac{1}{r^{\text{EOB}}_{+}}\right)\,
\left(u - \frac{1}{r^{\text{EOB}}_{-}}\right)\,,\\
\label{eq:hor}
r^{\text{EOB}}_{\pm} =&\, \left(1\pm
\sqrt{1-\chi^2_\text{Kerr}}\right)\,(1-K\,\nu)\,.
\end{align}
\end{subequations}
Here, $r^{\text{EOB}}_{\pm}$ are radii reducing to those of
the Kerr event and Cauchy horizons when the EOB adjustable parameter
$K$ goes to zero. The logarithm in Eq.~\eqref{delta_t_1} was
introduced in Ref.~\cite{Barausse:2009xi} to quench the divergence of
the powers of $u$ at small radii. Its presence also allows the
existence of an ISCO, a photon circular orbit (or light-ring), and a
maximum in the orbital frequency during the plunge. The reason for
modeling $\Delta_u(u)$ with Eq.~\eqref{delta_t_1} instead of using the
Pad\'e summation of $\Delta_u(u)$, as proposed in
Ref.~\cite{Damour:024009}, is threefold. First, we did not want to
use the Pad\'e summation of $\Delta_u(u)$ because
Ref.~\cite{Pan:2009wj} found that for certain regions of the parameter
space spurious poles can appear. Second, although we could have
applied the Pad\'e summation only to $A(u)$ and used the Pad\'e
potential $A(u)$ calibrated to nonspinning waveforms in
Ref.~\cite{Pan:2011gk}, we want to take advantage of the good
properties of the potential \eqref{delta_t_1} during the late
inspiral, as found in Ref.~\cite{Barausse:2009xi}. Third, we find it
useful to develop a variant of the EOB potential so that in the future
we can test how two different EOB potentials (both calibrated to NR
waveforms at high frequency) compare at low frequency.
Furthermore, for the potential $\Delta_r$ at 3PN order entering the
EOB metric components (5.36) in Ref.~\cite{Barausse:2009xi}, we choose
\begin{subequations}
\begin{eqnarray}
\label{eq:D}
\Delta_r (u)&=& \Delta_t(u)\,D^{-1}(u)\,,\label{eq:deltaR}\\
D^{-1}(u) &=& 1+\log[1 + 6 \nu\, u^2 + 2 (26 - 3 \nu)\, \nu\, u^3]\,.\nonumber \\
\end{eqnarray}
\end{subequations}
Once expanded in PN orders, the EOB Hamiltonian \eqref{Hreal} with the
effective Hamiltonian defined in Eq.~\eqref{Heff} and the spin mapping
defined in Eqs.~\eqref{spinmapping1} and \eqref{spinmapping2},
reproduces all known PN orders---at 3PN, 3.5PN and 2PN order in the
nonspinning, spin-orbit and spin-spin sectors, respectively---except
for the spin-spin terms at 3PN and 4PN order, which have been recently
computed in Refs.~\cite{Porto:2006bt, Porto:2005ac, PR08a, PR08b,
Porto:2010tr, Porto:2010zg, Levi:2008nh, Levi:2011eq}. Furthermore,
in the test-particle limit the real Hamiltonian contains the correct
spin-orbit couplings linear in the test-particle spin, at \emph{all}
PN orders~\cite{Barausse:2009aa, Barausse:2009xi}.
Let $\hat{t}\equiv t/M$. In terms of the reduced Hamiltonian
$\hat{H}_{\text{real}}$, the EOB Hamilton equations are given in
dimensionless form by~\cite{Pan:2009wj}
\begin{subequations}
\begin{eqnarray}
\frac{d\mbox{\boldmath${r}$}}{d\hat{t}}&=&\{\mbox{\boldmath${r}$},\hat{H}_{\text{real}}\}=\frac{\partial \hat{H}_{\text{real} }}{\partial \mbox{\boldmath${p}$}}\,,\\
\frac{d\mbox{\boldmath${p}$}}{d\hat{t}}&=&\{\mbox{\boldmath${p}$},\hat{H}_{\text{real}}\}+\hat{\bm{\mathcal{F}}}=-\frac{\partial \hat{H}_{\text{real}}}{\partial \mbox{\boldmath${r}$}}
+\hat{\bm{\mathcal{F}}}\,,
\end{eqnarray}
\end{subequations}
where $\hat{\bm{\mathcal{F}}}$ denotes the non-conservative force that
accounts for radiation-reaction effects. Following
Ref.~\cite{Buonanno06}, we use~\footnote{The over-dot stands for
$d/dt$.}
\begin{equation}
\hat{\bm{\mathcal{F}}}=\frac{1}{\nu \hat{\Omega} |\mbox{\boldmath${r}$}\times
\mbox{\boldmath${p}$}|}\frac{dE}{dt}\mbox{\boldmath${p}$}\,,
\end{equation}
where $\hat{\Omega}\equiv M |\mbox{\boldmath${r}$}\times\dot{\mbox{\boldmath${r}$}}|/r^2$ is the
dimensionless orbital frequency and $dE/dt$ is the GW energy flux for
quasicircular orbits obtained by summing over the modes $(\ell,m)$ as
\begin{equation}\label{Edot}
\frac{dE}{dt}=\frac{\hat{\Omega}^2}{8\pi}\sum_{\ell=2}^8\sum_{m=0}^{\ell}m^2\left|\frac{\mathcal{R}}{M}h_{\ell
m}\right|^2\,.
\end{equation}
Here $\mathcal{R}$ is the distance to the source; the factor
$\mathcal{R}/M$ simply removes the dominant $1/\mathcal{R}$ falloff of
$h_{\ell m}$. We sum over positive
$m$ modes only, since $|h_{\ell,m}|=|h_{\ell,-m}|$. Expressions for the
modes $h_{\ell m}$ are given in the next section. In this paper, we
restrict the calibration to non-precessing binaries, and thus we omit
the Hamilton equations of the spin variables.
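Schematically, the mode sum in Eq.~\eqref{Edot} reads as follows (a sketch only, not the implementation used here; \texttt{h\_modes} is assumed to map $(\ell,m)$, with $0\le m\le\ell$, to the distance-rescaled complex mode $(\mathcal{R}/M)\,h_{\ell m}$):
\begin{verbatim}
import math

def gw_energy_flux(omega_hat, h_modes):
    # dE/dt = Omega_hat^2/(8 pi) * sum_{l=2}^{8} sum_{m=0}^{l}
    #         m^2 |(R/M) h_lm|^2 ; the m = 0 terms drop out.
    total = sum(m * m * abs(h) ** 2 for (l, m), h in h_modes.items())
    return omega_hat ** 2 / (8.0 * math.pi) * total
\end{verbatim}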
It was demonstrated in previous work~\cite{Damour:2007cb, Damour2007}
that by replacing the radial component of the linear momentum $p_r
\equiv (\mbox{\boldmath${p}$} \cdot \mbox{\boldmath${r}$})/r$ with $p_{r^*}$, which is the conjugate
momentum of the EOB tortoise radial coordinate $r^*$, one can improve
the numerical stability of the EOB equations of motion. This happens
because $p_r$ diverges when approaching $r_{+}^{\text{EOB}}$ while
$p_{r^*}$ does not. In this paper we follow the definition of the EOB
tortoise radial coordinate in Appendix~A of
Ref.~\cite{Pan:2009wj}.\footnote{Note that all the formulas in
Appendix~A of Ref.~\cite{Pan:2009wj} are written in physical
dynamical variables, namely $\mbox{\boldmath${R}$}$ and $\mbox{\boldmath${P}$}$, while here we use
reduced variables $\mbox{\boldmath${r}$}$ and $\mbox{\boldmath${p}$}$.} However, when applying the
tortoise coordinate transformation to the quartic term in
Eq.~\eqref{Heff}, we get~\cite{Pan:2009wj}
\begin{equation}
\label{Q4div}
\mathcal{Q}_4(\mbox{\boldmath${p}$}) \propto \frac{p_{r^*}^4}{r^2}
\frac{D^2}{\Delta_t^4} (r^2+\chi_{\text{Kerr}}^2)^4\,,
\end{equation}
which clearly diverges at $r=r^{\text{EOB}}_+$. As in the nonspinning
case~\cite{Damour:2007cb, Damour2007, Pan:2011gk}, we neglect
contributions higher than 3PN order and rewrite Eq.~\eqref{Q4div} as
\begin{equation}
\mathcal{Q}_4(\mbox{\boldmath${p}$}) \propto \frac{p_{r^*}^4}{r^2}
(r^2+\chi_{\text{Kerr}}^2)^4\,,
\end{equation}
which is well behaved throughout the EOB orbital evolution.
Lastly, we integrate the EOB Hamilton equations. In order to get rid
of any residual eccentricity when the EOB orbital frequency is close
to the initial frequency of the NR run, we start the EOB evolution at
large separation, say $50M$, and use the quasispherical initial
conditions developed in Ref.~\cite{Buonanno06}. We stop the
integration when the orbital frequency $\Omega$ reaches a maximum.
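A schematic sketch of this integration step follows (not the implementation used in this paper): the reduced Hamiltonian \texttt{H} and radiation-reaction force \texttt{F} are left abstract, the phase-space variables are taken as canonical pairs for simplicity (the actual evolution uses the tortoise momentum $p_{r^*}$, which modifies the equations slightly), and the evolution is truncated at the peak of the orbital frequency.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def eob_rhs(t, y, H, F):
    # Hamilton's equations for y = (r, phi, p_r, p_phi) in reduced
    # variables; H(y) is the reduced EOB Hamiltonian and F(y) =
    # (F_r, F_phi) the radiation-reaction force, which enters only
    # the momentum equations.
    eps = 1.0e-6
    def dH(i):  # central-difference partial derivative of H
        yp, ym = np.array(y, float), np.array(y, float)
        yp[i] += eps; ym[i] -= eps
        return (H(yp) - H(ym)) / (2.0 * eps)
    Fr, Fphi = F(y)
    return [dH(2), dH(3), -dH(0) + Fr, Fphi]  # dH/dphi = 0 here

def evolve(H, F, y0, t_max=2.0e6):
    sol = solve_ivp(eob_rhs, (0.0, t_max), y0, args=(H, F),
                    rtol=1e-10, atol=1e-12)
    # Truncate at the peak of the orbital frequency Omega_hat = dphi/dt.
    omega = np.gradient(sol.y[1], sol.t)
    k = omega.argmax() + 1
    return sol.t[:k], sol.y[:, :k]
\end{verbatim}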
\subsection{The effective-one-body waveforms}
\label{sec:EOB-wave}
Following Refs.~\cite{Damour2007, Damour2009a, Buonanno:2009qa,
Pan:2009wj, Pan:2011gk} we write the inspiral-plunge modes as
\begin{equation}
h_{\ell m}^{\text{insp-plunge}}=h^{\text{F}}_{\ell m}\,N_{\ell m},
\end{equation}
where the $h_{\ell m}^{\text{F}}$ are the factorized modes developed
in Refs.~\cite{Damour2007, DIN, Pan2010hz}, and the $N_{\ell m}$ are
non-quasicircular (NQC) corrections that model deviations from
quasicircular motion, which is assumed when deriving the $h_{\ell
m}^{\text{F}}$. The factorized modes read
\begin{equation}\label{hlm}
h^\mathrm{F}_{\ell m}=h_{\ell m}^{(N,\epsilon)}\,\hat{S}_\text{
eff}^{(\epsilon)}\, T_{\ell m}\, e^{i\delta_{\ell
m}}\left(\rho_{\ell m}\right)^\ell\,,
\end{equation}
where $\epsilon$ is the parity of the waveform. All the factors
entering the $h_{\ell m}^{\text{F}}$ can be explicitly found in
Appendix~\ref{sec:AppendixFactModes}. We emphasize here again that
despite the fact that the GW energy flux in the comparable-mass case
is known through 3PN order in the spin-orbit
sector~\cite{Blanchet:2011zv}, the spin-orbit couplings in the
factorized (or PN-expanded) modes have been computed only through
1.5PN order~\cite{Arun:2009, Pan2010hz}. This limitation will degrade
the performance of our spin EOB model for prograde orbits and large
spin values, as already observed in the test-particle limit in
Refs.~\cite{Pan2010hz, Barausse:2011kb}. To improve the knowledge of
spin effects in the GW modes, Refs.~\cite{Pan:2009wj, Pan2010hz} added
spin couplings in the test-particle limit through 4PN order in the
factorized waveforms. However, since the mapping between the Kerr
spin parameter in the test-particle limit and the black-hole spins in
the comparable-mass case is not yet unambiguously
determined~\cite{Barausse:2009xi, Barausse:2011ys}, and since we do
not have many NR spinning waveforms at our disposal to test the
mapping, we decide not to include here the spinning
test-particle-limit couplings in the factorized waveforms computed in
Ref.~\cite{Pan2010hz}. We have checked before performing any
calibration that EOB models with or without test-particle spin effects
(with Kerr spin parameter $\chi_{\text{Kerr}}$) give similar
performance.
In all the calibrations of the nonspinning EOB model, two EOB
adjustable parameters were needed to calibrate the EOB Hamilton
equations---for example Refs.~\cite{Damour2009a, Pan:2011gk} used the
4PN and 5PN order coefficients in the EOB potential $A(r)$. As
discussed in the previous section, for the EOB model adopted in this
paper, the EOB nonspinning conservative dynamics depend so far only
on the adjustable parameter $K$. We introduce a second EOB adjustable
parameter in the non-conservative non-spinning EOB sector by adding a
4PN order non-spinning term in $\rho_{22}$ and denote the coefficient
of this unknown PN term by $ \rho_{22}^{(4)}$ [see
Eq.~\eqref{rho22}]. This adjustable parameter enters the EOB Hamilton
equations through the energy flux defined in Eq.~\eqref{Edot}.
As shown in Ref.~\cite{Pan:2011gk}, the NQC corrections of modes with
$(\ell,m) \neq (2,2)$ have marginal effects on the dynamics. Also, our
goal in this work is to calibrate only the $(2,2)$ mode, so in the
following we set $N_{\ell m}=1$ for $(\ell,m) \neq (2,2)$. We
have\footnote{Note that in Ref.~\cite{Barausse:2011kb} the $N_{\ell
m}$ were written in terms of physical dynamical variables, rather
than the reduced variables used here.}
\begin{equation}\label{NQC}
\begin{split}
N_{22} &= \Bigg[1 + \left( \frac{p_{r^*}}{r\,\hat{\Omega}}
\right)^{\!2}\! \Bigg(a_1^{h_{22}}\! +\! \frac{a_2^{h_{22}}}{r}\! +\!
\frac{a_3^{h_{22}}}{r^{3/2}}
\!+\!\frac{a_4^{h_{22}}}{r^{2}}\! +\! \frac{a_5^{h_{22}}}{r^{5/2}}
\Bigg) \Bigg]\\
& \times \exp \Bigg[i \frac{p_{r^*}}{r\,\hat{\Omega}}
\Bigg(b_1^{h_{22}}
+ p_{r^*}^2 b_2^{h_{22}} \!+\! \frac{p_{r^*}^2}{r^{1/2}}
b_3^{h_{22}}+ \frac{p_{r^*}^2}{r} b_4^{h_{22}} \Bigg) \Bigg],
\end{split}
\end{equation}
where $a_i^{h_{22}}$ (with $i=1...5$) are the (real) NQC amplitude
coefficients and $b^{h_{22}}_i$ (with $i=1...4$) are the (real) NQC
phase coefficients. We will explain in detail how these coefficients
are determined at the end of this section.
The EOB merger-ringdown waveform is built as a linear superposition of
the quasinormal modes (QNMs) of the final Kerr black
hole~\cite{Buonanno00, Damour06, Buonanno-Cook-Pretorius:2007,
Buonanno2007, DN2007b, DN2008, Buonanno:2009qa}, as
\begin{equation}\label{ringdown}
h_{22}^{\text{merger-RD}}(t)=\sum_{n=0}^{N-1}
A_{22n}\,e^{-i\sigma_{22n}(t-t_{\text{match}}^{22})}\,,
\end{equation}
where $N$ is the number of overtones, $A_{22n}$ is the complex
amplitude of the $n$-th overtone, and
$\sigma_{22n}=\omega_{22n}-i/\tau_{22n}$ is the complex frequency of
this overtone with positive (real) frequency $\omega_{22n}$ and decay
time $\tau_{22n}$. The complex QNM frequencies are known functions of
the mass and spin of the final Kerr black hole. Their numerical values
can be found in Ref.~\cite{Berti2006a}. The mass and spin of the
final black hole, $M_f$ and $a_f$, can be computed through analytical
phenomenological formulas reproducing the NR predictions. Here, we
adopt the formulas given in Eq.~(8) of Ref.~\cite{Tichy2008} and in
Eqs.~(1) and (3) of Ref.~\cite{Barausse2009}. We notice that the
formula for the final mass in Ref.~\cite{Tichy2008} was obtained using
numerical simulations of small-spin black-hole binaries with mildly
unequal masses. As a consequence, the formula is not very accurate for
the large-spin, unequal-mass binaries considered in this
paper. However, other formulas available in the literature are either
very accurate but only valid for equal-mass
binaries~\cite{Reisswig:2009vc}, or have not been yet extensively
tested against NR simulations~\cite{Kesden:2008ga, Lousto:2009mf}.
Thus, for the time being we stick with Eq.~(8) of
Ref.~\cite{Tichy2008}, but we plan to construct a better formula in
the future using all recent data in the literature.
Furthermore, we follow the hybrid matching procedure of
Ref.~\cite{Pan:2011gk} to fix the $N$ complex amplitude coefficients
$A_{22n}$ in Eq.~\eqref{ringdown}. We set up $N$ complex linear
equations by imposing that the inspiral-plunge and merger-ringdown
waveforms $h_{22}^{\text{inspiral-plunge}}$ and
$h_{22}^{\text{merger-RD}}$ coincide on $N-2$ points (evenly sampled
over a range $[t_{\text{match}}^{22}-\Delta
t_{\text{match}}^{22},t_{\text{match}}^{22}]$) and that their time
derivatives $\dot{h}_{22}^{\text{inspiral-plunge}}$ and
$\dot{h}_{22}^{\text{merger-RD}}$ coincide at
$t_{\text{match}}^{22}-\Delta t_{\text{match}}^{22}$ and
$t_{\text{match}}^{22}$. As in previous works, we introduce the EOB
adjustable parameter $\Delta t_{\text{match}}^{22}$ which describes
the size of the comb over which we impose continuous and smooth
matching in order to determine the ringdown waveform.
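In practice this matching amounts to solving an $N\times N$ complex linear system; a sketch follows (assuming callables \texttt{h\_insp} and \texttt{hdot\_insp} that return the inspiral-plunge mode and its time derivative, and an array \texttt{sigmas} of the $N$ complex QNM frequencies; names are ours, for illustration only):
\begin{verbatim}
import numpy as np

def match_ringdown(sigmas, t_match, dt_comb, h_insp, hdot_insp):
    # Determine the N complex amplitudes A_n of
    #   h_RD(t) = sum_n A_n exp(-i sigma_n (t - t_match))
    # from N-2 value conditions on the comb [t_match - dt_comb, t_match]
    # and derivative conditions at the comb's two endpoints.
    N = len(sigmas)
    ts = np.linspace(t_match - dt_comb, t_match, N - 2)
    rows = [np.exp(-1j * sigmas * (t - t_match)) for t in ts]
    rhs = [h_insp(t) for t in ts]
    for t in (t_match - dt_comb, t_match):
        rows.append(-1j * sigmas * np.exp(-1j * sigmas * (t - t_match)))
        rhs.append(hdot_insp(t))
    return np.linalg.solve(np.array(rows), np.array(rhs))
\end{verbatim}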
In Refs.~\cite{Buonanno:2009qa, Pan:2011gk, Barausse:2011kb}, pseudo
QNMs (pQNMs) were proposed and applied to moderate the rise of the EOB
GW frequency during the merger-ringdown transition---for example
Sec.~IIC of Ref.~\cite{Pan:2011gk} discussed in some detail the
advantage of using pQNMs for higher-order GW modes. In this paper, we
find it useful to introduce a pQNM for the $(2,2)$ mode. Therefore, we
choose $N\!=\!8$ in Eq.~\eqref{ringdown} and replace the highest
overtone in the summation with this pQNM.
Finally, we build the full inspiral-plunge-merger-ringdown EOB
waveform by joining the inspiral-plunge waveform
$h_{22}^{\text{inspiral-plunge}}(t)$ and the merger-ringdown waveform
$h_{22}^{\text{merger-RD}}(t)$ at the matching time
$t_{\text{match}}^{22}$ as
\begin{equation}
\begin{split}
h^{\text{EOB}}_{22}(t) &= h_{22}^{\text{inspiral-plunge}}(t)\,
\theta(t_{\text{match}}^{22}-t) \\
&\quad +h_{22}^{\text{merger-RD}}(t)\,
\theta(t-t_{\text{match}}^{22}) \,.
\end{split}
\end{equation}
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=0.45\textwidth]{EOBSpacetime_q1}
\caption{\label{fig:EOBSpacetime} We show in the spacetime diagram $(\hat{t},r^\ast)$
the trajectory of the effective particle in the EOB description
(black solid line in the left part of the diagram)
and the
EOB (2,2) gravitational mode (red solid oscillating line)
for an equal-mass nonspinning black-hole binary.
Although we only need to evolve the EOB trajectory until the orbital frequency reaches its maximum (``light ring''),
the model's dynamics allows the trajectory to continue to negative $r^\ast$ (short-dashed black line
in the left part of the diagram).
The blue dashed lines represent $\hat{t}\pm r^*=\textrm{const.}$ surfaces and ingoing/outgoing null rays. The EOB (2,2) mode is a function of the retarded time $\hat{t}-r^*$, plotted here orthogonal to $\hat{t}- r^*=\textrm{const.}$ surfaces, at a finite $\hat{t}+r^*$ distance. The two outgoing null rays are drawn at
the $\hat{t}-r^*$ retarded times when the EOB particle
crosses the EOB ISCO and light-ring radii, respectively. The
shaded green area is a rough sketch of the potential barrier around the newborn black hole.}
\end{center}
\end{figure}
In Fig.~\ref{fig:EOBSpacetime}, we summarize how the inspiral-plunge--merger-ringdown EOB
waveform is constructed. Beyond the ISCO, the quasi-circular inspiral waveform is followed by a
short plunge waveform~\footnote{The number of gravitational-wave cycles during the
plunge scales roughly as $\nu^{-1/5}$~\cite{Buonanno00}.} where
the radial motion is no longer negligible and NQC corrections quickly become important.
The plunge ends roughly when the effective particle in the EOB description crosses the
light-ring, which, in the nonspinning case, coincides approximately with
the peak of EOB orbital frequency $\hat{\Omega}$ and waveform amplitude $|h_{22}|$.
Until this moment, the GW radiation in the EOB description
is obtained directly from the motion of the effective particle.
After this moment that we identify as the merger, the direct emission of GWs
from the effective particle is strongly attenuated and filtered by the
potential barrier formed around the newborn black hole. Thus, in the
EOB description the merger-ringdown waveform is no longer obtained from the motion
of the effective particle, but it is built through a superposition of QNMs.
This procedure of constructing the full EOB waveform, in particular replacing the direct
emission with a superposition of QNMs beyond the light ring, was first proposed in Refs.~\cite{Buonanno00,
Buonanno06} for nonspinning and spinning comparable-mass black-hole binaries.
It was inspired by the close-limit approximation~\cite{price_pullin94} and by results in Refs.~\cite{1971PhRvL..27.1466D,1972PhRvD...5.2932D}, where it was observed that once the radially infalling particle
is inside the potential barrier which peaks around the light ring, the
direct gravitational radiation from the particle is strongly filtered by the potential barrier.
Part of the energy produced in the strong merger-burst remains stored in the resonant
cavity of the geometry, i.e., inside the potential barrier, and what is released outside is
just the ringdown signal. The non-linear scattering of GW radiation (tails) against the
curvature potential of the newborn black hole also contributes to the merger-ringdown
waveform. Currently, in the EOB description the merger-ringdown waveform is effectively
the tail of a $\delta$-function impulse at merger. When spin effects are present, the overall
picture depicted in Fig.~\ref{fig:EOBSpacetime} survives, but with some differences due to the fact that the
EOB light-ring position, peak of the orbital frequency $\hat{\Omega}$ and waveform amplitude $|h_{22}|$
can be displaced in time~\cite{Barausse:2011kb}. We notice that the physical picture of the merger-ringdown that emerged from the studies in
Refs.~\cite{price_pullin94,1971PhRvL..27.1466D,1972PhRvD...5.2932D} and was incorporated in the EOB description in Refs.~\cite{Buonanno00,Buonanno06}, has also
recently motivated the hybrid approach of Refs.~\cite{Nichols2010,Nichols:2011ih}.
\begin{table*}
\caption{\label{tab:InputValues} Exact NR-input values used in the
right-hand side of Eqs.~\eqref{NQCCond1}--\eqref{NQCCond5} to
calibrate the EOB inspiral-plunge waveforms.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}%
$q$ & 1 & 1/2 & 1/3 & 1/4& 1/6 & 1 & 1 \\ %
$\chi_1=\chi_2$ & 0 & 0& 0& 0& 0 & +0.43655 & -0.43757 \\[2pt] %
\hline \\[-8pt]%
$\; |h^{\text{NR}}_{22,\text{peak}}| \;$ & 0.3940 & 0.3446 & 0.2855
& 0.2403 & 0.1810 & 0.3942 & 0.3935 \\[4pt] %
$\; 10^4 M^2 \partial_t^2|h^{\text{NR}}_{22,\text{peak}}|\;$ & -10.3
& -8.8 & -6.9 & -5.5 & -3.9 & -7.7 & -12.4 \\[4pt] %
$\; M \omega^{\text{NR}}_{22,\text{peak}}\;$ & 0.3593 & 0.3467 &
0.3324 & 0.3218 & 0.3084 & 0.3989 & 0.3342 \\[4pt] %
$\; 10^3 M^2 \dot{\omega}^{\text{NR}}_{22,\text{peak}}\;$ & 11.3 &
10.5 & 9.6 & 8.9 & 8.1 & 11.2 & 10.7 \\ %
\end{tabular}
\end{ruledtabular}
\end{table*}
We now continue our detailed review of how the EOB waveform is built and
discuss how we fix the NQC coefficients in Eq.~\eqref{NQC}.
Since we do not expect spin effects in the NQC correction until 1.5PN
order in either amplitude or phase, the coefficients $a^{h_{22}}_i$
with $i = 1,2$ and $b^{h_{22}}_i$ with $i=1,2$ only depend on $\nu$,
while $a^{h_{22}}_i$ with $i = 4,5$ and $b^{h_{22}}_i$ with $i = 3,4$
are functions of $\nu$ linearly proportional to the spins
$\chi_{1,2}$. The coefficient $a_3^{h_{22}}$ is given by the sum of a
nonspinning term (dependent only on $\nu$) and a spinning term
(proportional to the spins $\chi_{1,2}$). In Sec.~\ref{sec:EOB-ca} we
first calibrate the nonspinning waveforms, and then the spinning
ones. Thus, we determine the ten coefficients in Eq.~\eqref{NQC} in
two steps. First, we set $\chi_1=\chi_2=0$, thus $a^{h_{22}}_i=0$
(with $i = 4,5$) and $b^{h_{22}}_i=0$ (with $i = 3,4$) and calculate
the values of the five NQC coefficients $a^{h_{22}}_i$ (with
$i=1,2,3$) and $b^{h_{22}}_i$ (with $i = 1,2$) by imposing the
following five conditions~\cite{Pan:2011gk, Barausse:2011kb}:
\begin{enumerate}
\item Let $t_{\text{peak}}^{\Omega}$ be the time at which the EOB
orbital frequency reaches its peak. Then, the peak of the EOB
$(2,2)$ mode must happen at the matching time $t_{\text{match}}^{22}
= t_{\text{peak}}^{\Omega}+\Delta t^{22}_{\text{peak}}$, that is
\begin{equation}
\left. \frac{d |h^{\text{EOB}}_{22}|}{dt}\right
|_{t_{\text{peak}}^{\Omega}+\Delta
t^{22}_{\text{peak}}}=0 \label{NQCCond1}\,,
\end{equation}
where $\Delta t^{22}_{\text{peak}}$ is an EOB adjustable parameter,
which will be specified in Sec.~\ref{sec:EOB-ca}. We note that in
Ref.~\cite{Barausse:2011kb} the quantity $\Delta
t^{22}_{\text{peak}}$ was computed by comparing the times at which
the Teukolsky (2,2) mode and the EOB orbital frequency reach their
peaks. This was possible because the EOB trajectory was used in the
Teukolsky equation to evolve the dynamics. However, in the NR
simulation, we do not know what $\Delta t^{22}_{\text{peak}}$ is,
because the EOB dynamics does not determine the NR dynamics.
\item The amplitudes of the NR and EOB $(2,2)$ modes are the same,
\begin{equation}
|h^{\text{EOB}}_{22}(t_{\text{peak}}^{\Omega}+\Delta
t_{\text{peak}}^{22})|=|h^{\text{NR}}_{22}(t_{\text{peak}}^{\text{NR}})|\label{NQCCond2}\,.
\end{equation}
\item The curvatures of the amplitudes of the NR and EOB $(2,2)$
modes are the same,
\begin{equation}
\left. \frac{d^2 |h^{\text{EOB}}_{22}|}{dt^2}\right
|_{t_{\text{peak}}^{\Omega}+\Delta
t^{22}_{\text{peak}}}=\left. \frac{d^2
|h^{\text{NR}}_{22}|}{dt^2}\right
|_{t_{\text{peak}}^{\text{NR}}}\label{SpinCurvature} \,.
\end{equation}
\item The GW frequencies of the NR and EOB $(2,2)$ modes are the
same,
\begin{equation}
\omega_{22}^{\text{EOB}}(t_{\text{peak}}^{\Omega}+\Delta
t_{\text{peak}}^{22})=\omega_{22}^{\text{NR}}(t_{\text{peak}}^{\text{NR}})
\,.
\end{equation}
\item The time derivatives of the GW frequency of the NR and EOB
$(2,2)$ modes are the same,
\begin{equation}
\left. \frac{d \omega_{22}^{\text{EOB}}}{dt}\right
|_{t_{\text{peak}}^{\Omega}+\Delta
t_{\text{peak}}^{22}}=\left. \frac{d
\omega_{22}^{\text{NR}}}{dt}\right
|_{t_{\text{peak}}^{\text{NR}}}\label{NQCCond5} \,.
\end{equation}
\end{enumerate}
We summarize in Table~\ref{tab:InputValues} all the NR-input values
that we use in the right-hand side of
Eqs.~\eqref{NQCCond2}--\eqref{NQCCond5}. After the five nonspinning
NQC coefficients have been computed, we plug them back into the EOB
dynamics through the energy flux, start a new EOB evolution, generate
a new EOB $(2,2)$ mode, and calculate new NQC coefficients. We repeat
this procedure until the values of the NQC coefficients
converge. Then, when calibrating spinning waveforms, we set
$a^{h_{22}}_{i}$ and $b^{h_{22}}_{i}$ (with $i=1,2$), as well as the
nonspinning part of $a_3^{h_{22}}$, to the values just calculated for
$\chi_1=\chi_2=0$, and apply the five conditions above in an iterative
way, obtaining the final coefficients $a^{h_{22}}_{i}$ (with
$i=3,4,5$) and $b^{h_{22}}_{i}$ (with $i=3,4$). Note that in order to
generate GW templates, this procedure can be computationally
expensive, since to generate one EOB $(2,2)$ mode one has to evolve
the dynamics a few times. The current computational cost of generating
an EOB waveform long enough for the LIGO bandwidth varies between a
fraction of a second to a few seconds,\footnote{The time is measured
by running a code that is not optimized in speed on a single CPU.}
depending on the masses. The iterative procedure can increase this
cost by a factor of a few.
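For concreteness, the iteration just described can be organized as a
simple fixed-point loop. The sketch below is schematic rather than the
production code: the two callables are placeholders for the actual EOB
evolution and for the linear solve of the five conditions at the matching
time.
\begin{verbatim}
import numpy as np

def iterate_nqc(evolve_eob_h22, solve_nqc_conditions,
                nr_inputs, a0, tol=1e-8, max_iter=20):
    """Fixed-point iteration for the NQC coefficients.

    evolve_eob_h22(a): evolve the EOB dynamics with the NQC
        coefficients a entering the energy flux and return the
        (2,2) mode (placeholder).
    solve_nqc_conditions(h22, nr_inputs): impose the five
        conditions at the matching time; for a fixed trajectory
        they are linear in the coefficients (placeholder).
    """
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        h22 = evolve_eob_h22(a)
        a_new = solve_nqc_conditions(h22, nr_inputs)
        if np.max(np.abs(a_new - a)) < tol:
            break
        a = a_new
    return a
\end{verbatim}
In practice this loop converges in a handful of iterations, which is the
origin of the factor-of-a-few overhead quoted above.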
In order for the NQC coefficients to be effective in correcting the
EOB mode peak, the latter has to occur in a region where the radial
motion is comparable to, or at least $\roughly 30\%$ of, the tangential
motion. Such a condition is in principle not a necessary requirement
for the EOB model to work. In fact, the radial motion \textit{is}
expected to be strongly suppressed for almost extremal black holes, at
least in the test-particle limit, since the ISCO coincides with the
horizon for $\chi=1$~\cite{1972ApJ...178..347B}. However, if the
factorized (2,2) mode, given by Eq.~\eqref{hlm}, differs substantially
from the NR (2,2) mode because of the lack of high-order spin-orbit
terms, the inability of the NQC coefficients to change the waveform
during the plunge at high spins may prevent the EOB model from working
properly. This is because the NQC coefficients cannot artificially
compensate for the missing higher-order spin-orbit terms in the waveforms,
as they partially do at low spins. In fact, we will see that this
problem arises for $\chi_i \gtrsim 0.7$, making the EOB prototype
waveforms unreliable for large positive spins.
We list in Table~\ref{tab:adjparams} all the EOB adjustable parameters
that we exploit in this work to calibrate the EOB model to NR
simulations.
\section{Effective-one-body calibration}
\label{sec:EOB-ca}
In this section, we calibrate the EOB model using seven NR waveforms,
namely five nonspinning waveforms of mass ratios $q \equiv m_2/m_1 =
1,1/2,1/3,1/4$ and $1/6$ and two equal-mass spinning waveforms with
$\chi_1=\chi_2=+0.43655$ and $\chi_1=\chi_2=-0.43757$. The calibration
is achieved by minimizing the amplitude and phase differences between
the NR and EOB $(2,2)$ modes over the eight EOB adjustable parameters:
$K$, $d_{\text{SO}}$ and $d_{\text{SS}}$ in the EOB conservative
dynamics, and $\rho_{22}^{(4)}$, $\Delta t^{22}_{\text{peak}}$,
$\Delta t_{\text{match}}^{22}$, $\omega_{22}^{\text{pQNM}}$ and
$\tau_{22}^{\text{pQNM}}$ in the EOB waveforms (see
Table~\ref{tab:adjparams}).
\subsection{Alignment of EOB and NR waveforms}
\label{sec:EOB-cal}
When calibrating NR and EOB waveforms, we first align the waveforms at
low frequency following the procedure of Refs.~\cite{Buonanno:2009qa,
Pan:2009wj, Pan:2011gk}. This procedure consists of minimizing the
square of the difference between the NR and EOB $(2,2)$-mode phases
$\phi^{\text{NR}}_{22}$ and $\phi^{\text{EOB}}_{22}$, integrated over
the time window $(t_1,t_2)$,
\begin{equation}\label{alignment}
\int_{t_1}^{t_2} \left[
\phi^{\text{EOB}}_{22}(t+t_0)+\phi_0-\phi^{\text{NR}}_{22}(t)
\right]^2 dt\,,
\end{equation}
with respect to the time shift $t_0$ and phase shift $\phi_0$, where
it is understood that $\phi_{22}^{\text{EOB}}$ is computed for a
chosen set of adjustable parameters. The time window $(t_1,t_2)$
should: (i) begin as early as possible, where the NR and EOB GW-phase
evolutions agree best, (ii) begin late enough to avoid the junk
radiation present in the numerical simulation, (iii) be long enough to
average over numerical noise, and (iv) extend from peak to peak (or
trough to trough) over an integer number of oscillations in the GW
frequency, which are caused by the residual eccentricity in the
numerical initial conditions. In Table~\ref{tab:alignwindow}, we list
our choices of $(t_1,t_2)$ for the seven numerical waveforms at our
disposal. Each time window extends through 10 eccentricity oscillation
cycles in the numerical frequency evolution.
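As an illustration, for waveforms sampled on a common time grid the
minimization of Eq.~\eqref{alignment} reduces to a one-dimensional search,
because for fixed $t_0$ the optimal $\phi_0$ is simply the mean phase
residual over the window. The sketch below makes this explicit; the array
inputs and the search bounds are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def align(t, phi_nr, phi_eob, t1, t2):
    """Minimize Eq. (alignment) over (t0, phi0)."""
    win = (t >= t1) & (t <= t2)

    def residual_sq(t0):
        phi_e = np.interp(t[win] + t0, t, phi_eob)  # shifted EOB phase
        phi0 = np.mean(phi_nr[win] - phi_e)         # optimal phase shift
        return np.sum((phi_e + phi0 - phi_nr[win]) ** 2)

    t0 = minimize_scalar(residual_sq, bounds=(-200.0, 200.0),
                         method="bounded").x
    phi_e = np.interp(t[win] + t0, t, phi_eob)
    return t0, np.mean(phi_nr[win] - phi_e)
\end{verbatim}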
Let $\bar{\phi}_0$ and $\bar{t}_0$ be the alignment parameters. Then,
we define the phase and relative amplitude differences between the EOB
and NR (2,2) modes as follows:
\begin{equation}
\Delta\phi(t)=\phi^{\text{EOB}}_{22}(t+\bar{t}_0)+\bar{\phi}_0-\phi^{\text{NR}}_{22}(t)\,,
\end{equation}
and
\begin{equation}
\left(\frac{\Delta A}{A}\right)(t)=\frac
{|h^{\text{EOB}}_{22}|(t+\bar{t}_0)}{|h^{\text{NR}}_{22}|(t)}-1\,.
\end{equation}
We then define the global phase and relative amplitude differences
over a time window $(t_1,t_3)$ with
\begin{equation}\label{phasediff}
\Delta\phi_{\text{global}} =\max_{t\in (t_1,t_3)}|\Delta\phi(t)|\,,
\end{equation}
and
\begin{equation}\label{ampdiff}
\left(\frac{\Delta A}{A}\right)_{\text{global}}=\max_{t\in
(t_1,t_3)}\left| \left(\frac{\Delta A}{A}\right)(t)\right|\,.
\end{equation}
In the following, when measuring the difference between NR and EOB
inspiral-plunge waveforms we set $t_3=t_{\text{match}}^{22}$, while
when we measure the difference between full inspiral-merger-ringdown
waveforms we use $t_3=t_{\text{end}}$, where $t_{\text{end}}$ is
chosen as late as possible into the ringdown stage, but before
numerical errors due to gauge effects become
noticeable~\cite{Buonanno:2009qa}. We list the values of
$t_{\text{match}}^{22}$ and $t_{\text{end}}$ for the seven NR
waveforms in Table~\ref{tab:alignwindow}.
\begingroup
\begin{table}
\caption{\label{tab:adjparams} Summary of adjustable parameters of
the spin EOB model considered in this paper. The values of the
EOB adjustable parameters used in this paper are given in
Eqs.~\eqref{Delta22}, \eqref{combs}, \eqref{pQNM}, \eqref{A8},
\eqref{K}, and \eqref{SpinParams}. In addition, the NQC
parameters $a_i^{h_{22}}$ and $b_i^{h_{22}}$ are fixed from
NR-input values through
Eqs.~\eqref{NQCCond1}--\eqref{NQCCond5}.}
\begin{ruledtabular}
\begin{tabular}{cc}
EOB dynamics & EOB waveform\\
adjustable parameters & adjustable parameters\\ %
\hline %
$K$ & $\rho_{22}^{(4)}$ \\[3pt] %
$d_{\text{SO}}, d_{\text{SS}}$ & $\Delta t_\text{match}^{22},
\Delta t_{\text{peak}}^{22}$ \\[3pt] %
& $\omega^\text{pQNM}_{22}, \tau^\text{pQNM}_{22}$ \\ %
\end{tabular}
\end{ruledtabular}
\end{table}
\endgroup
\subsection{Procedure to calibrate the EOB adjustable parameters}
\label{sec:calibration}
Recently, Ref.~\cite{Barausse:2011kb} computed the waveforms in the
small-mass-ratio limit by evolving a time-domain Teukolsky equation in
which the source term is evaluated using an EOB trajectory. It was
found that there exists a time difference between the Teukolsky
$(2,2)$-mode amplitude peak and the EOB orbital-frequency peak. This
difference is parametrized by the quantity $\Delta
t_{\text{peak}}^{22}$ introduced in Eq.~\eqref{NQCCond1}. Table III
in Ref.~\cite{Barausse:2011kb} lists this difference as a function of
the Kerr spin parameter: for nonspinning and retrograde cases $-3M
\lesssim \Delta t^{22}_{\text{peak}} \lesssim 1.6 M$, while for
prograde cases $\Delta t^{22}_{\text{peak}}$ decreases quickly as
function of the spin. Let us consider $\chi_{\text{Kerr}}$, which
explicitly reads
\begin{equation}
\chi_{\text{Kerr}} =
(1-2\nu)\,\chi_{\text{S}}+\sqrt{1-4\nu}\,\chi_{\text{A}}\,,
\end{equation}
and also define
\begin{equation}
\chi\equiv \chi_{\text{S}}+
\chi_{\text{A}}\,\frac{\sqrt{1-4\nu}}{1-2\nu}\,, \label{chi}
\end{equation}
where $\chi_{\text{S,A}}\equiv(\chi_1\pm\chi_2)/2$. For an equal-mass,
equal-spin binary $(\nu=1/4, \chi_1=\chi, \chi_2=\chi)$ we have
$\chi_{\text{Kerr}}= \chi/2$, while in the test-particle limit we have
$\chi_{\text{Kerr}}= \chi$ (that is the spin parameter of the
background spacetime). Therefore, inspired by the results in the
test-particle limit, we assume here that for an equal-mass, equal-spin
binary $\Delta t^{22}_{\text{peak}}$ depends on the black-hole spins
through $\chi$. Explicitly we choose
\begin{equation}
\label{Delta22}
\Delta t^{22}_{\text{peak}} =
\begin{cases}
-2.5M & \text{if } \chi \leq 0\,,\\
-2.5M-1.77M \left(\frac{\chi}{0.437}\right)^4 & \text{if } \chi>0\,,\\
\end{cases}
\end{equation}
which models qualitatively Table III in Ref.~\cite{Barausse:2011kb}.
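In this form $\Delta t^{22}_{\text{peak}}$ is trivial to evaluate; a direct
transcription of Eq.~\eqref{Delta22} (with a function name of our choosing)
reads
\begin{verbatim}
def delta_t22_peak(chi, M=1.0):
    """Eq. (Delta22): shift between the (2,2)-mode peak and the
    orbital-frequency peak as a function of the spin variable chi."""
    if chi <= 0.0:
        return -2.5 * M
    return -2.5 * M - 1.77 * M * (chi / 0.437) ** 4
\end{verbatim}
so that, for instance, $\Delta t^{22}_{\text{peak}}=-4.27M$ at
$\chi=0.437$, with a rapid ($\propto\chi^4$) growth of the shift towards
prograde spins.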
Following Refs.~\cite{Buonanno:2009qa, Pan:2009wj, Pan:2011gk}, we
calibrate the EOB adjustable parameters in two steps. These steps are
performed for each of our seven calibration NR waveforms separately,
resulting in seven sets of calibration parameters. First, for each of
the NR waveforms at our disposal, we use $\Delta t_{\text{peak}}^{22}$
in Eq.~\eqref{Delta22}, insert the NR-input values from
Table~\ref{tab:InputValues} into
Eqs.~\eqref{NQCCond1}--\eqref{NQCCond5}, solve them iteratively for
the NQC coefficients, and calibrate $K$, $\rho_{22}^{(4)}$ (or
$d_{\text{SO}}$ and $d_{\text{SS}}$ if spins are present) by
minimizing Eq.~\eqref{phasediff} with
$t_3=t_{\text{match}}^{22}$. This process provides us with the EOB
inspiral-plunge waveform. Second, to obtain the EOB merger-ringdown
waveform, we calibrate the size of the comb $\Delta
t_{\text{match}}^{22}$ and the pQNM (complex) frequency by applying
Eq.~\eqref{phasediff} with $t_3=t_{\text{end}}$. As in
Ref.~\cite{Pan:2011gk}, we find that a constant value for the comb
size, notably
\begin{equation}
\label{combs}
\Delta t^{22}_{\text{match}} = 7.5M\,,
\end{equation}
gives a very good performance for all the different mass ratios and
spins. A detailed study of the pQNM (complex) frequency has revealed
that the best result is obtained when $\omega^{\text{pQNM}}_{22}$ lies
between the GW frequency $\omega^{\text{EOB}}_{22}M/M_f$ at
$t^{22}_{\text{match}}$ and the frequency of the least-damped QNM
$\omega_{220}$, and when $\tau^{\text{pQNM}}_{22}$ is (not much)
shorter than $\tau_{220}$. Specifically, we use the simple choice
\begin{subequations}
\label{pQNM}
\begin{eqnarray}
\omega_{22}^{\text{pQNM}} &=&\frac{1}{2} \left[\omega^{\text{EOB}}_{22}(t_{\text{match}}^{22})\frac{M}{M_f} +\omega_{220}\right]\,,\\
\tau_{22}^{\text{pQNM}} &=& \frac{3}{10} \tau_{220}\,,
\end{eqnarray}
\end{subequations}
for all different mass ratios and spins. Before ending this section,
we discuss in more detail how we carry out the calibration of the
parameters $K$, $\rho_{22}^{(4)}$, for the nonspinning sector, and
the parameters $d_{\text{SO}}$, $d_{\text{SS}}$, for the spinning
sector.
\begin{table}
\caption{\label{tab:alignwindow} We list the parameters $t_1$, $t_2$
entering the alignment procedure defined in Eq.~\eqref{alignment},
and the parameter $t_3$ (both $t_{\text{match}}^{22}$ and
$t_{\text{end}}$) entering the computation of waveforms'
differences in Eqs.~\eqref{phasediff} and \eqref{ampdiff}.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
$q$ & 1 & 1/2 & 1/3 & 1/4& 1/6 & 1 & 1\\
$\chi_1=\chi_2$ & 0 & 0& 0& 0& 0 & +0.43655 & -0.43757\\
\hline
$t_1/M \;$ & 820 & 770 & 570 & 670 & 870 & 800 & 610 \\
$t_2/M \;$ & 2250 & 2255 & 1985 & 1985 & 2310 & 2150 & 1850 \\
$t_{\text{match}}^{22}/M \;$ & 3943 & 3729 & 3515 & 3326 & 4892
& 3367& 2402 \\
$t_{\text{end}}/M \;$ & 3990 & 3770 & 3560 & 3370 & 4940 & 3410
& 2430 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsubsection{Calibrating nonspinning waveforms}
In general, the adjustable parameters $K$ and $\rho_{22}^{(4)}$ depend
on the mass ratio and we assume that they are polynomial functions of
$\nu$. In principle, we should determine $K(\nu)$ and
$\rho_{22}^{(4)}(\nu)$ by a global minimization of
$\Delta\phi_{\text{global}}$ and $(\Delta A/A)_{\text{global}}$ [as
defined in Eqs.~\eqref{phasediff} and \eqref{ampdiff} using $t_3 =
t_{\text{match}}^{22}$] with respect to the unknown coefficients
entering the $K(\nu)$ and $\rho_{22}^{(4)}(\nu)$ polynomials. However,
as in previous studies ~\cite{Damour2009a, Pan:2011gk}, we find a
strong degeneracy among the EOB adjustable parameters, when
calibrating each mass ratio separately. The degeneracy is partially
broken when we combine all the available mass ratios together, but it
is not completely lifted. In particular, different choices of $K(\nu)$
and $\rho_{22}^{(4)}(\nu)$ lead to EOB models that match the NR
waveforms equally well. We are thus relieved from a rigorous yet
expensive global search and follow a simplified procedure to find
satisfactory $K(\nu)$ and $\rho_{22}^{(4)}(\nu)$. First, we locate
two points $(0.8154,-35)$ and $(1.188,-20)$ in the
$K$--$\rho_{22}^{(4)}$ plane where $\Delta\phi_{\text{global}}<0.1$
rad and $(\Delta A/A)_{\text{global}}<0.1$ for $q=1$ and $q=1/6$
($\nu=0.25$ and $\nu=0.1224$), respectively. We then determine a
linear function $\rho_{22}^{(4)}(\nu)$ by imposing that
$\rho_{22}^{(4)}(0.25)=-35$ and $\rho_{22}^{(4)}(0.1224)=-20$, leading
to
\begin{equation}
\rho_{22}^{(4)}(\nu) = -5.6 - 117.6\,\nu\,.\label{A8}
\end{equation}
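Indeed, the straight line through the two calibration points has slope
$(-35+20)/(0.25-0.1224)\simeq-117.6$ and intercept
$-35+117.6\times0.25=-5.6$, which is precisely Eq.~\eqref{A8}.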
At $q=1/2,1/3$ and $1/4$, we choose $\rho_{22}^{(4)}$ according to
Eq.~\eqref{A8} and determine the value of $K$ that minimizes
$\Delta\phi_{\text{global}}$ and a range of $K$ values that satisfy
$\Delta\phi_{\text{global}}<0.1$ rad.
We now have a complete set of calibration parameters for each of our
nonspinning NR waveforms. In order to obtain calibration parameters
that interpolate between the NR waveforms, we build a least-squares
fit quadratic in $\nu$ against these $K$ values. By construction, we
fix two of the three free parameters in the fit by requiring that in
the test-particle limit $K(\nu)$ reproduces the ISCO shift of
Refs.~\cite{BarackSago09, Barausse:2009xi, LeTiec:2011dp} and that the
optimal equal-mass value $K(0.25)$ is recovered exactly. Even with
these two constraints and just one free parameter to fit, the
residuals are within $1\%$ (see Fig.~\ref{fig:KFit}). We find
\begin{equation}
K(\nu) = 1.447 - 1.715\,\nu - 3.246\,\nu^2\,.\label{K}
\end{equation}
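As a consistency check, Eq.~\eqref{K} gives
$K(0.25)=1.447-1.715/4-3.246/16\simeq0.8154$ and $K(0.1224)\simeq1.188$,
so the fit passes through the two points located above in the
$K$--$\rho_{22}^{(4)}$ plane, exactly for the equal-mass value (by
construction) and to within the quoted residuals for $q=1/6$.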
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=0.45\textwidth]{KFit}
\caption{ \label{fig:KFit} We show the quadratic fit in $\nu$ for
the adjustable parameter $K$. This parameter is calibrated using
the five nonspinning NR waveforms, assuming
$\rho_{22}^{(4)}(\nu)$ in Eq.~\eqref{A8}. The error bars are
determined by the intersection of the contours of
$\Delta\phi_{\text{global}} = 0.1$ rads with
$\rho_{22}^{(4)}(\nu)$ for each mass ratio considered.}
\end{center}
\end{figure}
Finally, since the iterative procedure to compute the NQC coefficients
through Eqs.~\eqref{NQCCond1}--\eqref{NQCCond5} can be expensive, we
have parametrized them through quadratic fits, finding rather small
residuals. Explicitly, we obtain
\begin{subequations}
\label{NQCNS}
\begin{eqnarray}
a_1^{h_{22}} &=& -12.68 + 75.42\,\nu - 106.6\,\nu^2,\label{a1NS}\\
a_2^{h_{22}} &=& 101.5 - 757.3\,\nu + 1473\,\nu^2,\\
a_3^{h_{22}} &=& -107.7 + 857.6\,\nu - 1776\,\nu^2,\\
b_1^{h_{22}} &=& -1.464 + 12.82\,\nu - 60.10\,\nu^2,\\
b_2^{h_{22}} &=& 7.477 - 85.26\,\nu + 353.3\,\nu^2. \label{b2NS}
\end{eqnarray}
\end{subequations}
\subsubsection{Calibrating spinning waveforms}
When calibrating the EOB inspiral-plunge waveforms to the two NR
equal-mass, equal-spin waveforms at our disposal
($\chi_1=\chi_2=+0.43655$ and $\chi_1=\chi_2=-0.43757$), we use the
nonspinning EOB adjustable parameters $K$ and $\rho_{22}^{(4)}$ in
Eqs.~\eqref{K} and \eqref{A8}, and calibrate the spinning EOB adjustable
parameters $d_{\text{SO}}$ and $d_{\text{SS}}$. We reach this goal by
building contour plots in the plane $d_{\text{SO}}$--$d_{\text{SS}}$
for $\Delta \phi_{\text{global}}$ in Eq.~\eqref{phasediff} with $t_3 =
t_{\text{match}}^{22}$. We find that the contours of $\Delta
\phi_{\text{global}} = 0.2$ rads associated with the two NR spinning
waveforms intersect each other for the following choice of the
adjustable parameters
\begin{equation}
d_{\text{SO}}=-69.5\,, \quad \quad
d_{\text{SS}}=2.75\,.\label{SpinParams}
\end{equation}
Note that when computing the spinning NQC coefficients, we use the NQC
coefficients parametrized in Eq.~\eqref{NQCNS}, and solve iteratively
the five conditions \eqref{NQCCond1}--\eqref{NQCCond5} for
$a_i^{h_{22}}$ ($i=3,4,5$) and $b_i^{h_{22}}$ ($i=3,4$).\footnote{Note
that the NQC coefficient $a_3^{h_{22}}$ is solved for twice, first
in the nonspinning calibration and then in the spinning one.}
\section{A prototype effective-one-body model for non-precessing
spinning waveforms}
\label{sec:firstorder-model}
We now build on the results of Sec.~\ref{sec:EOB-ca}, and also on
recent outcomes of small-mass-ratio simulations produced by the
Teukolsky equation~\cite{Barausse:2011kb}, to construct a
self-contained set of prescriptions to generate EOB
inspiral-merger-ringdown waveforms in a larger region of the parameter
space $(\nu,\chi_1,\chi_2)$ of the binary.
\begingroup
\setlength{\tabcolsep}{5pt}
\begin{table*}
\begin{minipage}{0.7\linewidth}
\caption{\label{tab:InputValuesFits} Fits of the NR-input values
$f^{\text{NR}}$ that are used to build the global fits in
Eq.~\eqref{f} for the test-particle and equal-mass limits.}
\begin{ruledtabular}
\begin{tabular}{ccc}
$f^{\text{NR}}$ & Curve & Fit \\
\hline \\[-6pt]
\multirow{2}{*}{$|h_{22,\text{peak}}^{\text{NR}}|$} &
$(\nu=0,\chi)$ & 0 \\
& $(\nu=1/4,\chi)$ & 0.3961\\[6pt]
\multirow{2}{*}{$M^2 \partial_t^2|h_{22,\text{peak}}^{\text{NR}}|$}
& $(\nu=0,\chi)$ & 0 \\[2pt]
& $(\nu=1/4,\chi)$ & $10^{-3}\times (-1.007 + 0.5415 \chi)$
\\[6pt]
\multirow{2}{*}{$M \omega_{22,\text{peak}}^{\text{NR}}$} &
$(\nu=0,\chi)$ & $0.2758 - 0.08898\log(1-\chi)$ \\[2pt]
& $(\nu=1/4,\chi)$ & $0.3604 + 0.08242 \chi + 0.02794 \chi^2$
\\[6pt]
\multirow{2}{*}{$M^2 \dot{\omega}_{22,\text{peak}}^{\text{NR}}$} &
$(\nu=0,\chi)$ & $10^{-3}\times [5.953+(0.7199+1.210 \chi)
\log(1 - \chi)]$ \\[2pt]
& $(\nu=1/4,\chi)$ & 0.01113 \\[3pt]
\end{tabular}
\end{ruledtabular}
\end{minipage}
\end{table*}
\endgroup
\subsection{Interpolating the EOB model outside the domain of
calibration}
\label{sec:interpolation}
Since we only have seven NR waveforms at our disposal (and just two of
them with spins), when extending the EOB model to regions of the
parameter space without NR waveforms, we are forced to make
assumptions on the behavior of the adjustable parameter $\Delta
t_{\text{peak}}^{22}$ and the NR-input values in
Table~\ref{tab:InputValues}. In this work we assume that the
three-dimensional space $(\nu,\chi_1,\chi_2)$ can be treated as the
two-dimensional space $(\nu,\chi)$. [Note that $\nu \in [0,1/4]$ and
$\chi \in [-1,1]$.] More specifically, given a binary described by the
parameters $(\nu,\chi_1,\chi_2)$ having in general $\chi_1 \neq
\chi_2$, we consider an auxiliary equal-spin binary with parameters
$(\nu,\chi,\chi)$, where $\chi$ is defined as in Eq.~\eqref{chi}.
With this choice, the auxiliary binary has the same value of
$\chi_{\text{Kerr}}$ as the original binary. We stress that the
auxiliary binary is used only to extend the EOB adjustable parameters
and the NR-input values to regions of the parameter space in which we
do not have NR results. Of course the EOB dynamics and waveforms are
computed for the original binary, not the auxiliary one.
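In fact, the equality of $\chi_{\text{Kerr}}$ is an identity: multiplying
Eq.~\eqref{chi} by $(1-2\nu)$ gives
\begin{equation*}
(1-2\nu)\,\chi=(1-2\nu)\,\chi_{\text{S}}+\sqrt{1-4\nu}\,\chi_{\text{A}}
=\chi_{\text{Kerr}}\,,
\end{equation*}
and the left-hand side is $\chi_{\text{Kerr}}$ of the equal-spin binary
$(\nu,\chi,\chi)$, for which $\chi_{\text{S}}=\chi$ and
$\chi_{\text{A}}=0$. For instance, for
$(\nu,\chi_1,\chi_2)=(2/9,0.5,0)$, i.e. $q=1/2$, one finds
$\chi_{\text{S}}=\chi_{\text{A}}=0.25$, $\chi=0.4$, and
$\chi_{\text{Kerr}}=2/9$ for both the original and the auxiliary binary.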
Thus, in the prototype EOB model, the EOB adjustable parameter $\Delta
t_{\text{peak}}^{22}$ in Eq.~\eqref{Delta22} is evaluated using for
$\chi$ the value from Eq.~\eqref{chi}. To compute the spinning NQC
coefficients in the prototype model, we need to prescribe the input
values in the right-hand side of
Eqs.~\eqref{NQCCond2}--\eqref{NQCCond5} using the parameters of the
auxiliary binary. We proceed as follows. We only have knowledge of the
NR-input values at merger for a few regions of the $(\nu, \chi)$
parameter space. We can obtain the NR-input values along the curve
$(\nu=0,\chi)$ from the Teukolsky waveforms of
Ref.~\cite{Barausse:2011kb}. In particular, both
$|h_{22,\text{peak}}^{\text{NR}}|$ and
$\partial_t^2|h_{22,\text{peak}}^{\text{NR}}|$ are set to 0 (since
they are proportional to $\nu$), while for
$\omega_{22,\text{peak}}^{\text{NR}}$ and
$\dot{\omega}_{22,\text{peak}}^{\text{NR}}$ we use the data in Table V
of Ref.~\cite{Barausse:2011kb}. We can extract the peak information
along the curve $(\nu=1/4,\chi)$ from the three equal-mass waveforms
used in the calibration of this paper, together with the two nearly
extremal spin cases $\chi_1=\chi_2=-0.94905$ and
$\chi_1=\chi_2=+0.9695$ (not used for the calibration of the
adjustable parameters $d_{\text{SO}}$ and $d_{\text{SS}}$), which we
will discuss in Sec.~\ref{sec:perf-otherwaveforms}. Along the curve
$(\nu,\chi=0)$ we can use the NR-input values of the nonspinning
waveforms from Refs.~\cite{Barausse:2011kb, Pan:2011gk}. In
Table~\ref{tab:InputValuesFits} we list the fits for each NR-input
value
$f^{\text{NR}}\in\{|h_{22,\text{peak}}^{\text{NR}}|, \partial_t^2|h_{22,\text{peak}}^{\text{NR}}|,
\omega_{22,\text{peak}}^{\text{NR}},
\dot{\omega}_{22,\text{peak}}^{\text{NR}}\}$ in the test-particle and
equal-mass limits. Along the nonspinning profile, fits quadratic in
$\nu$ give a good description of the exact NR-input values, hence we
assume that the dependence of $f^{\text{NR}}$ on $\nu$ is quadratic as
well and has the simple form
\begin{equation}
f^{\text{NR}}(\nu,\chi)=c_2(\chi)\,\nu^2+c_1(\chi)\,\nu+c_0(\chi)\,.
\end{equation}
We can fix two of the coefficients $c_i$ by imposing that the
test-particle limit and equal-mass cases are exactly recovered when
$\nu=0$ and $\nu=1/4$, respectively. We can fit the third coefficient
to the exact NR-input values along the nonspinning direction. This
means that the fits along the nonspinning profile are not exactly
recovered by the global fits $f^{\text{NR}}(\nu,\chi)$, but we find
that the residuals are negligible. Explicitly, we fit $c_1$ in the
following expression
\begin{eqnarray}
f^{\text{NR}}(\nu,0;c_1)&=&\{16[f^{\text{NR}}(1/4,0)-f^{\text{NR}}(0,0)]-4c_1\}\,\nu^2\nonumber\\
&+&c_1 \nu+f^{\text{NR}}(0,0)\,,
\end{eqnarray}
and denote the fitted value with $\bar{c}_1$. Finally, we extend the
result outside the nonspinning profile assuming that the global fit
reads
\begin{eqnarray}
f^{\text{NR}}(\nu,\chi)&=&\{16[f^{\text{NR}}(1/4,\chi)-f^{\text{NR}}(0,\chi)]-4\bar{c}_1\}\,\nu^2\nonumber\\
&+&\bar{c}_1 \nu+f^{\text{NR}}(0,\chi)\,.\label{f}
\end{eqnarray}
In Table~\ref{tab:c1bar} we list the values of $\bar{c}_1$ for the
four NR-input values that are needed to compute the right-hand sides
in Eqs.~\eqref{NQCCond2}--\eqref{NQCCond5}.
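A compact transcription of this construction, with the test-particle and
equal-mass curves of Table~\ref{tab:InputValuesFits} supplied as functions
of $\chi$, could read as follows (shown for the peak frequency; the other
three input values are handled identically, and the helper name is ours):
\begin{verbatim}
import numpy as np

def global_fit(f_tp, f_eq, c1bar):
    """f^NR(nu, chi) of Eq. (f), built from the boundary fits
    f_tp(chi) = f^NR(0, chi), f_eq(chi) = f^NR(1/4, chi) and the
    coefficient c1bar fitted along the nonspinning profile."""
    def f(nu, chi):
        c2 = 16.0 * (f_eq(chi) - f_tp(chi)) - 4.0 * c1bar
        return c2 * nu ** 2 + c1bar * nu + f_tp(chi)
    return f

# Peak frequency M*omega_22, using the fits and the c1bar value
# quoted in the two tables:
omega_tp = lambda chi: 0.2758 - 0.08898 * np.log(1.0 - chi)
omega_eq = lambda chi: 0.3604 + 0.08242 * chi + 0.02794 * chi ** 2
omega_22_peak = global_fit(omega_tp, omega_eq, c1bar=0.1935)
\end{verbatim}
By construction, $f(0,\chi)$ and $f(1/4,\chi)$ reproduce the boundary
fits exactly for every $\chi$.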
\begin{table}
\caption{\label{tab:c1bar} Fitted values of $\bar{c}_1$ for the four
NR-input values as defined in Eq.~\eqref{f}.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
& $|h_{22,\text{peak}}^{\text{NR}}|$ &
$M^2 \partial_t^2|h_{22,\text{peak}}^{\text{NR}}|$ & $M
\omega_{22,\text{peak}}^{\text{NR}}$ & $M^2
\dot{\omega}_{22,\text{peak}}^{\text{NR}}$ \\[2pt]
\hline
$\; \bar{c}_1 \;$ & 1.355 & $-2.5 \times 10^{-3}$ & 0.1935 &
0.01204 \\
\end{tabular}
\end{ruledtabular}
\end{table}
Having in hand $\Delta t_{\text{peak}}^{22}$ and the NR-input values,
we complete the construction of the prototype EOB model by fixing the
EOB adjustable parameters $K$, $\rho_{22}^{(4)}$, and $d_{\text{SO}}$,
$d_{\text{SS}}$ to the values in Eqs.~\eqref{K}, \eqref{A8} and
\eqref{SpinParams}, respectively, employing the pQNM (complex)
frequency in Eq.~\eqref{pQNM}, the comb size in Eq.~\eqref{combs}, and
the NQC coefficients in Eqs.~\eqref{NQCNS}.
To test the robustness of the construction of the quantity
$f^{\text{NR}}(\nu,\chi)$, we study how the spinning NQC coefficients
change across the plane $(\nu,\chi)$. We focus on binaries with
$\chi_1=\chi_2=\chi$. We compute iteratively the NQC amplitude
coefficients $a_i^{h_{22}}$ (with $i=3,4,5$) for different mass ratios
in the range $1/100 \leq q\leq 1$ and for different spins in the range
$-1 \leq \chi_i \lesssim 0.7$ ($i=1,2$). Typically, we get convergence
of the NQC coefficients within five iterations. Unfortunately, we
cannot span larger, positive values of $\chi_i$ since the NQC
corrections tend to diverge as the spin magnitude grows in the
prograde case. The reason is that they become less effective in
reshaping the EOB (2,2) peak as prescribed by the fits
$f^{\text{NR}}(\nu,\chi)$. This happens because the peak of the EOB
(2,2) mode occurs too early in the evolution when the orbital motion
is still quasicircular. Hence the NQC coefficients must be very large
to compensate for the small values of $p_{r^*}/(r \hat{\Omega})$ and
be able to reshape the EOB (2,2) amplitude around the peak in a
satisfactory way. As discussed earlier, this would not be a problem in
principle if higher-order spin-orbit terms were known in the
factorized waveforms, but, as a result of the lack of knowledge of
those, our EOB prototype waveforms are reliable only up to $\chi_i
\lesssim 0.7$.
\subsection{Performance for nonspinning waveforms}
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=0.45\textwidth]{NS_q_1}
\caption{\label{fig:NS_q_1} Comparison of the NR and EOB (2, 2)
mode for $q\!=\!1$, $\chi_1\!=\!\chi_2\!=\!0$. In the upper
panels we show the comparison between the real part of the two
waveforms, zooming into the merger region in the upper right
plot. In the lower panels we show the dephasing and relative
amplitude difference over the same time ranges as the upper
panels. A vertical dashed line marks the position of the NR
amplitude peak. The dotted curves are the NR errors.}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=0.45\textwidth]{NS_q_6}
\caption{\label{fig:NS_q_6} Same as in Fig.~\ref{fig:NS_q_1} but
for $q=1/6$, $\chi_1=\chi_2=0$.}
\end{center}
\end{figure}
In Figs.~\ref{fig:NS_q_1} and \ref{fig:NS_q_6} we show how the
inspiral-merger-ringdown EOB waveforms computed according to the
prescriptions of Sec.~\ref{sec:interpolation} compare with the NR
waveforms for two representative mass ratios $q=1,1/6$. In general,
for all the nonspinning waveforms we find that the dephasing is
typically within $0.1$ rads up until $t_{\text{match}}^{22}$ (merger
time) and always within $0.2$ rads when including the ringdown stage.
The figures also show in dotted lines the NR phase and amplitude
errors obtained by combining the extrapolation and resolution errors
in quadrature. We notice that the agreement between the EOB and NR amplitudes is
remarkably good up to the merger time, while during the ringdown the
relative amplitude difference may grow up to about $15\%$, approaching
the estimated NR error.
In Ref.~\cite{Pan:2011gk} the authors calibrated a different version
of the nonspinning EOB model to the same set of nonspinning NR
waveforms used in this paper, the main difference between the two EOB
models being the choice of the EOB potential $A(r)$, as we discussed
in Sec.~\ref{sec:EOB-dyn}. We find that the difference between the EOB
inspiral-merger-ringdown waveforms and the NR waveforms in
Ref.~\cite{Pan:2011gk} is comparable to, and for some mass ratios
marginally worse than, what we have achieved in this work using the
prototype EOB model. The only noticeable qualitative difference is
that the phase error of the prototype EOB model accumulates more
slowly during the merger-ringdown transition because of the
introduction of the pQNM in the $(2,2)$ mode. We point out that the
inclusion of the pQNM (complex) frequency in the EOB merger-ringdown
waveform is not strictly needed for the nonspinning case, but we use
it even in this case for uniformity with the spinning sector, where
the pQNM frequency is instead crucial.
We can quantify the differences between NR and EOB waveforms by
computing the mismatch ($\mathcal{M}$), as defined in Eq.~(43) of
Ref.~\cite{Pan:2011gk}, which is one minus the overlap between two
waveforms, weighted by the noise spectral density of the detector and
maximized over the initial time, phase and binary parameters. If we
use an Advanced LIGO noise curve, named \texttt{ZERO\_DET\_HIGH\_P} in
Ref.~\cite{Shoemaker2009}, we find that $\mathcal{M}$, maximized only over
the initial phase and time, is always smaller than 0.001 when the
binary total mass varies between $20 M_\odot$ and $200 M_\odot$. For
these total masses, the NR waveforms start in band. We taper them
using the Planck-taper window function~\cite{McKechan:2010kp} to
reduce numerical artifacts. The width of the window function is set to
the length of the NR waveforms, ranging from $0.35(M/20M_\odot)$ to
$0.65(M/20M_\odot)$ seconds. The window function smoothly rises from 0
to 1 in the first 0.0625 seconds and falls from 1 to 0 in the last
0.0125 seconds. We restrict the $\mathcal{M}$ integration to the frequency band
for which the NR waveform is available.
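Schematically, the maximization over relative time and phase in the
computation of $\mathcal{M}$ can be carried out with a single inverse FFT.
The sketch below is a minimal frequency-domain version, assuming one-sided
waveforms and PSD sampled on a common grid, with the tapering and band
restriction described above already applied; the $\mathcal{M}$s quoted in
this paper follow the conventions of Ref.~\cite{Pan:2011gk}.
\begin{verbatim}
import numpy as np

def mismatch(h1f, h2f, Sn, df):
    """1 - overlap, maximized over a relative time and phase shift."""
    def norm(hf):
        return np.sqrt(4.0 * df * np.sum(np.abs(hf) ** 2 / Sn))
    # The complex overlap as a function of the time shift is the
    # inverse Fourier transform of 4*df*h1*conj(h2)/Sn; taking the
    # modulus maximizes over the relative phase.
    z = 4.0 * df * h1f * np.conj(h2f) / Sn
    overlap_t = np.abs(np.fft.ifft(z)) * z.size
    return 1.0 - overlap_t.max() / (norm(h1f) * norm(h2f))
\end{verbatim}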
\subsection{Performance for spinning waveforms}
In Figs.~\ref{fig:DD} and \ref{fig:UU} we present the results of the
prototype EOB model for the two moderately spinning waveforms at our disposal. We
observe that the choice \eqref{SpinParams} gives a larger dephasing
for $\chi_1=\chi_2=+0.43655$ than for $\chi_1=\chi_2=-0.43757$ or the
nonspinning runs. In fact at the merger time the dephasing for the
$\chi_1=\chi_2=+0.43655$ waveform grows beyond the NR error. For the
amplitude, we instead get a similar performance, on the same level as
the other runs. The worse performance of the $\chi_1=\chi_2=+0.43655$
waveform can be explained by the more relativistic nature of this
run. In fact, in this case the EOB ISCO moves to smaller radial
separations as the spin parameter $\chi$ increases towards positive
values (aligned runs). On the other hand, for negative values of
$\chi$ (anti-aligned runs) the EOB ISCO moves outwards to a less
relativistic regime and one expects a better behavior of the EOB
model. This expectation is confirmed by the calibration of the
$\chi_1=\chi_2=-0.43757$ run, for which we find that very good
performances can be achieved in large regions of the EOB adjustable
parameter space. Fig.~\ref{fig:DD} shows that in this case the
dephasing is well within the NR error at the merger time.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=0.45\textwidth]{DD}
\caption{\label{fig:DD} Same as in Fig.~\ref{fig:NS_q_1} but for
$q=1$, $\chi_1=\chi_2=-0.43757$.}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=0.45\textwidth]{UU}
\caption{\label{fig:UU} Same as in Fig.~\ref{fig:NS_q_1} but for
$q=1$, $\chi_1=\chi_2=+0.43655$.}
\end{center}
\end{figure}
For these spinning waveforms, we find that $\mathcal{M}$, maximized only
over the initial phase and time, is always smaller than 0.003 when the
binary total mass varies between $20 M_\odot$ and $200 M_\odot$.
\subsection{Performance for nearly extremal spin waveforms}
\label{sec:perf-otherwaveforms}
Here we compare the EOB waveforms of the prototype model developed in
Sec.~\ref{sec:interpolation}, against two equal-mass NR waveforms with
nearly extremal spins: $\chi_1=\chi_2=-0.94905$ and
$\chi_1=\chi_2=+0.9695$~\cite{Lovelace2010, Lovelace:2011nu}. We
stress that these NR waveforms were not used when calibrating the spin
EOB adjustable parameters $d_{\text{SO}}$ and $d_{\text{SS}}$ in
Eq.~\eqref{SpinParams}. The only information that we used from these
two nearly extremal spin waveforms was their NR-input values when
building the fits $f^{\text{NR}}(\nu,\chi)$.
As already discussed, when the spins are anti-aligned, the EOB ISCO
moves towards larger radial separations, so that the binary is less
relativistic throughout its orbital evolution as compared to the
aligned configurations. Therefore, we expect that in this case the
EOB model is more effective. The results in Fig.~\ref{fig:ExtDD} for
the case $\chi_1=\chi_2=-0.94905$ confirm this expectation. The
dephasing grows up to about 2 rads during the ringdown, while the
relative amplitude difference grows up to about $40\%$. Despite the
large phase difference at merger, we find that, even without
maximizing over the binary parameters but only over the initial phase and
time, $\mathcal{M}$ is always smaller than $0.005$ for systems with total
mass between 20$M_{\odot}$ and 200$M_{\odot}$.
For the case $\chi_1=\chi_2=+0.9695$, which is outside the domain of
validity of our prototype EOB model, we cannot successfully run the
NQC iterations, since the NQC corrections are so large that they cause
a divergent sequence of NQC coefficients. Nonetheless, we deem it
interesting to generate the EOB inspiral-plunge waveform where only
the nonspinning NQC coefficients $a_i^{h_{22}}$ ($i=1,2,3$) and
$b_i^{h_{22}}$ ($i=1,2$) are used and compare it to the NR
waveform. In Fig.~\ref{fig:ExtUU} we show how our waveform
performs. We notice that the NR waveform is very long, almost 50 GW
cycles. The phase difference between the EOB and NR waveforms is
smaller than $0.04$ rads over the first 20 GW cycles, and then grows
up to $0.18$ rads during the subsequent 10 GW cycles and it becomes
$0.9$ rads when 10 GW cycles are left before merger. The fractional
amplitude difference is only $3\%$ when 10 GW cycles are left before
merger.
It is worth emphasizing that although our prototype model is not yet
able to generate merger-ringdown waveforms for spins larger than
$+0.7$, nevertheless, as the comparison with the nearly extremal case
$\chi_1=\chi_2=+0.9695$ has proven, the Hamiltonian of
Refs.~\cite{Barausse:2009xi, Barausse:2011ys} and the resummed flux of
Refs.~\cite{DIN, Pan2010hz} can evolve the EOB dynamics in this highly
relativistic case beyond the orbital-frequency peak, until $r
\approx 1.9M$, without encountering unphysical features. This suggests
that relevant strong-field effects are well captured by the EOB
dynamics and waveforms~\cite{Barausse:2009xi, Barausse:2011ys, DIN,
Pan2010hz}, at least as far as the NR runs used in this paper are
concerned. Moreover, the large amplitude difference causing the NQC
iteration to break down for large, positive spins was already observed
in Refs.~\cite{Pan2010hz, Barausse:2011kb} where it was pointed out
that it is important to improve the modeling of spin effects in the
EOB waveform amplitude. Finally, as observed above, the breakdown
of the NQC procedure in this highly relativistic case, although not a
problem in principle if higher-order spin-orbit terms were known in
the factorized waveforms, is due to the fact that the peak of the EOB
(2,2) mode occurs too early in the orbital evolution where
non-quasicircular orbit effects are still negligible.
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=0.45\textwidth]{ExtDD}
\caption{ \label{fig:ExtDD} Same as in Fig.~\ref{fig:NS_q_1} but
for $q=1$, $\chi_1=\chi_2=-0.94905$. This NR waveform was
\emph{not} used to calibrate the adjustable parameters
$d_{\text{SO}}$ and $d_{\text{SS}}$. Alignment between the NR
and EOB waveforms was performed using Eq.~\eqref{alignment},
with $t_{1} = 860\,M$ and $t_{2} = 2470\, M$.}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics*[width=0.45\textwidth]{ExtUU_Partial}
\caption{ \label{fig:ExtUU} Same as in Fig.~\ref{fig:NS_q_1} but
for $q=1$, $\chi_1=\chi_2=+0.9695$ and only the inspiral
portion. This NR waveform was \emph{not} used to calibrate the
adjustable parameters $d_{\text{SO}}$ and $d_{\text{SS}}$. Also,
in the aligned case our prototype EOB model only covers
$\chi_{1,2} \lesssim 0.7$. Note that in this plot we do not
include spinning NQC corrections in our EOB waveform. Alignment
between the NR and EOB waveforms was performed using
Eq.~\eqref{alignment}, with $t_{1} = 1170\,M$ and $t_{2} =
2790\, M$.}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:concl}
Using the EOB spin Hamiltonian in Refs.~\cite{Barausse:2009xi,
Barausse:2011ys}, the factorized waveforms in Refs.~\cite{DIN,
Pan2010hz}, and the adjustable parameters in
Table~\ref{tab:adjparams}, we have developed a prototype EOB model for
non-precessing spinning black-hole binaries that can be used for
detection purposes in LIGO and Virgo searches and employed for future
calibrations~\cite{NRARwebsite}. The prototype model is built by first
calibrating the EOB adjustable parameters against five nonspinning
waveforms with mass ratios $q = 1, 1/2, 1/3, 1/4, 1/6$ and two
equal-mass, equal-spin NR waveforms with moderate spins. Then, those
results, at the interface with NR, are combined with recent results at
the interface with black-hole perturbation
theory~\cite{Barausse:2011kb}. The resulting prototype EOB model
interpolates between calibrated points in the binary parameter space,
and generates inspiral-merger-ringdown waveforms with any mass ratio
and individual spin magnitudes $-1\leq \chi_i \lesssim 0.7$. This EOB
model has been implemented in the freely available LIGO Algorithm
Library (LAL)~\cite{LAL} with the model name
``SEOBNRv1''.\footnote{Two nonspinning EOB models are also available
in LAL, ``EOBNRv1'' and ``EOBNRv2'', which were calibrated to NR
waveforms in Refs.~\cite{Buonanno2007, Pan:2011gk}.}
We found that the EOB waveforms generated with the prototype model
agree with the NR waveforms used to calibrate them within $\roughly
0.1$ rads at merger for the nonspinning sector, and within $\roughly
0.15$ rads at merger for the spinning sector. In terms of amplitude
differences at merger, both nonspinning and spinning runs agree to
within $5\%$. The $\mathcal{M}$s for Advanced LIGO computed by maximizing only
with respect to the initial phase and time are always smaller than
0.003 for binaries with total masses between $20 M_\odot$ and $200
M_\odot$.
We also compared the prototype EOB model to two equal-mass, equal-spin
NR waveforms of black holes with nearly extremal spins, notably
$\chi_i = -0.94905, +0.9695$. Those NR waveforms were not part of the
original set of waveforms used to calibrate the EOB model. We found
that for the anti-aligned case the prototype EOB model performs quite
well for detection purposes, with $\mathcal{M}$s smaller than $0.003$ without
maximizing over the binary parameters, but only on initial phase and
time. In the aligned case, which is highly relativistic due to a spin
as large as $+0.9695$ (outside the range of validity of our prototype
model), we compared the inspiral-plunge waveform for 40 GW cycles and
found a dephasing of $\roughly 0.8$ rad. During the last 10 GW cycles
before merger the dephasing grows up to several radians. This
non-satisfactory performance during plunge and merger for large,
positive spins is not surprising. In our prototype spin EOB model the
factorized modes~\cite{Pan2010hz} used in the radiation-reaction force
generate spin couplings in the GW energy flux at a PN order much lower
than what is known today. In fact, the GW energy flux is currently
known through 3PN order in the spin-orbit
sector\footnote{Reference~\cite{Lovelace:2011nu} found that the tail
spin-orbit terms in the energy flux at 3PN order dominate all the
other spin-orbit contributions and improve the agreement with NR
waveforms.}~\cite{Blanchet:2011zv} and 2PN order in the spin-spin
sector. However, the $-2$ spin-weighted spherical harmonics that are
used to build the factorized waveforms employed in this paper are
known only through 1.5PN order in the spin-orbit
sector~\cite{Arun:2009}. Moreover, the performance we found for large
spin values and prograde orbits confirms what was already found in
Ref.~\cite{Barausse:2011kb}, where EOB waveforms in the test-particle
limit could be calibrated to Teukolsky-type waveforms only up to a
Kerr spin value of $\roughly +0.7$. For larger spin values, the
factorized waveforms start deviating from the exact ones even before
reaching the ISCO~\cite{Pan2010hz, Barausse:2011kb}.
The prototype spin EOB model can be improved in the future in
different directions. First, the choice of the spin EOB adjustable
parameters made in Sec.~\ref{sec:EOB-model} was rather arbitrary and
assumed that all gauge parameters that enter the spin EOB conservative
dynamics are zero. Of course, it would have been difficult to carry
out a more sophisticated study in this work considering that we had at
our disposal only two equal-mass, equal-spin NR waveforms. When
several more spinning NR waveforms become available, the spin EOB
parameters (together with the nonspinning ones) should be explored
and calibrated simultaneously against all the available NR
waveforms. Second, it is urgent to compute higher-order PN spin-orbit
terms in the $-2$ spin-weighted spherical harmonics and in the
factorized modes, thus making the EOB spin model reliable also for
large, positive spins, i.e., for $\chi_i > 0.7$. Third, the spin EOB
Hamiltonian at 3.5PN order used in this paper predicts for large,
positive spins that the position of the peak of the EOB
orbital frequency varies non-monotonically as a function of the spin and
lies in a region which is not very relativistic. It would be important
to correct this behavior by calibrating the gauge parameters present in
the spin EOB Hamiltonian. Fourth, recent results in
Refs.~\cite{Damour:2009sm, Barack:2010ny, LeTiec:2011ab,
LeTiec:2011dp} at the interface between PN theory and the self-force
formalism, have allowed Ref.~\cite{Barausse:2011dq} to compute the
nonspinning EOB potentials at all orders in PN theory, but linear in
the symmetric mass ratio $\nu$. These new results will be incorporated
in the future to improve the nonspinning conservative dynamics of the
prototype EOB model, and will be extended to include spin effects.
\begin{acknowledgments}
E.B., A.B., Y.P., and A.T. acknowledge support from NSF Grant
No. PHY-0903631. A.B. also acknowledges support from NASA Grant
NNX09AI81G. T.C., G.L., M.B., and M.S. are supported in part by
grants from the Sherman Fairchild Foundation to Caltech and Cornell,
and from the Brinson Foundation to Caltech; by NSF Grants
No. PHY-0601459 and No. PHY-0652995 at Caltech; by NASA Grant
NNX09AF97G at Caltech; by NSF Grants No. PHY-0652952 and
No. PHY-0652929 at Cornell; and by NASA Grant No. NNX09AF96G at
Cornell. H.P. gratefully acknowledges support from the NSERC of
Canada, from Canada Research Chairs Program, and from the Canadian
Institute for Advanced Research.
\end{acknowledgments}
\section{Statement of the problem}\label{sec:sofprob}
Control problems in Banach or Hilbert spaces arise naturally in processes described by partial differential equations (see for example \cite{Lions, CurZwart, Fur, WWX, AS, ErvZua, BMZ, CMZ, CorXinag} and references therein). Sometimes it is useful to reduce a control problem for partial differential equations to an infinite system of ODEs \cite{Cher90, Cher, AzRuz, AzBakAkh}. Also, it is of independent interest to consider control systems governed by infinite systems as models in Banach spaces. For example, in \cite{SatimovTukht2007, TukhtMamatov2008} control problems for infinite systems are considered.
A considerable amount of work has been devoted to differential game problems for infinite systems in Hilbert spaces (see for example \cite{Idham_IGI_Ask2016, Ibragimov-Al-Kuch2014} and references therein). Optimal strategies for players in suitable classes of strategies have been constructed in \cite{Ibragimov-ScAs2013}.
Often it is useful to study finite-dimensional approximations of the infinite system; such an approach is taken in \cite{AzRuz, AzBakAkh}. The main difficulty is then to prove that the approximate solutions converge to a solution of the initial control problem. In the above works the authors obtain infinite systems of linear ODEs whose right-hand side has a diagonal form. Hence it is not difficult to show that the finite-dimensional approximations converge to the solutions of the original system in a suitable sense.
The proofs suggest that similar results may be proven for linear systems of block-diagonal form under certain mild assumptions.
In fact, as shown in \cite{ZMI}, for certain linear systems with quadratic cost there are approximation schemes that converge, but the approximating controls do not even stabilize the original system, and the costs do not converge either.
In this work we consider a simple infinite linear controllable system in $\ell^2$. The main feature of the system is that it is an infinite Jordan block, with $\lambda\in\RR$ on the main diagonal. Therefore, any finite-dimensional approximation of the system is asymptotically stable whenever $\lambda<0$, but the infinite system is stable if and only if $\lambda\le-1$; when $\lambda>-1$, solutions in certain directions grow exponentially fast. This shows a fine difference between finite-dimensional and infinite systems. Another main feature of these notes is that, using Gramian operators, we give an explicit form of the control functions that stabilize the system.
In the rest of this section we formulate the problem and state the main results. In Section \ref{sec:proof} we prove global asymptotic stability, in Section \ref{sec:control} we show global null-controllability, and in Section \ref{sec:disc} we discuss the results and further generalizations.
Let $\ell^2=\{\y=(y_1, y_2, \dots )\mid y_n\in \RR, \sum_{n\ge1} y_n^2< \infty\}$. We consider $\ell^2$ with its natural norm: $\|\y\|_2^2=\sum_{n\ge1} y_n^2$, which turns it into a Hilbert space.
\medskip
Given an infinite system of ODEs:
\begin{equation}\label{eq:syst1}
\dot{y}_{n}=\lambda y_{n}+y_{n+1}, \quad y_n(0)=y_{n,0}, \quad n\in\NN,
\end{equation}
where $\lambda\in\RR$ is a fixed number and $\y_0=\{y_{n,0}\}_{n\in\NN}\in \ell^2$.
We can rewrite the system in an operator form
\begin{equation}\label{eq:1}
\dot{\y}=A\y, \y(0)=\y_0,
\end{equation}
where $\y_0=\{y_{n,0}\}_{n\in\NN}$ and $A:\ell^2\to \ell^2$ is a linear operator defined by
\[A\y=\{\lambda y_n+y_{n+1}\}_{n\in \NN}.\]
This is an example of an ODE in a Banach space, which is a well-studied topic (see for example \cite{Dei, CurZwart}); here we study the stability and control problems. In particular, we construct control functions explicitly.
Observe that $A$ is a bounded linear operator, in fact we have
\[\begin{aligned}
\|A\y\|_2^2=\sum_{n\ge 1} (\lambda y_n+y_{n+1})^2
\le \big(|\lambda|\,\|\y\|_2+\|\y\|_2\big)^2=(1+|\lambda|)^2\|\y\|_2^2,
\end{aligned}\]
where we used Minkowski's inequality. Hence, $\|A\|=\sup_{{\|\y\|}_{2}=1}\|A\y\|_2\le 1+|\lambda|$.
Now, it is standard to define $e^{tA}$ as
\[
e^{tA}:=\sum_{n\ge 0}\frac{t^nA^n}{n!},
\]
which is bounded on $\ell^2$ for every $t\in \RR$. Further, $e^{tA}$ admits all the properties of the analogous operator for matrices. In particular, $e^{tA}$ defines a group of operators.
The solution of \eqref{eq:1} can be written in the form
\[\y(t)=e^{tA}\y_0.\]
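Although the stability analysis is postponed to
Proposition~\ref{prop:stability} below, the delicate dependence on the
truncation dimension is easy to observe numerically. The sketch below
replaces $A$ by its leading $N\times N$ block $A_N$ (a single Jordan
block, all of whose eigenvalues equal $\lambda$, so every truncation is
asymptotically stable for $\lambda<0$) and evaluates $e^{tA_N}\y_0$ for a
geometric initial vector; for $-1<\lambda<0$ the norm at a fixed time
grows without bound as $N\to\infty$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

lam, t, theta = -0.5, 40.0, 0.75    # theta = -lam + |1+lam|/2

def A_N(lam, N):
    # leading N x N block of A = lam*Id + E (a Jordan block)
    return lam * np.eye(N) + np.diag(np.ones(N - 1), k=1)

for N in (10, 50, 200):
    y0 = theta ** np.arange(N)   # truncation of (1, theta, theta^2, ...)
    yt = expm(t * A_N(lam, N)) @ y0
    print(N, np.linalg.norm(yt))  # grows towards e^{(lam+theta)t}*||y0||
\end{verbatim}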
We also consider the Cauchy problem for the non-homogeneous equation
\begin{equation}\label{eq:controlsyst}
\dot{\y}=A\y+\f, \quad\y(0)=\y_0,
\end{equation}
for $\textbf f:\RR\to \ell^2$, $\f \in L^2([0, T], \ell^2)$, i.e. $\|\f\|_{L^2}^2=\int_{0}^T\|\f(t)\|^2_2 dt<+\infty$\footnote{We note that this norm coincides with $\|\f\|_{L^2}^2=\sum_{n\ge 1}\int_{0}^T |f_n(t)|^2dt$ thanks to Beppo-Levi's theorem.}.
A function $\y:[0, T]\to \ell^2$ defined as
\[
\y(t)=e^{tA}\y_0 + e^{tA}\int_{0}^te^{-sA}\f(s)ds
\]
is called a mild solution of \eqref{eq:controlsyst} if $\y\in C([0, T], \ell^2)$. Here the integration is understood componentwise.
For completeness we start with the following.
\begin{proposition}\label{prop:exst}
For every $\f \in L^2([0, T], \ell^2)$ and $\y_0\in \ell^2$ we have $\y\in C([0, T], \ell^2)$.
\end{proposition}
The next result is about stability. In this simple setting we can characterize the system completely. We have the following.
\begin{proposition}\label{prop:stability}
Let $\y(t)$ be the solution of \eqref{eq:syst1} with an initial condition $\y_0\in\ell^2$.
System \eqref{eq:syst1} is asymptotically stable if and only if $\lambda\le-1$. Moreover, for every $\y_0\in\ell^2$ and every $t\ge0$ the bound $\|e^{tA}\y_0\|_2\le e^{(1+\lambda)t}\|\y_0\|_2$ holds.
\end{proposition}
Let $\rho>0$ be fixed. A control function $\textbf f:\RR\to \ell^2$ is called admissible if
\[\|\f\|_{L^2}^2=\int_{0}^T\|\f(t)\|^2_2 dt\le \rho^2.\]
We say that the system \eqref{eq:controlsyst} is \textit{null-controllable} from $\y_0\in\ell^2$ if there exist an admissible control $\textbf f:\RR\to \ell^2$ and a time $T=T(\f)\in\RR$ such that the solution of \eqref{eq:controlsyst} satisfies $\y(T)=0$.
We say that the system \eqref{eq:controlsyst} is \textit{locally null-controllable} if there exists $\delta=\delta(\rho)>0$ such that \eqref{eq:controlsyst} is null-controllable from any $\y_0\in\ell^2$ with $\|\y_0\|\le \delta$.
We say that the system \eqref{eq:controlsyst} is \textit{globally null-controllable} if it is null-controllable from any $\y_0\in\ell^2$.
The main result of these notes is the following.
\begin{theorem}\label{main}
\begin{itemize}
\item[(i)] The system \eqref{eq:controlsyst} is locally null-controllable for every $\lambda\in \RR$.
\item[(ii)] If $\lambda\le-1$, then the system \eqref{eq:controlsyst} is globally null-controllable.
\item[(iii)] If $\lambda<-1$, the system can be transferred from any initial point $\y_0\in\ell^2$ into the origin in time $\tau\ge {\|\y_0\|^4_2}/{(\kappa\rho^4)}$, where $\kappa$ is a constant independent of $\y_0$.
\end{itemize}
\end{theorem}
Notice that we did not aim to state the results in the most general form. Also, in the proof of Proposition \ref{prop:stability}, for $-1<\lambda<0$ we construct solutions going to $\infty$ as $t\to+\infty$, i.e. $0$ is not even Lyapunov stable. Thus the jump when passing through $\lambda=-1$ is somewhat unusual. But apparently it is due to the structure of $\ell^2$ and the very special structure of $A=\lambda \id+E$: the shift operator $E:\ell^2\to \ell^2$ is weakly contracting in this case ($E^n\y\to 0$ as $n\to \infty$ for all $\y\in\ell^2$). The proofs show that analogous results hold in all $\ell^p$ spaces with $1\le p<+\infty$. However, in $\ell^\infty$ the trivial solution $0$ is Lyapunov stable, but it is not asymptotically stable when $\lambda=-1$; see Subsection \ref{sec:stab}.
\section{Asymptotic stability}\label{sec:proof}
We start this section with the proof of Proposition \ref{prop:exst}.
\begin{proof}[Proof of Proposition \ref{prop:exst}]
For any $T>0$, $t_0,t\in[0, T]$ and $\y_0\in\ell^2$ we have
\begin{equation}\label{eq:et-t0}
\|e^{tA}-e^{t_0A}\|_2\le \|e^{t_0A}\|_2\cdot \|e^{(t-t_0)A}-\id\|_2\le |t-t_0|\cdot\|A\|_2\cdot e^{2T\|A\|_2},
\end{equation}
where in the last inequality we have used the definition of $e^{tA}$ and $\|e^{tA}\|_2\le e^{|t|\cdot \|A\|_2}$. For any $\f \in L^2([0, T], \ell^2)$ and $0\le t_0\le t\le T$ we have
\begin{equation}\label{eq:intt0-t}
\begin{aligned}
\left\|\int_{t_0}^te^{-sA}\f(s)ds\right\|_2 &\le\int_{t_0}^t\|e^{-sA}\|_2\cdot \|\f(s)\|_2ds\\
&\le\left(\int_{t_0}^t e^{2s \cdot \|A\|_2}\right)^{1/2}\left(\int_{t_0}^t\|\f(s)\|_2^2ds\right)^{1/2}\\
&\le |t-t_0|^{1/2}e^{t\cdot \|A\|_2}\|\f\|_{L^2},
\end{aligned}
\end{equation}
where in the second inequality the Cauchy-Schwarz inequality is used.
We have
\begin{equation}\label{difference}
\begin{aligned}
\|\y(t)-\y(t_0)\|_2\le &\|e^{tA}-e^{t_0A}\|_2\cdot\|\y_0\|_2 \\ &+\|e^{tA}\int_{0}^te^{-sA}\f(s)ds-e^{t_0A}\int_{0}^{t_0}e^{-sA}\f(s)ds\|_2.
\end{aligned}
\end{equation}
The first term on the right-hand side of the inequality tends to $0$ when $t\to t_0$ by \eqref{eq:et-t0}. We will show that the second term also tends to $0$ as $t$ approaches $t_0$. Without loss of generality, assume $t>t_0$; then the second term of \eqref{difference} is bounded by
\[
\|e^{tA}\|_2\cdot \|\int_{t_0}^te^{-sA}\f(s)ds\|_2+ \|e^{tA}-e^{t_0A}\|_2\cdot \|\int_{0}^{t_0}e^{-sA}\f(s)ds\|_2,
\]
which tends to $0$ by \eqref{eq:et-t0} and \eqref{eq:intt0-t}. The proof is finished.
\end{proof}
\subsection{Stability of the system} \label{sec:stab}
Here we give a necessary and sufficient condition for the asymptotic stability of the system in $\ell^2$. Recall that the system \eqref{eq:syst1} is called globally asymptotically stable if $\lim_{t\to +\infty} \y(t)=0$ for the solution $\y(t)$ of \eqref{eq:syst1} with any initial condition $\y_0\in\ell^2$.
\begin{proof}[Proof of Proposition \ref{prop:stability}] We write $A=\lambda \id+ E$, where
$\id$ is the identity map, and $E:\ell^2\to \ell^2$ is the shift map, i.e. $[Ef]_i=f_{i+1}$. Then we have $e^{tA}=e^{\lambda t}e^{tE}$ and, since $\|E\|=1$, we obtain the claimed bound directly:
\[
\|\y(t)\|_2\le \|e^{tA}\|\cdot \|\y_0\|_2=e^{\lambda t}\|e^{tE}\|\cdot \|\y_0\|_2\le e^{(\lambda+1) t}\cdot \|\y_0\|_2.
\]
If $\lambda<-1$ the latter inequality implies $\lim_{t\to +\infty}\y(t)=0$.
If $\lambda=-1$ then the above argument does not imply the desired conclusion. Thus we proceed as follows. Observe that
\begin{equation}\label{eq:etE}
e^{tE}= \begin{pmatrix}
1 & t & \frac{t^2}{2!} &\dots &\frac{t^k}{k!} &\dots \\
0 & 1 & t &\dots &\frac{t^{k-1}}{(k-1)!} &\dots \\
0 & 0 & 1 &\dots & \frac{t^{k-2}}{(k-2)!} &\dots \\
\vdots& \vdots& \vdots &\ddots& \vdots &\ddots
\end{pmatrix}.
\end{equation}
Then for any $\z\in\ell^2$ with $\|\z\|_2=1$ and for the solution $\y(\cdot)$ started from $\y_0=(y_{10}, y_{20}, \dots)\in\ell^2$ we obtain
\begin{equation}\label{eq:ytz}
\langle\y(t), \z\rangle = e^{-t} \langle\sum_{j\ge 0}\frac{t^j}{j!}E^j\y_0, \z\rangle= e^{-t}\sum_{j\ge 0} \frac{t^j}{j!} \langle E^j\y_0, \z\rangle.
\end{equation}
From the definition of $E$ we have $\|E^j\y_0\|_2\le \|\y_0\|_2$ for all $j\in\NN_0$ and $\|E^j\y_0\|_2\to 0$ as $j\to \infty$. Thus for any $\eps>0$ there exists $N=N(\y_0)\in\NN_0$ such that $\|E^j\y_0\|_2\le \eps/2$ for all $j\ge N$. Fixing such an $N$ and using $|\langle E^j\y_0, \z\rangle| \le \|E^j\y_0\|_2\le \|\y_0\|_2$ for $j\in \NN$ from \eqref{eq:ytz} we obtain
\begin{equation}\label{eq:|ytz|}
\begin{aligned}
|\langle\y(t), \z\rangle|&\le e^{-t}\sum_{j=0}^N \frac{t^j}{j!}\|E^j\y_0\|_2 + e^{-t}\sum_{j=N+1}^\infty \frac{t^j}{j!} \|E^j\y_0\|_2\\
&\le \|\y_0\|_2e^{-t} \sum_{j=0}^N \frac{t^j}{j!}+ \frac{\eps}2e^{-t}\sum_{j=0}^\infty \frac{t^j}{j!}\le \|\y_0\|_2C_Ne^{-t/2}+\frac{\eps}2.
\end{aligned}
\end{equation}
Notice that the choice of $N$, and hence of $C_N$, is independent of $t$. Therefore, there exists $t(\eps)>0$ such that $\|\y_0\|_2C_Ne^{-t/2}\le \eps/2$ for all $t\ge t(\eps)$. Finally, taking $\z=\y(t)/\|\y(t)\|_2$ in \eqref{eq:|ytz|} (when $\y(t)\neq0$) yields
\[
\|\y(t)\|_2\le \eps \text{ for all } t\ge t(\eps).
\]
This proves asymptotic stability for $\lambda=-1$ and finishes the sufficiency part of the proof.
Now we show necessity. Suppose that $\lambda>-1$.
Since for $\lambda>0$ the system \eqref{eq:syst1} is not stable, it suffices to consider the case $-1<\lambda\le 0$.
Let $\theta\in(0, 1)$ and $\Theta=(1, \theta, \theta^2, \theta^3, \dots )$. Obviously, $\Theta\in\ell^2$ and, since $E\Theta=\theta\Theta$, we have $e^{tE}\Theta=e^{t\theta}\Theta$. Since
$-1<\lambda\le0$, if we let $\theta= -\lambda+\frac{|1+\lambda|}{2}\in (0, 1)$, then as $t\to +\infty$ one gets
\begin{equation}\label{eq:etatheta}
\|e^{tA}\Theta\|_2=e^{t\lambda}\|e^{tE}\Theta\|_2=e^{t|1+\lambda|/2}\|\Theta\|_2\to +\infty.
\end{equation}
This implies that if $\lambda>-1$ then \eqref{eq:syst1} is not stable.
This completes the proof.
\end{proof}
\begin{remark}
Consider the system $\dot\y=A\y$, $\y(0)=\y_0\in\ell^\infty$, where
\[
\ell^\infty=\{\y=(y_1, y_2, \dots)\mid \sup_{n\in\NN}|y_n|<\infty\}.
\]
Then $\textbf{e}=(1,1,1,\dots)\in\ell^\infty$ is an eigenvector of $e^{tE}$ corresponding to the eigenvalue $e^{t}$, so that for $\lambda=-1$ one has $e^{tA}\textbf{e}=\textbf{e}$ for all $t$. Since also $\|e^{tA}\|\le e^{(\lambda+1)t}=1$ for $t\ge0$, the trivial solution $0$ is Lyapunov stable but it is not asymptotically stable.
\end{remark}
\section{Null-controllability}\label{sec:control}
Here we show that the system \eqref{eq:controlsyst} is null-controllable.
We start with a standard lemma from operator theory, which will be useful below.
\begin{lemma}\label{lem:inv}
Let $L:\mathcal H\to \mathcal H$ be a self-adjoint operator defined on a Hilbert space $(\mathcal H, \|\cdot\|)$. Assume that
there exists $\kappa>0$ such that $\|Lx\|\ge \kappa\|x\|$ for all $x\in \mathcal H$. Then $L$ is invertible and $\|L^{-1}\|\le \kappa^{-1}$.
\end{lemma}
To prove controllability we use Gramian operators and prove an observability inequality. For $\tau\in \RR$ define
\[
W(\tau)=\int_0^\tau e^{-sA}\cdot e^{-sA^\ast}ds,
\]
where $A^\ast$ is the adjoint of $A$ in $\ell^2$. The following lemma is the main technical tool.
\begin{lemma}
For every $\tau>0$ the operator $W(\tau)$ is bounded, self-adjoint, positive definite and invertible. Moreover, there exists $\kappa >0$, independent of $\tau$, such that $\|W(\tau)\y\|_2\ge \kappa\|\y\|_2$ for any $\y\in\ell^2$.
\end{lemma}
\begin{proof}
One can easily verify that $E^\ast\f=(0, \f)=(0, f_1, f_2, \dots)$. Then $e^{tA}e^{tA^\ast}=e^{2t\lambda}e^{tE}\cdot e^{tE^\ast}$. Further,
\begin{equation}\label{eq:|Wxy|}
\begin{aligned}
& |\langle W(\tau)\y, \z\rangle |\le\| \int_0^\tau e^{-2t\lambda}e^{-tE}\cdot e^{-tE^\ast}\y dt\|_2\cdot \|\z\|_2 \le\\
& \le \int_0^\tau e^{2t(1-\lambda)} dt \cdot\|\y\|_2\cdot \| \z\|_2\le M(\tau)\cdot \|\y\|_2\cdot \| \z\|_2,
\end{aligned}
\end{equation}
where the constant $M(\tau)$ depends only on $\tau$.
Let $e_{ij}(t)$, $i,j\in \NN$, denote an element of $e^{tE}\cdot e^{tE^\ast}$. For $\y,\z\in\ell^2$ we have
\begin{equation}\label{eq:Wxy}
\langle W(\tau)\y, \z\rangle = \sum_{i=1}^\infty\sum_{j=1}^\infty\int_0^\tau e^{-2t\lambda}e_{ij}(-t)y_jz_idt.
\end{equation}
By \eqref{eq:|Wxy|} the right hand side of \eqref{eq:Wxy} is absolutely convergent. Thus,
\begin{equation*}
\langle W(\tau)\y, \z\rangle = \sum_{j=1}^\infty\sum_{i=1}^\infty\int_0^\tau e^{-2t\lambda}e_{ij}(-t)y_jz_idt = \langle \y, W(\tau)\z\rangle.
\end{equation*}
This implies that $W(\tau)$ is self-adjoint for every $\tau\in\mathbb R$.
Notice that $e^{tE^\ast}$ is just the transpose of $e^{tE}$. Therefore, by \eqref{eq:etE} for $i,j\in \NN$ we have
\begin{equation*}
e_{ij}(t)=\sum_{m=|i-j|}^\infty\frac{t^m}{m!}\cdot\frac{t^{m-|i-j|}}{(m-|i-j|)!},
\end{equation*}
which implies that both of the series
\begin{equation*}
\sum_{i=1}^\infty e_{ij}(-t)y_jz_i \quad\text{and}\quad \sum_{j=1}^\infty\sum_{i=1}^\infty e_{ij}(-t)y_jz_i
\end{equation*}
converge uniformly in $[0, \tau]$, hence
\begin{equation}\begin{aligned}\label{eq:Wyz}
& \langle W(\tau)\y, \z\rangle = \sum_{j=1}^\infty\sum_{i=1}^\infty\int_0^\tau e^{-2t\lambda}e_{ij}(-t)y_jz_idt \\
&=\int_0^\tau\sum_{j=1}^\infty\sum_{i=1}^\infty e^{-2t\lambda}e_{ij}(-t)y_jz_idt =\int_0^\tau e^{-2t\lambda}\langle e^{-tE^\ast}\y,e^{-tE^\ast}\z\rangle dt.
\end{aligned}
\end{equation}
The above equation immediately implies that $\langle W(\tau)\y, \y \rangle>0$ for every $\y \neq 0$, i.e. $W(\tau)$ is positive definite. In \eqref{eq:Wyz} we have shown that the integration can be taken out of the scalar product $\langle W(\tau)\y, \z\rangle$. We will use this property several times below.
For every $\varepsilon\in [0, \tau]$
\[
\langle W(\tau)\y, \y \rangle \ge\int_0^{\varepsilon} e^{-2t\lambda} \langle e^{-tE^\ast}\y,e^{-tE^\ast}\y\rangle dt.
\]
Now we look at the operator $e^{-tE}\cdot e^{-tE^\ast}$. Since $EE^\ast=\id$, we have
\begin{equation}\label{eq:expet}
e^{-tE}\cdot e^{-tE^\ast}=\sum_{n=0}^\infty\frac{t^{2n}}{(n!)^2}\id+ \sum_{n=0}^\infty \sum_{m=n+1}^\infty \frac{(-t)^{n+m}}{n!m!}(E^{m-n}+{(E^\ast)}^{m-n}).
\end{equation}
It follows that for sufficiently small $\varepsilon>0$ and $t\in(0, \eps)$ we have
\[
e^{-tE}\cdot e^{-tE^\ast}=\id-t(E+E^\ast)+o(t),
\]
where $o(t)$ is a linear operator whose $\ell^2$ norm is $o(t)$ in the usual sense. Finally,
\begin{equation}\begin{aligned}\label{eq:weakes}
\int_0^\varepsilon e^{-2t\lambda} \langle e^{-tE}\cdot e^{-tE^\ast}\y, \y\rangle dt=\int_0^\varepsilon e^{-2t\lambda} \langle (\id -t(E+E^\ast)+o(t))\y, \y\rangle dt\\
>(1-3\varepsilon)\|\y\|_2^2\int_0^\varepsilon e^{-2t\lambda}dt=\frac{1-3\varepsilon}{-2\lambda}(e^{-2\lambda\varepsilon}-1)\|\y\|_2^2,
\end{aligned}
\end{equation}
where we used $|\langle (E+E^\ast)\y, \y \rangle|\le 2\|\y\|_2^2$. This proves
\begin{equation}\label{eq:kappa}
\|W(\tau)\y\|_2\ge \kappa\|\y\|_2, \text{ with } \kappa=\frac{1-3\varepsilon}{-2\lambda}(e^{-2\lambda\varepsilon}-1)>0.
\end{equation}
Thus Lemma \ref{lem:inv} is applicable and implies that $W(\tau)$ is invertible for every $\tau >0$ and $W^{-1}(\tau):\ell^2\to \ell^2$ is a bounded linear operator with the norm $\|W(\tau)^{-1}\|\le \kappa^{-1}$, where $\kappa$ is independent of $\tau$.
\end{proof}
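For illustration, the conclusions of the lemma can be checked numerically on finite truncations of $A$; the Python sketch below (with arbitrary truncation size, $\lambda$ and $\tau$) verifies that the truncated Gramian is symmetric and positive definite.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

n, lam, tau, steps = 60, -1.0, 1.0, 400
E = np.diag(np.ones(n - 1), k=1)          # truncated shift operator
A = lam * np.eye(n) + E

# W(tau) = int_0^tau exp(-sA) exp(-sA^T) ds, trapezoidal rule on a grid
ds = tau / steps
vals = np.array([expm(-s * A) @ expm(-s * A).T
                 for s in np.linspace(0.0, tau, steps + 1)])
W = ds * (0.5 * (vals[0] + vals[-1]) + vals[1:-1].sum(axis=0))

print(np.allclose(W, W.T))                # self-adjoint
print(np.linalg.eigvalsh(W).min() > 0.0)  # positive definite
\end{verbatim}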
Now we are ready to prove Theorem \ref{main}.
\begin{proof}[Proof of Theorem \ref{main}]
Below we assume that $\rho>0$ and the set of admissible controls is defined as in Section \ref{sec:sofprob}.
Recall that $\y(t)=e^{tA}\y_0 + e^{tA}\int_{0}^te^{-sA}\f(s)ds$ is the unique solution of system \eqref{eq:controlsyst} with an initial state $\y(0)=\y_0$.
We look for a solution of the control problem in the form
\begin{equation}\label{eq:contol}
\f_0(t)=-e^{-tA^\ast}\cdot W^{-1}(\tau)\y_0 \quad\text{for every}\quad \y_0\in \ell^2, \tau\in\RR^+.
\end{equation}
We show that $\int_0^\tau e^{-sA}\f_0(s)ds=-\y_0$ for every fixed $\tau\in \RR^+$. Indeed, by \eqref{eq:Wyz} we have
\begin{equation}\label{eq:int=y_0}
-\int_0^\tau e^{-tA}\f_0dt=
\int_0^\tau e^{-tA}e^{-tA^\ast}dt\cdot W^{-1}(\tau)\y_0= \y_0.
\end{equation}
It remains to show that $\f_0$ is admissible, i.e. there exists $\tau>0$ such that $\|\f_0\|_{L^2}\le \rho$.
By definition of $W(\tau)$ and \eqref{eq:Wyz} we have
\begin{equation}\label{eq:norm}
\begin{aligned}
\int_0^\tau\|\f_0(t)\|^2_2dt&=\int_0^\tau \|e^{-tA^\ast}W^{-1}(\tau)\y_0\|_2^2dt\\
&= \int_0^\tau \left\langle e^{-tA}\cdot e^{-tA^\ast}W^{-1}(\tau)\y_0, W^{-1}(\tau)\y_0\right\rangle dt\\
&=\langle \y_0, W^{-1}(\tau)\y_0\rangle\le \|\y_0\|_2\cdot\|W^{-1}(\tau)\y_0\|_2.
\end{aligned}
\end{equation}
To prove item (i) we consider the set of $\y_0\in\ell^2$ with $\|\y_0\|_2^2\le \kappa\rho^2$. Then by \eqref{eq:int=y_0} we have $\y(\tau)=0$ for the solution started from $\y_0$. Also, by \eqref{eq:norm} and the choice of $\y_0$, the function $\f_0$ defined by \eqref{eq:contol} is admissible.
To prove item (ii) we consider cases $\lambda<-1$ and $\lambda=-1$ separately.
\textbf{Global null-controllability for $\lambda<-1$.} We will prove that $\|W^{-1}(\tau)\y_0\|_2\to 0$ as $\tau\to +\infty$. To this end we refine the inequality in \eqref{eq:weakes} as follows.
Since $e^{tA^\ast}$ is invertible,
\[
\|\y\|_2=\|e^{tA^\ast}e^{-tA^\ast}\y\|_2\le \|e^{tA^\ast}\|\cdot \|e^{-tA^\ast}\y\|_2.
\]
Thus, by Proposition \ref{prop:stability} we have
\[
\|e^{-tA^\ast}\y\|_2\ge e^{-t(1+\lambda)}\|\y\|_2.
\]
Consequently, for any $\y\in \ell^2$ we have
\[
\langle W(\tau)\y, \y\rangle= \int_0^\tau \|e^{-tA^\ast}\y\|^2_2dt\ge \int_0^\tau e^{-2t(1+\lambda)}\|\y\|^2_2dt
= \|\y\|^2_2\cdot\frac{e^{-2(1+\lambda)\tau}-1}{-2(1+\lambda)}.
\]
Recalling $\|W^{-1}(\tau)\|\le \kappa^{-1}$ and letting $\z(\tau)=W(\tau)^{-1}\y_0\in\ell^2$, by the above inequality we have
\[
\kappa^{-1}\|\y_0\|^2_2\ge \langle \y_0, W^{-1}(\tau)\y_0\rangle=\langle W(\tau)\z(\tau), \z(\tau)\rangle\ge \|\z(\tau)\|^2_2\cdot\frac{e^{-2(1+\lambda)\tau}-1}{-2(1+\lambda)}.
\]
Hence,
\begin{equation}\label{eq:||ztau||}
\|\z(\tau)\|_2\le \left(\frac{-2(1+\lambda)}{\kappa(e^{-2(1+\lambda)\tau}-1)}\right)^{1/2}\|\y_0\|_2.
\end{equation}
Since $\lambda<-1$ the right hand side of the above inequality converges to $0$ exponentially fast as $\tau\to +\infty$ and so does $\|\z(\tau)\|_2$. Thus, by \eqref{eq:norm} there exists $\tau_0$ such that
\[
\int_0^\tau\|\f_0(t)\|^2_2dt\le \rho^2 \quad\text{for all}\quad\tau>\tau_0.
\]
This finishes the proof of global controllability for $\lambda<-1$.
\textbf{Global null-controllability for $\lambda=-1$.}
This case needs a slightly different argument. Recall that in this case the system is locally null controllable, i.e. the control function defined in \eqref{eq:contol} remains admissible in a neighbourhood of the origin: if $\|\y_0\|_2\le \rho\sqrt\kappa$, where $\kappa$ is the constant defined in \eqref{eq:kappa}, we set \begin{equation}\label{eq:contol1}
\f_1(t)=-e^{-tA^\ast}\cdot W^{-1}(1)\y_0 \quad\text{for every}\quad \y_0\in \ell^2.
\end{equation}
Then by \eqref{eq:norm} we get
\begin{equation*}
\int_0^1\|\f_1(t)\|^2_2dt\le \|\y_0\|_2^2\cdot \|W^{-1}(1)\| \le \rho^2,
\end{equation*}
and
\[
\y(1)=e^{A}\y_0 + e^{A}\int_{0}^1e^{-sA}\f_1(s)ds=0.
\]
Further, by stability of the system \eqref{eq:syst1}, for any $\y_0\in \ell^2$ there exists $\tau_0=\tau(\kappa, \rho, \y_0)$ such that $\|e^{tA}\y_0\|_2\le \rho\sqrt\kappa$ for any $t\ge \tau_0$. Therefore, we set
\begin{equation}\label{eq:l=1}
\f_0(t)=\begin{cases}
0, &\text{ if } 0\le t\le \tau_0,\\
\f_1(t-\tau_0), &\text{ if } \tau_0< t\le \tau_0+1,
\end{cases}
\end{equation}
where $\f_1$ is defined by \eqref{eq:contol1} with $\y_0$ replaced by $e^{\tau_0 A}\y_0$.
One can easily check that $\f_0$ is admissible and $\y(\tau_0+1)=0$ for the corresponding solution of \eqref{eq:controlsyst}. This finishes the proof of item (ii).
Observe that to prove item (iii) it is sufficient to obtain estimates on $\tau$ satisfying
\[\|\y_0\|_2\cdot \|\z(\tau)\|_2\le \rho^2,\]
where $\z(\tau)$ satisfies \eqref{eq:||ztau||}; in view of \eqref{eq:||ztau||}, this holds whenever
\[
\left(\frac{-2(1+\lambda)}{\kappa(e^{-2(1+\lambda)\tau}-1)}\right)^{1/2}\|\y_0\|_2^2\le \rho^2,
\]
which is satisfied if
\[
\tau\ge \frac{\|\y_0\|^4_2}{\kappa\rho^4}\ge \frac{1}{2|\lambda+1|} \log\left(1+\frac{2|\lambda+1|}{\kappa}\frac{\|\y_0\|^4_2}{\rho^4}\right).
\]
This completes the proof of the Theorem.
\end{proof}
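As a numerical illustration of the construction (on a finite truncation, with arbitrary parameters), the following sketch builds the Gramian control \eqref{eq:contol} and checks that it steers the truncated system to the origin at time $\tau$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

n, lam, tau, steps = 40, -1.5, 2.0, 800
E = np.diag(np.ones(n - 1), k=1)
A = lam * np.eye(n) + E
y0 = np.zeros(n); y0[0] = 1.0

grid = np.linspace(0.0, tau, steps + 1)
ds = tau / steps
G = [expm(-s * A) for s in grid]               # exp(-sA)

def trap(vals):                                # trapezoidal rule
    vals = np.array(vals)
    return ds * (0.5 * (vals[0] + vals[-1]) + vals[1:-1].sum(axis=0))

W = trap([g @ g.T for g in G])                 # Gramian W(tau)
z = np.linalg.solve(W, y0)                     # W(tau)^{-1} y0
I = trap([g @ (-(g.T @ z)) for g in G])        # int exp(-sA) f0(s) ds
print(np.linalg.norm(expm(tau * A) @ (y0 + I)))  # approximately 0
\end{verbatim}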
\section{Discussion of the results and further questions}\label{sec:disc}
In this paper we addressed an infinite system of linear ODEs with a special operator $A=\lambda \id+ E$ on the right hand side. We obtained stability and controllability of the system when $\lambda\le -1$. Initially, the main motivation for this choice was to construct an example whose finite dimensional projections have qualitatively different behavior than the system itself. In the proofs we used Gramian operators, which raises the natural question of whether or not the constructed control functions are optimal, since in the finite dimensional setting this method is known to produce optimal control. In the setting of the current paper, when $\lambda<-1$ we expect to obtain optimal control, but we were unable to find an analogue of a general result in this spirit (for example, \cite[Proposition 2]{Ibragimov-ScAs2013}) in the infinite dimensional setting. When $\lambda=-1$ we do not control the system until it gets close to the origin; therefore, we do not expect the control to be optimal. Notice that in the proofs we used the special form of $A$.
It would be interesting to obtain similar results for the more general system
\begin{equation}\label{eq:gen}
\dot{\y}=A\y+B\f, \quad\y(0)=\y_0,
\end{equation}
where $A:\ell^2\to \ell^2$ is a bounded operator, and $B:\mathcal L\to \mathcal L$ is an operator on a (possibly finite dimensional) subspace $\mathcal L$ of $\ell^2$. The proofs suggest that if $B$ is the identity and the spectrum of $A$ lies to the left of the imaginary axis, then \eqref{eq:gen} is globally asymptotically stable. Invertibility of the Gramians also seems to carry over, since it is a perturbative argument. But for global null controllability, one needs different estimates for the inverses of the Gramians, or another approach altogether. For general
$B$ the situation is unclear; it would be desirable to obtain conditions similar
to the classical Kalman condition (see, for example, \cite[Theorem 1.16]{Coron}) or
an analogue of the Fattorini--Hautus test, but in both situations it is not clear what the exact conditions should be: for the Kalman condition, injectivity of an operator is not sufficient for invertibility, and for Fattorini--Hautus one usually assumes a countable spectrum with certain properties (see, for example, \cite{BadTaka} and references therein).
\section{Introduction}
\label{sec:introduction}
The CERN LHC produced millions of top quark
pairs (\ttbar) in 2011 and 2012. This allows for a detailed investigation of the kinematic event properties of
\ttbar production such as the missing transverse energy (\MET),
the scalar sum of the jet transverse momenta (\HT),
the scalar sum of the transverse momenta
of all objects (\ensuremath{S_\mathrm{T}}\xspace), and
the transverse momentum (\wpt) of leptonically decaying $\PW$ bosons produced in top quark decays.
These measurements can be used to verify current theoretical models, along with their implementation
in simulations of \ttbar production, and also to measure rare standard model (SM)
processes such as \ttbar production in association with a $\PW$, $\PZ$, or Higgs boson.
Since top quark pair production is a major background for many searches for physics beyond the SM,
it is important that the properties of \ttbar events are well understood.
Here, we report measurements carried out using the CMS detector~\cite{bib:JINST} at the LHC at
two different proton--proton center-of-mass energies. The data samples used include integrated
luminosities of 5.0\fbinv collected in 2011 at \ensuremath{\sqrt{s}=7\TeV}\xspace and 19.7\fbinv from 2012 at \ensuremath{\sqrt{s}=8\TeV}\xspace.
The \ttbar production cross section is measured as a function of \MET, \HT, \ensuremath{S_\mathrm{T}}\xspace, and \wpt, corrected for detector effects,
and compared with the predictions from different event generators. Differential \ttbar cross sections
have previously been measured at the Tevatron~\cite{CDFDiffTop,DZeroDiffTop}, and at the
LHC~\cite{CMSDiffTop7TeV, CMSDiffTop8TeV, CMSDiffTopExtraJets8TeV, CMSDiffTopHighPt8TeV,ATLASDiffTop7TeV, ATLASDiffTopHighPt8TeV}.
These previous measurements study the \ttbar production cross section as a function of the top quark kinematics
and the kinematics of the \ttbar system. The results presented here are complementary,
since the \ttbar production cross section is measured as a function of variables that do not require the reconstruction
of the top quarks from their decay products.
Top quarks decay with close to 100\% probability into a $\PW$ boson and a
bottom quark. In this article, we consider the channel
in which one of the $\PW$ bosons decays leptonically into a charged lepton (electron or
muon) along with its associated neutrino, while the other $\PW$ boson decays hadronically.
This channel has a branching fraction of around 15\% for direct decay to each lepton
flavor and a relatively clean experimental signature, including an
isolated, high-transverse-momentum lepton, large \MET\ from the undetected
neutrino, and multiple hadronic jets. Two jets are expected to contain $\PQb$ hadrons
from the hadronization of the $\PQb$ quarks produced directly in the
$\PQt \to \PQb \PW$ decay, while other jets (from the hadronic $\PW$ boson
decay or gluon radiation) will typically contain only light and charm quarks.
\section{The CMS detector}
The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a
magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip tracker, a lead
tungstate crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter,
each composed of a barrel and two endcap sections. Muons are measured in gas-ionisation detectors embedded in the steel flux-return
yoke outside the solenoid. Extensive forward calorimetry complements the coverage provided by the barrel and endcap
detectors.
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the
relevant kinematic variables, can be found in Ref.~\cite{bib:JINST}.
\section{Simulation}
\label{sec:mc_modelling}
For the Monte Carlo (MC) simulation of the \ttbar signal sample the leading-order \MADGRAPH v5.1.5.11 event generator~\cite{MadGraph}
is used with relevant matrix elements for up to three additional partons implemented.
Theoretical production cross section\ values
of
$177.3\ ^{+4.6}_{-6.0}\ \mathrm{(scale)}\pm9.0\ \mathrm{(PDF +}\alpS \mathrm{)}\unit{pb}$ at \ensuremath{\sqrt{s}=7\TeV}\xspace,
and
$252.9\ ^{+6.4}_{-8.6}\ \mathrm{(scale)}\pm11.7\ \mathrm{(PDF +}\alpS \mathrm{)}\unit{pb}$ at \ensuremath{\sqrt{s}=8\TeV}\xspace,
are used for the normalization of these samples. These cross sections are calculated with the Top++2.0 program
to next-to-next-to-leading order (NNLO) in perturbative QCD, including soft-gluon resummation to
next-to-next-to-leading-logarithm (NNLL) order~\cite{Topplusplus}, and assuming a top quark mass $\mtop = 172.5\GeV$.
The first uncertainty comes from the independent variation of the renormalization ($\mu_{\mathrm{R}}$)
and factorization ($\mu_{\mathrm{F}}$) scales, while the second one is associated with variations in the
parton distribution function (PDF) and \alpS,
following the PDF4LHC prescription with the MSTW2008 68\% CL NNLO,
CT10 NNLO, and NNPDF2.3 5f FFN PDF sets~\cite{Botje:2011sn,Alekhin:2011sk,alphaSUnc,ct10TTbarXsec,pdfWithLHCData}.
The generated events are subsequently processed with \PYTHIA v6.426~\cite{pythia} for parton showering and hadronization.
The \PYTHIA parton shower is matched to the jets from the hard quantum chromodynamics (QCD)
matrix element via the MLM prescription~\cite{Hoche:2006ph}
with a transverse momentum (\pt) threshold of $20\GeV$. The CMS detector response is simulated using \GEANTfour~\cite{Agostinelli:2002hh}.
Independent \ttbar samples are also generated at both \ensuremath{\sqrt{s}=7\TeV}\xspace and \ensuremath{\sqrt{s}=8\TeV}\xspace with \POWHEG v2 r2819~\cite{POWHEG1:Nason:2004rx,POWHEG2:Frixione:2007vw,POWHEG3:Alioli:2010xd}.
At 8\TeV, additional samples are generated with both \MCATNLO v3.41~\cite{Frixione:2002ik}
and \POWHEG v1.0 r1380~\cite{POWHEG1:Nason:2004rx,POWHEG2:Frixione:2007vw,POWHEG3:Alioli:2010xd}.
All of the \POWHEG samples are interfaced with both \PYTHIA and \HERWIG v6.520~\cite{Herwig6}, whereas the \MCATNLO generator
is interfaced with \HERWIG for parton showering. These samples, which are all generated to next-to-leading order accuracy,
are used for comparison with the final results.
The most significant backgrounds to \ttbar production are events in which a $\PW$ boson is produced in association
with additional jets.
Other backgrounds include single top quark production, $\PZ$ boson production in association with
multiple jets, and QCD multijet events where hadronic activity is misidentified as a lepton.
The simulation of background from $\PW$ and $\PZ$ boson production in association with jets is also performed using
the combination of \MADGRAPH and \PYTHIA, with a \pt\ matching threshold of 10\GeV in this case.
These samples are referred to as $\PW$+jets and $\PZ$+jets, respectively.
Single top quark production via $t$- and $s$-channel $\PW$ boson exchange~\cite{singletopPOWHEG:stchannel}
and with an associated on-shell $\PW$ boson~\cite{singletopPOWHEG:tWchannel} are generated using \POWHEG.
The QCD multijet processes are simulated using \PYTHIA.
The event yields of the background processes are normalized according to their predicted production cross section\ values.
These are from NNLO calculations for $\PW$+jets and $\PZ$+jets events~\cite{fewz,Wproduction},
next-to-leading order calculations with NNLL corrections for single top quark events~\cite{singletopxsec},
and leading-order calculations for QCD multijet events~\cite{pythia}.
Samples are generated using the {CTEQ6L} PDFs~\cite{Pumplin:2002vw}
for \MADGRAPH samples, the {CT10} PDFs~\cite{ct10} for \POWHEG samples, and the {CTEQ6M}
PDFs~\cite{Pumplin:2002vw} for \MCATNLO.
The \PYTHIA Z2 tune is used to describe the underlying event in both the \MADGRAPH and {\textsc{powheg}\ +\ \textsc{pythia}}\xspace samples at \ensuremath{\sqrt{s}=7\TeV}\xspace,
whereas the Z2* tune is used for the corresponding samples at \ensuremath{\sqrt{s}=8\TeV}\xspace~\cite{Z2UE}.
The underlying event in the {\textsc{powheg}\ +\ \textsc{herwig}}\xspace samples is described by the AUET2 tune~\cite{AUET2UE},
whereas the default tune is used in the \MCATNLOHERWIG sample.
The value of the top quark mass is fixed to $\mtop = 172.5\GeV$ in all samples.
In all cases, \PYTHIA is used for simulating the gluon radiation and fragmentation, following the prescriptions of
Ref.~\cite{bib:io}. Additional simulated hadronic $\Pp\Pp$ interactions (``pileup''), in the same or nearby beam crossings,
are overlaid on each simulated event to
match the high-luminosity conditions in actual data taking.
Previous measurements of differential \ttbar production cross sections at the LHC~\cite{CMSDiffTop7TeV, CMSDiffTop8TeV,ATLASDiffTop7TeV}
showed that several of the \ttbar event generators considered in this analysis predict a harder top quark \pt spectrum
than that observed in data. An additional simulated \ttbar sample is considered here, where the sample produced with the \MADGRAPH event generator
is reweighted to improve the agreement of the top quark \pt spectrum with data.
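Schematically, such a reweighting assigns each simulated \ttbar event a multiplicative weight computed from the generated top and antitop quark \pt. An exponential form of the kind sketched below is commonly used for this purpose; the coefficients shown are placeholders for illustration only, not the values used in this analysis.
\begin{verbatim}
import numpy as np

# Placeholder coefficients -- illustrative only.
A_COEF, B_COEF = 0.16, -0.0014

def top_pt_weight(pt_top, pt_antitop):
    # Event weight: geometric mean of per-quark exponential factors.
    return np.sqrt(np.exp(A_COEF + B_COEF * pt_top) *
                   np.exp(A_COEF + B_COEF * pt_antitop))
\end{verbatim}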
\section{Event reconstruction and selection}
\label{sec:selection}
Parallel selection paths for the two lepton types are implemented, resulting
in samples classified as electron+jets and muon+jets.
The trigger for the electron+jets channel during the \ensuremath{\sqrt{s}=7\TeV}\xspace data taking
selects events containing an electron candidate with $\pt>25\GeV$
and at least three reconstructed hadronic jets with $\pt>30\GeV$. In the \ensuremath{\sqrt{s}=8\TeV}\xspace data,
at least one electron candidate with $\pt > 27\GeV$ is required, with
no additional requirement for jets.
In the muon+jets channel, at least one isolated muon candidate with $\pt > 24\GeV$ is required at the trigger
level. Each candidate event is required to contain at least one well-measured vertex~\cite{Chatrchyan:2014fea},
located within the $\Pp\Pp$ luminous region in the center of CMS.
Events are reconstructed using a particle-flow (PF) technique~\cite{bib:pf2009,bib:pf2010}, which combines
information from all subdetectors to optimize the reconstruction and identification of
individual long-lived particles.
Electron candidates are selected with a multivariate technique
using calorimetry and tracking information~\cite{Khachatryan:2015hwa}.
Inputs to the discriminant include information about the calorimeter shower shape, track quality,
track-shower matching, and a possible photon conversion veto.
Electron candidates are required to have $\et>30\GeV$ and pseudorapidity in the range $|\eta|<2.5$.
The low-efficiency region $1.44 < |\eta| < 1.57$ between
the barrel and endcap sections of the detector is excluded.
Muon candidates are selected with tight requirements on track and vertex quality, and on
hit multiplicity in the tracker and muon detectors~\cite{Chatrchyan:2012xi}.
These requirements suppress cosmic rays, misidentified muons, and nonprompt muons from decay of hadrons in flight.
Muon candidates are required to have $\pt>26\GeV$ and $|\eta|<2.1$.
For the lepton isolation requirement, a cone of size $\Delta R=\sqrt{\smash[b]{(\Delta\eta)^2 + (\Delta\phi)^2}}$ is
constructed around the lepton direction, where $\Delta\eta$ and $\Delta\phi$ are the differences in
pseudorapidity and azimuthal angle (in radians), respectively,
between the directions of the lepton and another particle.
The \pt values of charged and neutral particles found in this cone are summed,
excluding the lepton itself and correcting for the effects of pileup~\cite{Khachatryan:2015hwa}.
The relative isolation variable $I(\Delta R)$ is defined as the ratio of this
sum to the lepton \pt.
Lepton candidates are selected if they satisfy $I(0.3) < 0.1$ for electrons,
and $I(0.4) < 0.12$ for muons.
Reconstructed particles are clustered into jets using the
anti-$\kt$ algorithm~\cite{bib:antikt} with a distance parameter of 0.5.
The measured \pt of each jet is corrected~\cite{Chatrchyan:2011ds} for known
variations in the jet energy response as a function of the measured
jet $\eta$ and \pt.
The jet energy is also corrected for the extra energy deposition from
pileup interactions~\cite{bib:PUSubtraction,Cacciari:2007fd}.
Jets are required to
pass loose identification
requirements to remove calorimeter noise~\cite{Chatrchyan:2013txa}.
Any such jet whose direction is less than $\Delta R = 0.3$ from the
identified lepton direction is removed.
For the identification of $\PQb$ quark jets (``$\PQb$ tagging''),
a ``combined secondary vertex'' algorithm~\cite{bib:btag2012} is used,
taking into account the reconstructed secondary vertices and track-based lifetime information.
The $\PQb$ tagging threshold is chosen to give an acceptance of 1\%
for light-quark and gluon jets with a tagging efficiency of 65\% for $\PQb$ quark jets.
The final selection requires exactly one high-\pt, isolated electron or
muon.
Events are vetoed if they contain an additional lepton candidate
satisfying either of the following criteria: an electron with $\pt>20\GeV$,
$|\eta|<2.5$, and $I(0.3) < 0.15$; or a
muon, with looser requirements on hit multiplicity, and with $\pt>10\GeV$, $|\eta|<2.5$, and $I(0.4) <
0.2$. The event must have at least four jets with $\pt >30\GeV$, of which at
least two are tagged as containing $\PQb$ hadrons.
After the final selection, 26~290 data events are found at \ensuremath{\sqrt{s}=7\TeV}\xspace, and 153~223 at \ensuremath{\sqrt{s}=8\TeV}\xspace.
The \ttbar contribution to these event samples, as estimated from simulation, is about 92\%.
The fraction of true signal events in the samples is 78\%.
Misidentified all-hadronic or dileptonic \ttbar events, and events containing tau leptons among the \ttbar decay products,
comprise 14\% of the samples.
The remaining events are approximately 4\% single top quark events, 2\% $\PW$/$\PZ$+jets events,
and 2\% QCD multijet events. The efficiency for signal events to satisfy the final selection
criteria is about 8\%, as determined from simulation.
\section{Cross section\ measurements}
\label{sec:cross_section}
We study the normalized \ttbar differential production cross section
as a function of four kinematic event variables: \MET, \HT, \ensuremath{S_\mathrm{T}}\xspace,
and \wpt.
The variable \MET\ is the magnitude of the missing transverse momentum vector \ptvecmiss,
which is defined as the projection on the plane
perpendicular to the beams of the negative vector sum of the momenta of all PF candidates in the
event:
\begin{equation*}
\MET = \left[ \left(-\sum_i{p_{x}^i}\right)^2 + \left(-\sum_i{p_{y}^i}\right)^2 \right]^{\frac{1}{2}},
\end{equation*}
where $p_x^i$ and $p_y^i$ are the $x$ and $y$ momentum components of the \textit{i}th candidate,
and the sums extend over all PF candidates. The measured \MET\ is corrected for pileup and
nonuniformities in response as a function of $\phi$~\cite{Khachatryan:2014gga}.
The variable \HT is defined as the scalar sum of the transverse momenta of all jets in the event,
\[\HT = \sum_{\text{all jets}}\pt^{\mathrm{jet}} ,\]
where the sum extends over all jets having $\pt>20\GeV$ and $\left|\eta\right|<2.5$.
The variable \ensuremath{S_\mathrm{T}}\xspace\ is the scalar sum of \HT, \MET,\ and the \pt of the identified lepton,
\[\ensuremath{S_\mathrm{T}}\xspace = \HT + \MET + p_\mathrm{T}^\mathrm{lepton}.\]
Finally, \wpt\ is the magnitude of the transverse momentum of the leptonically decaying $\PW$ boson,
which is derived from the momentum of the isolated lepton and \ptvecmiss\
\[\wpt = \sqrt{\left(p_x^{\mathrm{lepton}} + p_x^{\mathrm{miss}}\right)^2 +
\left(p_y^{\mathrm{lepton}} + p_y^{\mathrm{miss}}\right)^2} ,\]
where $p_{x}^{\mathrm{lepton}}$ and $p_{y}^{\mathrm{lepton}}$ are the transverse components of
$\vec{p}^{\mathrm{lepton}}$, and $p_x^{\mathrm{miss}}$ and $p_y^{\mathrm{miss}}$ are the transverse components
of \ptvecmiss.
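For illustration, the four variables can be computed per event as in the following sketch (the function and variable names are hypothetical, and the pileup and $\phi$-dependent corrections described above are omitted).
\begin{verbatim}
import numpy as np

def event_variables(jets_pt, cands_px, cands_py, lep_px, lep_py):
    # MET: magnitude of the negative vector sum over all PF candidates
    met_x, met_y = -np.sum(cands_px), -np.sum(cands_py)
    met = np.hypot(met_x, met_y)
    # HT: scalar sum of jet pT (jets with pT > 20 GeV, |eta| < 2.5)
    ht = np.sum(jets_pt)
    # ST: HT + MET + lepton pT
    st = ht + met + np.hypot(lep_px, lep_py)
    # pT of the leptonically decaying W boson
    wpt = np.hypot(lep_px + met_x, lep_py + met_y)
    return met, ht, st, wpt
\end{verbatim}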
Figures~\ref{fig:control_MET_HT}~and~\ref{fig:control_ST_WPT} show the observed distributions of
\MET, \HT, \st, and \wpt, in the \ensuremath{\sqrt{s}=8\TeV}\xspace data samples,
compared to the sum of the corresponding signal and background distributions from simulation.
\begin{figure*}[hbtp]
\begin{center}
\includegraphics[width=\cmsFigWidth]{Figure_001-a.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_001-b.pdf}\\
\includegraphics[width=\cmsFigWidth]{Figure_001-c.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_001-d.pdf}
\caption{\label{fig:control_MET_HT}The observed distributions of
\MET\ (top) and \HT (bottom) in the $\sqrt{s}=8\TeV$
electron+jets (left) and muon+jets (right) data samples,
compared to predictions from simulation.
The points are the data histograms, with the vertical bars showing the statistical uncertainty,
and the predictions from the simulation are the solid histograms.
The shaded region shows the uncertainty in the values from simulation.
These include contributions from the statistical uncertainty and the uncertainty in the \ttbar cross section.
The lower plots show the ratio of the number of events from data
and the prediction from the MC simulation.}
\end{center}
\end{figure*}
\begin{figure*}[hbtp]
\begin{center}
\includegraphics[width=\cmsFigWidth]{Figure_002-a.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_002-b.pdf}\\
\includegraphics[width=\cmsFigWidth]{Figure_002-c.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_002-d.pdf}
\caption{\label{fig:control_ST_WPT}The observed distributions of
\ensuremath{S_\mathrm{T}}\xspace\ (top) and \wpt (bottom) in the $\sqrt{s}=8\TeV$
electron+jets (left) and muon+jets (right) data samples,
compared to predictions from simulation.
The points are the data histograms, with the vertical bars showing the statistical uncertainty,
and the predictions from the simulation are the solid histograms.
The shaded region shows the uncertainty in the values from simulation.
These include contributions from the statistical uncertainty and the uncertainty in the \ttbar cross section.
The lower plots show the ratio of the number of events from data
and the prediction from the MC simulation.}
\end{center}
\end{figure*}
For simulated \ttbar\ signal events, these four kinematic variables are also calculated
using the momenta of particles in the event, before the simulation of the detector response.
We refer to the quantities calculated in this way as the generated variables.
The generated value of \MET\ is the magnitude of the vector sum of the \pt\ of all neutrinos in the event.
The long-lived particles in the event are clustered into jets in the same way as the reconstructed
particles.
The generated value of \HT\ is the sum of the magnitudes of the \pt\ of these jets with $\pt>20\GeV$ and $\left|\eta\right|<2.5$.
The generated values of \ensuremath{S_\mathrm{T}}\xspace\ and \wpt\ are calculated in the same way as the corresponding reconstructed variables,
using the \ptvec\ of the charged lepton from the leptonic decay of a $\PW$ boson coming from $\PQt \to \PQb \PW$ decay.
The choice of bin widths for this measurement is optimized separately for each kinematic event variable
to minimize the migration between bins. This optimization is based on three criteria:
(i) of the simulated signal events for which the value of the generated variable falls in the bin,
at least 50\% are required to have the reconstructed variable in the same bin (this is sensitive to migration of
events out of the bin);
(ii) of the simulated signal events for which the value of the reconstructed variable falls in the bin,
at least 50\% are required to have the generated variable in the same bin (this is sensitive to migration of
events into the bin);
(iii) the number of reconstructed
simulation events in a bin is required to be more than 100.
These criteria ensure that bin-to-bin migrations are kept small, while allowing a differential
cross section measurement with reasonable granularity.
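The three criteria can be expressed schematically as follows (an illustrative sketch with hypothetical names; in practice candidate bin edges are adjusted until all criteria are satisfied).
\begin{verbatim}
import numpy as np

def bin_ok(gen, reco, lo, hi, min_events=100):
    in_gen = (gen >= lo) & (gen < hi)
    in_reco = (reco >= lo) & (reco < hi)
    # (i) fraction of generated-in-bin events reconstructed in the bin
    stability = np.mean(in_reco[in_gen]) if in_gen.any() else 0.0
    # (ii) fraction of reconstructed-in-bin events generated in the bin
    purity = np.mean(in_gen[in_reco]) if in_reco.any() else 0.0
    # (iii) enough reconstructed simulated events in the bin
    return stability >= 0.5 and purity >= 0.5 and in_reco.sum() > min_events
\end{verbatim}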
The number of \ttbar events in each bin of each kinematic event variable, and in each channel, is obtained by subtracting the expected
contributions of background processes from data. The contributions of single top quark, and $\PW$ or $\PZ$ boson plus jet events are estimated
from simulation.
In the case of the QCD multijet background, the contribution is estimated from data using a control region where the
selection criteria are modified to enrich the contribution of QCD multijet events.
In the electron+jets channel, the control region is obtained by inverting the photon conversion veto on the electron.
In addition to this, the number of $\PQb$-tagged jets is required to be exactly zero.
The small contamination of \tt, single top, $\PW$+jets, and $\PZ$+jets events in this control region, as estimated from simulation,
is subtracted from the data.
Then, the ratio of simulated QCD multijet events in the control region and the signal region is used to scale
the normalization of the data-driven QCD multijet estimate from the control region to the signal region in the data.
The control region in the muon+jets channel is obtained by inverting the isolation criterion on the muon in the selected events,
and by requiring exactly zero $\PQb$-tagged jets. The jet selection criterion is also modified,
requiring at least three jets.
The same procedure is then followed to estimate the contribution of QCD multijet events in the muon+jets signal region.
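Schematically, the estimate in each bin takes the form below (an illustrative sketch with hypothetical names).
\begin{verbatim}
def qcd_estimate(n_data_cr, n_mc_other_cr, n_mc_qcd_sr, n_mc_qcd_cr):
    # Subtract simulated non-QCD contamination from data in the control
    # region (CR), then transfer to the signal region (SR) using the
    # simulated QCD SR/CR ratio.
    n_qcd_cr = max(n_data_cr - n_mc_other_cr, 0.0)
    return (n_mc_qcd_sr / n_mc_qcd_cr) * n_qcd_cr
\end{verbatim}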
The number of \ttbar events from data in each bin is then corrected
for the small fractions of dileptonic, all-hadronic, and tau \ttbar events in the
final sample, as determined from simulation,
and for experimental effects, such as detector resolution, acceptance, and
efficiency. This correction is performed by constructing a
response matrix that maps the generated values to the reconstructed values for the
four kinematic variables in the simulated \ttbar signal events. The response
matrix is constructed using the \MADGRAPH \ttbar sample.
This matrix is then inverted, using regularized singular-value
decomposition \cite{SVD_Unfolding} in the {\sc RooUnfold} \cite{Adye:2011gm}
software framework. Since we impose no
requirements on the generated events, the procedure corrects to the
full signal phase space.
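As an illustration of the idea (a simplified stand-in, not the regularized SVD implementation in {\sc RooUnfold} used here), the correction can be sketched as a Tikhonov-regularized least-squares inversion of the response matrix with a curvature penalty.
\begin{verbatim}
import numpy as np

def unfold_tikhonov(R, y, tau=1e-3):
    # Solve min ||R x - y||^2 + tau ||L x||^2, where R maps generated
    # to reconstructed bins and L is a second-derivative penalty.
    n = R.shape[1]
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    lhs = R.T @ R + tau * (L.T @ L)
    return np.linalg.solve(lhs, R.T @ y)
\end{verbatim}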
The fully-corrected numbers of \ttbar events in the electron+jets and
muon+jets channels yield consistent results. These are then added and used to calculate the
normalized \ttbar differential production cross section with respect to each kinematic event variable, $X$, using:
\begin{equation}
\frac{1}{\sigma}\frac{\rd \sigma_j}{\rd X}=\frac{1}{N}\frac{x_{j}}{\Delta_{j}^{X}}\;,
\end{equation}
where $x_{j}$ represents the number of unfolded signal events in bin $j$,
$\Delta_{j}^{X}$ is the width of bin $j$, $\sigma$ is the total \ttbar production cross section,
and $N=\sum_{i}{x_{i}}$ is the total number of unfolded signal events.
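Schematically (a hypothetical helper taking the unfolded bin contents and bin widths):
\begin{verbatim}
import numpy as np

def normalized_diff_xsec(x, widths):
    # x_j: unfolded signal yields; widths: bin widths Delta_j^X
    x = np.asarray(x, dtype=float)
    return x / (x.sum() * np.asarray(widths, dtype=float))
\end{verbatim}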
\section{Systematic uncertainties}
\label{sec:systematic_errors}
The systematic uncertainties in the experimental and theoretical input quantities
are evaluated and propagated to the final results, taking correlations
into account. Since the final result is normalized to the total number of events,
the effect of uncertainties that are correlated across all bins
is negligible.
As such, only uncertainties that affect the shape of the measured distributions are significant.
The uncertainty coming from the choice of
renormalization and factorization scales in the physics modeling
of \ttbar events is determined by producing two additional simulated event samples. These samples are generated with
both scales simultaneously varied by a factor of two up or down from their default values
equal to the $Q$ of the hard process in the event;
$Q$ is defined via $Q^2 = \mtop^{2}+\sum{p_{\mathrm{T}}^{2}}$,
where the sum is over all additional final-state partons in the matrix element.
The effect of varying the renormalization and factorization scales
in the $\PW$+jets and $\PZ$+jets samples is also considered to determine the uncertainty in the shape of this background.
The uncertainty arising from the choice of parton shower matching threshold in the event generation
is determined in a similar fashion, using additional samples in which the threshold is varied up or down.
Uncertainties from the
modeling of the hadronization are evaluated by comparing \POWHEG v1 simulated samples
with two different hadron shower generators (\PYTHIA and \HERWIG).
The uncertainty owing to the choice of the PDF is determined by reweighting the
simulated events and repeating the analysis using the 44 {CTEQ6L} PDF error sets~\cite{Pumplin:2002vw}.
The maximum variation is taken as the uncertainty.
Simulated samples with the top quark mass varied by $\pm1\GeV$, which corresponds to the precision of the measured
top quark mass~\cite{PDG}, are generated to evaluate the effect of the uncertainty in this parameter.
The effect of reweighting the top quark \pt spectrum in simulation, as described in Section~\ref{sec:mc_modelling},
is found to have a negligible effect for low values of the kinematic event variables, and
increases to 3--7\% for the highest values.
Other uncertainties are associated with imperfect understanding
of efficiencies, resolutions, and scales describing the detector response.
The uncertainty arising from each source is estimated, and the analysis repeated with
each corresponding parameter varied within its uncertainty.
The efficiencies and associated uncertainties for triggering and lepton
identification are determined from data by a tag-and-probe
method~\cite{Khachatryan:2010xn}.
The probabilities for identifying and misidentifying $\PQb$ jets in the simulation
are compared to those measured in data, and the resulting correction factors and
their uncertainties are determined as a function of jet energy and quark
flavor. The uncertainties in the correction factors are typically 2\%.
The uncertainty in the jet energy scale (JES) is determined as a function of the jet \pt
and $\eta$~\cite{Chatrchyan:2011ds}, and an uncertainty of $10\%$
is included in the jet energy resolution (JER)~\cite{Chatrchyan:2011ds}. The effect of this limited knowledge of the
JES and JER is determined by varying the JES and JER in the simulated samples within their uncertainties.
The uncertainty in the JES and JER, as well as uncertainties in the
electron, photon, tau, and muon energy scale, are propagated into the calculation of \MET.
The uncertainty in the electron and photon energy scale is $0.6\%$ in
the barrel, and $1.5\%$ in the endcap \cite{Khachatryan:2015hwa}. The uncertainty in the tau lepton
energy scale is estimated to be $\pm3\%$ \cite{CMS:Higgstaupaper},
while the effect of the uncertainty in the muon momentum
measurement is found to be negligible.
A 10\% uncertainty is assigned to the estimate of the nonclustered
energy used in the calculation of \MET~\cite{Khachatryan:2014gga}.
The effect of the uncertainty in the level of pileup is estimated by varying the inelastic $\Pp\Pp$ cross section
used in the simulation by $\pm 5\%$~\cite{InelasticPP}.
The uncertainty in the normalization of the background is determined by varying the normalization
of the single top, $\PW$+jets, and $\PZ$+jets processes by $\pm30\%$, and the QCD multijet processes by $\pm100\%$.
The uncertainty in the shape of the QCD multijet distribution in the electron channel is
estimated by using an alternative control region in data to determine the contribution of QCD multijet events.
This uncertainty is found to have a negligible effect.
The dominant systematic effects are caused by the
uncertainties in the modeling of the hadronization and the \ttbar signal.
For illustrative purposes, typical systematic uncertainties in the \ensuremath{\sqrt{s}=8\TeV}\xspace results coming from each of
the sources described above are presented in
Table~\ref{tab:typical_systematics_8TeV_combined}. The values shown for each kinematic event variable are the median
uncertainties over all of the bins for that variable.
\begin{table}[htbp]
\renewcommand{\arraystretch}{1.1}
\centering
\topcaption{Typical relative systematic uncertainties in percent (median values)
in the normalized \ttbar differential cross
section measurement as a function of the four kinematic event variables
at a center-of-mass energy of 8\TeV (combination of electron and muon channels).
Typical values of the total systematic uncertainty are also shown.}
\label{tab:typical_systematics_8TeV_combined}
\begin{scotch}{c....}
\multirow{2}{*}{Uncertainty source} & \multicolumn{4}{c}{Relative (\%) } \\
& \multicolumn{1}{c}{\MET} & \multicolumn{1}{c}{\HT} & \multicolumn{1}{c}{\ensuremath{S_\mathrm{T}}\xspace} & \multicolumn{1}{c}{\wpt} \\
\hline
\begin{tabular}{c}Fact./Renorm. scales\\and matching threshold\end{tabular}& 7.6 & 4.0 & 2.6 & 3.3 \\
Hadronization & 4.3 & 5.0 & 8.5 & 3.0 \\
PDF & 0.5 & 0.6 & 0.6 & 0.4 \\
Top quark mass & 0.4 & 0.7 & 0.8 & 0.3 \\
Top quark $p_\mathrm{T}$ reweighting & 1.4 & 0.9 & 0.6 & 0.6 \\
\begin{tabular}{c}Lepton trigger efficiency\\\& selection\end{tabular}& {<}0.1 & {<}0.1 & {<}0.1 & {<}0.1 \\
$\PQb$ tagging & 0.3 & 0.1 & 0.3 & {<}0.1 \\
Jet energy scale & 0.3 & 0.2 & 0.3 & {<}0.1 \\
Jet energy resolution & {<}0.1 & {<}0.1 & {<}0.1 & {<}0.1 \\
\MET & 0.2 & \multicolumn{1}{c}{\NA} & {<}0.1 & 0.1 \\
Pileup & 0.4 & {<}0.1 & 0.1 & 0.2 \\
Background normalization & 2.6 & 1.0 & 2.1 & 1.4 \\
QCD shape & 0.4 & 0.2 & 0.5 & 0.4 \\
\hline
Total & 9.9 & 8.6 & 9.5 & 4.4 \\
\end{scotch}
\end{table}
\section{Results}
\label{sec:comb_results}
The normalized differential \ttbar cross sections as a function of each of the kinematic event variables are shown in
Figs.~\ref{fig:MET_HT_result_combined_7TeV} and \ref{fig:ST_WPT_result_combined_7TeV} for the \ensuremath{\sqrt{s}=7\TeV}\xspace data, and in
Figs.~\ref{fig:MET_HT_result_combined_8TeV} and \ref{fig:ST_WPT_result_combined_8TeV} for the \ensuremath{\sqrt{s}=8\TeV}\xspace data.
The results are also presented in \suppMaterial.
The data distributions in the figures are compared with the predictions
from the event generators in the left-hand plots: \MADGRAPH
and \textsc{powheg v2}\xspace with two different hadron shower generators, \PYTHIA and \HERWIG.
For the \ensuremath{\sqrt{s}=8\TeV}\xspace results, the predictions from the \MCATNLO and \textsc{powheg v1}\xspace generators are also shown.
The effect on the predicted distributions from varying the modeling parameters (the matching threshold and
renormalization scale $Q^2$) up and down by a factor of two for the \MADGRAPH event generator is shown in the
right-hand plots for the two \MADGRAPH simulations. The uncertainties shown by the vertical bars
on the points in the figures and given in the tables
include both the statistical uncertainties and those resulting from the unfolding procedure.
The measurements at \ensuremath{\sqrt{s}=7\TeV}\xspace are well described by all the event generators
in the distribution of \MET. For \ensuremath{S_\mathrm{T}}\xspace, \wpt, and \HT,
the event generators predict a somewhat harder spectrum than seen in data.
However, the {\textsc{powheg v2}\ +\ \textsc{pythia}}\xspace event generator
provides a reasonable description of the \HT and \ensuremath{S_\mathrm{T}}\xspace differential cross sections.
The results at \ensuremath{\sqrt{s}=8\TeV}\xspace are generally well described by the \MCATNLO and the
{\textsc{powheg v2}\ +\ \textsc{pythia}}\xspace event generators.
The {\textsc{powheg v2}\ +\ \textsc{herwig}}\xspace event generator describes the \MET and \wpt distributions well.
However, for \HT and \ensuremath{S_\mathrm{T}}\xspace~
this event generator predicts a harder spectrum than seen in data, at both center-of-mass energies.
The \MADGRAPH event generator generally predicts a harder spectrum than seen in data for all variables.
The variations in matching threshold and $Q^2$ in the \MADGRAPH event generator are not sufficient to
explain this difference between the prediction and data.
However, the \MADGRAPH event generator provides a good description of the
data after reweighting the top quark \pt spectrum, as described in Section~\ref{sec:mc_modelling}. The prediction
obtained from the \MADGRAPH event generator after the reweighting is shown on all the plots.
\section{Summary}
A measurement of the normalized differential cross section of top quark pair production with respect to the four
kinematic event variables \MET, \HT, \ensuremath{S_\mathrm{T}}\xspace,\ and \wpt\
has been performed in $\Pp\Pp$ collisions at a center-of-mass energy of 7\TeV using 5.0\fbinv and at 8\TeV using
19.7\fbinv of data collected by the CMS experiment.
This study confirms previous CMS findings that the observed
top quark \pt spectrum is softer than predicted
by the \MADGRAPH, \POWHEG, and
\MCATNLO event generators, but otherwise there is broad consistency
between the MC event generators and observation.
This result provides confidence in the description
of \ttbar production in the SM and its implementation
in the most frequently used simulation packages.
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=\cmsFigWidth]{Figure_003-a.pdf} \hfill
\includegraphics[width=\cmsFigWidth]{Figure_003-b.pdf} \\
\includegraphics[width=\cmsFigWidth]{Figure_003-c.pdf} \hfill
\includegraphics[width=\cmsFigWidth]{Figure_003-d.pdf}
\caption{Normalized \MET (top) and \HT (bottom) differential \ttbar cross sections from the combined
electron and muon data at $\sqrt{s}=7\TeV$.
The vertical bars on the data points represent the statistical and systematic uncertainties added in quadrature.
The inner section of the vertical bars, denoted by the tick marks, show the statistical uncertainty.
Left: comparison with different
simulation event generators: {\MADGRAPH\ +\ \textsc{pythia}}\xspace (both the default and after reweighting the top quark \pt spectrum),
{\textsc{powheg v2}\ +\ \textsc{herwig}}\xspace, and {\textsc{powheg v2}\ +\ \textsc{pythia}}\xspace. Right: comparison with
predictions from the {\MADGRAPH\ +\ \textsc{pythia}}\xspace event generator found by varying the matching threshold and renormalization
scales ($\mu_{\mathrm{R}}$, $\mu_{\mathrm{F}}$) up and down by a factor of two. The lower plots show the ratio of the
predictions to the data, with the statistical and total uncertainties in the ratios indicated by the two shaded bands.}
\label{fig:MET_HT_result_combined_7TeV}
\end{center}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=\cmsFigWidth]{Figure_004-a.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_004-b.pdf} \\
\includegraphics[width=\cmsFigWidth]{Figure_004-c.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_004-d.pdf}
\caption{Normalized \ensuremath{S_\mathrm{T}}\xspace (top) and \wpt (bottom) differential \ttbar cross sections from combined
electron and muon data at $\sqrt{s}=7\TeV$.
The vertical bars on the data points represent the statistical and systematic uncertainties added in quadrature.
The inner section of the vertical bars, denoted by the tick marks, show the statistical uncertainty.
Left: comparison with different
simulation event generators: {\MADGRAPH\ +\ \textsc{pythia}}\xspace (both the default and after reweighting the top quark \pt spectrum),
{\textsc{powheg v2}\ +\ \textsc{herwig}}\xspace, and {\textsc{powheg v2}\ +\ \textsc{pythia}}\xspace. Right: comparison with
predictions from the {\MADGRAPH\ +\ \textsc{pythia}}\xspace event generator found by varying the matching threshold and renormalization
scales ($\mu_{\mathrm{R}}$, $\mu_{\mathrm{F}}$) up and down by a factor of two. The lower plots show the ratio of the
predictions to the data, with the statistical and total uncertainties in the ratios indicated by the two shaded bands.}
\label{fig:ST_WPT_result_combined_7TeV}
\end{center}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=\cmsFigWidth]{Figure_005-a.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_005-b.pdf} \\
\includegraphics[width=\cmsFigWidth]{Figure_005-c.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_005-d.pdf}
\caption{Normalized \MET (top) and \HT (bottom) differential \ttbar cross sections from combined
electron and muon data at $\sqrt{s}=8\TeV$.
The vertical bars on the data points represent the statistical and systematic uncertainties added in quadrature.
The inner section of the vertical bars, denoted by the tick marks, show the statistical uncertainty.
Left: comparison with different
simulation event generators: {\MADGRAPH\ +\ \textsc{pythia}}\xspace (both the default and after reweighting the top quark \pt spectrum),
\MCATNLOHERWIG, {\textsc{powheg v1}\ +\ \textsc{herwig}}\xspace, {\textsc{powheg v1}\ +\ \textsc{pythia}}\xspace, {\textsc{powheg v2}\ +\ \textsc{herwig}}\xspace, and {\textsc{powheg v2}\ +\ \textsc{pythia}}\xspace.
Right: comparison with
predictions from the {\MADGRAPH\ +\ \textsc{pythia}}\xspace event generator found by varying the matching threshold and renormalization
scales ($\mu_{\mathrm{R}}$, $\mu_{\mathrm{F}}$) up and down by a factor of two. The lower plots show the ratio of the
predictions to the data, with the statistical and total uncertainties in the ratios indicated by the two shaded bands.}
\label{fig:MET_HT_result_combined_8TeV}
\end{center}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=\cmsFigWidth]{Figure_006-a.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_006-b.pdf} \\
\includegraphics[width=\cmsFigWidth]{Figure_006-c.pdf}\hfill
\includegraphics[width=\cmsFigWidth]{Figure_006-d.pdf}
\caption{Normalized \st (top) and \wpt (bottom) differential \ttbar cross sections from combined
electron and muon data at $\sqrt{s}=8\TeV$.
The vertical bars on the data points represent the statistical and systematic uncertainties added in quadrature.
The inner section of the vertical bars, denoted by the tick marks, show the statistical uncertainty.
Left: comparison with different
simulation event generators: {\MADGRAPH\ +\ \textsc{pythia}}\xspace (both the default and after reweighting the top quark \pt spectrum),
\MCATNLOHERWIG, {\textsc{powheg v1}\ +\ \textsc{herwig}}\xspace, {\textsc{powheg v1}\ +\ \textsc{pythia}}\xspace, {\textsc{powheg v2}\ +\ \textsc{herwig}}\xspace, and {\textsc{powheg v2}\ +\ \textsc{pythia}}\xspace.
Right: comparison with
predictions from the {\MADGRAPH\ +\ \textsc{pythia}}\xspace event generator found by varying the matching threshold and renormalization
scales ($\mu_{\mathrm{R}}$, $\mu_{\mathrm{F}}$) up and down by a factor of two. The lower plots show the ratio of the
predictions to the data, with the statistical and total uncertainties in the ratios indicated by the two shaded bands.}
\label{fig:ST_WPT_result_combined_8TeV}
\end{center}
\end{figure*}
\begin{acknowledgments}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS and RFBR (Russia); MESTD (Serbia); SEIDI and CPAN (Spain); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
Individuals have received support from the Marie-Curie program and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the OPUS program contract 2014/13/B/ST2/02543 and contract Sonata-bis DEC-2012/07/E/ST2/01406 of the National Science Center (Poland); the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Programa Clar\'in-COFUND del Principado de Asturias; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); and the Welch Foundation, contract C-1845.
\end{acknowledgments}
\label{sec:intro}
This paper considers estimation of parameters of distributions whose domain is a particular non-Euclidean geometry: a topological space divided into $M$ equivalence classes by actions of a finite spherical symmetry group. A well known example of a finite spherical symmetry group is the point group in 3 dimensions describing the soccer ball, or football, with truncated icosahedral symmetry that also corresponds to the symmetry of the Carbon-60 molecule. This paper formulates a general approach to parameter estimation in distributions defined over such domains. First we establish a restricted finite mixture representation for probability distributions that are invariant to actions of any topological group. This representation has the property that the number of mixture components is equal to the order of the group, the distributions in the mixture are all parameterized by the same parameters, and the mixture coefficients are all equal. This is practically significant since many reliable algorithms have been developed for parameter estimation when samples come from finite mixture distributions.
We illustrate the power of the representation for an important problem in materials science: analysis of mean orientation in polycrystals. Crystal orientation characterizes properties of materials including electrical conductivity and thermal conductivity. Mechanical properties, such as, stiffness, elasticity, and deformability, can also depend on the distribution of crystal orientations over the material. Thus accurate estimation of crystal orientation is useful for materials evaluation, testing and prediction.
The mean orientation of the crystal, characterized by its Euler angles, can only be specified modulo a set of angular rotations determined by the symmetry group associated with the specific type of crystal. This multiplicity of equivalent Euler angles complicates the development of reliable mean orientation estimators. By extending the Von Mises Fisher (VMF) model under the proposed finite mixture representation, and applying the expectation maximization (EM) maximum likelihood (ML) algorithm for mixtures, we obtain an accurate iterative estimator of the mean Euler angle parameter and angular concentration parameter of the extended VMF distribution. Specifically, the VMF extension is accomplished as follows. We start with the standard VMF model, which is a density parameterized by location (angle mean) and scale (angle concentration) defined over the $p$-dimensional sphere \cite{mardia_directional_1999}. In this model, a point on the sphere is specified by its direction vector, and the angle between two vectors is the arc-cosine of the normalized inner product between them. The spherical symmetry group extension is accomplished by applying the mixture representation to the standard VMF distribution using the group of quaternion rotation matrices.
The performance of the proposed EM-ML orientation estimator is evaluated by simulation and compared to two other angle estimators. The ML orientation estimator is then illustrated on EBSD data collected from a Nickel alloy whose crystal form induces the $m\overline{3}m$ cubic point symmetry group. We establish that the ML orientation estimator results in significantly improved estimates of the mean direction in addition to providing an accurate estimate of concentration about the mean.
The paper is organized as follows. Section~\ref{sec:group-invariant} describes group invariant random variables and gives the mixture representation for their densities. Section \ref{sec:spherical_symmetry_group} specializes to random variables invariant relative to actions of the spherical symmetry group and develops the $\mathcal G$-invariant VMF distribution along with EM-ML parameter estimator. The crystallography application is presented in section~\ref{sec:app_crystal_orientation_estimation} along with experimental comparisons. Section~\ref{sec:conclusion} has concluding remarks.
\section{Group-invariant random variables}
\label{sec:group-invariant}
\def{\mathbf x}{{\mathbf x}}
Consider a finite topological group $\mathcal G=\{G_1, \ldots, G_M\}$ of $M$ distinct actions on a topological space $\mathcal X$, $G_i: \mathcal X \rightarrow \mathcal X$ and a binary operation "*" defining the action composition $G_i * G_j$, denoted $G_i G_j$. $\mathcal G$ has the properties that composition of multiple actions is associative, for every action there exists an inverse action, and there exists an identity action \cite{birkhoff_brief_1963}. A real valued function $f({\mathbf x})$ on $\mathcal X$ is said to be invariant under $\mathcal G$ if: $f(G{\mathbf x})=f({\mathbf x})$ for $G\in \mathcal G$. Let ${\mathbf X}$ be a random variable defined on $\mathcal X$. We have the following theorem for the probability density $f({\mathbf x})$ of ${\mathbf X}$.
\begin{theorem}
The density function $f: \mathcal X\rightarrow {\mathbb R}$ is invariant under $\mathcal G$ if and only if
\begin{eqnarray}
f({\mathbf x})= \frac{1}{M} \sum_{i=1}^M f(G_i{\mathbf x}).
\label{eq:representation}
\end{eqnarray}
\label{thm:1}
\end{theorem}
\noindent{\em Proof:} If (\ref{eq:representation}) holds then $f(G{\mathbf x})=M^{-1} \sum_{i=1}^M f(G_i G {\mathbf x})$. Since $\mathcal G$ is a group $\mathcal G G=\mathcal G$ so that $M^{-1} \sum_{i=1}^M f(G_i G {\mathbf x})=M^{-1} \sum_{j=1}^M f(G_j{\mathbf x})$ and $f(G{\mathbf x})=f({\mathbf x})$. On the other hand, if $f(G{\mathbf x})=f({\mathbf x})$ then $\frac{1}{M} \sum_{i=1}^M f(G_i{\mathbf x})=\frac{1}{M} \sum_{i=1}^M f({\mathbf x})=f({\mathbf x})$.
\qed
Theorem \ref{thm:1} says that any density $f({\mathbf x})$ that is invariant under group $\mathcal G$ can be represented as a finite mixture of its translates $f(G_i{\mathbf x})$ under the group's actions $G_i \in \mathcal G$. This simple result has important implications on $\mathcal G$-invariant density estimation and parameter estimation. In particular it can be used to construct maximum likelihood estimators for parametric densities and kernel density estimators of non-parametric $\mathcal G$-invariant densities with finite sample guaranteed performance.
To illustrate the non-parametric case, assume that $\mathcal X$ has topological dimension $d$ with Lebesgue $\mathcal G$-invariant density $f({\mathbf x})$. Define the symmetric non-negative second order kernel function $\phi:\mathcal X \rightarrow {\mathbb R}$, i.e., $\phi({\mathbf x})\geq 0$, $\phi({\mathbf x})=\phi(\|{\mathbf x}\|,0,\ldots,0)$, $\int \phi({\mathbf x}) d{\mathbf x} =1$, and $\int \|{\mathbf x}\|^2 \phi({\mathbf x}) d{\mathbf x}<\infty$. For the finite group $\mathcal G$, define the $\mathcal G$-invariant kernel function $K({\mathbf x})=M^{-1} \sum_{i=1}^M \phi(G_i{\mathbf x})$. Given a realization $\{{\mathbf x}_i\}_{i=1}^n$ of $n$ i.i.d. samples from $f$, define the kernel density estimator $\hat{f}_{h}({\mathbf x})= (nh^d)^{-1}\sum_{i=1}^n K\left(\frac{{\mathbf x}-{\mathbf x}_i}{h}\right)$. Assume that $h_n$ is a sequence of kernel widths that satisfies $\lim_{n\rightarrow \infty} h_n =0$ while $\lim_{n\rightarrow \infty} n h_n^d =\infty$. Then, if $f$ is smooth, using Thm~\ref{thm:1} and concentration results from \cite{devroye_combinatorial_2001}, it can be shown that as $n$ goes to infinity
$$E[\|f-\hat{f}_{h_n}\|] = O(n^{-2/(4+d)}), $$
where $\|f-\hat{f}_{h_n}\|=\int |f({\mathbf x})-\hat{f}_{h_n}({\mathbf x})|d{\mathbf x}$ is the $\ell_1$ norm of the difference.
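For illustration, the following is a minimal numerical sketch of this $\mathcal G$-invariant kernel density estimator (ours, not from the references), assuming the group actions are supplied as orthogonal matrices and taking $\phi$ to be a Gaussian product kernel; all function and variable names are our own.
\begin{verbatim}
import numpy as np

def g_invariant_kde(x, samples, group, h):
    # samples: (n, d) i.i.d. draws from f; group: list of (d, d)
    # matrices G_i; h: bandwidth.  Returns \hat{f}_h(x).
    n, d = samples.shape
    c = (2.0 * np.pi) ** (-d / 2.0)

    def phi(u):                      # Gaussian base kernel
        return c * np.exp(-0.5 * np.sum(u * u, axis=-1))

    def K(u):                        # K(u) = (1/M) sum_i phi(G_i u)
        return np.mean([phi(u @ G.T) for G in group], axis=0)

    return np.sum(K((x - samples) / h)) / (n * h ** d)
\end{verbatim}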
For the parametric case, let $h({\mathbf x};{\boldsymbol \theta})$ be a density on $\mathcal X$ that is parameterized by a parameter ${\boldsymbol \theta}$ in a parameter space $\Theta$. We extend $h({\mathbf x};{\boldsymbol \theta})$ to a $\mathcal G$-invariant density $f$ by using Thm. \ref{thm:1}, obtaining:
\begin{eqnarray}
f({\mathbf x};{\boldsymbol \theta})=\frac{1}{M} \sum_{i=1}^M h_i({\mathbf x};{\boldsymbol \theta}),
\label{eq:SSG}
\end{eqnarray}
where $h_i({\mathbf x};{\boldsymbol \theta})=h(G_i{\mathbf x};{\boldsymbol \theta})$. This density is of the form of a finite mixture of densities $h_i({\mathbf x};{\boldsymbol \theta})$ of known parametric form where the mixture coefficients are all identical and equal to $1/M$. Maximum likelihood (ML) estimation of the parameter ${\boldsymbol \theta}$ from an i.i.d. sample $\{{\mathbf X}_i\}_{i=1}^n$ from any $\mathcal G$-invariant density $f$ can now be performed using finite mixture model methods \cite{mclachlan_finite_2004} such as the Expectation-Maximization (EM) algorithm~\cite{dempster_maximum_1977} or the restricted Boltzmann machine (RBM) \cite{sohn_efficient_2011}.
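In code, the lift (\ref{eq:SSG}) of a base density to a $\mathcal G$-invariant one is immediate; the sketch below (ours) assumes the actions are supplied as matrices acting on ${\mathbf x}$.
\begin{verbatim}
import numpy as np

def g_invariant_density(h, group):
    # Lift h(x, theta) to f(x; theta) = (1/M) sum_i h(G_i x; theta),
    # where `group` is a list of matrices acting on x.
    def f(x, theta):
        return np.mean([h(G @ x, theta) for G in group])
    return f
\end{verbatim}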
\section{ML within a spherical symmetry group}
\label{sec:spherical_symmetry_group}
In this section we specialize to estimation of parameters for the case that the probability density is on a sphere and is invariant to actions in a spherical symmetry group. In Sec.~\ref{sec:app_crystal_orientation_estimation} this will be applied to a crystallography example under a Von-Mises-Fisher likelihood model for the mean crystal orientation. The measured and mean orientations can be represented in three equivalent ways.
\begin{description}
\item[Euler angles $\mathcal{E}$:] The orientation is defined by a set of three successive rotations of a reference unit vector about the specified axes~\cite{eberly_euler_2008}. Denote the Euler angles as $\mathbf{e} = (\alpha, \beta, \gamma)\in\mathcal{E}$, where $\alpha, \gamma\in[0, 2\pi]$ and $\beta\in[0,\pi]$.
\item[Quaternion $\mathcal{Q}$:] The quaternion representation describes the orientation as a 4D vector on the 3D sphere \cite{altmann_rotations_2005}: $\mathbf{q}=(q_1,q_2,q_3,q_4)\in\mathcal{Q}$, where $\|\mathbf{q}\|=1$. The main advantage of this representation is that any rotation of $\mathbf q$ is simply accomplished via left multiplication by a $4\times 4$ orthogonal matrix ${\mathbf{Q}}$ called a quaternion matrix.
\item[Rodrigues Vector $\mathcal{D}$:] The Rodrigues vector describes the orientation by rotating a reference vector about one direction $\mathbf{v}$ by angle $\theta$ according to the right hand rule \cite{rodrigues_lois_1840}. It is denoted as $\mathbf{d}=\mathbf{v}\tan(\theta/2)=(r_1,r_2,r_3)\in\mathcal{D}$, where $\|\mathbf{v}\|=1$ and $\theta\in[0,\pi]$.
\end{description}
Any of the aforementioned orientation representations have inherent ambiguity due to crystal symmetries. For example, if the crystal has cubic symmetry, its orientation is only uniquely defined up to a 48-fold set of rotations, reflections and inversions of the cube about its symmetry axes. These rotations, reflections, and inversions can be represented as a point symmetry group $\mathcal G$, called $m\overline{3}m$, of quaternionic matrices $\{{\mathbf{Q}}_1, \ldots, {\mathbf{Q}}_{48}\}$ operating on the 3D sphere. Two orientations, e.g., represented in Euler angle, Quaternion or Rodrigues forms, are called symmetry-equivalent if one is mapped to the other by an action of $\mathcal G$. A fundamental zone (FZ), also called the fundamental domain, is a conic solid that can be specified to disambiguate any particular orientation ${\mathbf X}_i$. However, as will be seen in Sec. \ref{sec:app_crystal_orientation_estimation}, reduction of the entire data sample $\{{\mathbf X}_i\}_{i=1}^n$ to a FZ destroys information necessary for maximum likelihood estimation: the entire $\mathcal G$-invariant density (\ref{eq:SSG}) must be used.
\subsection{$\mathcal G$-invariant Von-Mises Fisher distribution}
The von Mises-Fisher (VMF) distribution arises in directional statistics~\cite{mardia_directional_1999} as a natural generalization of the multivariate Gaussian distribution to the $(p-1)$-dimensional sphere $S^{(p-1)}\subset {\mathbb R}^p$, where $p\geq 2$. The VMF distribution is parameterized by the mean direction ${\boldsymbol \mu}\in S^{(p-1)}$ and the concentration parameter $\kappa\ge 0$:
\begin{equation}
f(\mathbf{x};\mathbf{{\boldsymbol \mu}},\kappa) = c_p(\kappa)\exp{(\kappa{\boldsymbol \mu}^T\mathbf{x})},
\label{eq:VMF}
\end{equation}
where $c_p(\kappa) = \frac{\kappa^{p/2-1}}{(2\pi)^{p/2}I_{p/2-1}(\kappa)}$ and $I_p(\cdot)$ is the modified Bessel function of the first kind of order $p$. Given an i.i.d. sample $\{{\mathbf X}_i\}_{i=1}^n$ from the VMF distribution, the ML estimator has the closed form expressions \cite{mardia_directional_1999}
\begin{align}
\label{eq:ML_estimator}
\hat{{\boldsymbol \mu}}=\frac{{\boldsymbol \gamma}}{\|{\boldsymbol \gamma}\|},\ \hat{\kappa}=A_p^{-1}\left(\frac{\|{\boldsymbol \gamma}\|}{n}\right),
\end{align}
where ${\boldsymbol \gamma}=\sum_{i=1}^{n}{\mathbf x}_i$ and $A_p(u)=\frac{I_{p/2}(u)}{I_{p/2-1}(u)}$.
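In code, the estimator (\ref{eq:ML_estimator}) is immediate; since $A_p^{-1}$ has no closed form, the sketch below (ours) uses the widely used approximation $\hat{\kappa}\approx \bar R(p-\bar R^2)/(1-\bar R^2)$ with $\bar R=\|{\boldsymbol \gamma}\|/n$, due to Banerjee et al. (2005), rather than inverting the Bessel ratio numerically.
\begin{verbatim}
import numpy as np

def vmf_mle(X):
    # X: (n, p) array of unit vectors sampled from a VMF density.
    n, p = X.shape
    gamma = X.sum(axis=0)
    Rbar = np.linalg.norm(gamma) / n
    mu = gamma / np.linalg.norm(gamma)
    kappa = Rbar * (p - Rbar ** 2) / (1.0 - Rbar ** 2)
    return mu, kappa
\end{verbatim}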
Let $\mathcal G$ be a group of symmetry actions $\{{\mathbf{Q}}_1, \ldots, {\mathbf{Q}}_M\}$ acting on the quaternionic representation of orientation on the $(p-1)$-dimensional sphere $S^{(p-1)}$. This group is called a spherical symmetry group. We extend the VMF distribution (\ref{eq:VMF}) using the mixture representation of Thm~\ref{thm:1}:
\begin{eqnarray}
\label{eq:mixture_densityp}
f(\mathbf{x};{\boldsymbol \mu},\kappa)
&=&\sum_{m=1}^M\frac{1}{M}f({\mathbf{Q}}_m\mathbf{x};{\boldsymbol \mu}, \kappa)\\
\label{eq:mixture_density}
&=&\sum_{m=1}^M\frac{1}{M}f(\mathbf{x};{\mathbf{Q}}_m{\boldsymbol \mu}, \kappa)
\end{eqnarray}
where in going from (\ref{eq:mixture_densityp}) to (\ref{eq:mixture_density}) we used the inner product form ${\boldsymbol \mu}^T {\mathbf x}$ in (\ref{eq:VMF}) together with the orthogonality of ${\mathbf{Q}}_m$ and the closure of $\mathcal G$ under transposition, which permits relabeling the sum. The expression (\ref{eq:mixture_density}) for the extended VMF distribution is in the form of
a finite mixture of standard VMF distributions on the same random variable ${\mathbf X}$ having different mean parameters ${\boldsymbol \mu}_m ={\mathbf{Q}}_m{\boldsymbol \mu}$ but having the same concentration parameters $\kappa$.
The finite mixture (\ref{eq:mixture_density}) for the $\mathcal G$-invariant density $f(\mathbf{x};{\boldsymbol \mu},\kappa)$ is in a form for which an EM algorithm~\cite{dempster_maximum_1977} can be implemented to compute the ML estimates of ${\boldsymbol \mu}$ and $\kappa$.
Denoting the parameter pair as ${\boldsymbol \omega}=\{{\boldsymbol \mu},\kappa\}$ the EM algorithm generates a sequence $\{{\boldsymbol \omega}_k\}_k$ of estimates that monotonically increase the likelihood and are given by ${\boldsymbol \omega}_{k+1}= \mathrm{argmax}_{{\boldsymbol \omega}} E_{S|X,{\boldsymbol \omega}_k}[\log{L({\boldsymbol \omega};\{{\mathbf X}_i,S_i\})}]$, where $S_i$ is a latent variable assigning ${\mathbf X}_i$ to a particular mixture component in (\ref{eq:mixture_density}) and $L({\boldsymbol \omega};\{{\mathbf X}_i, S_i\})$ is the likelihood function of ${\boldsymbol \omega}$ given the complete data $\{{\mathbf X}_i, S_i\}_{i=1}^n$. Specifically,
\begin{eqnarray}
\label{eq:qfunction}
&&E_{S|X,{\boldsymbol \omega}}[\log{L({\boldsymbol \omega};\{{\mathbf X}_i,S_i\})}] \\
&=& \sum_{i=1}^{n}\sum_{m=1}^Mr_{i,m}(\log{c_p(\kappa)}+\kappa({\mathbf{Q}}_m{\boldsymbol \mu})^T\mathbf{x}_i),
\nonumber
\end{eqnarray}
where $r_{i,m}=P(S_i=m|\mathbf{X}_i;{\boldsymbol \omega}_k)$. The EM algorithm takes the form:
E-step:
\begin{equation}
\label{eq:EM_Estep}
\begin{split}
r_{i,m}
&= \frac{f(\mathbf{X}_i; {\mathbf{Q}}_m{\boldsymbol \mu}, \kappa)}{\sum_{l=1}^Mf(\mathbf{X}_i; {\mathbf{Q}}_l{\boldsymbol \mu}, \kappa)}.
\end{split}
\end{equation}
M-step:
\begin{equation}
\label{eq:EM_Mstep}
\hat{{\boldsymbol \mu}}=\frac{{\boldsymbol \gamma}}{\|{\boldsymbol \gamma}\|},\ \hat{\kappa}=A_p^{-1}\left(\frac{\|{\boldsymbol \gamma}\|}{n}\right),
\end{equation}
where ${\boldsymbol \gamma}=\sum_{i=1}^{n}\sum_{m=1}^Mr_{i,m}{\mathbf{Q}}_m^T\mathbf{X}_i$.
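A compact sketch of this EM iteration is given below (ours; for the quaternion representation $p=4$, and the $M$ symmetry actions are assumed supplied as $p\times p$ orthogonal matrices). Because $\kappa$ is shared across components, the normalizer $c_p(\kappa)$ cancels in the E-step responsibilities; the M-step reuses the approximate inversion of $A_p$ from the sketch above.
\begin{verbatim}
import numpy as np

def em_g_invariant_vmf(X, Q, n_iter=50):
    # X: (n, p) unit vectors; Q: (M, p, p) orthogonal group actions.
    n, p = X.shape
    mu = X.mean(axis=0)
    mu /= np.linalg.norm(mu)          # crude initialization
    kappa = 1.0
    for _ in range(n_iter):
        # E-step: r[i, m] proportional to exp(kappa (Q_m mu)^T x_i)
        logits = kappa * np.einsum('mij,j,ni->nm', Q, mu, X)
        logits -= logits.max(axis=1, keepdims=True)   # stability
        r = np.exp(logits)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: gamma = sum_{i,m} r[i, m] Q_m^T x_i
        gamma = np.einsum('nm,mij,ni->j', r, Q, X)
        mu = gamma / np.linalg.norm(gamma)
        Rbar = np.linalg.norm(gamma) / n
        kappa = Rbar * (p - Rbar ** 2) / (1.0 - Rbar ** 2)
    return mu, kappa
\end{verbatim}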
\section{Crystallographic Orientation Estimation}
\label{sec:app_crystal_orientation_estimation}
Polycrystalline materials are composed of grains, of varying size and orientation, where each grain contains crystal forms with similar orientations. The quality of the material is mainly determined by the grain structure i.e. the arrangement of the grains and their sizes, as well as the distribution of the precipitates. Analyzing the crystal orientation of the grains helps us predict how materials fail and what modes of failure are more likely to occur \cite{de_graef_structure_2008}.
Electron backscatter diffraction (EBSD) microscopy acquires crystal orientation at multiple locations within a grain by capturing the Kikuchi diffraction patterns of the backscattered electrons~\cite{saruwatari_crystal_2007}. A Kikuchi pattern can be translated to crystal orientation through Hough transform analysis~\cite{lassen_automated_1994} or dictionary-based indexing~\cite{park_ebsd_2013}. The process of assigning mean orientation values to each grain is known as indexing. Crystal forms possess point symmetries, e.g. triclinic, tetragonal, or cubic, leading to a probability density of measured orientations that is invariant over an associated spherical symmetry group $\mathcal G$. Therefore, when the type of material has known symmetries, e.g., cubic-type symmetry for nickel or gold, the extended VMF model introduced in the previous section can be applied to estimate the mean orientation ${\boldsymbol \mu}_g$ and the concentration $\kappa_g$ associated with each grain.
\subsection{Simulation studies of $\mathcal G$-invariant EM-ML estimator}
A set of $n$ i.i.d. samples were simulated from the $\mathcal G$-invariant VMF distribution with given ${\boldsymbol \mu}={\boldsymbol \mu}_o,\kappa=\kappa_o$ for the $m\overline{3}m$ point symmetry group associated with the symmetries of cubic crystal lattice planes. The number of samples for each simulation was set to $n=1000$ and $\kappa_o$ was swept from $1$ to $100$ while, for each simulation run, ${\boldsymbol \mu}_o$ was selected uniformly at random. The experiment was repeated $100$ times and the average values of $\hat{\kappa}$ and the inner product $\hat{{\boldsymbol \mu}}^T {\boldsymbol \mu}_o$ are shown in Fig. \ref{fig:kappa_estimation} and \ref{fig:mu_estimation}. In the figures we compare performance for the following methods: (1) the naive ML estimator for the standard VMF model that does not account for the point group structure (\ref{eq:ML_estimator}) (labeled "ML for VMF"). (2) Mapping the $n$ samples to a single fundamental zone of $m\overline{3}m$ on the sphere followed by performing ML for the standard VMF distribution over this FZ (labeled "Modified ML for VMF"). (3) Applying our proposed exact EM-ML algorithm directly to the $n$ samples using the mixture of VMF distributions (\ref{eq:EM_Estep})-(\ref{eq:EM_Mstep}) (labeled "EM-ML for mVMF").
Figure \ref{fig:mu_estimation} shows the inner product values ${\boldsymbol \mu}_o^T\hat{{\boldsymbol \mu}}$. The proposed EM-ML estimator achieves perfect recovery of the mean orientation (${\boldsymbol \mu}_o^T\hat{{\boldsymbol \mu}}=1$) much faster than the other methods as the concentration parameter $\kappa_o$ increases (lower dispersion of the samples about the mean). Notice that when $\kappa_o< 20$, none of the methods can accurately estimate the mean orientation. The reason is that when $\kappa_o$ is small the samples become nearly uniformly distributed over the sphere. The threshold $\kappa_o$ value at which performance starts to degrade depends on the point symmetry group. In Fig. \ref{fig:kappa_estimation} it is seen that the bias of the proposed EM-ML $\kappa$ estimator is significantly lower than that of the other methods compared. While the modified ML for VMF performs better than the naive ML estimator for VMF, its bias is still significantly worse than that of the proposed EM-ML approach.
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=7cm]{figures/SimExp_Mu}}
\caption{Mean orientation estimator comparisons for the $\mathcal G$-invariant density when $\mathcal G$ is the $m \overline{3}m$ point symmetry group. Shown is the average inner product ${\boldsymbol \mu}_o^T\hat{{\boldsymbol \mu}}$ of three estimators $\hat{{\boldsymbol \mu}}$ when ${\boldsymbol \mu}_o$ is the true mean orientation as a function of the true concentration parameter $\kappa_o$. Each estimator was implemented with $n=1000$ i.i.d. samples from the $\mathcal G$-invariant density and the inner product shown is averaged over $100$ trials. The naive estimator ("ML for VMF" in blue line) does not attain perfect estimation (inner product $=1$) for any $\kappa_o$ since it does not account for the spherical symmetry group structure. A modified ML estimator ("modified ML for VMF" in green dashed line) achieves perfect estimation as $\kappa_o$ becomes large. The proposed EM-ML method ("EM-ML for mVMF") achieves perfect estimation much faster than the other methods.}
\label{fig:mu_estimation}
\end{figure}
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=7cm]{figures/SimExp_Kappa}}
\caption{Concentration parameter estimator bias as a function of the true concentration $\kappa_o$. The bias of the naive ML for VMF (blue solid line) is large over the full range of $\kappa_o$. The modified ML for VMF (green dashed line) estimates $\kappa$ more accurately when $\kappa_o$ is small. Our proposed EM-ML estimator (black dotted line) has lower bias than the other estimators.}
\label{fig:kappa_estimation}
\end{figure}
\subsection{EM-ML orientation estimator for IN100 Nickel sample}
We next illustrate the proposed EM-ML orientation estimator on a real IN100 sample acquired from US Air Force Research Laboratory (AFRL) \cite{park_ebsd_2013}. The IN100 sample is a polycrystalline Ni superalloy which has cubic symmetry in the $m\overline{3}m$ point symmetry group. EBSD orientation measurements were acquired on a $512\times 384$ pixel grid, corresponding to a spatial resolution of $297.7$ nm. The Kikuchi diffraction patterns were recorded on an $80\times 60$ photosensitive detector for each of the pixels.
Figure \ref{fig:IN100} (a) shows a $200\times 200$ sub-region of the full EBSD sample where the orientations are shown in the inverse pole figure (IPF) coloring obtained from the OEM EBSD imaging software and (c) is the back-scattered electron (BSE) image. Note that the OEM-estimated orientations in some grain regions of the IPF image are very inhomogeneous, which is likely due to a fundamental zone wrap-around problem. Figure \ref{fig:IN100} (b) shows the estimates of the mean orientations of each grain using the proposed EM-ML algorithm. Figure \ref{fig:IN100} (d) shows the estimated concentration parameter $\kappa$ for the grains using the proposed EM-ML algorithm.
\begin{figure}[htb]
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=3.2cm]{figures/EA}}
\centerline{(a) IPF from OEM}\medskip
\end{minipage}
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=3.2cm]{figures/EA_Mu}}
\centerline{(b) IPF for proposed $\hat{{\boldsymbol \mu}}$}\medskip
\end{minipage}
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=3.5cm]{figures/BSE}}
\centerline{(c) BSE from OEM}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[width=3.5cm]{figures/Kappa_EM}}
\centerline{(d)$\hat{\kappa}$ for proposed $\hat{{\boldsymbol \mu}}$}\medskip
\end{minipage}
\caption{A $200\times 200$ sub-region of the IN100 sample. (a) is the IPF image for the Euler angles extracted from EBSD by OEM imaging software. IPF coloring in some grains is not homogeneous, likely due to the ambiguity problem. (b) is the IPF image for the mean orientation of the grains estimated by the proposed EM-ML algorithm. (c) is the BSE image of the sample and (d) shows the concentration parameters $\kappa$ estimated by the proposed EM-ML for the $\mathcal G$-invariant VMF density. Our proposed EM-ML estimator yields higher concentration estimates $\hat{\kappa}$ than the naive ML estimator, even for grains whose Euler angles in (a) appear inhomogeneous.}
\label{fig:IN100}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We have obtained a general finite mixture representation for densities on domains whose topologies have group invariances. This representation was used to extend the Von-Mises-Fisher distribution to a mixture VMF distributions that possess spherically symmetric group invariances. An efficient EM algorithm was derived for estimation of parameters of this extended VMF model. The extended VMF model was applied to the problem of estimation of mean grain orientation parameters in polycrystalline materials whose orientations lie in the $m\overline{3}m$ point symmetry group. Application of the finite mixture representation to other types of groups would be worthwhile future work.
\section*{Acknowledgment}
The authors are grateful for inputs from Megna Shah, Mike Jackson and Mike Groeber.
\bibliographystyle{IEEEtran}
|
2,869,038,155,103 | arxiv | \section{Introduction}
\label{sect1}
\noindent
Many extensions of the Standard Model contain bosonic states
carrying both lepton and quark quantum numbers, so--called
leptoquarks.
Leptoquarks may exist in the mass range reached by high energy colliders
if their couplings are $B$ and $L$ conserving. A general classification
of these states was given in ref.~\cite{BRW} demanding also
non--derivative and family diagonal couplings.
In most of the scenarios
the fermionic leptoquark couplings are not predicted.
Moreover,
a detailed analysis of low energy data~\cite{LEUR}
showed that these leptoquark couplings are small in the mass range
up to $O(1~{\rm TeV})$.
Thus processes depending on the fermionic
couplings cannot be used to obtain rigorous mass bounds for these
states.
On the other hand, the couplings of the leptoquarks to the electroweak
gauge bosons and gluons are determined by the respective gauge
symmetries. In the case of scalar leptoquarks the couplings are thus
completely predicted.
For vector leptoquarks, anomalous couplings may additionally contribute.
Due to the small fermionic couplings the pair production cross sections
depend only on the bosonic couplings and mass limits may be derived
directly.
In the present paper a brief account is given on results obtained
in refs.~\cite{JB1,JB2} and estimates are presented for the search
potential in the HERA energy range.
\section{The Pair Production Cross Sections}
\label{sect2}
\noindent
The integral
leptoquark pair production cross sections in deep inelastic $ep$
collisions are described by
\begin{equation}
\sigma_{S,V}^{ep,tot} = \sigma_{S,V}^{ep,dir} + \sigma_{S,V}^{ep,res},
\end{equation}
containing a direct and a resolved photon contribution which are
given by
\begin{equation}
\sigma_{S,V}^{ep,dir} = \int_{y_{min}}^{y_{max}} dy
\int_{x_{min}}^{x_{max}} dx \phi_{\gamma/e}(y)
G_{p}(x,\mu^2)
\hat{\sigma}_{S,V}^{dir}(\hat{s},M_{\Phi})
\theta(\hat{s} - 4M^2_{\Phi}),
\end{equation}
and
\begin{eqnarray}
\label{xsep}
\sigma_{S,V}^{ep, res}(s,M_{\Phi}) &=&
\int_{y_{min}}^{y_{max}} dy
\int_{4 M_{\Phi}^2/(Sy)}^1 dz
\int_{4 M_{\Phi}^2/(Syz)}^1 dx
\phi_{\gamma/e}(y)
\theta(\hat{s} - 4 M_{\Phi}^2)
\nonumber\\
&\times&
\left \{
\sum_{f=1}^{N_f} \left [ q_f^{\gamma}(z, \mu_1)
\overline{q}_f^p(x,\mu_2) +
\overline{q}_f^{\gamma}(z, \mu_1)
q_f^p(x,\mu_2) \right ]
\hat{\sigma}_{S,V}^q(\hat{s}, M_{\Phi})
\right. \nonumber\\
&+& \left.
G^{\gamma}(z, \mu_1) G^p(x,\mu_2)
\hat{\sigma}_{S,V}^g(\hat{s}, M_{\Phi})
\right \},
\end{eqnarray}
respectively.
Here $\phi_{\gamma/e}$ denotes the Weizs\"acker--Williams distribution
and $M_{\Phi}$ is the leptoquark mass.
$q_f^{\gamma}$ and $G^{p(\gamma)}$ are the quark and gluon distributions
in the photon and proton, respectively, $\hat{s} = S x y$,
and $\mu_1$ and $\mu_2$
denote the factorization scales.
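As an illustration of how these convolutions can be evaluated, the following Python sketch (ours) computes the direct contribution by nested quadrature; the flux $\phi_{\gamma/e}$, the gluon density $G_p$ and the subprocess cross section $\hat{\sigma}$ must be supplied by the user (they are placeholders here), and the $\theta$-function is implemented through the lower integration limit $x_{min}=4M_{\Phi}^2/(Sy)$.
\begin{verbatim}
from scipy import integrate

def sigma_direct(S, M, phi_ge, G_p, sigma_hat, mu2, ymax=1.0):
    # S: ep c.m.s. energy squared; M: leptoquark mass.
    # phi_ge(y): Weizsaecker-Williams spectrum; G_p(x, mu2): proton
    # gluon density; sigma_hat(shat, M): subprocess cross section.
    def inner(y):
        xmin = 4.0 * M ** 2 / (S * y)
        if xmin >= 1.0:
            return 0.0
        val, _ = integrate.quad(
            lambda x: G_p(x, mu2) * sigma_hat(S * x * y, M),
            xmin, 1.0)
        return phi_ge(y) * val
    ymin = 4.0 * M ** 2 / S          # below this, no phase space
    val, _ = integrate.quad(inner, ymin, ymax)
    return val
\end{verbatim}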
The subsystem
scattering cross sections $\hat{\sigma}_{S,V}^{q,g}(\hat{s}, M_{\Phi})$
were calculated in \cite{JB1} for the direct process and in \cite{JB2}
for the resolved processes, both for scalar and vector leptoquarks.
There also the differential scattering cross sections
were derived. In the case of vector leptoquarks the scattering cross
sections were calculated accounting for both anomalous photon
couplings, $\kappa_A, \lambda_A$, and anomalous gluon couplings,
$\kappa_G, \lambda_G$. These contributions are understood
within an effective description which is valid
in the threshold range, i.e. for
$S \sim 4 M_{\Phi}^2$. Due to the anomalous couplings the pair production
cross sections for vector leptoquarks also acquire unitarity-violating
pieces, which, however, are
assumed never to become large.
In general it is hardly possible
to provide a correct high energy description
in a model--independent way, as intended in the present paper,
which focuses on the threshold range only.
Such a description would instead require the consideration
of a specific scenario, accounting also for the details
of the respective pattern of symmetry breaking.
For all
details of the calculation we refer to
refs.~\cite{JB1} and \cite{JB2}.
\section{Numerical Results}
\label{sect3}
\noindent
In figures~1 and 2 the integrated scattering cross sections for a
series of scalar and vector leptoquarks are shown in dependence of
the leptoquark mass and charges. For the vector leptoquarks different
choices of anomalous couplings are also considered. For simplicity we
identified $\kappa_A = \kappa_G$ and $\lambda_A = \lambda_G$.
It is interesting to note that the smallest cross section results
not from the Yang--Mills type couplings, $\kappa = \lambda = 0$,
but from the so--called minimal couplings, $\kappa = 1,
\lambda = 0$. In further
experimental studies it might be interesting to vary all four anomalous
couplings independently. As seen in figures~1 and 2 the integral cross
sections behave approximately as
\begin{equation}
\label{SIG}
\sigma_{tot}^{S,V}(M_{\Phi}) \sim A \exp(-B M_{\Phi}).
\end{equation}
This relation can be used to obtain an estimate of the respective search
limits which can be reached at a given integrated
luminosity, ${\cal L}$.
For ${\cal L} = 100~{\rm pb}^{-1}$ and $\sqrt{s} = 314 \GeV$
the search limits
for charge $|Q_{\Phi}| = 5/3$
scalar leptoquarks range up to
$60~(45)~{\rm GeV}$ and for
vector leptoquarks up to $70~(55)~{\rm GeV}$,
given a
signal sample of 10 (100) events,
respectively.
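Given a fit of the parameters $A$ and $B$ in (\ref{SIG}), the mass reach for $N$ signal events at luminosity ${\cal L}$ follows from solving $N={\cal L}\,A\exp(-BM_{\Phi})$; a short sketch (with placeholder parameter values, not fits to figures~1 and 2):
\begin{verbatim}
import numpy as np

def mass_reach(A, B, lumi, n_events):
    # Solve n_events = lumi * A * exp(-B * M) for M.
    # A in pb, B in 1/GeV, lumi in pb^-1.
    return np.log(A * lumi / n_events) / B

# hypothetical fit A = 100 pb, B = 0.1/GeV at L = 100 pb^-1:
for N in (10, 100):
    print(N, 'events:', mass_reach(100.0, 0.1, 100.0, N), 'GeV')
\end{verbatim}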
For most of the channels the experiments at LEP~1 have
excluded leptoquarks with masses below $M_Z/2$. At present the
most stringent mass bound for both scalar~\cite{TEVA}
and vector leptoquarks\footnote{Studies considering also
anomalous leptoquark couplings were not performed yet.}
decaying into the fermions of the first
and second
family come from
TEVATRON and exclude the range $M_{\Phi} \lesssim 90~{\rm GeV}$.
For some leptoquark types the
range $M_{\Phi} \lesssim 130~{\rm GeV}$ is excluded~\cite{TEVA}.
No bounds have yet been derived for
3rd generation leptoquarks, e.g. those decaying as
$\Phi_{S,V} \rightarrow b~\tau$,~etc.,
in the TEVATRON analyses. Due to the
lower background rates, an investigation of this particular
channel may be better
suited to $ep$ or $e^+e^-$ collisions than to proton collisions.
|
2,869,038,155,104 | arxiv | \section{Introduction}
Word-based approaches to statistical machine translation, starting with the
work from IBM in the early 1990s \cite{DBLP:journals/coling/BrownPPM94} have
been successful both in use in production translation systems and in
invigorating MT research. Since then, newer phrase-based MT techniques such as
the alignment template model \cite{DBLP:journals/coling/OchN04}, and
hierarchical phrase-based models \cite{chiang:2005:ACL} have made significant
improvements in SMT translation quality.
Despite their sophistication and apparent complexity, many word-based and
phrase-based SMT models can be implemented entirely in terms of finite-state
transducers. This allows researchers to make use of the rich automata
literature for finding clean and efficient algorithms; it is also useful from a
software engineering perspective, making it possible to do experiments quickly,
using generic toolkits for programmatically manipulating finite-state
transducers. Several such packages are freely available, such as OpenFST
\cite{openfst} and the RWTH FSA Toolkit \cite{kanthak-ney:2004:ACL}.
However, since they make no attempt to explicitly model the syntax of either
language involved, and typically use simple n-gram models to guide generation,
the output of word-based SMT systems can be syntactically incoherent,
especially in light of long-distance dependencies. Additionally, word-based SMT
models have difficulties encoding word order differences across languages.
So we have seen new methods in MT that explicitly model syntax, where typically
the grammar of a language, and the relationships between the grammars of two
languages, can be learned from treebanks. There are many different available
theoretical frameworks for describing syntax and transformations over syntactic
representations. Both from a theoretical standpoint, and as MT implementors, we
would like a framework that is clean and general, and is suitably expressive
for explicitly capturing syntactic structures and the divergences across
languages. We would also like one for which there are efficient algorithms for
training rules and performing transduction (i.e., decoding at translation
time), and ideally one for which a good software toolkit is freely available.
Not all syntactic relationships can be cleanly represented with every syntactic
formalism; each formalism has its own expressive power. Bonnie Dorr provides us
with an excellent test bed of seven cross-language divergences that may occur
when we want to perform translation, even between languages as closely related
as English, Spanish and German \cite{DBLP:journals/coling/Dorr94}. While these
divergences do not totally describe the ways in which languages can differ in
their typical descriptions of an event, they provide a concrete starting point,
and are easily accessible.
In this paper, I specifically investigate T and xT transducers, situate them in
the space of formalisms for describing syntax-based translation, and
demonstrate that xT transducers are sufficient for modeling all of the
syntactic divergences identified by Dorr. I also present \texttt{kurt}, a small
software toolkit for experimenting with these transducers, which comes with
sample translation rules that handle each of Dorr's divergences.
In the rest of this paper, we will discuss some relevant grammar and transducer
formalisms, including a more in-depth look at T and xT transducers; go through
the linguistic divergences discussed by Dorr and explain why they might cause
difficulties for MT systems; show how xT transducers can be used to address
each of these divergences; present the software that I have built; review some
of the related work that has informed this paper; and finally, suggest future
possible directions for work with tree transducer-based MT.
\section{Grammars and Transducers}
Here we contrast several kinds of formalisms over strings, trees, and pairs of
strings and trees; please see Figure \ref{glossary} for a glossary of different
kinds of automata and grammars that will be referenced in this paper. A
grammar describes a single set of strings or trees, and consists of a finite
set of rules that describes those strings or trees. Familiar formalisms for
grammars that describe sets of strings include context-free grammars and the
other members of the Chomsky Hierarchy. Some grammars describe sets of trees,
and these will be the main focus of the rest of this paper; when discussing
grammars over strings, I will specifically mention it. For example, regular
tree grammars (RTG) is the class of grammars corresponding to context-free
grammars but describing trees; they describe the trees whose \textit{yield}
(string concatenation of the symbols at the leaves) is a context-free grammar
\cite{KnightGraehlOverview}.
Contrastingly, \textit{synchronous} grammars describe sets of pairs of objects;
here again, we are mostly concerned with synchronous grammars that describe
trees. Formally, a synchronous grammar over trees establishes a mathematical
relation over two sets of trees, and allows us to answer the question of
whether, for a given pair of trees, that pair is in the relation. The
production rules of a synchronous grammar do not just describe one language,
but have pairs of production rules $<r_1,r_2>$, such that when $r_1$ is used to
derive a string in language $L_1$, $r_2$ must be used in the derivation of a
string in $L_2$.
Thus synchronous grammars can be used for several kinds of tasks, such as
parsing parallel texts, generating parallel text, or most intuitively useful
for a machine translation setting, parsing text in one language while jointly
generating parse trees that yield text the other. All of these operations
are described for synchronous context-free grammars in David Chiang's tutorial
\cite{Chiang06anintroduction}. In his tutorial, Chiang describes some of the
limitations of using synchronous CFGs; notably, they cannot rearrange parts of
parse trees that are not sisters. Of particular interest in this work is
raising and lowering elements; Chiang gives the example of swapping subjects
and objects, as in the example of translation between English and French in
Figure \ref{missesmary}. Chiang points out that, for syntax-aware MT, we would
like to be able to use some more powerful formalism that can perform
transformations like this. Synchronous tree substitution grammars, for example,
are able to describe transformations of this form, but not the transformation
from cross-serial dependencies in subordinate clauses in Dutch to the nested
clause structure of English. This latter transformation would require more
formal power, which is offered by tree-adjoining grammars.
\begin{figure*}
\begin{center}
\Tree [.S [.NP John ] [.VP [.V misses ] [.NP Mary ] ] ]
\Tree [.S [.NP Mary ] [.VP [.V manque ] [.PP [.P à ] [.NP John ] ] ] ]
\end{center}
\caption{Switching subject and object in translation to French. Example from
\cite{Chiang06anintroduction}. Also note the structural difference: there is a
PP subtree in French, not present in English.}
\label{missesmary}
\end{figure*}
\subsection{TAG and Related Formalisms}
\label{sec:tagfamily}
Tree adjoining grammar, introduced by Joshi
\cite{Joshi:1975:TAG:1739967.1740303}, has been a popular formalism for
describing grammars over trees. It provides additional expressive power not
available in regular tree grammars, handling some, but not all
context-sensitive languages. TAG can cleanly describe many of the
non-context-free features observed in human languages, such as the cross-serial
dependencies in Dutch. TAG is thus called ``mildly context-sensitive",
and has been shown formally equivalent to several other syntactic formalisms,
such as Combinatory Categorial Grammar (CCG) and Linear Indexed Grammars
\cite{vw94}.
The operations of TAG are substitution and adjunction, which combine the two
different kinds of elementary trees present in a given TAG grammar, initial
trees and auxiliary trees. The substitution operation takes two trees, one with
a leaf that is an unresolved nonterminal $\alpha$, and produces a new tree in
which that node has been replaced with an entire subtree (copied from another
initial tree in the grammar, or one that has already been derived) whose root
node is also $\alpha$. For example, an initial tree may have an unresolved
nonterminal that wants to have an NP attached to it (it has a leaf labelled
NP); the substitution operation attaches an existing subtree whose root is NP,
producing a new tree where that nonterminal is now resolved.
The adjunction operation takes an existing tree and an auxiliary tree, which
has a special node marked as the ``foot", and grafts the auxiliary tree in
place in the middle of the existing tree, attaching the tree material at the
target location to the foot node of the auxiliary tree. For a very clear
tutorial on TAG with good examples, please see \cite{vannoord93}, Section
4.2.4. Synchronous TAG has also been investigated, and its use in machine
translation has been advocated by Shieber, who argues that its expressive power
may make up for its computational complexity \cite{shieber:2007:SSST}.
Restricted versions of TAG and their synchronous analogues have also been
investigated. These do not provide the full expressive power of TAG, but can be
parsed and trained more efficiently. The two limited versions of TAG that are
most prominently discussed in the literature are tree substitution grammars
(TSG) and tree insertion grammars (TIG). TSG only provides the substitution
operation, and does not have auxiliary trees or adjunction
\cite{eisner:2003:ACL-companion}. TIG, on the other hand, includes both the
substitution and adjunction operations, but places constraints on the
permissible shapes of auxiliary trees: their foot nodes must be at the leftmost
or rightmost edge of the frontier, and a given derivation may not adjoin
``left" auxiliary trees into ``right" ones, or vice-versa. These restrictions
are sufficient to limit the weak generative capacity of TIGs to that of CFGs,
but they also ensure that algorithms on TIGs can run more efficiently. While
parsing with a TAG takes $O(n^6)$ time in the general case, TIG (like the
general case for CFGs) can be parsed in $O(n^3)$ \cite{Nesson:2006:IPS}. Both
STIG and STSG have seen use in machine translation; for example, probabilistic
STIG is used in \cite{Nesson:2006:IPS}, and STSG has been notably used in
\cite{eisner:2003:ACL-companion}.
\subsection{Tree Transducers}
While synchronous grammars provide a \emph{declarative} description of a
relation that holds between two sets of trees, tree transducers more explicitly
describe the process by which a tree may be converted into other trees. Like
finite-state transducers, which operate over strings, tree transducers
typically describe nondeterministic processes, so for a given input tree, there
is a set of possible output trees; that set may (for example) be described by a
regular tree grammar.
Tree transducers and synchronous grammars both describe mathematical relations
over trees, so we can sensibly ask about their comparative formal expressive
power, and use them to compute similar queries. For example, with either a
synchronous grammar or a transducer, we may ask, for a given tree, what are the
other trees that are in the mathematical relation with it
\cite{Chiang06anintroduction}. There are transducer varieties with the same
formal expressive power as certain synchronous grammars. For example,
synchronous tree substitution grammars (STSG) have the same formal power as
xLNT transducers \cite{Maletti:2010:WST:1857999.1858129}, which will be
described in more detail in the next section.
While there are very many kinds of possible tree transducers, the ones used in
NLP applications typically fall into one of two classes, T transducers, which
operate ``top-down", and ``B" transducers, which operate ``bottom-up".
\begin{figure*}
\begin{center}
\begin{itemize}
\item synchronous grammar: a grammar over two languages simultaneously, where
rules are given in pairs and must be used together
\item probabilistic grammar: a grammar where rules have associated weights,
which defines a probability distribution over derivations licensed by that
grammar
\item TAG: tree adjoining grammar, mildly context-sensitive grammar formalism
over trees, with substitution and adjunction operations
\item TIG: tree insertion grammar: a TAG wherein rules have certain
restrictions, described in Section \ref{sec:tagfamily}.
\item TSG: tree substitution grammar; similar to TAG without the adjoining
operation
\item RTG: regular tree grammar, the tree analogue of context-free grammar
\item finite-state transducers: transducers over strings; finite-state automata
with the added ability to produce output
\item tree transducers: automata that define relations over trees procedurally
\item T transducers: ``top down" tree transducers
\item R transducers: the same as T transducers, a name used in earlier work.
``R" stands for ``Root to Frontier"
\item (L)T transducers: ``linear" T transducers, constrained such that their
rules are non-copying, and a variable appearing on the left-hand side of a rule
must appear at most once in the right-hand side
\item (N)T transducers: ``nondeleting" T transducers, constrained such that
a variable appearing on the left-hand side of a rule must appear at least once
in the right-hand side
\item (x)T transducers: T transducers with ``extended" pattern matching,
allowing for complex finite patterns to appear in the left-hand side of
rules.
\end{itemize}
\end{center}
\caption{Glossary of transducers and automata}
\label{glossary}
\end{figure*}
\section{T Transducers}
Let us now describe T transducers in more detail. T transducers transform trees
into other trees via sets of production rules.
Many production rules may apply at a given step in a derivation, so the
transductions are usually nondeterministic, relating a given input tree to many
possible output trees. Thus a T transducer, like a synchronous tree grammar,
defines a \emph{relation} over sets of trees.
Intuitively, transduction begins with an input tree, where its root node is in
the initial state $q_0$. Each node in a tree may be in one of the states in $Q$
(the set of possible states), or in no state at all. Transduction proceeds by
finding all the transduction rules that can apply to an existing tree, or
subtrees of an existing tree. A rule applies when the root of its left-hand
side matches a node in the tree, and the state of the node matches the state of
the rule. When a rule matches a subtree (call it $t$), then a new tree is
produced and added to the set of current trees by replacing the subtree that
matched the rule with the right-hand side of the rule, save that the variables
in the right-hand side of the rule have been replaced by the corresponding
subtrees of $t$. Additionally, a rule may specify that subtrees of the new tree
being produced should be in states as well, indicating that more transduction
work must be done on them before the derivation is finished. A complete,
successful transduction in a T transducer begins with the root node being in
the initial state, then states propagating down the tree to the leaves, until
the entire tree has been transduced. See Figure \ref{exampletransduction} for
an illustrative example, adapted from \cite{DBLP:journals/coling/GraehlKM08}.
\begin{figure*}
Rule 1:
\Tree [.{q A} $x_0$ $x_1$ ]
\hspace{1in}
$\longrightarrow$
\Tree [.A [.R {q $x_1$} {q $x_0$} ] [.S X ] ]
\bigskip
Rule 2:
\Tree [.{q B} $x_0$ $x_1$ ]
\hspace{1in}
$\longrightarrow$
\Tree [.U ]
\bigskip
Input tree:
\Tree [.A [.B D E ] [.C F G ] ]
\bigskip
Tree after application of Rule 1:
\Tree [.A [.R [.{q C} F G ] [.{q B} D E ] ] [.S X ] ]
\bigskip
Tree after subsequent application of Rule 2:
\Tree [.A [.R [.{q C} F G ] U ] [.S X ] ]
\caption{Example transduction steps, simplified from
\cite{DBLP:journals/coling/GraehlKM08}. Note that this transduction is not
complete because the node with the symbol ``C" is still in the state q.}
\label{exampletransduction}
\end{figure*}
A new tree is produced by replacing the subtree that matched the rule with the
right-hand side of the rule, with its variables filled in with the appropriate
subtrees. The new tree is then added to the inventory of current trees in the
usual way for production systems. A transduction is complete for a tree in the
inventory when all of its nodes are no longer in states; at this point, the
states will have propagated all the way from the top of the tree to the leaves,
and then be resolved; in the case of translation, the symbols in the tree will
be words in the output language. The transduction process is nondeterministic;
many rules may apply to a given tree in the inventory, and even the same rule
may apply to different subtrees. To do a complete search for all possible
transductions, we apply each rule to every subtree where it is applicable, and
produce every possible resulting tree; beam search may also be done, where
search paths with low probabilities are pruned.
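To make the derivation step concrete, here is a minimal sketch of my own (not the kurt implementation described later) using a toy encoding: trees are nested tuples, a node in state $q$ is a pair \texttt{('q', subtree)}, and a rule is a triple of state, symbol, and right-hand side in which integers index the matched node's children (weights omitted). Applying \texttt{step} twice to the input of Figure \ref{exampletransduction} reproduces the trees shown there.
\begin{verbatim}
STATES = {'q'}   # the transducer's state names

def instantiate(rhs, children):
    # Fill in a right-hand side: an integer i picks up the matched
    # node's i-th child; ('q', i) additionally puts it in state q.
    if isinstance(rhs, int):
        return children[rhs]
    if isinstance(rhs, tuple) and rhs[0] in STATES:
        return (rhs[0], instantiate(rhs[1], children))
    if isinstance(rhs, tuple):
        return (rhs[0],) + tuple(instantiate(c, children)
                                 for c in rhs[1:])
    return rhs                       # a literal output symbol

def step(tree, rules):
    # All trees reachable by one rule application at any stated node.
    out = []
    if isinstance(tree, tuple) and tree[0] in STATES:
        state, sub = tree
        sym, children = ((sub[0], sub[1:]) if isinstance(sub, tuple)
                         else (sub, ()))
        for rstate, rsym, rhs in rules:
            if rstate == state and rsym == sym:
                out.append(instantiate(rhs, children))
        return out
    if isinstance(tree, tuple):
        for i in range(1, len(tree)):
            for new_sub in step(tree[i], rules):
                out.append(tree[:i] + (new_sub,) + tree[i + 1:])
    return out

rules = [('q', 'A', ('A', ('R', ('q', 1), ('q', 0)), ('S', 'X'))),
         ('q', 'B', 'U')]
t0 = ('q', ('A', ('B', 'D', 'E'), ('C', 'F', 'G')))
t1 = step(t0, rules)[0]   # the figure's tree after Rule 1
t2 = step(t1, rules)[0]   # ... and after Rule 2
\end{verbatim}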
Formally, a T transducer has the following elements.
\begin{itemize}
\item an input alphabet $\Sigma$
\item an output alphabet $\Delta$
\item a set of \emph{states} $Q$
\item an \emph{initial state}, typically denoted $q_0$
\item transition rules, which are tuples of the form
$(q \in Q,\ \sigma \in \Sigma,\ tpat,\ p)$
\end{itemize}
The transition rule tuples specify the state that a given node must be in, and
the symbol from the input language that the subtree must have, (state $q$ and
symbol $\sigma$, respectively), in order for this rule to match. They also
specify a tree \emph{pattern} that forms the right-hand side of the rule, and a
weight $p$ for this rule. The tree pattern is a tree where some of the elements
in the tree may be variables, which refer to subtrees of the left-hand side
under consideration.
\subsection{xT Transducers}
The ``extended" variation of T transducers, indicated with an ``x" prefix, adds
the capability for rules to check whether a potentially matching subtree
matches a certain pattern of finite size, in addition to the given state and
value of the node. The tree pattern in the left-hand side of an xT transduction
rule may contain literal symbols as well as variables, which allows for
lexicalized rules that only apply when certain words are in a subtree. The tree
patterns also make it possible for the rules to reference material finitely far
into a subtree, which makes local rotations straightforward; see Figure
\ref{missesanybody} for example xT rules that perform a local rotation and also
use finite lookahead to produce Francophone names. In the notation common in
the literature, a state for a node is written next to that node in the tree
structure.
\begin{figure*}
\bigskip
1. \Tree [.{q S} $x_0$ [.VP [.V misses ] $x_1$ ] ]
\hspace{1in}
$\longrightarrow$
\Tree [.S {q2 $x_1$} [.VP [.V manque ] [.PP [.P à ] {q2 $x_0$} ] ] ]
\bigskip
2. \Tree [.{q2 NP} John ]
\hspace{1in}
$\longrightarrow$
\Tree [.NP Jean ]
\bigskip
3. \Tree [.{q2 NP} Mary ]
\hspace{1in}
$\longrightarrow$
\Tree [.NP Marie ]
\caption{Switching subject and object in translation to French with an xT
transducer rule. The state q2 here indicates that we want to translate names as
well.}
\label{missesanybody}
\end{figure*}
While T transducers are not as expressive as synchronous TSG
\cite{Shieber:2004:SGT}, xT transducers are as expressive, and can
even be used to simulate synchronous TAG in some cases \cite{maletti:2010:ACL}.
In addition to their formal expressive power, xT transducers are much more
convenient for rule authors; some finite lookahead can be simulated with the
standard T transducers, as shown in \cite{DBLP:journals/coling/GraehlKM08}, but
it is somewhat tedious. The use of xT transducers makes writing rules to
rearrange material in a tree much more convenient.
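Continuing the toy nested-tuple encoding from the previous section (again my own illustration, not kurt's NLTK-based code), extended left-hand sides can be matched by straightforward unification, with \texttt{'?'}-prefixed strings as variables; the demo at the bottom mirrors rule 1 of Figure \ref{missesanybody}, with states omitted for brevity.
\begin{verbatim}
def match(pattern, tree, bindings=None):
    # Unify an xT left-hand-side pattern with a subtree; return the
    # variable bindings on success, None on failure.  (A failed call
    # may leave partial entries in a caller-supplied dict.)
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in bindings and bindings[pattern] != tree:
            return None          # repeated variables must agree
        bindings[pattern] = tree
        return bindings
    if isinstance(pattern, str) or isinstance(tree, str):
        return bindings if pattern == tree else None
    if pattern[0] != tree[0] or len(pattern) != len(tree):
        return None
    for pc, tc in zip(pattern[1:], tree[1:]):
        if match(pc, tc, bindings) is None:
            return None
    return bindings

def substitute(rhs, bindings):
    # Instantiate a right-hand side under the matched bindings.
    if isinstance(rhs, str):
        return bindings.get(rhs, rhs)
    return (rhs[0],) + tuple(substitute(c, bindings) for c in rhs[1:])

lhs = ('S', '?x0', ('VP', ('V', 'misses'), '?x1'))
rhs = ('S', '?x1', ('VP', ('V', 'manque'),
                    ('PP', ('P', 'à'), '?x0')))
tree = ('S', ('NP', 'John'),
             ('VP', ('V', 'misses'), ('NP', 'Mary')))
print(substitute(rhs, match(lhs, tree)))
\end{verbatim}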
\subsection{Restricted Versions of T and xT Transducers}
For computational efficiency purposes, we may also consider placing certain
restrictions on the rules in a T or xT transducer. Options that have been
explored include requiring that a transducer be \emph{linear}, which means that
any variable occurring in the left-hand side of a rule should appear no more
than once in the right-hand side, and \emph{nondeleting}, which means that a
variable in the left-hand side must appear at least once in the right-hand
side. Linear transducers are given the prefix ``L", and nondeleting transducers
the prefix ``N", so for example, extended linear non-deleting top-down
transducers are described as ``xLNT". This particular combination of options
has been used several times in the literature, including
\cite{galley-EtAl:2004:HLTNAACL}. Also note that the transducer in Figure
\ref{exampletransduction} is not nondeleting, since Rule 2 does not reference
its variables in its right-hand side.
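These two conditions are easy to check mechanically; in the \texttt{'?'}-variable pattern style sketched above (my own encoding), a rule is linear if no left-hand-side variable appears more than once on the right, and nondeleting if each appears at least once.
\begin{verbatim}
def variables(pattern):
    # Multiset of '?'-prefixed variables occurring in a tree pattern.
    if isinstance(pattern, str):
        return [pattern] if pattern.startswith('?') else []
    return [v for child in pattern[1:] for v in variables(child)]

def is_linear(lhs, rhs):
    rhs_vars = variables(rhs)
    return all(rhs_vars.count(v) <= 1 for v in set(variables(lhs)))

def is_nondeleting(lhs, rhs):
    return set(variables(lhs)) <= set(variables(rhs))

# Rule 2 of the earlier figure deletes both of its variables:
print(is_nondeleting(('B', '?x0', '?x1'), 'U'))   # False
\end{verbatim}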
Among the benefits of adding these constraints on rules are that LT and LNT
transducers are \emph{compositional}, meaning that a relation that can be
expressed by a cascade of two LT transducers can also be expressed by a single
LT transducer, and that the composition of those two transducers can be
computed. However, this is not possible with any other members of the
T-transducer family; even xLNT transducers are non-compositional
\cite{DBLP:journals/mt/Knight07}.
\section{Linguistic Divergences}
Bonnie Dorr, in \cite{DBLP:journals/coling/Dorr94}, enumerates several
different kinds of structural divergences that we might see in translation
between languages. These divergences occur when translating from English to
closely related languages, Spanish and German, all of which have fairly similar
word orders. These are not the only kinds of syntactic differences
that there can be in a translation. They do not, for example, cover the more
large-scale reorderings that we see when translating between SVO and SOV or VSO
languages. However, each of these divergences require something more than
simple word substitution or reordering the children of a given node: many of
these require raising and lowering tree material (performing ``rotations",
in the terminology of \cite{Shieber:2004:SGT}), and nested phrases that are
present in one language are often not present in the other. Many of these
divergences may appear in a given pair of translated sentences. The following
subsections describe Dorr's seven kinds of divergence.
\subsection{Thematic Divergence}
Different languages may express a situation by assigning different thematic
roles to the participants of an action, swapping (for example) the subject and
object. For example, translating from English to Spanish, we see:
\begin{itemize}
\item I like Mary
\item María me gusta a mí
\end{itemize}
In Spanish it is more common to say that ``X pleases Y" than that ``Y
wants/likes X". The Spanish verb \textit{querer} has the same structure as the
English ``like", but ``gustar", whose meaning is closer to the English ``to
like", assigns the thematic roles in the opposite way.
\subsection{Promotional Divergence}
A modifier in one language may be the head in another language.
\begin{itemize}
\item John usually goes home
\item Juan suele ir a casa
\end{itemize}
Here in English, an adverb modifies the verb to indicate that it is habitual,
but in Spanish we use the verb ``soler" (which inflects as ``suele" for
third-person singular), to express this. It has an infinitive as a dependent.
\subsection{Demotional Divergence}
The demotional divergence is similar to a promotional divergence, viewed in the
other direction; in cases of demotional divergence, a head in one language is a
modifier in the other. In \cite{DBLP:journals/coling/Dorr94}, a formal
distinction is made between the two because in Dorr's MT system, they would be
triggered in different circumstances, but for our purposes they are
effectively analogous.
\begin{itemize}
\item I like eating
\item Ich esse gern
\end{itemize}
In this example, while English uses the verb ``to like", German has an adverb.
The sentence has a literal translation of ``I eat likingly".
\subsection{Categorial Divergence}
In cases of categorial divergence, the meaning of a word with a certain part of
speech in one language is expressed with a different part of speech in the
other.
\begin{itemize}
\item I am hungry
\item Ich habe Hunger.
\end{itemize}
The German sentence here translates literally as ``I have hunger."
\subsection{Structural Divergence}
In cases of structural divergence, there are phrases in one language not
present in the other.
\begin{itemize}
\item John entered the house
\item Juan entró en la casa
\end{itemize}
While the English sentence has the destination of the motion verb as an object,
in the Spanish we see the prepositional phrase ``en la casa" (``in the house").
\subsection{Lexical Divergence}
In cases of lexical divergence, the two languages involved have different
idiomatic phrases for describing a situation.
\begin{itemize}
\item John broke into the room
\item Juan forzó la entrada al cuarto
\end{itemize}
While ``break into" is a phrasal verb in English, in Spanish it is more
idiomatic to ``force entry to". This example also includes a structural
divergence, as ``al cuarto" is a prepositional phrase not present in the
English.
\subsection{Conflational Divergence}
The meaning of the sentence may be distributed to different words in a
different language; the meaning of a verb, for example, may be carried by a
verb and its object after translation.
\begin{itemize}
\item I stabbed John
\item Yo le di puñaladas a Juan
\end{itemize}
Here the Spanish sentence means literally ``I gave John knife wounds". The words
``le" and ``a" are both required, but for different reasons: the verb ``dar"
(to give) requires the personal pronoun beforehand, and whenever a human being
is the object of a verb in Spanish, we add the ``personal a" beforehand.
\section{Implementation}
In the course of this project, I have produced a small, easily-understandable
toolkit named \texttt{kurt} (the Keen Utility for Rewriting Trees), for
experimenting with weighted xT tree transducers. It is implemented in Python 3
and makes use of the NLTK tree libraries \cite{nltkbook}. \texttt{kurt} has
been released as free software, and is available online
\footnote{http://github.com/alexrudnick/kurt}.
The software can perform tree transduction in general for weighted xT
transducers: given a tree, it applies xT transduction rules and produces a list
of output trees. The implementation is fairly naïve, and proceeds as a simple
production system. Partial solutions are matched against every rule in the
transducer, then each matching rule is applied to the partial solution,
producing a new generation of partial solutions. Eventually, the derivation
either succeeds by producing at least one tree with no nodes in a state, or
it fails if the input tree cannot be completely transduced by the given rules.
The system returns all possible output trees, and the complete solutions are
printed out at the conclusion of the program.
The xT rules are straightforward to write, and are stored in YAML files. I have
also provided example xT rules that translate the examples of divergences given
by Dorr; these are described in more detail in Section
\ref{sec:dorr-transducers}.
A complete and useful MT system based on this software -- such that the rules
and their weights were not completely the product of human knowledge
engineering -- would require the implementation of a few more algorithms
described in \cite{DBLP:journals/coling/GraehlKM08}, particularly their EM
training algorithm to calculate weights for a given set of transduction rules,
which depends on their transduction algorithm that produces the more compact
representation of a transduction, a RTG. Decoding would require beam search
over tree transduction, or perhaps over generation using this compact RTG
representation. Additionally, some clever algorithm for extracting tree
transducer rules from parallel treebanks would be useful for the case where
parallel treebanks are available; some candidate techniques for this last
problem are discussed in Section \ref{sec:extraction}.
\subsection{Using the Software}
Transducers are stored in YAML files, with one xT transducer per file; each
rule is specified as an entry in that YAML file, and contains the following
entries.
\begin{itemize}
\item \texttt{state}: (required) The name of the state that a node at the root
of a subtree must be in to match this rule
\item \texttt{lhs}: (required) The left-hand side of the rule: a tree pattern,
typically with variables (tokens starting with \texttt{?}) that must unify with a
subtree in order for that subtree to match this rule
\item \texttt{rhs}: (required) The right-hand side of the rule: another tree
pattern, which is filled in when this rule is applied. It may contain
variables, in which case all of the variables must also be present in the
left-hand side of the rule.
\item \texttt{newstates}: (optional) Specifies the locations of transduction
states in the subtree produced by this rule. There may be many states specified
in the new subtree. They are given in the form
\texttt{[location, statename]}, where location is a bracketed list that
describes the path down the tree from the root of the subtree, with 0-indexed
children. For example, to put the second child of the leftmost child of the
root in state \texttt{foo}, a rule would have a \texttt{newstates} member
\texttt{[[0,1], foo]}.
\item \texttt{weight}: (optional) The weight for this rule. If unspecified, it
defaults to 1.0.
\end{itemize}
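For concreteness, a single hypothetical rule using all five entries might
look as follows; it swaps the \texttt{NP} and \texttt{VP} children of an
\texttt{S} node and leaves each of them pending in a new state (the state
names and tree shapes here are purely illustrative):
\begin{verbatim}
- state: q
  lhs: (S (NP ?x0) (VP ?x1))
  rhs: (S (VP ?x1) (NP ?x0))
  newstates:
  - [[0], vp]
  - [[1], np]
  weight: 0.5
\end{verbatim}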
Given a file with these entries for each rule of a transducer, say called
\texttt{translation.yaml}, a Python 3 program can use \texttt{kurt} to do tree
transductions in the following way, assuming the libraries are all in the
\texttt{\$PYTHONPATH} or the current working directory.
\begin{verbatim}
from nltk import Tree  # kurt uses the NLTK tree classes
from loadrules import loadrules
from translate import translate
rules = loadrules("translation.yaml")
tr = Tree("""(S (NP (PRP I))
(VP (VB am)
(JJ hungry)))""")
## print all valid transductions
translate(tr, rules)
\end{verbatim}
\subsection{Simple Topicalization Example}
In Figure \ref{topicalization}, we see a toy example of xT rules realized with
the system. This is a complete running example that exercises many features of
the software; it translates an English sentence into ``LOLcat" Internet
slang, which features more prominent topicalization \footnote{Readers may or
may not be familiar with the moderately popular catchphrase ``My Pokemans, let
me show you them".}. For simplicity, the syntactic structure of the parse tree
is elided. The initial rule matches a sentence in the initial state \texttt{q},
containing ``let me show you my $x_0$" and produces a new sentence where ``my
$x_0$" has been moved to the front . The rule also specifies that the
(0-indexed) child of the S node at index $1$ is in the state \texttt{respell}.
The second rule matches the word ``Pokémon" when it is in the state
\texttt{respell}, replacing it with the slang spelling of ``Pokemans". The
third rule is for generalization, allowing words other than ``Pokémon" to be
translated in this position. Because both the second and third rules apply to
the subtree, both spellings are produced in the output, but the translation
with the slang spelling is given a higher weight.
\begin{figure*}
\begin{center}
\begin{verbatim}
## lolcat topicalization (fronting)
- state: q
lhs: (S let me show you my ?x0)
rhs: (S my ?x0 , let me show you them)
newstates:
- [[1], respell]
- state: respell
lhs: Pokémon
rhs: Pokemans
weight: 0.9
- state: respell
lhs: ?x0
rhs: ?x0
weight: 0.1
\end{verbatim}
\end{center}
\caption{xT rules for translating into LOLcat dialect, which features
topicalization, in the YAML format used by the software implemented as part of
this work}
\label{topicalization}
\end{figure*}
\section{xT Transducers for Linguistic Divergences}
\label{sec:dorr-transducers}
I wrote xT transduction rules for the software toolkit that handle each of
Dorr's divergence examples. Most of the work involved was constructing parse
trees for the source- and target-language sentences; I then converted the trees
into templates for the desired trees, at which point they were effectively xT
transduction rules. Some examples are included in Figures
\ref{translationGerman} and \ref{translationSpanish}, but the complete set of
rules is in \texttt{german.yaml} and \texttt{spanish.yaml}, included with the
software. Most of the transformations required to implement these rules are
instances of local rotations, as described by \cite{Shieber:2004:SGT}.
\begin{figure*}
\begin{center}
\begin{verbatim}
# handle <pronoun> like <gerund>
- state: q
lhs: (S (NP ?x0) (VP (VB like) ?x1))
rhs: (S (NP ?x0) (VP ?x1 (RB gern)))
newstates:
- [[0, 0], lookup]
- [[1, 0], gerundtotensed]
# handle I am <adj>
- state: q
lhs: (S (NP ?x0) (VP (VB am) ?x1))
rhs: (S (NP ?x0) (VP (VB habe) ?x1))
newstates:
- [[0, 0], lookup]
- [[1, 1], adjtonoun]
## simple lookups for known phrases
- state: lookup
lhs: (PRP I)
rhs: (PRP ich)
## POS changes.
- state: gerundtotensed
lhs: (VBG eating)
rhs: (VB esse)
- state: adjtonoun
lhs: (JJ hungry)
rhs: (NN hunger)
\end{verbatim}
\end{center}
\caption{Sample translation rules for German}
\label{translationGerman}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{verbatim}
# handle <pronoun> like <name>
- state: q
lhs: (S (NP ?x0) (VP (VB like) (NP ?x1)))
rhs: (S (NP ?x1) (VP (NP ?x0) (VB gusta) ?x0))
newstates:
- [[0, 0], lookup]
- [[1, 0, 0], objectivize]
- [[1, 2], tothisperson]
- state: tothisperson
lhs: (PRP I)
rhs: (PP (A a) (PRP mí))
# handle usually -> soler
- state: q
lhs: (S (NP ?x0) (VP (RB usually) ?x1))
rhs: (S (NP ?x0) (VP (VBZ suele) ?x1))
newstates:
- [[0,0], lookup]
- [[1,1], unconjugate]
# handle entered-object -> entró en ...
- state: q
lhs: (S (NP ?x0) (VP (VBD entered) ?x1))
rhs: (S (NP ?x0) (VP (VBD entró) (PP (IN en) ?x1)))
newstates:
- [[0,0], lookup]
- [[1,1,1], lookup]
# handle broke-into X -> forzó la entrada a X
- state: q
lhs: (S (NP ?x0) (VP (VBD broke) (PP (IN into) ?x1)))
rhs: (S (NP ?x0) (VP (VBD forzó)
(NP (DT la) (NN entrada) (PP (IN a) ?x1))))
newstates:
- [[0,0], lookup]
- [[1,1,2,1], lookup]
- [[1,1,2], al]
# handle I stabbed X -> le di puñaladas a
- state: q
lhs: (S (NP ?x0) (VP (VBD stabbed) ?x1))
rhs: (S (NP ?x0) (VP (PRP le) (VBD di)
(NP (NN puñaladas)) (NP (A a) (NNP Juan))))
newstates:
- [[0,0], lookup]
\end{verbatim}
\end{center}
\caption{Sample translation rules for Spanish}
\label{translationSpanish}
\end{figure*}
\section{Related Work}
\label{sec:relatedwork}
In addition to the work on tree-based MT, some very sophisticated
string-based MT algorithms have been framed in terms of finite-state
transducers. Not long after the introduction of modern word-based SMT, Knight
and Al-Onaizan showed that IBM Model 3 could be expressed with a cascade of
FSTs \cite{DBLP:conf/amta/KnightA98}. Since string transducers can be composed,
decoding in this case becomes one enormous beam search over a single state
machine. Similarly, Shankar Kumar and William Byrne expressed the phrase-based
alignment template model as FSTs \cite{DBLP:conf/naacl/KumarB03}. The last part
of the decoding process in Chiang's hierarchical phrase-based model can also be
described in terms of FSTs \cite{iglesias-EtAl:2009:NAACLHLT09}; Iglesias et
al. use finite-state techniques to traverse a lattice of possible translations
once chart parsing with an SCFG has completed.
For tutorials and related algorithms, Chiang provides an excellent introduction
to synchronous grammars in \cite{Chiang06anintroduction}. My understanding of
TAG was greatly aided by the TAG section in \cite{vannoord93}; it is referenced
in the TAG Wikipedia page. For overviews of different applications of T-family
tree transducers and their various properties, in a very approachable style,
\cite{DBLP:journals/mt/Knight07} and \cite{KnightGraehlOverview} are very
helpful. Additionally \cite{DBLP:journals/coling/GraehlKM08} contains excellent
examples for understanding xT transduction (one of which is in this paper in
simplified form, though the original example is worth working through and
understanding fully), along with a set of algorithms that can be computed over
xT transducers, including an EM procedure that can be used to estimate the
weights for an xT grammar given a parallel treebank.
\section{Conclusions and Future Work}
Here I have described the ``T" family of tree transducers and situated them
among the various formalisms for describing relations over strings and trees;
I have also demonstrated that xT transducers are sufficient for handling
translation across the linguistic divergences described by Dorr. I have
presented a software package suitable for experimentation with xT transducers,
which comes with example translation rules that perform translations over each
of the divergences.
There remains significant work to be done on the topic; for example, to my
knowledge, there is no easily available end-to-end MT system based on tree
transducers, either commercial or Open Source. There are many more questions
that I would like to answer; as far as I know, these are open problems in the
field.
\subsection{Transducers, Disambiguation, and Language Models}
While weighted synchronous grammars and xT transducers provide generative
models of translation, the probabilities that they assign to a given rule are
set ahead of time, and are not conditioned on features of the surrounding
context. It may be fruitful to try using discriminative approaches (i.e.,
classifiers) to help a transducer-based MT system make decisions about which
rules are the most likely to apply in a given context, either based on the
surrounding tree material, or on the surface words in the source-language
sentence. It may turn out that there is a more principled way to achieve the
same benefits, perhaps by adding more conditions on the probabilities in a
generative model. However, cross-language phrase-sense disambiguation with
classifiers, as in the work of Carpuat and Wu \cite{carpuatpsd}, has proved
useful for phrase-based SMT. For phrase-based SMT in general, discriminative
approaches such as Minimum Error-Rate Training (MERT)
\cite{DBLP:conf/acl/Och03} have become quite typical.
Another guide for the tree transduction process could be language models,
either flat n-gram models or structured ones, which would have the added
benefit that they could be trained on larger corpora than those used to produce
the tree transduction rules in the first place.
\subsection{Extraction and Training Transducers}
\label{sec:extraction}
Thus far, it seems as though there is no agreed-upon best approach for
extracting a set of tree transduction rules from a parallel
treebank, such that a tree-to-tree MT system could be constructed. While
parallel treebanks are not abundant, with sufficiently good monolingual
parsers, parallel trees can be created from bitext, and hopefully these could
be used to induce transduction rules for tree-to-tree MT systems. Other work
has presented methods for learning tree-to-string transduction rules, for
example \cite{galley-EtAl:2004:HLTNAACL} and \cite{deneefe-knight:2009:EMNLP}.
These approaches for learning tree-to-string transducers might turn out to
generalize easily to the tree-to-tree case, but if they do, it is not yet
obvious to me how.
One proposed approach for learning relations over trees is given in
\cite{eisner:2003:ACL-companion}, in which Eisner presents algorithms for both
extracting an STSG grammar and training its weights; STSGs can then be
expressed as xT transducers as described by Maletti in \cite{maletti:2010:ACL}.
Additionally, approaches for learning tree transduction rules have been
suggested for tasks other than machine translation, particularly in the
summarization work of Cohn and Lapata \cite{cohn-lapata:2007:EMNLP-CoNLL2007},
\cite{cohn-lapata:2008:PAPERS}, who work with a corpus that not only has parse
trees for both source and target languages (in their case, pairs of longer and
paraphrased sentences, both in English), but has also been word-aligned. The
word alignments inform their grammar extraction. Cohn and Lapata use a very
small training paraphrase corpus (480 sentences), which suggests that perhaps
their methods would be useful for MT with low-resourced languages. They also
make use of discriminative methods for training and decoding. Both their
rule-extraction algorithm and their discriminatively trained tree transducers
could be applied in a tree-to-tree MT system, but I have not yet found work
that describes this; if it has not yet been tried, someone should explore it.
\subsection{XDG as Transducers}
Given that many grammar formalisms are expressible in terms of tree
transducers, one wonders if constraint-based dependency frameworks, such as
Extensible Dependency Grammar \cite{Debusmann06}, which has been used by
Michael Gasser for machine translation \cite{gasser:2011:freerbmt}, could be
expressed in terms of tree transducers. Transducers over dependency trees have
already been used for machine translation, for example by Ding and Palmer
\cite{ding-palmer:2005:ACL}. However, XDG defines not just one layer of
dependency analysis for a language, but several. Its analysis of a sentence
in a given language is a multigraph with multiple dimensions of analysis, with
constraints describing permissible structures on each dimension, as well as the
relationships between dimensions. This suggests that perhaps XDG could be
expressed as a cascade of transducers, with each layer in the cascade
describing the relation between one XDG dimension and the next.
A problem with this interpretation is that not all layers of an XDG multigraph
are tree structures. This might mean that XDG cannot be cleanly expressed in
this way at all, or perhaps that another kind of transducer that operates on
graphs more generally could be used. Alternatively, perhaps XDG could be tweaked
such that every layer has a tree structure.
If it is in fact possible to express XDG translation rules as a cascade of
transducers, then this would present a clear path for integrating machine
learning into the largely rule-based system, making use of the training
algorithms already present in the literature. As a fairly modest step, given
small numbers of parallel training sentences, one could use EM to train the
weights of the transduction rules that implement the XDG grammar. More
ambitiously, one could perhaps extract grammar rules from example translation
pairs, although the XDG parse graphs would have to be provided by an expert,
for each layer in the analysis. This could be done either simply on demand,
when the existing grammar fails to parse and translate a sentence, or using
active learning to select sentences for human annotation.
One problem not addressed at all in the literature that I have seen is how to
translate, either into or out of, morphologically rich languages using tree
transducers. It seems as though morphological analysis and lemmatization would
be an important first step in a transducer-based MT system, to limit the number
of rules that the system needs to consider, but then the morphological
information should be used to help the system make choices during transduction
(decoding). Perhaps morphological features would be useful to classifiers
trained to help make syntactic disambiguation decisions.
\bibliographystyle{acl}
\section{Introduction \label{sec:introduction}}
The H{\sc i} properties of cluster spiral galaxies are significantly different from those
of field spiral galaxies. Spiral galaxies located near the cluster center are often
H{\sc i} deficient (Chamaraux et al. 1980, Bothun et al. 1982,
Giovanelli \& Haynes 1985, Gavazzi 1987, 1989)
and their H{\sc i} disk sizes are considerably reduced (van Gorkom \& Kotanyi 1985, Warmels 1988,
Cayatte et al. 1990, 1994). The observed small H{\sc i} disks together with unperturbed old
stellar disks favor a ram pressure stripping scenario as the origin of the
H{\sc i} deficiency of Virgo cluster spirals.
Fourteen out of the 22 brightest Virgo spiral galaxies show an H{\sc i} deficiency
greater than 0.3 (Cayatte et al. 1994), i.e. they have lost more than half of their
initial reservoir of atomic hydrogen. This represents more than $10^9$~M$_{\odot}$ per
galaxy. Despite the large missing mass, it is very difficult to observe
this stripped gas. In the Virgo cluster an extended gas tail is only observed
in one spiral galaxy.
This exception is NGC~4388 which is located at a projected distance of
1.3$^{\rm o}$ ($\sim 0.4$~Mpc\footnote{We use a distance of 17~Mpc to the Virgo cluster})
from the Virgo cluster center (M87) and hosts a Seyfert 2 nucleus.
Yoshida et al. (2002) discovered a very large H$\alpha$ plume that extends
up to $\sim$35~kpc north eastwards from the galactic disk which is seen almost edge-on.
This region contains $\sim 10^{5}$~M$_{\odot}$ of ionized gas. Vollmer \& Huchtmeier (2003)
were the first to detect atomic hydrogen associated with this plume.
Oosterloo \& van Gorkom (2005) imaged this gas tail with the WSRT and found an
extent of $\sim 100$~kpc and a mass of $3.4 \times 10^8$~M$_{\odot}$.
This represents only about 20\% of the stripped gas assuming an H{\sc i}
deficiency of 0.8 (Cayatte et al. 1994).
These arguments only hold if the spiral galaxy had as much gas as a field galaxy
of the same optical diameter and the same morphology before the ram pressure
stripping event. On the other hand, cluster galaxies might have experienced other gas removing
interactions before ram pressure stripping.
Possible interactions can be divided into two classes (for a review see
Gavazzi \& Boselli 2006): (i) the ``preprocessing''
of spiral galaxies through tidal interactions in infalling galaxy groups
(Mihos 2004, Fujita 2004, Dressler 2004) and (ii) harassment (Moore et al. 1996, 1998),
viscous/turbulent stripping (Nulsen 1982) and/or thermal evaporation (Cowie \& McKee 1977)
which occur once the galaxy resides within the cluster.
These mechanisms reduce the missing mass but do not make it vanish.
We still expect several $10^8$~M$_{\odot}$ of stripped gas which is not detected.
The question thus arises where the stripped gas is hidden and if we can or
should detect it. The underlying problem is the evolution of the multiphase ISM
once it is pushed out of the galactic disk by ram pressure.
Within the starforming disk the ISM is turbulent and consists of several phases:
the hot ionized ($\sim 10^6$~K), the warm neutral and ionized ($\sim 10^4$~K),
and the cold neutral ($\sim 100$~K) phase (see, e.g. Kulkarni \& Heiles 1988,
Spitzer 1990, McKee 1995). The neutral phase is not uniform but of fractal nature
(Elmegreen \& Falgarone 1996). Braun (1997) analyzed the resolved neutral hydrogen emission
properties of the 11 nearest spiral galaxies. He identified the high-brightness network of
H{\sc i} emission features with the cold neutral medium ($T \sim 100$~K)
and found that the bulk of atomic hydrogen is in this cold phase
(between 60\% and 90\%). Braun (1997) also noted that the fractional
line flux due to the cold phase drops
abruptly near the optical radius of a given galaxy. However, beyond this radius
a cool phase still exists in most of the galaxies,
even though it represents only a few percent of the total H{\sc i} flux.
When the ISM is leaving the disk, its heating (kinematical heating by supernova
explosions and heating by stellar radiation) decreases and the whole gas
is surrounded by the hot intracluster medium (ICM).
At the same time ram pressure driven shocks propagate through the ISM.
Initially dense ISM regions might collapse and form stable globules (Vollmer et al. 2001)
or form stars, whereas tenuous regions might expand. The intracluster
medium does not only confine the stripped gas, but also causes evaporation (Cowie \& McKee 1977).
How efficient evaporation is depends on the geometry of the magnetic field
frozen into the ISM. A tangled field increases the evaporation timescale
considerably (Cowie et al. 1981, Malyshkin \& Kulsrud 2001).
Since the magnetic field geometry is not accessible, only direct observations
of the stripped gas at different wavelengths can help us to determine what happens
to the ISM once it has left the galactic disk.
In this article we approach this problem with deep single dish H{\sc i} observations
to search for atomic hydrogen far away from the galactic disks ($>20$~kpc)
and a balance of previous detections of extraplanar gas.
We selected 5 galaxies for which deep interferometric H{\sc i} observations
showed extraplanar gas. The observations are
described in Sect.~\ref{sec:observations} followed by the presentation of the
results (Sect.~\ref{sec:results}). These results are discussed in the framework
of the stripping of a two-phase atomic hydrogen taking into account existing detections of
extraplanar gas (Sect.~\ref{sec:discussion}). Finally, we give our conclusions in
Sect.~\ref{sec:conclusions}.
\section{Observations \label{sec:observations}}
In 2001--2003, we performed 21-cm line observations with the Effelsberg 100-m
telescope at different positions centered on
the systemic velocities of NGC~4402, NGC~4438, NGC~4501, and NGC~4522 with a bandwidth of 12.5~MHz.
The two-channel receiver had a system noise of $\sim$30~K. The 1024 channel autocorrelator
was split into four banks with 256 channels each, yielding a
channel separation of $\sim$10~km\,s$^{-1}$. We further binned the channels to obtain
a final channel separation of $\sim$10~km\,s$^{-1}$ like the archival VLA data which
we use for comparison (NGC~4402, NGC~4501, NGC~4522).
The galaxy's central position and four
positions at a distance of one beam width (9.3$'$) to the NW, SW, SE, and
NE from the galaxy center were observed in on--off mode
(5~min on source, 5~min off source). In addition, we observed a sixth position
$6.5'$ west of the galaxy center for NGC~4438.
\begin{table}
\caption{Integration times and rms.}
\label{tab:table}
\[
\begin{array}{lcccccc}
\hline
\hline
\noalign{\smallskip}
{\rm \bf NGC~4402} & & & & & & \\
\hline
\noalign{\smallskip}
{\rm position} & {\rm C} & {\rm NW} & {\rm W} & {\rm SW} & {\rm SE} & {\rm NE} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\Delta t\ {\rm (min)} & 120 & 120 & - & 120 & 120 & 120 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
{\rm rms\ (mJy)} & 2.6 & 1.8 & - & 2.0 & 1.5 & 3.5 \\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
{\rm \bf NGC~4438} & & & & & & \\
\hline
\noalign{\smallskip}
{\rm position} & {\rm C} & {\rm NW} & {\rm W} & {\rm SW} & {\rm SE} & {\rm NE} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\Delta t\ {\rm (min)} & 120 & 120 & 120 & 120 & 120 & 120 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
{\rm rms\ (mJy)} & 1.8 & 1.7 & 1.8 & 2.2 & 2.2 & 2.2 \\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
{\rm \bf NGC~4501} & & & & & & \\
\hline
\noalign{\smallskip}
{\rm position} & {\rm C} & {\rm NW} & {\rm W} & {\rm SW} & {\rm SE} & {\rm NE} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\Delta t\ {\rm (min)} & 120 & 120 & - & 120 & 120 & 120 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
{\rm rms\ (mJy)} & 1.8 & 1.0 & - & 1.4 & 1.6 & 1.7 \\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
{\rm \bf NGC~4522} & & & & & & \\
\hline
\noalign{\smallskip}
{\rm position} & {\rm C} & {\rm NW} & {\rm W} & {\rm SW} & {\rm SE} & {\rm NE} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\Delta t\ {\rm (min)} & 120 & 120 & - & 120 & 120 & 120 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
{\rm rms\ (mJy)} & 1.2 & 1.8 & - & 1.9 & 1.1 & 1.2 \\
\noalign{\smallskip}
\hline
\hline
\end{array}
\]
\end{table}
Care was taken to avoid other Virgo cluster galaxies with velocities within our bandwidth
in all observations. We used 3C286 for pointing and flux calibration.
The observation time was 120~min per position.
The resulting noise (Table~\ref{tab:table}) is partly determined by small amplitude
interferences, but it is close to the theoretical noise of 2~mJy per hour
of integration: on average $1\sigma=1.5$~mJy (varying from 1.1 to 2.0~mJy).
In addition we observed a field of about $1^{\circ} \times 1^{\circ}$ centered on the H{\sc i}
plume of NGC~4388.
The noise level of these spectra is largely determined by the closeness of M87 which has
a flux density of $\sim 220$~Jy at 1.4~GHz. The noise of our spectra varies between
$1\sigma=2$ and $7$~mJy.
In order to compare our Effelsberg H{\sc i} spectra to interferometric data where the
galaxy is spatially resolved, we use VLA 21~cm data (Crowl et al. 2005, Vollmer et al.
in prep., Kenney et al. 2004).
These data have spatial resolutions of $\sim 20''$ and channel separations
of 10~km\,s$^{-1}$. We clipped the data cubes at a level of 3~mJy/beam
and produced a synthesized single dish spectrum using a Gaussian beam of $9.3'$ HPBW.
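In outline, this synthesis clips the cube, weights each spatial pixel by a
Gaussian beam centered on the galaxy, and sums over the sky for every velocity
channel. The following minimal Python sketch shows the idea; the array layout,
pixel scale, and the Jy/beam-to-Jy conversion factor are illustrative
assumptions, not a description of our actual reduction.
\begin{verbatim}
import numpy as np

def synthesized_spectrum(cube, pix_arcmin, clip=0.003,
                         hpbw=9.3, pix_per_beam=1.0):
    # cube: (nvel, ny, nx) array in Jy/beam; pix_arcmin: pixel size.
    # Clip faint pixels (3 mJy/beam), weight by a Gaussian beam of
    # 9.3' HPBW centered on the cube, and sum over the sky; dividing
    # by the number of pixels per synthesized beam converts to Jy.
    nv, ny, nx = cube.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r2 = ((x - nx / 2.0)**2 + (y - ny / 2.0)**2) * pix_arcmin**2
    sigma = hpbw / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # HPBW -> sigma
    weight = np.exp(-r2 / (2.0 * sigma**2))
    clipped = np.where(cube > clip, cube, 0.0)
    return (clipped * weight).sum(axis=(1, 2)) / pix_per_beam
\end{verbatim}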
\section{Results \label{sec:results}}
\subsection{NGC~4388}
We observed $6 \times 5$ positions centered on the H{\sc i} plume of NGC~4388 (Oosterloo \&
van Gorkom 2005) (Fig.~\ref{fig:n4388ext}). The spectrum centered on NGC~4388 is labeled
with a ``G''. The observing conditions did not permit us
to obtain interference-free spectra of two positions in the north-east of NGC~4388.
In the corresponding boxes we have replaced the spectra by a solid line.
The sinusoidal behaviour of the spectra in the west and the east is most probably due to
sidelobe detection of M87.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{6302fig1.ps}}
\caption{Deep Effelsberg 100-m H{\sc i} spectra of the NGC~4388 H{\sc i} plume.
The spectrum centered on the galaxy is labeled with a ``G''.
We clearly detect H{\sc i} line emission in 4 positions: in NGC~4388 and in 3 positions
to the north west of the galactic disk. These positions are labeled with ``tail''.
The spatial separation between two spectra corresponds to the
beamsize ($9.3'$). We were not able to obtain proper spectra for two
positions north west of the galaxy.
In the corresponding boxes we have replaced the spectra by a solid line.
} \label{fig:n4388ext}
\end{figure*}
We clearly detect H{\sc i} line emission in 4 positions: in NGC~4388 and in 3 positions
to the north west of the galactic disk. These positions are labeled with ``tail''.
One of the spectra shows a double line profile consistent with the WSRT observations
of Oosterloo \& van Gorkom (2005). Our deep H{\sc i} observations show that there is
no significant amount of atomic hydrogen beyond this H{\sc i} tail.
\subsection{NGC~4402}
NGC~4402 is another H{\sc i} deficient edge-on spiral located close to NGC~4388.
Crowl et al. (2005) detected an asymmetric distribution of the 20cm continuum
emission and $2.7 \times 10^7$~M$_{\odot}$ of extraplanar atomic
hydrogen in the north east of the galactic disk.
Both features are consistent with a scenario where ram pressure is responsible for
the compressed radio continuum halo and the extraplanar H{\sc i}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6302fig2.ps}}
\caption{Solid line: Effelsberg 100-m spectrum of the central position.
Dashed line: spectrum of the VLA data (Crowl et al. 2005). Dotted line: 3$\sigma$ noise level of the
100-m spectrum. Heliocentric velocities are given relative to the systemic velocity
of NGC~4402 ($v_{\rm sys}$=235~km\,s$^{-1}$).
} \label{fig:n4402c}
\end{figure}
Our Effelsberg observations of the central position (Fig.~\ref{fig:n4402c})
do not reveal more H{\sc i} than observed with the VLA (Crowl et al. 2005). We do not find
any H{\sc i} in the offset positions (Fig.~\ref{fig:n4402eff}).
The gap in the spectrum around zero radial velocity is due to galactic H{\sc i} emission.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6302fig3.ps}}
\caption{Solid lines: Effelsberg 100-m spectra of the four off-center positions.
Their locations with respect to the galaxy center are marked on top of each panel.
Dashed line: synthesized VLA spectra (Crowl et al. 2005), which only show H{\sc i} disk emission.
Dotted line: 3$\sigma$ noise levels of the Effelsberg spectra.
Radial velocities are given relative to the systemic velocity of NGC~4402.
} \label{fig:n4402eff}
\end{figure}
\subsection{NGC~4438}
NGC~4438 has a strongly perturbed stellar disk with a prominent stellar tidal arm
pointing to the north. This perturbation is due to a rapid and close gravitational
interaction with its companion S0 galaxy NGC~4435 (Combes et al. 1988, Vollmer et al. 2005).
The gas disk is heavily truncated and almost exclusively molecular.
Extraplanar CO with a mass of $\sim 5 \times 10^8$~M$_{\odot}$ is detected to the west of
the galactic disk. Detailed modelling
of the interaction including the gravitational interaction, ram pressure, and
an ISM-ISM collision showed that ram pressure (together with the tidal
interaction) is the most important ingredient
to reproduce the observed CO emission distribution and kinematics (Vollmer et al. 2005).
With our deep Effelsberg observations, linearly interpolated where galactic
H{\sc i} emission dominates, we detect a total H{\sc i} mass of
$\sim 6 \times 10^8$~M$_{\odot}$ (Fig.~\ref{fig:n4438c}).
Since there is only a small fraction of the total H{\sc i} emission detected in
interferometric radio observations
(Cayatte et al. 1990, Hibbard et al. 2001), we compare our deep Effelsberg H{\sc i}
spectrum with the integrated CO spectrum of Vollmer et al. (2005) where the
extraplanar gas is included. We find a remarkable resemblance between the
two spectra tracing the molecular and the atomic hydrogen.
The only differences are that (i) the ratio between the H{\sc i} peak flux density
at $v=0$~km\,s$^{-1}$ and $v>0$~km\,s$^{-1}$ is larger than that of the
CO data and (ii) the CO peak at $v>0$~km\,s$^{-1}$ extends further to high velocities.
This might indicate that both gas phases are well mixed. However, without
deep interferometric H{\sc i} observations, which are still lacking
due to the closeness of M87, it is not possible to investigate the
H{\sc i} -- H$_2$ connection in more detail.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6302fig4.ps}}
\caption{Solid line: Effelsberg 100-m spectrum of the central position.
Dashed line: CO(1--0) spectrum from Vollmer et al. (2005) in arbitrary units.
Dotted line: 3$\sigma$ noise level of the
100-m spectrum. Heliocentric velocities are given relative to the systemic velocity
of NGC~4438 ($v_{\rm sys}$=-70~km\,s$^{-1}$).
} \label{fig:n4438c}
\end{figure}
We do not find any significant H{\sc i} in the offset positions (Fig.~\ref{fig:n4438eff}).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6302fig5.ps}}
\caption{Solid lines: Effelsberg 100-m spectra of the five off-center positions.
Their locations with respect to the galaxy center are marked on top of each panel.
Dotted line: 3$\sigma$ noise levels of the Effelsberg spectra.
Radial velocities are given relative to the systemic velocity of NGC~4438.
} \label{fig:n4438eff}
\end{figure}
\subsection{NGC~4501}
The spiral galaxy NGC~4501 has an H{\sc i} disk which is truncated close to the
optical radius $R_{25}$. In addition, the H{\sc i} surface density is
enhanced in the south western side of the galactic disk (Cayatte et al. 1990, 1994).
In a forthcoming paper (Vollmer et al., in prep.) we will show that
this galaxy is in a pre-peak stripping stage, i.e. it is approaching the
cluster center and ram pressure will reach its maximum in about 100~Myr.
Our deep Effelsberg H{\sc i} observations of the central position (Fig.~\ref{fig:n4501c})
do not reveal more H{\sc i} than observed with the VLA (Vollmer et al., in prep.).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6302fig6.ps}}
\caption{Solid line: Effelsberg 100-m spectrum of the central position.
Dashed line: spectrum of the VLA data (Vollmer et al. in prep.).
Dotted line: 3$\sigma$ noise level of the
100-m spectrum. Heliocentric velocities are given relative to the systemic velocity
of NGC~4501 ($v_{\rm sys}$=2281~km\,s$^{-1}$).
} \label{fig:n4501c}
\end{figure}
The single dish Effelsberg spectra of the offset positions show a possible
additional H{\sc i} flux density compared to the VLA data in the north-east
of the galactic disk (Fig.~\ref{fig:n4501eff}). This extra emission
corresponds to an H{\sc i} mass of $\sim 10^7$~M$_{\odot}$.
This is consistent with the pre-peak stripping scenario.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6302fig7.ps}}
\caption{Solid lines: Effelsberg 100-m spectra of the four off-center positions.
Their locations with respect to the galaxy center are marked on top of each panel.
Dashed line: synthesized VLA spectra, which only show H{\sc i} disk emission.
Dotted line: 3$\sigma$ noise levels of the Effelsberg spectra.
Radial velocities are given relative to the systemic velocity of NGC~4501.
} \label{fig:n4501eff}
\end{figure}
\subsection{NGC~4522}
NGC~4522 is one of the best examples for ongoing ram pressure stripping.
H{\sc i} and H$\alpha$ observations (Kenney et al. 2004, Kenney \& Koopmann 1999) showed a
heavily truncated gas disk at a radius of 3~kpc, which is $\sim 40$\% of the optical radius,
and a significant amount of extraplanar gas to the west
of the galactic disk ($1.5 \times 10^8$~M$_{\odot}$).
The one-sided extraplanar atomic gas distribution shows high column
densities, comparable to those of the adjacent galactic disk.
A strong ram pressure scenario can account for the truncated gas disk and the western
extraplanar gas. Further evidence for such a peak ram pressure
scenario comes from polarized radio continuum observations (Vollmer et al. 2004b).
The 6~cm polarized emission is located at the eastern edge of the galactic disk, opposite to the
western extraplanar gas. This ridge of polarized radio continuum emission is most likely
due to ram pressure compression of the interstellar medium (ISM) and its magnetic field.
In addition, the degree of polarization decreases from the east to the west and the flattest
spectral index between 20~cm and 6~cm coincides with the peak of the 6~cm polarized emission.
These findings together with a detailed dynamical model (Vollmer et al. 2006)
are consistent with a scenario where ram pressure is close to its maximum.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6302fig8.ps}}
\caption{Solid line: Effelsberg 100-m spectrum of the central position.
Dashed line: spectrum of the VLA data of Kenney et al. (2004). Dotted line: 3$\sigma$
noise level of the 100-m spectrum.
Heliocentric velocities are given relative to the systemic velocity
of NGC~4522 ($v_{\rm sys}$=2324~km\,s$^{-1}$).
} \label{fig:n4522c}
\end{figure}
Our Effelsberg observations of the central position (Fig.~\ref{fig:n4522c})
do not reveal more H{\sc i} than observed with the VLA (Kenney et al. 2004). We do not find
any H{\sc i} in the offset positions (Fig.~\ref{fig:n4522eff}).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{6302fig9.ps}}
\caption{Solid lines: Effelsberg 100-m spectra of the four off-center positions.
Their locations with respect to the galaxy center are marked on top of each panel.
Dashed line: synthesized VLA spectra (Kenney et al. 2004), which only show H{\sc i} disk emission.
Dotted line: 3$\sigma$ noise levels of the Effelsberg spectra.
Radial velocities are given relative to the systemic velocity of NGC~4522.
} \label{fig:n4522eff}
\end{figure}
\subsection{NGC~4569}
Deep VLA and Effelsberg H{\sc i} data together with a dynamical model of this galaxy are
presented in Vollmer et al. (2004a). They discovered a low surface density H{\sc i} arm
in the west of the galaxy, whose velocity field is distinct from that of the overall disk rotation.
No H{\sc i} emission was detected in the Effelsberg H{\sc i} off center observations.
A post-stripping scenario is consistent with the main observed characteristics of NGC~4569.
In this scenario the galaxy's closest approach to the cluster center, i.e. peak ram
pressure, occurred $\sim 300$~Myr ago.
\section{Where did the ram pressure stripped gas go? \label{sec:discussion}}
With our sample of 6 galaxies we can now investigate how much of the missing stripped
gas mass we detect in H{\sc i}.
The first result is that we do not detect a significant
amount of atomic hydrogen at distances greater than 20~kpc in any of these galaxies
except in NGC~4388 (see Sec.~\ref{sec:n4388}).
For the determination of a gas detection rate we need to know
the expected atomic gas mass of a non-H{\sc i} deficient galaxy.
With the H{\sc i} deficiency and the total observed H{\sc i}
mass we can estimate the expected initial H{\sc i} mass of a spiral galaxy before it entered
the Virgo cluster (table~\ref{tab:masses}).
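Since the H{\sc i} deficiency is the logarithm of the ratio between the
expected and the observed H{\sc i} mass, the expected initial mass follows
directly from
\begin{equation}
M_{\rm HI}^{\rm expected} = M_{\rm HI}^{\rm total} \times 10^{\rm def}\ ;
\end{equation}
for NGC~4388, e.g., ${\rm def}=0.8$ and $M_{\rm HI}^{\rm total}=3.6 \times 10^{8}$~M$_{\odot}$
yield $M_{\rm HI}^{\rm expected} \simeq 23 \times 10^{8}$~M$_{\odot}$
(Table~\ref{tab:masses}).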
As discussed in Sec.~\ref{sec:introduction} this depends on whether the galaxy
experienced a gas removing interaction before the ram pressure stripping event.
Therefore we calculate the initial H{\sc i} mass for a ``normal'' field galaxy and
a galaxy that has already lost a significant amount of gas due to tidal
interactions, turbulent/viscous stripping, and/or evaporation.
\subsection{Initially ``normal'' gas rich spiral galaxies}
In general, the uncertainty of H{\sc i} deficiencies is about $\pm 0.2$.
We use the H{\sc i} deficiencies of Crowl et al. (2005) and Kenney et al. (2004)
which are consistent with those of Cayatte et al. (1994) within the errors.
If we assume that the missing (non-detected) gas has been evaporated by the
hot intracluster medium, we can estimate the evaporation rate by dividing
the missing mass by a characteristic timescale.
For this timescale we chose the time to ram pressure peak when most of
the gas leaves the galaxy.
Detailed comparison between observations and numerical modelling of
these galaxies showed that we observe NGC~4388 $\sim 100$~Myr after peak ram pressure
(Vollmer \& Huchtmeier 2003), NGC~4402 more than several 100~Myr after peak ram pressure
(Crowl et al. 2005), NGC~4438 is near peak ram pressure (Vollmer et al. 2005),
NGC~4501 is $\sim 100$~Myr before peak ram pressure (Vollmer et al. in prep.), NGC~4522 is near
peak ram pressure (Vollmer et al. 2006), and NGC~4569 $\sim 300$~Myr after peak ram pressure
(Vollmer et al. 2004a).
Table~\ref{tab:masses} summarizes the data for all galaxies: the time to peak ram pressure (col.(2)),
the H{\sc i} deficiency (col.(3)),
the observed extraplanar H{\sc i} gas mass (col.(4); except for NGC~4438 where
we have taken the extraplanar CO mass), the observed total H{\sc i} mass (col.(5)), the
expected H{\sc i} mass based on the H{\sc i} deficiency (col.(6)), the percentage of
extraplanar gas to the missing gas mass, i.e. the difference between the observed
and the expected initial gas mass (col.(7)), the expected H{\sc i} mass assuming an initial
H{\sc i} deficiency of 0.4 (col.(8)), the percentage of
extraplanar gas to the missing gas mass assuming an initial H{\sc i} deficiency of 0.4 (col.(9)),
and the estimated evaporation rate (Eq.~\ref{eq:evap}; col.(10)).
\begin{table*}
\caption{Galaxy gas properties.}
\label{tab:masses}
\[
\begin{array}{|l|c|c|c|c|c|c|c|c|c|}
\hline
{\rm name} & {\rm time\ to\ ram} & {\rm HI\ def} & {\rm M}^{\rm extra}_{\rm HI} & {\rm M}^{\rm total}_{\rm HI} & {\rm M}^{\rm expected}_{\rm HI} & \%{\rm \ of} & {\rm M}^{\rm expected}_{\rm HI} & \%{\rm \ of} & {\rm evaporation} \\
&{\rm pressure\ max.} & & & & &{\rm missing} & {\rm def=0.4} & {\rm missing} & {\rm rate} \\
&({\rm Myr}) & &(10^8{\rm M}_{\odot}) &(10^8{\rm M}_{\odot}) &(10^8{\rm M}_{\odot}) & {\rm mass} &(10^8{\rm M}_{\odot}) & {\rm mass} & {\rm M}_{\odot}/{\rm yr} \\
\hline
{\rm NGC~4388} & 100^h &0.8^a & 3.4^b & 3.6^a & 23 & 18 & 9.2 & 61 & 5.6 \\
\hline
{\rm NGC~4402} & >300^i &0.5^c & 0.3^c & 4.4^c & 14 & 3 & 5.6 & 23 & <0.4 \\
\hline
{\rm NGC~4438} & 10^j &0.9^d & 5.0^e & 6.0^f & 48 & 12 & 19 & 38 & 130 \\
\hline
{\rm NGC~4501} & -100^k &0.5^a & <0.1^f & 17^a & 54 & <0.3 & 21 & 3 & - \\
\hline
{\rm NGC~4522} & 50^l &0.6^g & 1.7^g & 4.3^g & 17 & 14 & 6.8 & 68 & 5.0\\
\hline
{\rm NGC~4569} & 300^m & 1.2^a & 0.5^m & 6.0^m & 95 & 0.6 & 38 & 1.6 & 10.7\\
\hline
\end{array}
\]
\begin{list}{}{}
\item[$^a$] Cayatte et al. (1994)
\item[$^b$] Oosterloo \& van Gorkom (2005)
\item[$^c$] Crowl et al. (2005)
\item[$^d$] We took the HI deficiency of NGC~4579 which has about the same B and H band magnitudes and
morphological type. As NGC~4438, it also shows a highly truncated gas disk with a very small
amount of H{\sc i}.
\item[$^e$] CO gas mass from Vollmer et al. (2006)
\item[$^f$] from this paper
\item[$^g$] Kenney et al. (2004)
\item[$^h$] Vollmer \& Huchtmeier (2003)
\item[$^i$] Based on the low mass and low surface brightness extraplanar H{\sc i}.
\item[$^j$] Vollmer et al. (2005)
\item[$^k$] Vollmer et al., in prep.
\item[$^l$] Vollmer et al. (2006)
\item[$^m$] Vollmer et al. (2004a)
\end{list}
\end{table*}
The percentage of extraplanar gas mass with respect to the expected gas mass
assuming an initially non-H{\sc i} deficient galaxy varies
between 0.3\% and 20\%. Thus, more than 80\% of the missing gas is undetectable in H{\sc i}.
\subsection{Initially gas deficient spiral galaxies}
The assumption of an initially non-H{\sc i} deficient galaxy is however questionable.
We argue that the interplay between ram pressure and evaporation of the galaxy's ISM
by the hot intracluster medium might be different between (i) the inner gas disk where star
formation occurs and the gas is clumpy and multiphase and (ii) the outer gas
disk where the atomic hydrogen is mainly warm ($T \sim 10^4$~K) and smoothly distributed
(see e.g. Braun 1997). In the inner star forming disk ($R < R_{25}$) the gas
is turbulent and clumpy giving rise to a tangled magnetic field which suppresses evaporation
(Cowie et al. 1981, Malyshkin \& Kulsrud 2001).
On the other hand, due to the kinematical quietness and the smoothness of the
outer gas disk, the magnetic field there is expected to be less tangled leading
to a much more efficient evaporation of the warm H{\sc i}.
Because of this effect together with the fact that the gas surface density decreases
with increasing galactic radius in the outer gas disk, the outer gas is much more
vulnerable to evaporation, harassment, turbulent/viscous stripping and ram pressure.
We therefore argue that the outer gas disk is
stripped and evaporated at much larger distances from the cluster center than
one would expect from the Gunn \& Gott criterion (Gunn \& Gott 1972).
However, an early gas removal by ``preprocessing'' is also possible.
Based on this argument, we can recalculate the expected initial gas mass assuming
that the gas disk is truncated at the optical radius ($R_{25}$).
Spiral galaxies which show this property in the Cayatte et al. (1994) sample have
a mean H{\sc i} deficiency of 0.4. We thus recalculated the expected atomic gas mass
assuming this initial H{\sc i} deficiency (Table~\ref{tab:masses} col.(8)) and the
percentage of extraplanar gas mass with respect to the expected gas mass
(Table~\ref{tab:masses} col.(9)). These percentages lie between 3\% and 70\%.
The galaxies that we observe close or up to 100~Myr after peak ram pressure
still have about half of the stripped gas in neutral form.
As a final step we can estimate an evaporation rate by
\begin{equation}
\dot{M}_{\rm evap} \sim (M_{\rm HI}^{\rm def=0.4} - M_{\rm HI}^{\rm total})/t_{\rm rps}\ ,
\label{eq:evap}
\end{equation}
where $M_{\rm HI}^{\rm def=0.4}$ is the expected H{\sc i} mass assuming an initial
H{\sc i} deficiency of 0.4, $M_{\rm HI}^{\rm total}$ is the observed total H{\sc i}
mass (from table~\ref{tab:masses}), and $t_{\rm rps}$ is the time to peak ram pressure.
This evaporation rate can be found in Table~\ref{tab:masses} col.(10).
We took here the time to peak ram pressure as the characteristic timescale.
Because it is not possible to determine if evaporation happened faster,
this timescale is an upper limit and, consequently, the derived evaporation
rate represents a lower limit.
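As a consistency check, equation~(\ref{eq:evap}) applied to NGC~4522 gives
$(6.8-4.3) \times 10^{8}~{\rm M}_{\odot} / (5 \times 10^{7}~{\rm yr}) = 5.0$~M$_{\odot}$yr$^{-1}$,
the value listed in col.(10) of Table~\ref{tab:masses}.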
\subsection{Evaporation rates}
Since NGC~4501 is in a pre-peak ram pressure phase, we cannot estimate
an evaporation timescale. It is intriguing that the three galaxies which are
only affected by ram pressure and which are observed less than 400~Myr after
peak ram pressure show the same evaporation rate of about
$\dot{M}_{\rm evap} \sim 5-11$~M$_{\odot}$yr$^{-1}$.
The case of NGC~4438 is complicated, because of the unknown H{\sc i}
deficiency, the additional tidal and
ISM-ISM interactions, and a possible phase transition of the displaced gas,
but since ram pressure has the greatest effect on its ISM the derived
evaporation rate might still be valuable.
The analytical estimate of the classical evaporation rate of a spherical gas cloud
by Cowie \& McKee (1977) is
\begin{equation}
\dot{M}_{\rm evap} = 4.34 \times 10^{-22} T_{\rm ICM}^{\frac{5}{2}}R_{\rm pc}(30/\ln \Lambda)\ {\rm M_{\odot}yr^{-1}}\ .
\end{equation}
For a cloud size of the order of the disk height $R_{\rm pc}=500$~pc, an ICM temperature
of $T_{\rm ICM}=3 \times 10^7$~K, and a Coulomb logarithm of $\ln \Lambda = 30$,
one finds $\dot{M}_{\rm evap} \sim 1$~M$_{\odot}$yr$^{-1}$.
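Explicitly, $4.34 \times 10^{-22} \times (3 \times 10^{7})^{5/2} \times 500 \times (30/30) \simeq 1.1$~M$_{\odot}$yr$^{-1}$.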
About 1000 clouds are necessary to fill an H{\sc i} disk of constant height $H=500$~pc between
$R=5$~kpc and $R=10$~kpc. If these clouds have a density of $n = 1$~cm$^{-3}$ the resulting
gas mass is about $1.5 \times 10^9$~M$_{\odot}$. If 10\% of the surface of these clouds are
surrounded by the hot intracluster medium (this depends on the tail geometry)
the resulting total evaporation rate
is $\dot{M}_{\rm evap}^{\rm tot} \sim 100$~M$_{\odot}$yr$^{-1}$.
Our derived evaporation rate of NGC~4438 is close to that value.
This might imply that the evaporation timescale is short (of the order of 10~Myr).
If we assume this short timescale for NGC~4388 and NGC~4522 we obtain evaporation
rates of 56~M$_{\odot}$yr$^{-1}$ and 25~M$_{\odot}$yr$^{-1}$, which are close to the
analytical estimate.
\subsection{Amount of detected stripped gas}
The percentage of the observed gas mass varies between 23\% and 68\% of the expected gas mass assuming an
initial H{\sc i} deficiency of 0.4 for the galaxies which are stripped inside the optical radius
(table~\ref{tab:masses} col.(9)).
We argue that this corresponds to the fraction of cold H{\sc i} in the galactic disk before
it was stripped. This is entirely consistent with the results based on the deep H{\sc i}
observations of nearby undisturbed gas disks reported by Braun (1997; see Sec.~\ref{sec:introduction}).
We propose a scenario
where the diffuse warm ($T \sim 8000$~K) H{\sc i} evaporates rapidly and the cold
($T \sim 100$~K) H{\sc i} resists much longer and can still be observed 100~Myr
after its removal from the galactic disk.
This scenario is also consistent with the H{\sc i} observations and simulations of NGC~4522
(Kenney et al. 2004, Vollmer et al. 2006) where a low surface brightness H{\sc i} component
is detected with a large linewidth ($\sim 100$~km\,s$^{-1}$). Vollmer et al. (2006)
interpreted this component as diffuse warm H{\sc i} which is stripped more efficiently
than the cold dense H{\sc i}. Here we add the effect of evaporation to this picture
which might be partly responsible for the increased stripping efficiency.
\subsection{The exception: NGC~4388 \label{sec:n4388}}
NGC~4388 is the only galaxy where extraplanar low surface density gas is detected
at distances larger than 10~kpc from the galactic disk. Oosterloo \& van Gorkom (2005)
speculated that the H{\sc i} tail did not evaporate, because NGC~4388 was stripped
by the ICM of M86. This ICM is less dense and maybe cooler than that of the Virgo
cluster, i.e. M87, and therefore evaporation is slower. However, M86 has a negative
absolute velocity and lies far behind M87 (more than 1~Mpc, see, e.g., Vollmer et al. 2004c).
On the other hand, NGC~4388 is located close to M87 ($D \sim 2000$~km\,s$^{-1} \times 100$~Myr
$\sim 0.1$~Mpc). The deprojected tail length in a putative M86 stripping scenario would be
$\sim 1$~Mpc, which requires an unreasonably large evaporation time.
The most straightforward explanation, which is consistent with our findings, is that NGC~4388 is the only
galaxy that we are observing in an evolutionary stage long enough after peak ram pressure
to show an extended gas tail and short enough after peak ram pressure so that the
tail is not yet evaporated. This time window has a width of $\sim 200$~Myr compared to
a cluster crossing time of $\sim 3$~Gyr explaining the low probability to observe a
galaxy in this evolutionary stage.
\section{Conclusions \label{sec:conclusions}}
We made deep H{\sc i} observations with the Effelsberg 100-m telescope around 5 H{\sc i} deficient
Virgo spiral galaxies to search for neutral gas located far away from the galactic disks
(more than 20~kpc). These galaxies are or were all affected by ram pressure stripping.
The following results were obtained:
\begin{itemize}
\item
we did not detect H{\sc i} emission far away from NGC~4402, NGC~4438, NGC~4501, and NGC~4522;
\item
the already known H{\sc i} tail in the north of NGC~4388 does not extend further than
the WSRT image of Oosterloo \& van Gorkom (2005) has shown;
\item
the H{\sc i} tail of NGC~4388 seems thus to be an exception.
\end{itemize}
Based on the absence of H{\sc i} tails in these galaxies and a balance of previous
detections of extraplanar gas in the targeted galaxies we propose a global picture
where the outer gas disk (beyond the optical radius $R_{25}$) is evaporated/stripped
much earlier than expected by the classical ram pressure criterion (Gunn \& Gott 1972).
The key ingredient for this argument is the two-phase nature of the atomic hydrogen.
In the inner disk ($R<R_{25}$) cold and warm H{\sc i} coexist, whereas in the outer
disk the atomic gas is mostly warm (Braun 1997). The cold H{\sc i} is located
near star forming regions and might be stirred by supernova explosions.
This dynamical stirring causes a tangled magnetic field in the dense cold H{\sc i}
clouds which inhibits their evaporation by the hot intracluster medium once they
are pushed out of the galactic disk by ram pressure. We further argue that the
warm diffuse H{\sc i} is evaporated and stripped rapidly with an evaporation
rate between 10 and 100~M$_{\odot}$yr$^{-1}$.
After a ram pressure stripping event we therefore can only observe the fraction of the
ISM which was in form of dense cold clouds before it was removed from the galactic disk.
More observations are needed to test our scenario of the stripping of a two-phase atomic
hydrogen.
\begin{acknowledgements}
Based on observations with the 100-m telescope of the MPIfR (Max-Planck-Institut f\"{u}r
Radioastronomie) at Effelsberg.
\end{acknowledgements}
\section{Introduction}
The first extra-Solar X-ray source discovered was the low-mass X-ray
binary Sco~X-1 \citep{Giacconi:1962a}. Its optical counterpart,
V818~Sco, was discovered by \citet{Sandage:1966a}, paving the way for
many subsequent multiwavelength studies. The binary period is widely
accepted to be 18.9\,hr based on the discovery of a photometric
modulation by \citet{Gottlieb:1975a} and spectroscopic confirmation by
\citet{Cowley:1975a}. We now know that Sco~X-1 contains a low-mass
late-type donor transferring mass onto a neutron star at a rather high
rate. The modulation arises from X-ray heating of the donor star,
which also manifests as narrow emission lines of N\,{\sc iii} and
C\,{\sc iii} moving in phase with the donor star
\citep{Steeghs:2002a}.
\citet{Gottlieb:1975a} obtained the period of
$0.787313\pm0.000001$\,days quite remarkably by examining archival
photographic plates from 1889 to 1974. A sinusoidal modulation of
full amplitude around 0.2--0.3\,mag was found in several independent
datasets, with considerable scatter around the mean curve
\citep{Gottlieb:1975a,Wright:1975a}. While the long baseline of
photographic observations defined the period to incredible precision,
the sparse sampling left a plethora of aliases, and
\citet{Gottlieb:1975a} identified strong signals at one-day,
one-month, and one-year aliases of their favored period. Of these,
the one-year alias has been by far the hardest to reject. Several
subsequent photometric studies reproduced the modulation, but none
improved the ephemeris or resolved the one-year alias issue
\citep{vanGenderen:1977a,Augusteijn:1992a}.
Spectroscopic confirmation of this period was suggested by
\citet{Gottlieb:1975a} and \citet{Wright:1975a}, and demonstrated
conclusively by \citet{Cowley:1975a}, who found a
period of $0.787\pm0.006$\,days, and again by
\citet{LaSala:1985a}. Both of these works performed a period search on
the data, but in both cases the frequency resolution was limited by
only observing over a baseline of a week. Other spectroscopic
analyses of these and other data have also found variations at this
period \citep{Crampton:1976a,Bord:1976a,Steeghs:2002a}, but no other
groups have performed a rigorous independent period search.
Several groups also searched for the orbital period in X-ray data,
with initially no success
\citep{Holt:1976a,Coe:1980a,Priedhorsky:1987a,Priedhorsky:1995a}. The
only positive detection of an orbital period in X-rays came from
\citet{Vanderlinde:2003a} based on a multi-year {\it RXTE}/ASM
dataset. They did not find exactly the \citet{Gottlieb:1975a} period,
but instead the one-year alias (0.78893\,days) with a modulation
around 1\,\%. Given the intensive multi-year coverage of {\it RXTE}
this is surprising, since this dataset should not be susceptible to
the one-year alias problem. \citet{Vanderlinde:2003a} therefore
claimed that their period was the true orbital period and that
\citet{Gottlieb:1975a} had misidentified the alias. While this result
was tantalizing, \citet{Levine:2011a} could not reproduce this period
using a larger {\it RXTE} dataset. They did, however, not use as
sophisticated an analysis as \citet{Vanderlinde:2003a}, leaving open
the possibility that the X-ray period could be real.
Surprisingly, then, fifty years after discovery of the prototypical
LMXB Sco~X-1, there remain doubts about its most fundamental
parameter, the orbital period. While the original optical ephemeris
of \citet{Gottlieb:1975a} has remained the standard reference for the
37\,years since its publication, it remains to be resolved whether
this, or the X-ray period of \citet{Vanderlinde:2003a}, is the true
orbital period. To attempt to resolve these questions, and update the
ephemeris of Sco~X-1 with modern data, we examine here archival
photometry from the All Sky Automated Survey (ASAS). This nine year
dataset has both the long baseline to determine a precise period, and
coverage of a large enough fraction of a year to finally break the
one-year alias problem using optical data.
\section{Observations}
\label{DataSection}
The All Sky Automated Survey (ASAS) monitored Sco~X-1 from 2001 to
2009 \citep{Pojmanski:2002a}. We note that while Sco~X-1 was not
included in the ASAS Catalog of Variable Stars (ACVS), its photometry
is in the ASAS-3 Photometric $V$ Band Catalog in two datasets,
161955--1538.4 and 161955--1538.5. The Sco X-1 datasets include 640
observations from 2001 January 22 to 2009 October 5. With multiyear
coverage spanning typically about 270\,days of the year, it is ideally
suited for obtaining an updated ephemeris and breaking the one-year
alias.
We performed our analysis for a range of choices of data grades and
apertures to optimize our filter criteria. For final analysis, we
retained the 567 grade A or B observations, and used the smallest ASAS
aperture. Inclusion of grade C or worse data, or use of larger
aperture data, significantly reduced the quality of the fits.
\section{Ephemeris}
\label{PeriodSection}
To determine the orbital period we performed a sinusoidal fit to the
data points. Since the scatter around the model is dominated by
intrinsic flickering rather than photometric uncertainties, we
assigned a mean uncertainty of 0.30\,mag to each point to represent
the flickering. This was chosen to produce a minimum $\chi^2$ equal
to the number of degrees of freedom. We then evaluated sinusoidal
fits over a range of trial periods. For each period the best-fitting
mean magnitude, amplitude, and phasing were determined using the
downhill simplex algorithm \citep{Nelder:1965a}. We show the results
in the vicinity of the disputed periods in Fig.~\ref{PeriodFig}.
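In outline, the fit at each trial period is a three-parameter minimization
over mean magnitude, amplitude, and phase. The following minimal sketch uses
SciPy's Nelder--Mead implementation of the downhill simplex; the variable
names, and the use of SciPy rather than our actual fitting code, are
illustrative assumptions (\texttt{t} and \texttt{mag} denote the ASAS times
and magnitudes).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def chi2_at_period(t, mag, period, err=0.30):
    # err = 0.30 mag models the intrinsic flickering, chosen so
    # that the minimum chi^2 roughly equals the degrees of freedom.
    def chi2(p):
        mean, amp, phi0 = p
        model = mean + amp * np.sin(2 * np.pi * t / period + phi0)
        return np.sum(((mag - model) / err)**2)
    guess = [np.mean(mag), 0.13, 0.0]   # half-amplitude ~0.13 mag
    return minimize(chi2, guess, method="Nelder-Mead").fun

# Scan trial periods (days) around the disputed values:
# curve = [chi2_at_period(t, mag, P)
#          for P in np.arange(0.784, 0.792, 1e-5)]
\end{verbatim}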
\begin{figure}
\includegraphics[angle=90,width=3.5in]{fig1.ps}
\caption{$\chi^2$ as a function of trial period for sinusoidal fits to
ASAS data. We show the \citet{Gottlieb:1975a} period of
0.787313\,days and the \citet{Vanderlinde:2003a} period of
0.78893\,days for comparison. We also show calculated one-year
aliases of the preferred period at 0.78562 and 0.78901\,days. The
\citet{Gottlieb:1975a} period is strongly favored by the ASAS data. While
some signal is seen at the alternative periods, all are rejected at
greater than 5-$\sigma$ confidence.}
\label{PeriodFig}
\end{figure}
We see that the \citet{Gottlieb:1975a} period is reproduced exactly to
within the limits of our frequency resolution. Our formal best period
is $0.787313\pm0.000015$\,days. The uncertainty quoted is a formal
1-$\sigma$ error determined from the $\Delta\chi^2=1$ confidence range
in period. We verified the uncertainty using the bootstrap method
with 30 resamplings of the data. This gave a consistent 1-$\sigma$
uncertainty ($1.6\times10^{-5}$). We also show the period of
\citet{Vanderlinde:2003a}, and the one-year aliases with which they
associated it. We find that none of these alternatives are consistent
with the ASAS data, and all can be rejected at better than 5-$\sigma$
confidence. We therefore cannot directly improve on the period of
\citet{Gottlieb:1975a} using the ASAS data, which is not surprising, as
that ephemeris was based on data spanning a nearly hundred-year
baseline. We can,
however, overcome the limitation of that dataset in its vulnerability
to one-year aliases, as the ASAS data have much wider coverage within a
year.
Using the same $\chi^2$ approach, we determine a mean time of minimum
of $2453510.329\pm0.017$. This corresponds to an offset very close
to 17057 cycles from the time of minimum of \citet{Gottlieb:1975a}.
If we project their time of minimum forwards we predict
$2453510.328\pm0.024$, with equal contributions to the uncertainty
from their time of minimum (quoted as 0.022 cycles) and their period
($10^{-6}$\,days). Our time of minimum is completely consistent with
theirs (a remarkable testament to the accuracy of their historical
ephemeris), but our measurement of the time of minimum is somewhat
better constrained for use with modern data.
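For concreteness, the quoted prediction uncertainty follows from
standard error propagation over the $N=17057$ intervening cycles
(assuming the two error terms are independent):
$$
\sigma_{\rm pred}=\sqrt{(0.022\,P)^2+(N\,\sigma_P)^2}
=\sqrt{(0.0173)^2+(0.0171)^2}\simeq 0.024\ \mbox{days},
$$
with $P=0.787313$\,days and $\sigma_P=10^{-6}$\,days.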
Finally, we show in Figure~\ref{LightcurveFig} the ASAS lightcurve folded on
our derived ephemeris, together with the best fitting sine wave. The
mean $V$ band brightness is 12.63, and the full amplitude is 0.26\,mag,
comparable to that found by \citet{Gottlieb:1975a} and
\citet{Wright:1975a}.
\begin{figure}
\includegraphics[angle=90,width=3.5in]{fig2.ps}
\caption{Folded and phase-binned ASAS lightcurve of Sco~X-1. The
data have been grouped into 50 phase bins and plotted twice.
Errorbars are empirical and indicate the error on the mean of each
bin.  The model plotted is the best-fitting sine wave determined in
Section~\ref{PeriodSection}.}
\label{LightcurveFig}
\end{figure}
\section{Discussion}
We have established that in optical photometry the 0.787313\,day
period produces a stable modulation over 120\,years of observation.
The ephemeris of \citet{Gottlieb:1975a} reliably and precisely
predicts the time of minimum in the ASAS data, over 17,000
intervening cycles. It is hard to imagine any clock other than the
orbital period providing this stability; this must be the true
orbital period.
The question then arises as to what, if anything,
\citet{Vanderlinde:2003a} detected. We should of course allow for the
possibility that it was a spurious detection until it can be
reproduced with data from
the remainder of the {\it RXTE} mission. \citet{Levine:2011a} failed
to reproduce it, but also did not use all the techniques that
\citet{Vanderlinde:2003a} used. Associating it with an alias of the
true orbital period seems unlikely, as {\it RXTE}/ASM data on Sco~X-1
are rather well sampled through the year (just as ASAS data are).
One possible explanation is that the X-ray signal arose at the beat
frequency between the orbital period and a superorbital period of
around a year. Many X-ray binaries have indeed shown super-orbital
periods of tens to hundreds of days \citep[see e.g.][]{Charles:2008a},
although typically all are shorter than a year. The only claim of
such a long period in Sco~X-1 came from early {\it RXTE}/ASM data,
from which \citet{Peele:1996a} suggested a 37\,day period. This
detection has not been sustained in subsequent data, and no
super-orbital period was found by \citet{Farrell:2009a} in {\it
Swift}/BAT data. On longer timescales, \citet{Durant:2010a} and
\citet{Kotze:2010a} both independently suggested a $\sim9$\,year X-ray
modulation is present in {\it RXTE}/ASM data, although this is too
long to account for the \citet{Vanderlinde:2003a} period. This
explanation therefore seems unlikely, and it remains to be seen if the
X-ray period can be reproduced from the full {\it RXTE} mission-long
dataset.
\section{Conclusions}
We have analyzed ASAS data of Sco~X-1 spanning nine years. We can
confirm the period of \citet{Gottlieb:1975a}, while rejecting its
one-year aliases, and also the putative X-ray period of
\citet{Vanderlinde:2003a}. Our updated ephemeris is $T_{\rm min}({\rm
HJD}) = 2453510.329(17)+0.787313(1)E$.
\acknowledgments
This work was supported by the National Science Foundation under Grant
No. AST-0908789. This research has made use of NASA's Astrophysics
Data System.
{\it Facilities:} \facility{ASAS}.
\section{Introduction}
Let $l$ be a rational prime and $A$ an abelian variety over a number field $K$. Let $S$ be a set of primes of $K$ of density $1$ where
$A$ has good reduction. For $\mathfrak{p}\in S$ denote by $A_\mathfrak{p}$ the reduction of $A$ at $\mathfrak{p}$ and by $\mathbb{F}_\mathfrak{p}$ the residue field at $\mathfrak{p}$.
In \cite{Kat81}, N. Katz proved the following: if $A_\mathfrak{p}(\mathbb{F}_\mathfrak{p})[l]\neq 0$ for all $\mathfrak{p}\in S$, then there exists an abelian variety $A'$ isogenous
to $A$ over $K$ such that $A'(K)[l]\neq 0$, provided $\dim A'\leq 2$. He also constructed counterexamples in all dimensions $\geq 3$.
This answered a question posed by Serge Lang.
The analogy between Drinfeld modules and abelian varieties is well-known and has been extensively studied. In this paper we consider the
analogue of Lang's question for Drinfeld modules. Before we state our main results, we note that Lang's question can be reformulated
as follows. Let
$$\bar{\rho}_l:\mathrm{Gal}(\bar{K}/K)\to \mathrm{Aut}(A[l])
$$
be the representation arising from the action of the absolute Galois group of $K$ on $A[l]$. If for every $\sigma\in \mathrm{Gal}(\bar{K}/K)$
we have $\det(1-\bar{\rho}_l(\sigma))=0$, is it true that the semi-simplification of $A[l]$ contains the trivial representation?
In fact, Katz approaches Lang's question from this group-theoretic perspective.
\begin{comment}
\begin{thm}[\cite{Kat81}, Theorem 2]\label{thm0}
Let $E$ be an elliptic curve over a number field $K$, and $n\geqslant 2$ an integer. Let $\mathfrak{p}$ be a prime of $K$ where $E$ has good reduction. Let $N(\mathfrak{p})$ be the number of $\mathbb{F}_{\mathfrak{p}}$-rational points on $E\mod \mathfrak{p}$.
Suppose we have $n\ |\ N(\mathfrak{p})$ for almost all primes $\mathfrak{p}$, then there is a $K$-isogenous elliptic curve $E'$ over $K$ such that $n$ divides $|{\rm{Tor}}(E'(K))|$.
\end{thm}
In other words, the existence of a $\mathbb{F}_{\mathfrak{p}}$-rational point on $E \mod \mathfrak{p}$ whose order is divisible by $n$ for almost all prime $\mathfrak{p}$ would imply the existence of $K$-rational points on some $K$-isogenous elliptic curve $E'$ with order divisible by $n$. In general, Katz analyzed the above problem for $2$-dimensional abelian varieties over number fields under certain conditions (see \cite{Kat81}, Theorem 4). Moreover, for abelian varieties of dimension $\geqslant 3$, he showed that the analogous statement for abelian variety of dimension $\geqslant 3$ is false. More precisely, Katz gave an example of an abelian variety $A$ over a number field $K$ such that the number of $\mathbb{F}_\mathfrak{p}$-rational points is divisible by an odd integer $N\geqslant 3$ for almost all prime $\mathfrak{p}$. However, the order of ${\rm{Tor}}A'(K)$ is prime to $N$ for any $K$-isogenous abelian variety $A'$ of $A$.
Our goal in this paper is to investigate a function field analogue of the problems studied above. Drinfeld $A$-module of rank two over $\mathbb{F}_q(T)$ plays a similar role to an elliptic curve over $\mathbb{Q}$, and we are able to prove the function field analogue of Theorem \ref{thm0}.
\begin{thm}\label{mainthm1}
Let $\phi$ be a Drinfeld $\mathbb{F}_q[T]$-module over $\mathbb{F}_q(T)$ of rank $2$. Let $\mathfrak{l}$ be a prime ideal of $\mathbb{F}_q[T]$. Suppose for almost all prime $\mathfrak{p}$ of $\mathbb{F}_q[T]$, the reduction $\phi \otimes \mathbb{F}_\mathfrak{p}$ of $\phi$ at $\mathfrak{p}$ has a nontrivial $\mathfrak{l}$-torsion point defined over $\mathbb{F}_\mathfrak{p}$. Then the semisimplification of the Galois representation $\phi[\mathfrak{l}]$ contain a trivial representation.
\end{thm}
\end{comment}
Now let $A=\mathbb{F}_q[T]$ be the ring of polynomials in indeterminate $T$ with coefficients in $\mathbb{F}_q$,
where $q$ is a power of a prime $p\geq 5$. Let $F=\mathbb{F}_q(T)$ be the
field of fractions of $A$. Let $\phi$ be a Drinfeld $A$-module of rank $r$ over $F$. Let $S$ be a set of maximal ideals of $A$ of density $1$ where
$\phi$ has good reduction. Let $\mathbb{F}_\mathfrak{p}=A/\mathfrak{p}$ denote the residue field at $\mathfrak{p}$ and $\phi\otimes \mathbb{F}_\mathfrak{p}$ the reduction of $\phi$
at $\mathfrak{p}\in S$. Given a maximal ideal $\mathfrak{l}$ of $A$,
the absolute Galois group $G_F:=\mathrm{Gal}(F^\mathrm{sep}/F)$ of $F$ acts on $\phi[\mathfrak{l}]$. Let
$$
\bar{\rho}_{\phi, \mathfrak{l}}: G_F\to \mathrm{Aut}(\phi[\mathfrak{l}])\cong \mathrm{GL}_r(\mathbb{F}_\mathfrak{l})
$$
be the corresponding representation.
The main results of this paper are the following:
\begin{thm}\label{mainthm1} Assume $r=2$. Let $\mathfrak{l}$ be a maximal ideal of $A$.
Suppose for all $\mathfrak{p}\in S$ the reduction $\phi\otimes \mathbb{F}_\mathfrak{p}$ has a nontrivial $\mathfrak{l}$-torsion point defined over $\mathbb{F}_\mathfrak{p}$.
Then the semisimplification of the $G_F$-module $\phi[\mathfrak{l}]$ contains the trivial representation.
\end{thm}
\begin{thm}\label{mainthm2} Assume $r=3$.
Let $\mathfrak{l}$ be a maximal ideal of $A$.
Suppose for all $\mathfrak{p}\in S$ the reduction $\phi\otimes \mathbb{F}_\mathfrak{p}$ has a nontrivial $\mathfrak{l}$-torsion point defined over $\mathbb{F}_\mathfrak{p}$.
\begin{itemize}
\item[(1)] If $\det(\bar{\rho}_{\phi, \mathfrak{l}})$ is a nontrivial representation, then the semisimplification of $\phi[\mathfrak{l}]$ contains the trivial representation.
\item[(2)] If $\det(\bar{\rho}_{\phi, \mathfrak{l}})$ is the trivial representation and $\phi[\mathfrak{l}]$ is reducible, then either the semisimplification of $\phi[\mathfrak{l}]$ contains the trivial representation or there is a quadratic twist $\phi'$ of $\phi$ such that the semisimplification of $\phi'[\mathfrak{l}]$ contains the trivial representation. Moreover, there do exist Drinfeld modules $\phi$ for which
$\det(\bar{\rho}_{\phi, \mathfrak{l}})$ is the trivial representation, $\phi[\mathfrak{l}]$ is reducible, but
the semisimplification of $\phi[\mathfrak{l}]$ does not contain the trivial representation.
\item[(3)] There are Drinfeld modules $\phi$ which satisfy the assumptions of the theorem but for which $\phi[\mathfrak{l}]$ is irreducible, so
its semisimplification does not contain the trivial representation.
\end{itemize}
\end{thm}
In particular, the analogue of Lang's question has a positive answer for Drinfeld modules of rank $2$,
but can have a negative answer for Drinfeld modules of rank $3$.
To prove the main theorems, we analyze the structure of the Galois module $\phi[\mathfrak{l}]$ following the
strategy in Katz's paper \cite{Kat81}.
A crucial idea
in our proof of part (3) of Theorem \ref{mainthm2} was inspired by Cullinan's paper \cite{Cull19}.
To conclude the introduction, we observe the following relation of our results to the existence of rational points, up to isogeny.
Suppose the semisimplification of $\phi[\mathfrak{l}]$ contains the trivial representation. If the trivial representation
is a $G_F$-submodule of $\phi[\mathfrak{l}]$, then $\phi$ has an $F$-rational $\mathfrak{l}$-torsion point. Otherwise,
there is a $G_F$-submodule $H\subset \phi[\mathfrak{l}]$ such that the trivial representation is a $G_F$-submodule
of $H':=\phi[\mathfrak{l}]/H$. In this case,
we consider the Drinfeld module $\phi':=\phi/H$, which is $F$-isogenous to $\phi$
(see \cite{Gos96} Proposition 4.7.11 for the construction of $\phi'$).
Then we have $H'\subset \phi'[\mathfrak{l}]$, which implies that $\phi'[\mathfrak{l}]$ has a nontrivial $F$-rational point.
\section{Preliminaries}
\begin{comment}
\subsection{Notation}
\begin{itemize}
\item$q=p^e$ is a prime power with $p\geqslant5$
\item$A=\mathbb{F}_q[T]$
\item$F=\mathbb{F}_q(T)$
\item$F^{{\rm{sep}}}$= separable closure of $F$
\item$F^{{\rm{alg}}}$= algebraic closure of $F$
\item$G_F= {\rm{Gal}}(F^{{\rm{sep}}}/F)$
\item$A_{\mathfrak{p}}=$ completion of A with respect to a nonzero prime ideal $\mathfrak{p}\triangleleft A$
\item$\widehat{A}=\underset{\mathfrak{a}\triangleleft A}{\varprojlim}A/\mathfrak{a}$
\item$F_{\mathfrak{p}}=$ Fraction field of $A_{\mathfrak{p}}$
\item$\mathbb{F}_{\mathfrak{p}}= A/\mathfrak{p}$
\end{itemize}
\end{comment}
An {\textbf{$A$-field}} is a field $K$ equipped with a homomorphism $\gamma: A \rightarrow K$ of $\mathbb{F}_q$-algebras.
The kernel $\ker(\gamma)$ is called the {\textbf{$A$-characteristic}} of $K$;
we say $K$ has {\textbf{generic characteristic}} if $\ker(\gamma)=0$.
Let $K\{\tau\}$ be the ring of skew polynomials satisfying the commutation rule $\tau \cdot c = c^q\cdot \tau$.
A \textbf{Drinfeld $A$-module over $K$ of rank $r\geqslant 1$} is a ring homomorphism
\begin{align*}
\phi&: A\longrightarrow K\{\tau\} \\
&\ \ \ a \longmapsto \phi_a=\gamma(a)+ \sum^{r\cdot {\rm{deg}}(a)}_{i=1}g_i(a)\tau^i.
\end{align*}
It is uniquely determined by $\phi_T=\gamma(T)+\sum^{r}_{i=1}g_i(T)\tau^i$, where $g_r(T)\neq0$.
An \textbf{isogeny} from a Drinfeld module $\phi$ to another Drinfeld module $\psi$ over $K$ is a nonzero element $u\in K\{\tau\}$ such that $u\cdot \phi_a=\psi_a \cdot u$ for all $a\in A$.
The Drinfeld module $\phi$ over $K$ gives $K$ an $A$-module structure, where $a\in A$ acts on $K$ via $\phi_{a}$. We use the notation
$^{\phi}K$ to emphasize the action of $A$ on $K$.
The {\textbf{$a$-torsion}} is
$$\phi[a]=\left\{ \alpha\in K^\mathrm{alg}\mid \phi_a(\alpha)= \gamma(a)\alpha+ \sum^{r\cdot {\rm{deg}}(a)}_{i=1}g_i(a)\alpha^{q^i}=0 \right\}.$$
Note that $\phi[a]$ is an $A$-module, where $b\in A$ acts on $\alpha\in \phi[a]$ by
$b\cdot \alpha=\phi_b(\alpha)$. The following is well-known and easy to prove (cf. \cite{Gos96}):
\begin{prop}\label{prop0.2}
Let $\phi$ be a Drinfeld module over $K$ of rank $r$ and $0\neq a\in A$. If the $A$-characteristic of $K$ does not divide $a$, then
there is an isomorphism of $A$-modules $\phi[a]\simeq (A/aA)^r$.
\end{prop}
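For instance, for the Carlitz module $C$ over $F$ (so $K=F$, $\gamma=\mathrm{id}$, $r=1$), defined by $C_T=T+\tau$, the $T$-torsion
$$C[T]=\{\alpha\in F^{\mathrm{alg}}\mid T\alpha+\alpha^{q}=0\}$$
is the set of roots of $x(x^{q-1}+T)$, and indeed $C[T]\simeq A/TA$ as an $A$-module.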
Note that if the characteristic of $K$ does not divide $a$, then $\phi_a(x)=\gamma(a)x+\sum^{r\cdot {\rm{deg}}(a)}_{i=1}g_i(a)x^{q^i}$ is a separable polynomial, so $G_K=\mathrm{Gal}(K^\mathrm{sep}/K)$ acts on $\phi[a]$ and this action commutes with the action of $A$. From this action we get a
representation $G_K\to \mathrm{Aut}_A(\phi[a])\cong \mathrm{GL}_r(A/aA)$. When $a=\mathfrak{l}$ is irreducible, we denote this representation by $\bar{\rho}_{\phi, \mathfrak{l}}$.
Taking the inverse limit with respect to $\mathfrak{l}^i$, we get the \textbf{$\mathfrak{l}$-adic Galois representation}
$${\rho}_{\phi,\mathfrak{l}}: G_K \longrightarrow \varprojlim_{i}{\mathrm{Aut}}(\phi[\mathfrak{l}^i])\cong \mathrm{GL}_r(A_\mathfrak{l}),$$
where $A_\mathfrak{l}$ denotes the completion of $A$ at $\mathfrak{l}$.
Let $\phi$ be a Drinfeld module over $F$ defined by $$\phi_T=T+g_1\tau+g_2\tau^2+\cdots+g_r\tau^r.$$
We say that $\phi$ has \textbf{good reduction} at the maximal ideal $\mathfrak{p}$ of $A$ if all $g_i$ are integral at $\mathfrak{p}$, i.e., lie in $A_\mathfrak{p}$,
and $g_r$ is a unit in $A_\mathfrak{p}$. The \textbf{reduction} of $\phi$ at $\mathfrak{p}$
is the Drinfeld module $\phi\otimes \mathbb{F}_\mathfrak{p}$ over $\mathbb{F}_\mathfrak{p}$ defined by
$(\phi\otimes \mathbb{F}_\mathfrak{p})_T=\bar{T}+\bar{g_1}\tau+\cdots+\bar{g_r}\tau^r$, where $\bar{g_i}$ is the reduction of $g_i$ modulo $\mathfrak{p}$.
\begin{comment}
Let $\phi: A\longrightarrow F_{\mathfrak{p}}\{\tau\}$ be a Drinfeld module of rank $r$. We say that $\phi$ has \textbf{stable reduction} if there is a Drinfeld module $\phi':A\longrightarrow A_{\mathfrak{p}}\{\tau\}$ such that
\begin{enumerate}
\item$\phi'$ is isomorphic to $\phi$ over $F_{\mathfrak{p}}$;
\item$\phi'$ mod $\mathfrak{p}$ is still a Drinfeld module (i.e. $\phi'_{T}$ mod $\mathfrak{p}$ has ${\rm{deg}}_{\tau}\geqslant 1$ ).
\end{enumerate}
$\phi$ is said to have \textbf{stable reduction of rank $r_1$} if $\phi$ has stable reduction and $\phi$ mod $\mathfrak{p}$ has rank $r_1$.\\
$\phi$ is said to have \textbf{good reduction} is $\phi$ has stable reduction and $\phi$ mod $\mathfrak{p}$ has rank $r$.
\begin{rem}\ \\
{\rm{We sometimes denote $\phi \mod \mathfrak{p}$ by $\phi \otimes \mathbb{F}_{\mathfrak{p}}$.}}
\end{rem}
\end{comment}
If $\mathfrak{p} \neq \mathfrak{l}$ is a prime of good reduction of $\phi$, then the $\mathfrak{l}$-adic Galois representation ${\rho}_{\phi,\mathfrak{l}}$ is unramified at $\mathfrak{p}$. Therefore, the matrix ${\rho}_{\phi,\mathfrak{l}}({\rm{Frob}}_{\mathfrak{p}}) \in {\rm{GL}}_r(A_\mathfrak{l})$ is well-defined up to conjugation, so we can consider the characteristic polynomial $P_{\phi,\mathfrak{p}}(x)=\det(xI-{\rho}_{\phi,\mathfrak{l}}({\rm{Frob}}_{\mathfrak{p}}))$ of the Frobenius element ${\rm{Frob}_{\mathfrak{p}}}$
at $\mathfrak{p}$.
It is known that the coefficients of the polynomial $P_{\phi,\mathfrak{p}}(x)$ are independent of the choice of $\mathfrak{l}$ and belong to $A$ (see \cite{Gek91} Corollary 3.4).
Moreover, $P_{\phi,\mathfrak{p}}(x)$ is equal to the characteristic polynomial of the Frobenius endomorphism of $\phi\otimes\mathbb{F}_{\mathfrak{p}}$ acting on $T_{\mathfrak{l}}(\phi\otimes\mathbb{F}_{\mathfrak{p}})$.
We will need a fact about the value $P_{\phi,\mathfrak{p}}(1)$ which is the analogue of Hasse's theorem about the number of
rational points on an elliptic curve over a finite field.
By the structure theorem for finitely generated modules
over principal ideal domains, we have an isomorphism of $A$-modules
$$^{\phi\otimes\mathbb{F}_\mathfrak{p}}\mathbb{F}_\mathfrak{p}\cong A/b_1A\times\cdots\times A/b_sA,$$
for uniquely determined monic polynomials $b_1\mid b_2\mid \cdots \mid b_s$.
\begin{prop}\label{red}
We have an equality of ideals
$\left(\prod_{i=1}^{s}b_i\right)= (P_{\phi,\mathfrak{p}}(1))$.
\end{prop}
\begin{proof}
See \cite{Gek91}.
\end{proof}
We conclude this section by recalling the Brauer-Nesbitt Theorem.
Let $G$ be a finite group and $V$ be a representation of $G$ defined over a field $K$ of characteristic $p$.
The {\textbf{semisimplification}} $V^{\ss}$ of $V$ is the direct sum of the Jordan-H\"older constituents of the $K[G]$-module $V$. In other words, if the Jordan-H\"older series of the $K[G]$-module $V$ is $V=V_0\supset V_1\supset V_2\supset\cdots\supset V_n=\{0\}$, then $$V^{ss}=\oplus_{i=0}^{n-1}V_i/V_{i+1}.$$
\begin{thm}[Brauer-Nesbitt Theorem]\label{bnt}
Let $G$ be a finite group. Let $V$ and $W$ be two $K[G]$-modules which are finite dimensional as $K$-vector spaces. If for all $g\in G$ the characteristic polynomials of $g$ acting on $V$ and on $W$ are equal, then $V$ and $W$ have the same Jordan-H\"older constituents. In other words, the semisimplification $V^\ss$ is isomorphic to $W^\ss$ as a $K[G]$-module.
\end{thm}
\begin{proof}
See \cite{CuRe62}, p.215.
\end{proof}
\section{Proof of Theorem \ref{mainthm1}}
Let $\phi$ be a Drinfeld $A$-module over $F$ of rank $2$ and $\mathfrak{l}$ be a prime ideal of $A$. Replacing $\phi$ by $c\phi c^{-1}$ with suitable $c\in A$ such that $(c\phi c^{-1})_T\in A\{\tau\}$, we may assume $\phi$ is defined over $A$.
Let $S$ be a subset of primes of $A$ with density $1$ where $\phi$ has good reduction.
Assume:
\begin{center}
For all $\mathfrak{p}\in S$, the space $(\phi\otimes\mathbb{F}_\mathfrak{p})[\mathfrak{l}]$ contains a nontrivial $\mathbb{F}_\mathfrak{p}$-rational point.
\end{center}
The assumption is equivalent to saying that the $A$-module $^{\phi\otimes\mathbb{F}_\mathfrak{p}}\mathbb{F}_\mathfrak{p}$ has nontrivial $\mathfrak{l}$-torsion. Hence Proposition \ref{red} implies that $P_{\phi,\mathfrak{p}}(1)$ is divisible by $\mathfrak{l}$ for almost all prime ideals $\mathfrak{p}$ of $A$. Thus we have
$$P_{\phi,\mathfrak{p}}(1)=1-{\rm{tr}}({\rm{Frob}}_\mathfrak{p})+\det({\rm{Frob}}_\mathfrak{p})\equiv 0 \mod \mathfrak{l}.$$
The Chebotarev density theorem implies that for all $g\in G_F$, the action of $g$ on $\phi[\mathfrak{l}]$ satisfies
$${\rm{tr}}(g)\equiv 1+\det(g) \mod \mathfrak{l}.$$
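Equivalently, the characteristic polynomial of every $g\in G_F$ acting on $\phi[\mathfrak{l}]$ factors as
$$x^2-{\rm{tr}}(g)x+\det(g)=(x-1)(x-\det(g)),$$
so $1$ is an eigenvalue of every $g$.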
Now we can compare the $G_F$-modules $\phi[\mathfrak{l}]$ and $\mathbb{1}\oplus\det(\phi[\mathfrak{l}])$, where $\mathbb{1}$ denotes the trivial representation. For any $g\in G_F$, we know the characteristic polynomial of $g$ acting on $\phi[\mathfrak{l}]$ and $\mathbb{1}\oplus\det(\phi[\mathfrak{l}])$ are the same. Hence Theorem \ref{bnt} implies $\phi[\mathfrak{l}]$ and $\mathbb{1}\oplus\det(\phi[\mathfrak{l}])$ are isomorphic up to semisimplification.
However, Theorem \ref{mainthm1} does not imply that $\phi$ has a nontrivial $\mathfrak{l}$-torsion point defined over $F$. We give an example showing that there is a Drinfeld $A$-module $\phi$ of rank $2$
such that $\phi[\mathfrak{l}]^\ss$ contains the trivial representation while $\phi[\mathfrak{l}]$ has no nontrivial points defined over $F$.
\begin{example}
Consider the Drinfeld module $\phi$ over $F$ defined by
$$\phi_T(x)=\prod^{q-1}_{i=0}(x^q+Tx-i).$$
Let $e_1$ be a nonzero root of $x^q+Tx$ and $e_2$ be a nonzero root of $x^q+Tx-1$. Then $\phi[T]$, as an $\mathbb{F}_q$-vector space, has
a basis $\{e_1, e_2\}$. An element $g\in G_F$ acts on this basis by
\begin{align*}
ge_1&= c_ge_1, \quad c_g\in \mathbb{F}_q^*,\\
ge_2&=e_2+d_ge_1,\quad d_g\in \mathbb{F}_q.
\end{align*}
Hence $\bar{\rho}_{\phi, T}$ is the Galois representation
$$\begin{array}{rccl}
&G_F&\rightarrow&{\rm{GL}}(V)\cong {\rm{GL}}_2(\mathbb{F}_q)\\
&g&\mapsto&\left(\begin{array}{cc}c_g & d_g \\0 & 1\end{array}\right)
\end{array}.
$$
Since $c_g\neq 1$ for some $g\in G_F$, this representation does not contain the trivial representation as a direct summand.
\end{example}
\section{Proof of Theorem \ref{mainthm2}}
\subsection{Basic setting}
As in the previous section, let $\phi$ be a Drinfeld $A$-module over $F$ of rank $3$ and $\mathfrak{l}$ a prime ideal of $A$.
Let $S$ be a subset of primes of $A$ with density $1$ where $\phi$ has good reduction. Assume:
\begin{center}
For all $\mathfrak{p}\in S$, the space $(\phi\otimes\mathbb{F}_\mathfrak{p})[\mathfrak{l}]$ contains a nontrivial $\mathbb{F}_\mathfrak{p}$-rational point.
\end{center}
Again, using Proposition \ref{red}, we get
$$P_{\phi,\mathfrak{p}}(1)=\det(1-{\rm{Frob}}_\mathfrak{p})\equiv 0 \mod \mathfrak{l}$$
for all $\mathfrak{p}\in S$. Then, by the Chebotarev density theorem, the characteristic polynomial of any $g\in G_F$ acting on $\phi[\mathfrak{l}]$ satisfies
$$\det(1-g)=\sum_{i=0}^{3}(-1)^i{\rm{Tr}}(g|_{\Lambda^i(\phi[\mathfrak{l}])})=0.$$
Here $\Lambda^i(\phi[\mathfrak{l}])$ are exterior powers of $\phi[\mathfrak{l}]$, and $\Lambda^0(\phi[\mathfrak{l}])$ is defined to be the trivial representation $\mathbb{1}$. Therefore, we have
$${\rm{Tr}}(\mathbb{1}\oplus \Lambda^2(\phi[\mathfrak{l}]))={\rm{Tr}}(\phi[\mathfrak{l}]\oplus \det(\phi[\mathfrak{l}])).$$
Now we can compare the $\mathbb{F}_\mathfrak{l}[G_F]$-modules $\mathbb{1}\oplus \Lambda^2(\phi[\mathfrak{l}])$ and $\phi[\mathfrak{l}]\oplus \det(\phi[\mathfrak{l}])$. For any $g\in G_F$, they both have the same trace. But a stronger claim is true:
\begin{claim}
$g$ on both spaces has the same characteristic polynomial.
\end{claim}
\begin{proof}[Proof of claim]
Let $\lambda_1, \lambda_2, \lambda_3$ be the characteristic values of $g$ acting on $\phi[\mathfrak{l}]$. Then the characteristic values of $g$ acting on $\Lambda^2(\phi[\mathfrak{l}])$ and $\det(\phi[\mathfrak{l}])$ are $\lambda_1\lambda_2,\ \lambda_1\lambda_3,\ \lambda_2\lambda_3$ and $\lambda_1\lambda_2\lambda_3$, respectively. Since the actions of $g$ on $\mathbb{1}\oplus \Lambda^2(\phi[\mathfrak{l}])$ and on $\phi[\mathfrak{l}]\oplus \det(\phi[\mathfrak{l}])$ have the same trace, we obtain
\begin{align*}
\lambda_1+\lambda_2+\lambda_3+\lambda_1\lambda_2\lambda_3&=1+\lambda_1\lambda_2+\lambda_1\lambda_3+\lambda_2\lambda_3\\
\Longrightarrow\quad \lambda_1+\lambda_2-\lambda_1\lambda_2-1&=(\lambda_1+\lambda_2-\lambda_1\lambda_2-1)\lambda_3\\
\Longrightarrow\quad (\lambda_1-1)(1-\lambda_2)&=(\lambda_1-1)(1-\lambda_2)\lambda_3.
\end{align*}
Hence one of $\lambda_1$, $\lambda_2$, $\lambda_3$ must equal $1$, and then the two multisets of characteristic values coincide, so $g$ has the same characteristic polynomial on both spaces.
\end{proof}
Since the $G_F$-actions on $\mathbb{1}\oplus \Lambda^2(\phi[\mathfrak{l}])$ and $\phi[\mathfrak{l}]\oplus \det(\phi[\mathfrak{l}])$ have the same characteristic polynomials, the Brauer-Nesbitt theorem implies that the two representations are isomorphic up to semisimplification. In other words, we have
\begin{equation}\label{eqMihranInserted}
\mathbb{1}\oplus \Lambda^2(\phi[\mathfrak{l}])^{\ss}\cong \phi[\mathfrak{l}]^{\ss}\oplus \det(\phi[\mathfrak{l}]).
\end{equation}
\subsection{Case (1)} Suppose
$\det(\phi[\mathfrak{l}])\neq \mathbb{1}$. Then, the isomorphism \eqref{eqMihranInserted} implies that
the semisimplification of $\phi[\mathfrak{l}]$ contains the trivial representation.
\subsection{Case (2)}
Now assume $\det(\phi[\mathfrak{l}])=\mathbb{1}$, the $G_F$-module $\phi[\mathfrak{l}]$ is reducible, and the semisimplification of $\phi[\mathfrak{l}]$ does not contain the trivial representation.
We have the isomorphisms of $G_F$-modules
$$\Lambda^2(\phi[\mathfrak{l}])\cong{\rm{Hom}}(\phi[\mathfrak{l}],\det(\phi[\mathfrak{l}]))\cong\phi[\mathfrak{l}]^{\lor}\otimes\det(\phi[\mathfrak{l}])\cong\phi[\mathfrak{l}]^{\lor}, $$
where the first isomorphism is given by
$$
\begin{array}{clc}
\Lambda^2(\phi[\mathfrak{l}])&\rightarrow&{\rm{Hom}}(\phi[\mathfrak{l}],\det(\phi[\mathfrak{l}]))\\
u\land v&\mapsto&(w\mapsto u\land v\land w)
\end{array}
$$
Combining this with \eqref{eqMihranInserted}, we get
$$\phi[\mathfrak{l}]^{ss}\cong(\phi[\mathfrak{l}]^{\lor})^{ss},$$
i.e., the semisimplification of $\phi[\mathfrak{l}]$ is self-dual.
Now we can consider the Jordan-H\"older series of $\phi[\mathfrak{l}]$.
\begin{enumerate}
\item[Case (i).] $\phi[\mathfrak{l}]\supset V\supset \{0\}$, where $V$ is an irreducible $G_F$-submodule of $\phi[\mathfrak{l}]$ with dimension $1$ or $2$.
In this case, the action of $g\in G_F$ on $\phi[\mathfrak{l}]^{ss}$ with respect to some basis is of the form
$$\left(\begin{array}{c|c}\chi(g) & \\\hline & g|_{V}\end{array}\right)\ {\rm{or}}\ \left(\begin{array}{c|c}g|_{\phi[\mathfrak{l}]/V} & \\\hline & \chi(g)\end{array}\right).$$
Here $\chi:G_F\rightarrow \mathbb{F}_\mathfrak{l}^*$ is a nontrivial character of $G_F$ (it is nontrivial because, by assumption, $\phi[\mathfrak{l}]^{ss}$ does not contain the trivial representation). Since
$\phi[\mathfrak{l}]^{ss}\cong(\phi[\mathfrak{l}]^{\lor})^{ss}$, the character $\chi$ satisfies
$$\chi(g)=\chi(g^{-1})\quad \forall\ g\in G_F.$$
Thus $\chi:G_F\rightarrow \{\pm 1\}$ is a nontrivial quadratic character. Hence there is some non-square element $c\in F$ such that $\chi$ can be written as
$$
\begin{array}{cccl}
\chi:&{\rm{Gal}}(F(\sqrt{c})/F)&\rightarrow&\{\pm1\} \\
&g&\mapsto&\chi(g)
\end{array}
.$$
where $g\cdot\sqrt{c}=\chi(g)\cdot\sqrt{c}$.
To construct a suitable quadratic twist, we need the following Lemma:
\begin{lem}\label{twist}
If the quadratic twist $\phi'$ of $\phi$ is given by
$$\phi_T'=\sqrt{c}\cdot\phi_T\cdot\sqrt{c}^{-1},$$
then
$${\bar{\rho}}_{\phi',\mathfrak{l}}\cong {\bar{\rho}}_{\phi,\mathfrak{l}}\otimes\chi$$
\end{lem}
\begin{proof}
Apply Lemma 4.1 in \cite{ChLe19}.
\end{proof}
By the lemma above, the action of $g\in G_F$ on $\phi'[\mathfrak{l}]^{ss}$ with respect to some basis is of the form
$$\left(\begin{array}{c|c}\chi^2(g) & \\\hline & *\end{array}\right)\ {\rm{or}}\ \left(\begin{array}{c|c}* & \\\hline & \chi^2(g)\end{array}\right).$$
Since $\chi$ is a quadratic character, $\chi^2$ is trivial, and we deduce that $\phi'[\mathfrak{l}]^{ss}$ contains the trivial representation.
\item[Case (ii).] $\phi[\mathfrak{l}]\supset V_1\supset V_2\supset\{0\}$, where $V_1$ and $V_2$ are $G_F$-submodules of $\phi[\mathfrak{l}]$ with dimension $2$ and $1$, respectively.
In this case, the action of $g\in G_F$ on $\phi[\mathfrak{l}]^{ss}$ with respect to some basis is of the form
$$\left(\begin{array}{ccc}\chi_1(g) & & \\ & \chi_2(g) & \\ & & \chi_3(g)\end{array}\right)$$
Here $\chi_i:G_F\rightarrow \mathbb{F}_\mathfrak{l}^*$ are nontrivial characters of $G_F$ for $i=1,2,3$. The self-duality of $\phi[\mathfrak{l}]$ then implies that for each $i\in\{1,2,3\}$,
$$\chi_i(g)=\chi_j(g^{-1})\ {\rm{for\ some\ }}j\in\{1,2,3\}.$$
If $i\neq j$ for some $i$, then we may assume that $\chi_1(g)=\chi_2(g^{-1})$ for all $g\in G_F$. The assumption that $\det(\phi[\mathfrak{l}])$ is the trivial representation then implies $\chi_3(g)=1$ for all $g\in G_F$. Hence $\phi[\mathfrak{l}]^{ss}$ contains the trivial representation, contradicting our assumption. Therefore, we have
$$\chi_i(g)=\chi_i(g^{-1})\ {\rm{for\ all\ }}i\in\{1,2,3\}.$$
In other words, each $\chi_i$ is a nontrivial quadratic character of $G_F$. Now we fix some $\chi_i$ and repeat the argument of Case (i). There is some non-square element $c\in F$ such that
$$
\begin{array}{cccl}
\chi_i:&{\rm{Gal}}(F(\sqrt{c})/F)&\rightarrow&\{\pm1\} \\
&g&\mapsto&\chi_i(g)
\end{array}
.$$
where $g\cdot\sqrt{c}=\chi_i(g)\cdot\sqrt{c}$.
Lemma \ref{twist} then implies that the quadratic twist $\phi_T'=\sqrt{c}\cdot\phi_T\cdot\sqrt{c}^{-1}$ satisfies the property that $\phi'[\mathfrak{l}]^{ss}$ contains the trivial representation.
\end{enumerate}
This proves part (2) of Theorem \ref{mainthm2}.
\begin{example}
In this example, we prove the existence of a Drinfeld module $\phi$ as in Case (2) such that the semisimplification of the mod $(T)$ representation $\phi[T]$ does not contain the trivial representation.
Let $A=\mathbb{F}_q[T]$, where $q=p^e$ is a prime power with $p\geqslant 5$, and let $F=\mathbb{F}_q(T)$. Let $c_1$ and $c_2$ be two distinct non-square elements in $F$. Consider the $\mathbb{F}_q$-vector space
$$V=\sqrt{c_1}\cdot\mathbb{F}_q+\sqrt{c_2}\cdot\mathbb{F}_q+\sqrt{c_1c_2}\cdot\mathbb{F}_q.$$
The action of the Galois group $G_F$ on $V$ gives $V$ a $G_F$-module structure. With respect to the basis $\{\sqrt{c_1}, \sqrt{c_2}, \sqrt{c_1c_2}\}$, an element $g\in G_F$ acts on $V$ via the matrix
$$\left(\begin{array}{ccc}\chi_1(g) & 0 & 0 \\0 & \chi_2(g) & 0 \\0 & 0 & \chi_1(g)\chi_2(g)\end{array}\right).$$
Here $\chi_1$ and $\chi_2$ are quadratic characters.
By the Boston-Ose theorem (\cite{Bose00} Theorem 6.1), the representation $V$ arises from the mod $(T)$ Galois representation $\bar{\rho}_{\phi,T}$ associated to some Drinfeld $A$-module $\phi$ over $F$ of rank $3$. Therefore, there is a basis of $\phi[T]$ with respect to which $g\in G_F$ acts via the above matrix. Since $\chi_1$ and $\chi_2$ are quadratic characters, for each $g$ at least one of $\chi_1(g)$, $\chi_2(g)$ and $\chi_1(g)\chi_2(g)$ equals $1$. Thus the characteristic polynomial of each $g\in G_F$ acting on $\phi[T]$ has the factor $x-1$, so the Drinfeld module satisfies the assumption of Theorem \ref{mainthm2}. On the other hand, the semisimplification $\phi[T]^{ss}$ does not contain the trivial representation because $\chi_1$ and $\chi_2$ are distinct nontrivial quadratic characters: for each of $\chi_1$, $\chi_2$, and $\chi_1\chi_2$, there is some $g\in G_F$ on which it does not equal $1$.
\end{example}
\subsection{Case (3)}
In this section we construct a Drinfeld module $\phi$ of rank $3$ over $F$ such that
for all $\mathfrak{p}\in S$ the space $(\phi\otimes\mathbb{F}_\mathfrak{p})[\mathfrak{l}]$ contains a nontrivial $\mathbb{F}_\mathfrak{p}$-rational point,
but $\phi[\mathfrak{l}]$ is irreducible.
In this subsection, we set $A=\mathbb{F}_q[T]$, $q=p^e$ a prime power with $p\geqslant 5$ such that $x^2+x+1$ is irreducible in $\mathbb{F}_q[x]$, and $F=\mathbb{F}_q(T)$. Consider the Drinfeld module $\varphi$ of rank $2$ defined by
$$\varphi_T=T+T\tau+T^q\tau^2.$$
\begin{lem}\label{image sl2}
The mod $(T)$ representation ${\bar{\rho}}_{\varphi,T}:G_F\rightarrow {\rm{GL}}_2(\mathbb{F}_q)$ has image equal to ${\rm{SL}}_2(\mathbb{F}_q)$.
\end{lem}
\begin{proof}
By \cite{Hei03} Proposition 4.7.1, we have $\det\circ\bar{\rho}_{\varphi,T}=\bar{\rho}_{\psi,T}$ where $\psi$ is the Drinfeld module of rank $1$ defined by $\psi_T=T-T^q\tau$. Thus the representation $\bar{\rho}_{\psi,T}$ is trivial, which implies the image of $\bar{\rho}_{\varphi,T}$ lies in ${\rm{SL}}_2(\mathbb{F}_q)$.
Next, we prove that the image of $\bar{\rho}_{\varphi,T}$ contains a subgroup of order $q$. Consider the decomposition subgroup ${\rm{Gal}}(F^{sep}_{(T)}/F_{(T)})$ of $G_F$. Since $F_{(T)}(\varphi[T])$ is the smallest extension of $F_{(T)}$ such that ${\rm{Gal}}(F^{sep}_{(T)}/F_{(T)}(\varphi[T]))$ acts trivially on $\varphi[T]$, we have $\bar{\rho}_{\varphi,T}({\rm{Gal}}(F^{sep}_{(T)}/F_{(T)})) \cong {\rm{Gal}}(F_{(T)}(\varphi[T])/F_{(T)})$. By looking at the Newton polygon of $\varphi_T(x)/x=T+Tx^{q-1}+T^qx^{q^2-1}$, we know $\varphi_T(x)$ has roots with valuation equal to $-\frac{1}{q}$. Therefore, $F_{(T)}(\varphi[T])$ must contain a subfield $M$ which is Galois over $F_{(T)}$ and whose ramification index $e[M:F_{(T)}]$ is divisible by $q$. Thus the order of $\bar{\rho}_{\varphi,T}(G_F)$ is divisible by $q$.
Finally, we prove that the Galois module $\varphi[T]$ is irreducible. Let $\mathfrak{p}=(T-c)$ be a degree $1$ prime ideal of $A$ with $c\in\mathbb{F}_q^*$. By \cite{Chen20} Proposition $7$ and Proposition $8$, we may compute the characteristic polynomial $P_{\varphi,(T-c)}(x)=-c^{-1}(T-c)+ax+x^2\in A[x]$, where $a$ belongs to $\mathbb{F}_q$. Because $P_{\varphi,\mathfrak{p}}(x)$ is also the characteristic polynomial of the Frobenius endomorphism of $\varphi\otimes\mathbb{F}_{\mathfrak{p}}$ acting on $T_{\mathfrak{l}}(\varphi\otimes\mathbb{F}_{\mathfrak{p}})$, we have
$$-c^{-1}(\varphi\otimes\mathbb{F}_{\mathfrak{p}})_{T-c}+(\varphi\otimes\mathbb{F}_\mathfrak{p})_a\tau+\tau^2=0.$$
As $\varphi_T=T+T\tau+T^q\tau^2$, we have $(\varphi\otimes \mathbb{F}_\mathfrak{p})_{T-c}=c\tau+c\tau^2$. Hence the above equation implies $a=1$. Therefore, the characteristic polynomial of $\bar{\rho}_{\varphi,T}({\rm{Frob}}_\mathfrak{p})$ is equal to $x^2+x+1$. By our choice of $q$, we can deduce that $\varphi[T]$ is irreducible.
By \cite{Zy11} Lemma A.1, we have $\bar{\rho}_{\varphi,T}(G_F)\supseteq {\rm{SL}}_2(\mathbb{F}_q)$. Thus the proof is now complete.
\end{proof}
Now we consider the representation $\rho$ of $G_F$ defined by the following composition:
$$\rho:G_F\xrightarrow{{\bar{\rho}}_{\varphi,T}}{\rm{SL}}_2(\mathbb{F}_q)\xrightarrow{{\rm{projection}}} {\rm{PSL}}_2(\mathbb{F}_q)\xrightarrow[{\rm{exceptional\ isomorphism}}]{\sim}\Omega_3(\mathbb{F}_q)\subset {\rm{SO}}_3(\mathbb{F}_q)\subset {\rm{GL}}_3(\mathbb{F}_q).$$
Here $\Omega_3(\mathbb{F}_q)$ is the subgroup of ${\rm{SO}}_3(\mathbb{F}_q)$ of index $2$ generated by $\left(\begin{array}{ccc}1 & 2 & -1 \\-1 & -1 & 0 \\-1 & 0 & 0\end{array}\right)\ {\rm{and}}\ \left(\begin{array}{ccc}\xi^{-2} & 0 & 0 \\0 & 1 & 0 \\0 & 0 & \xi^2\end{array}\right)$, where
$\xi$ is a generator of $\mathbb{F}_q^*$; see \cite{RyTa98}, section 4.6.
One can also refer to page 53 in \cite{Gr02} for the formal definition of the group $\Omega_3$ defined over a field.
By the Boston-Ose theorem (\cite{Bose00} Theorem 6.1), $\rho$ arises from the mod $(T)$ Galois representation associated to some Drinfeld $A$-module $\phi$ over $F$ of rank $3$. In other words, the mod $(T)$ representation ${\bar{\rho}}_{\phi,T}$ associated to $\phi$ has image equal to $\Omega_3(\mathbb{F}_q)$.
\begin{lem}
For almost all prime ideals $\mathfrak{p}$ of $A$, the $T$-torsion $(\phi\otimes\mathbb{F}_\mathfrak{p})[T]$ contains a nontrivial $\mathbb{F}_\mathfrak{p}$-rational point.
\end{lem}
\begin{proof}
Consider the prime ideals $\mathfrak{p}\neq(T)$ at which $\phi$ has good reduction. By Proposition \ref{red}, it suffices to prove that the characteristic polynomial $P_{\phi,\mathfrak{p}}(x)$ of ${\rm{Frob}}_\mathfrak{p}$ acting on $T_{(T)}(\phi)$ satisfies
$$P_{\phi,\mathfrak{p}}(1)\equiv 0 \mod T.$$
In other words, we want to prove that the characteristic polynomial ${\bar{P}}_{\phi,\mathfrak{p}}(x)$ of ${\rm{Frob}}_\mathfrak{p}$ acting on $\phi[T]$ has the linear factor $(x-1)$. Since the image of the mod $(T)$ representation $\bar{\rho}_{\phi,T}$ is a subgroup of ${\rm{SO}}_3(\mathbb{F}_q)$, the proof is complete provided every element of ${\rm{SO}}_3(\mathbb{F}_q)$ fixes a nonzero vector in $\mathbb{F}_q^3$.
This has been proved in \cite{Gr02}, Corollary 6.10.
\end{proof}
\begin{prop}
$\Omega_3(\mathbb{F}_q)$ acts irreducibly on $\mathbb{F}_q^3$.
\end{prop}
\begin{proof}
From \cite{RyTa98}, section 4.6, there is a basis of $\mathbb{F}_q^3$ such that the generators of $\Omega_3(\mathbb{F}_q)$ are matrices
$$nx=\left(\begin{array}{ccc}1 & 2 & -1 \\-1 & -1 & 0 \\-1 & 0 & 0\end{array}\right)\ {\rm{and}}\ h=\left(\begin{array}{ccc}\xi^{-2} & 0 & 0 \\0 & 1 & 0 \\0 & 0 & \xi^2\end{array}\right)$$
with respect to that basis.
Suppose that there is a proper nontrivial subspace $V$ of $\mathbb{F}_q^3$ which is invariant under the action of $\Omega_3(\mathbb{F}_q)$. Computing the eigenvectors of $nx$ and $h$ shows that $V$ cannot be of dimension $1$. Thus $V$ must be a $2$-dimensional space. Write
$$V={\rm{span}}\left\{\left(\begin{array}{c}a \\b \\c\end{array}\right),\ \left(\begin{array}{c}x \\y \\z\end{array}\right)\right\}.$$
We compute
$$
\begin{array}{ccll}
&nx\cdot\left(\begin{array}{c}a \\b \\c\end{array}\right)&=&\left(\begin{array}{c}2a+2b-c \\-a \\0\end{array}\right)\in V\\
\ &\ &\ &\ \\
\Rightarrow&nx\cdot\left(\begin{array}{c}2a+2b-c \\-a \\0\end{array}\right)&=&\left(\begin{array}{c}-(-2b+c) \\-2b+c \\-2b+c\end{array}\right)+\left(\begin{array}{c}0 \\-a \\-2a\end{array}\right)
\end{array}
$$
\begin{claim}
$a\neq 0$.
\end{claim}
\begin{proof}[Proof of claim] There are two cases to consider:
\begin{itemize}
\item[Case (i).] If $a=0$ and $2b-c\neq 0$, then we have
$$\left(\begin{array}{c}-1 \\1 \\1\end{array}\right)\in V\ {\rm{and}}\ \left(\begin{array}{c}1 \\0 \\0\end{array}\right)\in V.$$
Hence we have a basis of $V$.
However, $h\cdot\left(\begin{array}{c}-1 \\1 \\1\end{array}\right)=\left(\begin{array}{c}-\xi^{-2} \\1 \\\xi^2\end{array}\right)$ does not belong to $V$, which gives a contradiction since $V$ is invariant under the $\Omega_3(\mathbb{F}_q)$-action.
\item[Case (ii).] If $a=2b-c=0$, then the base vector $\left(\begin{array}{c}a \\b \\c\end{array}\right)$ gives
$\left(\begin{array}{c}0 \\1 \\2\end{array}\right)\in V.$
Moreover, we have
$$nx\cdot \left(\begin{array}{c}0 \\1 \\2\end{array}\right)=\left(\begin{array}{c}0 \\-1 \\0\end{array}\right)\in V.$$
Hence we have found a basis for $V$.
However, $nx\cdot h\cdot\left(\begin{array}{c}0 \\1 \\2\end{array}\right)=\left(\begin{array}{c}2-2\xi^2 \\-1 \\0\end{array}\right)$ does not belong to $V$ because $\xi^2\neq 1$. This gives a contradiction.
\end{itemize}
\end{proof}
Thus, the entry $a$ in $\left(\begin{array}{c}a \\b \\c\end{array}\right)$ must be nonzero. Similarly, the entry $x$ in $\left(\begin{array}{c}x \\y \\z\end{array}\right)$ is nonzero as well. Hence there are some nonzero $\alpha,\ \beta\in\mathbb{F}_q$ such that
$$\alpha\left(\begin{array}{c}a \\b \\c\end{array}\right)+\beta\left(\begin{array}{c}x \\y \\z\end{array}\right)=\left(\begin{array}{c}0 \\s \\t\end{array}\right)\in V.$$
We may write
$$V={\rm{span}}\left\{\left(\begin{array}{c}0 \\s \\t\end{array}\right),\ \left(\begin{array}{c}x \\y \\z\end{array}\right)\right\}.$$
This contradicts the claim proved above.
Hence there is no such nontrivial proper invariant subspace $V$.
\end{proof}
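As a quick computational sanity check (not needed for the proof), one can verify the proposition by brute force for a sample case, say $q=7$ with $\xi=3$ (here $3$ generates $\mathbb{F}_7^*$). A minimal Python sketch, using the two generators printed above:
\begin{verbatim}
import itertools

q, xi = 7, 3                       # sample case; 3 generates F_7^*
inv = pow(xi * xi, q - 2, q)       # xi^{-2} mod q
nx = [[1, 2, -1], [-1, -1, 0], [-1, 0, 0]]
h = [[inv, 0, 0], [0, 1, 0], [0, 0, (xi * xi) % q]]

def mat_vec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) % q
                 for i in range(3))

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def proportional(u, v):
    # Is u a scalar multiple of v over F_q?
    return any(all(u[i] == (c * v[i]) % q for i in range(3))
               for c in range(q))

def common_eigenvector(A, B):
    # A nonzero common eigenvector spans a common invariant line.
    for v in itertools.product(range(q), repeat=3):
        if v != (0, 0, 0) and proportional(mat_vec(A, v), v) \
                and proportional(mat_vec(B, v), v):
            return v
    return None

# Invariant lines <-> common eigenvectors of nx and h; invariant
# planes (kernels of functionals) <-> common eigenvectors of the
# transposes.  Both searches come up empty:
assert common_eigenvector(nx, h) is None
assert common_eigenvector(transpose(nx), transpose(h)) is None
print("Omega_3(F_7) acts irreducibly on F_7^3")
\end{verbatim}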
Unfortunately, the proof of the Boston-Ose theorem only implies the existence of $\phi$ without providing a method
for writing down an equation for $\phi_T$. It would be an interesting problem to write down $\phi_T$ explicitly
such that $\bar{\rho}_{\phi, T}(G_F)\cong \Omega_3(\mathbb{F}_q)$.
\section*{Acknowledgements}
The author would like to thank his advisor Professor Mihran Papikian for helpful comments and suggestions during the preparation of this paper.
\bibliographystyle{alpha}
\subsection{Threat Model}
We consider a LiDAR spoofing adversary that has the ability to spoof return signals of LiDAR demonstrated in \cite{petit2015remote,shin2017illusion,cao2019adversarial, 255240}. We follow closely the threat model in \cite{hau2021shadow} with the \textit{static} adversary's ($\mathcal{A}_{static}$) capabilities and goals:
\vspace{3pt}\noindent$\bullet$ \textit{Number of spoofed points.} We assume $\mathcal{A}_{static}$ enjoys state-of-the-art sensor spoofing capabilities and can inject $\leq200$ points~\cite{255240} into a 3D scene.
\vspace{3pt}\noindent$\bullet$ \textit{Types of spoofed objects.} We consider model-level spoofing attacks able to emulate distant and occluded vehicles, pedestrians and cyclists~\cite{cao2019adversarial, 255240, hau2021shadow}.
\vspace{3pt}\noindent$\bullet$ \textit{Knowledge.} We consider a white-box model-level spoofing adversary who has full knowledge of the internals of both the victim model and the detection mechanism.
\vspace{3pt}\noindent$\bullet$ \textit{Aims.} The adversary can launch \emph{ghost attacks} by spoofing front-near objects (5m--8m in front of the ego-vehicle)~\cite{cao2019adversarial,255240}.
\subsection{3D-TC2 \space Methodology}
\begin{figure*}[htbp]
\centerline{\includegraphics[width=0.8\textwidth]{figures/3D-TC2_methodology.png}}
\caption{Methodology of 3D-TC2. Flowchart of how 3D scenes are processed across the 3 phases and modular components.}
\label{fig:3D-TC2_methodology}
\end{figure*}
In this work, we leverage motion as a physical invariant to verify the presence of genuine 3D objects. We propose \emph{3D Object-detection Temporal Consistency Check} (3D-TC2), a modular methodology which utilizes motion prediction to analyze the temporal consistency of objects across consecutive frames in a driving scene (see Figure \ref{fig:3D-TC2_methodology}). 3D-TC2{} consists of the following 3 phases with a total of 4 modular components:
\vspace{3pt}\noindent\textbf{Prediction.} The prediction phase consists of 2 modular components, the \textit{Object Detector} and the \textit{Object-Motion Predictor}, which work in parallel. The \textit{Object Detector} takes the current frame (a 3D point-cloud) as input and outputs object predictions in the form of bounding-box coordinates in the 3D point-cloud. The \textit{Object-Motion Predictor} uses historical spatio-temporal information from a number of previous 3D scenes to predict the expected location of objects in the current frame. This prediction is based on temporal information learnt from previous frames. As such, we hypothesize that any abrupt introduction of objects into a frame, which is a characteristic of a LiDAR-based front-near object spoofing attack, can be detected as an anomaly. Our modular design allows both components to be interchanged, allowing us to reap the benefits of any advancements in both object detection and motion prediction.
\vspace{3pt}\noindent\textbf{Alignment.} Although the outputs of the 2 components in the \textit{Prediction Phase} both hold information about the current frame, their representations might differ. For example, the \textit{Object-Motion Predictor} produces object information in a 2D discretized space (i.e., it predicts labels for each cell in the space), whereas the object detector provides higher-level object information in 3D space (i.e., bounding-box coordinates for each object in the space). Hence, there is a need to align the prediction representations into a common representation before any useful comparison. The alignment component varies with the type of models used in the previous phase, as it depends on the output of each model.
\vspace{3pt}\noindent\textbf{Attack Detection.} With the prediction and detection outputs in a common representation, we can analyze the results for any discrepancies that would indicate a potential LiDAR spoofing attack. This component is modular as well, allowing the user to interchange the strategy used for detecting discrepancies between the prediction model and the object detection model for a particular frame at a specified time.
\subsection{Implementation Details}\label{sub-sec:implementation}
To demonstrate the application of our proposed methodology, we implemented a prototype based on 3D-TC2 \space for detecting ghost objects in 3D scenes. The components used in our implementation are summarized in Table \ref{tab:implementation_components}.
\begin{table}[!htbp]
\caption{Components for 3D-TC2 \space Implementation}
\label{tab:implementation_components}
\centering
\vspace{-0.3cm}
\resizebox{\linewidth}{!}{%
\begin{tabular}{llll}
\hline
\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}\textbf{Object }\\\textbf{Detection}\\\textbf{~Model}\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{Object-Motion }\\\textbf{Prediction }\\\textbf{Model}\end{tabular}} & \multicolumn{1}{c|}{\textbf{Alignment}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{Attack }\\\textbf{Detection}\end{tabular}} \\
\hline
\multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}PointPillars\cite{lang2019pointpillars}\\SECOND\cite{yan2018second}\end{tabular}} & \multicolumn{1}{l|}{MotionNet\cite{wu2020motionnet}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Bounding Box \\Transformation\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Cell-match Count \\Strategy\end{tabular}} \\
\hline
\end{tabular}
}
\vspace{-30pt}
\end{table}
\vspace{3pt}\noindent\textbf{Object Detection Models.} For AV perception of the current frame, we experimented with popular 3D point-cloud based object detection models such as PointPillars\cite{lang2019pointpillars} and SECOND\cite{yan2018second}. The object detection models take in the 3D point-cloud at the current frame and provide 3D bounding boxes of objects relative to the ego-vehicle. In a benign scenario, the objects detected are all genuine, providing accurate perception of the surrounding objects to the AV. Under a LiDAR spoofing attack, $\mathcal{A}_{static}$ injects points into a scene to spoof objects. As a result, the object detection models would detect the spoofed object, which could cause the AV to make erroneous decisions (e.g., an emergency brake due to a front-near object).
\vspace{3pt}\noindent\textbf{Object-Motion Prediction Model.} For prediction of the objects in the current frame from previous frames, we use the deep-learning model MotionNet\cite{wu2020motionnet}. MotionNet takes a sequence of consecutive scenes (3D point-clouds) as
input, from $Time = T_{t-K}$ to $T_{t-1}$ (where $K$ is the number of historical frames and $t$ is the time of the predicted frame), and outputs a bird's eye view (BEV) map of the predicted frame at $Time = T_{t}$. The BEV map is a 2D top-down representation of the scene, which is further discretized into grid cells. MotionNet predicts, for each cell, the object class label and motion information. MotionNet classifies objects into Vehicle, Bike, Pedestrian, Others and Background, where ``Vehicle'' refers to objects such as cars, buses and trucks, ``Bike'' refers to bicycles and cyclists, ``Others'' refers to unclassified objects not seen in the training dataset, and ``Background'' refers to cells with LiDAR measurements due to objects in the environment such as roads and buildings.
\vspace{3pt}\noindent\textbf{Alignment Operation.} As the outputs of the 3D Object Detection Models and the Object-Motion Prediction Model are represented differently (3D bounding boxes vs. 2D grid cells), there is a need to align the model outputs into a common representation so that they can be compared. In our implementation, we operate on the output of the Object Detection Models, transforming the 3D bounding boxes to match the output representation of MotionNet. The bounding boxes are transformed into the coordinate system of the MotionNet output and reduced to a 2D planar representation. The 2D planar representation is then super-imposed and mapped onto the MotionNet BEV map, where the bounding boxes are aligned with the grid cells. This results in a 2D BEV map carrying information from both the Object Detection output and the MotionNet output.
Figure~\ref{fig:alignment_example} illustrates the alignment operation on a target frame, where the green boxes denote projected bounding boxes from 3D to 2D and purple boxes denotes the ground truth 2D bounding box. After the alignment operation, the projected bounding boxes matches the ground truth boxes.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.45\textwidth]{figures/alignment.png}}
\caption{Example of alignment operation.}
\label{fig:alignment_example}
\end{figure}
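To make the box-to-grid mapping concrete, the following is a minimal sketch (not our exact code; the grid extent and cell size are illustrative assumptions) of how a detected box footprint can be rasterized onto the BEV grid:
\begin{verbatim}
import numpy as np

VOXEL = 0.25                  # assumed BEV cell size (meters)
X_MIN, Y_MIN = -32.0, -32.0   # assumed BEV map extent (meters)

def bev_cells(cx, cy, length, width, yaw):
    """Grid cells covered by a box footprint (conservative hull)."""
    c, s = np.cos(yaw), np.sin(yaw)
    local = np.array([[ length/2,  width/2], [ length/2, -width/2],
                      [-length/2, -width/2], [-length/2,  width/2]])
    # Rotate the footprint corners and translate to the box center.
    corners = local @ np.array([[c, s], [-s, c]]) + np.array([cx, cy])
    lo = np.floor((corners.min(0) - [X_MIN, Y_MIN]) / VOXEL).astype(int)
    hi = np.ceil((corners.max(0) - [X_MIN, Y_MIN]) / VOXEL).astype(int)
    return {(i, j) for i in range(lo[0], hi[0])
                   for j in range(lo[1], hi[1])}
\end{verbatim}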
\vspace{3pt}\noindent\textbf{Attack Detection Module.} This module identifies anomalous objects in a frame by comparing the aligned predicted frame with the output of the object detection model on the actual frame. For the comparison we devise and experiment with a simple \emph{Cell-match Counting Strategy} (CMCS).
\textit{CMCS} is used to determine whether a detected object's bounding box matches the location predicted from the object's motion. This is done by counting the object categories of the grid cells occupied by a detected bounding box; the object category with the most cells in the bounding-box region is taken as the predicted object, and if this category differs from the object detection result, the detection is marked as a potential LiDAR spoofing attack. Under a benign scenario, the majority of the grid-cell categories would correspond to the object category of the bounding box. Under a single-frame LiDAR spoofing attack, a successfully injected object has no ``history'' from the previous frames and hence there is no corresponding motion prediction of that category for the current frame. As such, the majority cell category in the bounding box of the detected spoofed object would be expected to be ``background''. We evaluate the effectiveness of CMCS in detecting single-frame object spoofing attacks in Section \ref{sub-sec:attack_detect_eval}.
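A minimal sketch of CMCS (illustrative helper names; \texttt{bev\_category} is assumed to map a grid cell to MotionNet's predicted label):
\begin{verbatim}
from collections import Counter

BACKGROUND = "background"

def cmcs_is_spoofed(detected_label, box_cells, bev_category):
    """Flag a detection whose box region is dominated by a
    different motion-predicted category."""
    counts = Counter(bev_category.get(cell, BACKGROUND)
                     for cell in box_cells)
    majority_label, _ = counts.most_common(1)[0]
    # A freshly injected ghost has no motion history, so its
    # region is predicted mostly as background.
    return majority_label != detected_label
\end{verbatim}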
\subsection{Evaluation of Prediction Model and Alignment Operation Utility}
The alignment operation is evaluated to answer the following design questions for our implementation:
\vspace{3pt}\noindent\textbf{DQ 1.} How useful is MotionNet as the Object-Motion Predictor (i.e., does object-motion prediction agree with object detection under benign scenarios)?
\vspace{3pt}\noindent\textbf{DQ 2.} How useful is performing bounding box transformation of Object-Detection Model output (i.e. is there a good match between the output of the models under benign scenarios)?
The bounding boxes from the object-detection models are transformed and overlaid onto the 2D BEV map with object-categorical grid cells. The \textbf{\textit{match ratio}} for a given object category is the ratio of the number of cells with that object category to the total number of cells occupied by all the bounding boxes of that object category.
\ignore{(see Eq. \ref{eq:match_ratio}).
\begin{equation}\label{eq:match_ratio}
match\_ratio_{obj=O} = \frac{num\_cells_{obj=O}}{num\_cells\_bbox_{obj=O}}
\end{equation}
}
We further break down the analysis into objects in the \textit{front-near region} and objects in the \textit{front-far region}, where front-near refers to objects in front of the ego-vehicle up to a distance of 8m from the LiDAR unit, and front-far refers to objects further than 8m.
\vspace{3pt}\noindent\textbf{Results.} The results of our analysis are summarized in Table~\ref{tab:match_ratio} for objects in the front-near and front-far regions respectively. We observe that over 80\% of the cells with the ``Vehicle'' category fall inside the ``Vehicle'' bounding boxes, demonstrating both a good prediction match and a good alignment. For ``Pedestrian'' objects, over 50\% of the grid cells were found within their bounding-box regions. The lower match ratio for ``Pedestrian'' objects is due to the size of the bounding boxes, which occupy much larger areas than the combined cells with predicted pedestrian categories, resulting in a large proportion of ``background'' cells. Nevertheless, our \textit{CMCS} strategy only requires a majority match ratio of more than 50\% for the benign decision that the object-motion prediction agrees with the object detection. For ``Bike (Cyclist)'' objects, the limited training data results in poor detection and prediction performance, and consequently in a poor match. As for the ``Other'' object category, we observe a higher match in the front-near region than in the further regions.
\vspace{3pt}\noindent\textbf{Conclusion.} From our evaluation of the match ratio between the MotionNet output and the transformed bounding-box output of the Object Detection Models, we observe a good match, with more than 80\% for Vehicles and more than 50\% for Pedestrians in the front-near and front-far regions. This indicates that the design choices of using MotionNet and the bounding-box transformation are suitable for our implementation.
\begin{table*}[!htbp]
\caption{Match Ratio of Objects in Front-Near and Front-Far Regions of the Ego-Vehicle}
\label{tab:match_ratio}
\vspace{-0.3cm}
\begin{tabular}{|l|l|l|l|l|}
\hline
& \multicolumn{2}{c|}{\textbf{Pointpillars}} & \multicolumn{2}{c|}{\textbf{SECOND}} \\ \hline
\textbf{\begin{tabular}[c]{@{}l@{}}Object\\ Category\end{tabular}} & \textbf{Front-Near} & \textbf{Front-Far} & \textbf{Front-Near} & \textbf{Front-Far} \\ \hline
\textbf{Vehicle(Car)} & 9721/11500(84.53\%) & 9426/11589(81.34\%) & 9672/11192(86.42\%) & 9491/11536(82.27\%) \\ \hline
\textbf{Pedestrian} & 903/1619(55.78\%) & 1018/1938(52.53\%) & 826/1430(57.76\%) & 946/1669(56.68\%) \\ \hline
\textbf{Bike(Cyclist)} & 12/989(1.21\%) & 72/1112(6.47\%) & 10/396(2.53\%) & 78/808(9.65\%) \\ \hline
\textbf{Other} & 3516/5169(68.02\%) & 3673/7644(48.05\%) & 3428/3914(87.58\%) & 3095/6097(50.76\%) \\ \hline
\end{tabular}
\end{table*}
\subsection{Attack Detection Effectiveness}\label{sub-sec:attack_detect_eval}
With the prediction model and alignment module showing good utility, we are able to obtain high-quality results from the prediction and detection models in the same 2D BEV grid-map representation. We now evaluate the Attack Detection Module, which uses the Cell-match Counting Strategy (CMCS) described in Section \ref{sub-sec:implementation}. This helps us answer the design question:
\vspace{3pt}\noindent\textbf{DQ 3.} How effective is the Cell-match Counting Strategy for performing a temporal consistency check to detect model-level LiDAR spoofing attacks?
For the evaluation, we generate an adversarial dataset by injecting a spoofed object into each of the 362 key frames of the original dataset at a location approximately 8m in front of the vehicle. We define the \textbf{\textit{Attack Success Rate (ASR)}} to be the ratio of spoofed objects successfully detected by the Object Detection Model to the total number of injected spoofed objects.
\ignore{(Eq. \ref{eq:asr}).
\begin{equation}\label{eq:asr}
ASR = \frac{num\_detected\_spoofed\_obj}{total\_num\_injected\_spoofed\_obj}
\end{equation}
}
Using CMCS, we are able to identify mismatches between a predicted object's location and the detected object's location; mismatched objects are classified as spoofed. We define the \textbf{\textit{Detection Success Rate (DSR)}} as the ratio of spoofed objects successfully identified by CMCS to the total number of successfully spoofed objects.
\ignore{(Eq. \ref{eq:dsr}).
\begin{equation}\label{eq:dsr}
DSR = \frac{num\_identified\_spoofed\_obj}{total\_num\_successfully\_spoofed\_obj}
\end{equation}
}
We also measure the recall of the attack detection. Recall is a metric that measures how well the detector is able to correctly identify spoofed and genuine objects.
\vspace{3pt}\noindent\textbf{Results.} Results on the effectiveness of the Attack Detection Module are summarized in Table \ref{tab:single_scene_injection_attack}. A total of 362 attack frames were used; for these frames we record the Attack Success Rates (ASR) for the various objects (i.e., spoofed objects detected by the victim Object Detector) and the Detection Success Rates (DSR) of the temporal consistency check on the successfully spoofed objects.
Our implementation of 3D-TC2{} is able to detect spoofed ``Vehicle (Car)'' objects with an accuracy of more than 98\% and a detection recall of over 91\% for both detectors, showing it is capable of reliably recognizing anomalous ``Car'' objects. Detection performance for smaller objects was observed to be poorer, with low detection rates for ``Pedestrian'' and ``Bike (Cyclist)'' objects. The poor performance can be attributed to MotionNet's inherently poorer object classification \cite{wu2020motionnet}, with classification accuracies of $\sim$77\% and $\sim$19\% for ``Pedestrian'' and ``Cyclist'' objects respectively. This highlights an opportunity to explore alternative mechanisms for motion prediction in future work.
\begin{table*}[htbp]
\caption{Metrics for Single Frame Injection Attacks}
\vspace{-0.3cm}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Spoofed Object\\ Category\end{tabular}}} & \multicolumn{3}{c|}{\textbf{PointPillars}} & \multicolumn{3}{c|}{\textbf{SECOND}} \\ \cline{2-7}
& \textbf{ASR} & \textbf{DSR} & \textbf{Recall} & \textbf{ASR} & \textbf{DSR} & \textbf{Recall} \\ \hline
\textbf{Vehicle (Car)} & 353/362(97.51\%) & 348/353(98.58\%) & 91.75\% & 349/362(96.41\%) & 343/349(98.28\%) & 92.23\% \\ \hline
\textbf{Pedestrian} & 325/362(89.78\%) & 185/325(56.92\%) & 76.93\% & 158/362(43.65\%) & 75/158 (47.47\%) & 77.07\% \\ \hline
\textbf{Bike (Cyclist)} & 341/362(94.20\%) & 324/341(95.01\%) & 97.23\% & 343/362(94.75\%) & 157/343(45.77\%) & 93.79\% \\ \hline
\end{tabular}
\label{tab:single_scene_injection_attack}
\end{table*}
\vspace{3pt}\noindent\textbf{Conclusion.} The Cell-match Counting Strategy was able to effectively detect spoofed Vehicle objects in single-frame injection attacks at a Detection Success Rate of over 98\% and a recall of over 91\% for attack detection on PointPillars and SECOND.
In all, our evaluations demonstrate that our implementation of 3D-TC2, leveraging temporal consistency to check for valid objects in a LiDAR scene, is useful and capable of detecting spoofed objects with high success rates. Compared to state-of-the-art defenses against LiDAR spoofing, our approach, with a 98\% detection rate for spoofed vehicle objects, performs better than CARLO and Shadow-Catcher with detection rates of 94.5\% and 94\% respectively.
\subsection{Runtime Analysis}
Runtime constraints are important for real-time systems such as autonomous driving sensing and decision making. As such, it is important for us to be able to detect attacks in real-time in order to provide timely alerts. We perform analysis on the runtime of our implementation of 3D-TC2 \space to answer the following design question:
\vspace{3pt}\noindent\textbf{DQ 4.} How fast is the implementation of 3D-TC2 \space able to provide attack detection information?
We run our implementation of 3D-TC2 \space on the adversarial dataset of 362 frames for 3 object categories and measure the execution time on a machine equipped with an Intel Core i7 Six Core Processor i7-7800X (3.5GHz), 62GB RAM and 2GB NVIDIA GEFORCE GTX 1050 GPU.
\vspace{3pt}\noindent\textbf{Results.} We provide the breakdown of the runtime of the models used and the Alignment and Attack Detection components in Table \ref{tab:runtime_analysis}. Our implementation is able to provide attack detection at approximately 41 frames per second (41Hz). From the runtime breakdown, we see that the bottleneck of the performance is in the Detection/Prediction phase, where the inference/prediction time of the models\footnotemark[2] take up the majority of the total runtime. The additional overhead introduced by our approach is approximately 5ms, which is a good trade-off in providing verification of spoofed objects. The overall runtime of $\sim$41Hz demonstrates that the implementation 3D-TC2 \space is able to provide real-time detection of spoofed objects.
\begin{table}[htbp]
\centering
\caption{Performance / runtime of 3D-TC2 \space Components to Process A Single Frame}
\label{tab:runtime_analysis}
\vspace{-0.3cm}
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\textbf{Mean }} & \multicolumn{1}{c|}{\textbf{std. }} \\
\hline
\textbf{PointPillars}\footnotemark[2] & 0.016s & - \\
\hline
\textbf{SECOND}\footnotemark[2] & 0.050s & - \\
\hline
\textbf{MotionNet}\footnotemark[2] & 0.019s & - \\
\hline
\textbf{Alignment} & 0.000019s (19 µs) & 0.00019s (0.19ms) \\
\hline
\textbf{Attack Detection} & 0.005s (5 ms) & 0.00066s (0.66ms) \\
\hline
\textbf{Total runtime} & 0.024s (24ms) & 0.85ms\\
\hline
\end{tabular}
\vspace{-5pt}
\end{table}
\footnotetext{Runtime of models are reported values from their respective papers.}
\vspace{3pt}\noindent\textbf{Conclusion.} We measured the performance of the individual components in our implementation of 3D-TC2 \space and provide the breakdown of the results. We show that the bottleneck of the detection system is attributed to the Object Detection / Prediction models, whose performance can be enhanced with improvements to state-of-the-art models. Our detection system is able to provide real-time detection at approximately 41 frames per second (41Hz).
\subsection{Discussion}
\vspace{3pt}\noindent\textbf{Object hiding attacks. }3D-TC2{} was designed to detect spoofed objects that are elicited with LiDAR spoofing attacks. Recently, there have been other classes of attacks, such as Object Removal Attacks \cite{hau2021object} and MSF-ADV \cite{sp:2021:ningfei:msf-adv}, that aim to hide objects from detection. We expect 3D-TC2{} to be able to detect temporal anomalies of hiding attacks if there is an abrupt disruption to the victim object. However, if the hidden object is temporally consistent (i.e., an adversarial object is placed on the road as the ego-vehicle approaches it), the approach will fail to detect such objects. Detecting object hiding attacks is an interesting direction we hope to explore in future work.
\vspace{3pt}\noindent\textbf{Post-detection actions.} 3D-TC2{} provides detection of potentially spoofed objects in a scene. As the detection is not perfect, there could be instances, although rare, where it fails to detect spoofed vehicles or erroneously flags genuine ones; in these instances the AV should take a safety-first approach and prevent collision. 3D-TC2{} can also be used in an offline fashion, as a forensic tool for post-incident analysis of 3D point-clouds for spoofed objects.
\section{Introduction}
\input{1_Introduction}
\section{Background and Related Work}\label{sec:related}
\input{2_Background_and_Related_Work}
\section{3D Temporal Consistency Check}\label{sec:threat_method}
\input{3_Temporal_Consistency_Check}
\section{Experiments \& Results}\label{sec:expt_res}
\input{4_Experiments_Results}
\section{Conclusion \& Future Work}\label{sec:conc_future_work}
\input{5_Conclusion}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Multiplying two Lorentz boosts whose velocity vectors are collinear gives a third Lorentz boost whose velocity can be calculated from the first two using the Einstein velocity addition law. If the two original velocities are not collinear, however, we do not get a pure Lorentz boost as the product, but rather a Lorentz boost multiplied by a certain $4 \times 4$ matrix whose columns and rows are orthonormal. This orthogonal matrix has the effect of rotating the spatial components of vectors in spacetime, while leaving their temporal component unaffected.
An interesting review paper written by the eminent British mathematician I. J. Good \cite{Good} discusses this relativistic rotation paradox, but Good clearly struggles to provide a straightforward algebraic proof in the 3+1 case. Mathematically, it is necessary to show in a four-dimensional Minkowski spacetime that a certain matrix product involving two Lorentz boosts with linearly independent velocity vectors is generally equivalent to an orthogonal Lorentz matrix. In section 8 of \cite{Good}, which seeks to prove the paradox `beyond any doubt', Good admits to being unable to do this algebraically and instead resorts to providing numerical confirmations. Such computational confirmations are easy to carry out, so he suggests that `only a short and elegant algebraic proof would be worthwhile'. One of the motivations for the present note is that a proof like this still seems to be lacking in the literature.
What we have at present in the way of mathematical demonstrations are either elaborate approximations involving power series and extraneous assumptions such as infinitesimally small relative velocities (see, e.g., section 7.3 in \cite{Goldstein}), or otherwise lengthy and overly-sophisticated expositions usually not easily accessible to, say, undergraduate physics students. Some expositions are intended to be more accessible but still seem rather involved and/or do not make use of Lorentz-matrix-algebra, e.g., \cite{Ferraro} and \cite{ODonnell}.
Due to the lack of a short and transparent algebraic treatment, this interesting relativistic phenomenon is simply left unmentioned and unexplored in almost all undergraduate texts, which seems a pity. The following argument could be used shortly after introducing Lorentz transformation matrices and their properties to budding relativists.
\section{Proof of the rotation paradox}
Let $G = \text{diag}(1, -1, -1, -1)$ be the metric tensor in a four-dimensional Minkowski manifold with events specified by a time coordinate $x^0 = ct$ and rectangular spatial coordinates $x^1, x^2, x^3$. A $4 \times 4$ Lorentz matrix $\Lambda$ preserves the quadratic form $x^T G x$ in the sense that if $y = \Lambda x$ then $y^T G y = x^T G x$, so
\begin{equation}
\Lambda^T G \Lambda = G
\end{equation}
The set of all Lorentz matrices thus defined constitutes a group under matrix multiplication, so inverses and products of Lorentz matrices are also Lorentz.
Let $O$, $\overline{O}$ and $\overline{\overline{O}}$ be three inertial frames with collinear axes and with their origins initially coinciding. Let $\beta = (\beta_i) = \big(\frac{v_i}{c}\big)$, $i = 1, 2, 3$, be the $3 \times 1$ velocity vector of $\overline{O}$ relative to $O$ with corresponding Lorentz factor $\gamma = \frac{1}{\sqrt{1 - \beta^2}}$, where $\beta^2 \equiv \beta^T \beta$. Similarly, let a vector $\overline{\beta} = (\overline{\beta}_i)$, which is not collinear with $\beta$, be the velocity vector of $\overline{\overline{O}}$ relative to $\overline{O}$ with corresponding Lorentz factor $\overline{\gamma}$. Using a standard formula, e.g., formula (24) in \cite{Good} or formula (2.59) in \cite{Moller}, the velocity vector of $\overline{\overline{O}}$ relative to $O$ is given by
\begin{equation}
\overline{\overline{\beta}} = \frac{\overline{\beta} + \beta[\gamma + (\gamma - 1)(\overline{\beta}^T\beta)/\beta^2]}{(1 + \overline{\beta}^T\beta)\gamma}
\end{equation}
with corresponding Lorentz factor $\overline{\overline{\gamma}}$. Using a simplification similar to one described in section 7.3 of \cite{Goldstein}, we can let the plane defined by the vectors $\beta$ and $\overline{\beta}$ be the $\overline{x}^1\overline{x}^2$-plane of $\overline{O}$ so that $\overline{\beta}_3 =0$, and we can arrange the frames $O$ and $\overline{O}$ so that the vector $\beta$ is along the $x^1$ axis of $O$, implying $\beta_2 = \beta_3 = 0$. We can do this for any given pair of velocity vectors which are not collinear, so there is no loss of generality here. Then (2) gives
\begin{equation}
\overline{\overline{\beta}}_1 = \frac{\overline{\beta}_1 + \beta_1}{1 + \overline{\beta}_1 \beta_1}
\end{equation}
\begin{equation}
\overline{\overline{\beta}}_2 = \frac{\overline{\beta}_2}{(1 + \overline{\beta}_1 \beta_1) \gamma}
\end{equation}
\begin{equation}
\overline{\overline{\beta}}_3 = 0
\end{equation}
and a standard formula, e.g., formula (7.11) in \cite{Goldstein}, allows us to write Lorentz transformations $L$ and $\overline{L}$ from $O$ to $\overline{O}$ and from $\overline{O}$ to $\overline{\overline{O}}$ respectively as
\begin{equation}
L =
\begin{pmatrix}
\gamma & -\gamma \beta_1 & 0 \ & 0 \ \\
-\gamma \beta_1 & \gamma & 0 \ & 0 \ \\
0 & 0 & 1 \ & 0 \\
0 & 0 & 0 \ & 1
\end{pmatrix}
\end{equation}
and
\begin{equation}
\overline{L} =
\begin{pmatrix}
\overline{\gamma} \ \ & -\overline{\gamma} \overline{\beta}_1 \ \ & -\overline{\gamma} \overline{\beta}_2 \ \ & 0 \ \\
-\overline{\gamma} \overline{\beta}_1 \ \ & 1+(\overline{\gamma}-1) \frac{\overline{\beta}_1^2}{\overline{\beta}^2} \ \ & (\overline{\gamma}-1) \frac{\overline{\beta}_1\overline{\beta}_2}{\overline{\beta}^2} \ \ & 0 \ \\
-\overline{\gamma} \overline{\beta}_2 \ \ & (\overline{\gamma}-1) \frac{\overline{\beta}_1\overline{\beta}_2}{\overline{\beta}^2} \ \ & 1+(\overline{\gamma}-1) \frac{\overline{\beta}_2^2}{\overline{\beta}^2} \ \ & 0 \\
0 \ \ & 0 \ \ & 0 \ \ & 1
\end{pmatrix}
\end{equation}
A Lorentz boost $\overline{\overline{L}}$ from $O$ to $\overline{\overline{O}}$ with velocity vector $\overline{\overline{\beta}}$ would be a matrix like (7), but with $\overline{\overline{\gamma}}$, $\overline{\overline{\beta}}_1$, $\overline{\overline{\beta}}_2$ and $\overline{\overline{\beta}}$ replacing $\overline{\gamma}$, $\overline{\beta}_1$, $\overline{\beta}_2$ and $\overline{\beta}$ respectively.
The relativistic rotation paradox is that, in general, $\overline{\overline{L}} \neq \overline{L} \times L$, but rather
\begin{equation}
\overline{\overline{L}} = R \times \overline{L} \times L
\end{equation}
or equivalently
\begin{equation}
R = \overline{\overline{L}} \times L^{-1} \times \overline{L}^{-1}
\end{equation}
where (9) is equation (28) in \cite{Good}. Numerical evidence in \cite{Good} suggests that
\begin{equation}
R =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & r_1 & s_1 & t_1 \\
0 & r_2 & s_2 & t_2 \\
0 & r_3 & s_3 & t_3
\end{pmatrix}
\end{equation}
where the $3 \times 3$ submatrix in (10) is orthogonal. An approximation to (10) is also provided in equation (7.21) of \cite{Goldstein} under the assumptions that the components of $\overline{\beta}$ are small and only need to be retained to first order, that $\overline{\gamma} \approx 1$, and that the distinction among $\gamma$, $\overline{\gamma}$ and $\overline{\overline{\gamma}}$ can be ignored to first order.
However, it is straightforward to obtain an exact algebraic proof that $R$ in (9) is indeed an orthogonal matrix of the type given in (10) by observing that $R$ must be Lorentz, since it is a product of Lorentz matrices. Therefore all that is required to prove the rotation paradox is to show that the 00-element of $\overline{\overline{L}} \times L^{-1} \times \overline{L}^{-1}$ is equal to $1$, and that all the remaining elements in the first row are equal to zero, since any Lorentz matrix with a first row of this form must necessarily be an orthogonal matrix of the type given in (10). This assertion can easily be verified by substituting a generic $4 \times 4$ matrix with first row of the form $(1 \ 0 \ 0 \ 0)$ into the left-hand side of (1), setting the result equal to $G$ on the right-hand side, and then comparing corresponding elements.
Note that $L^{-1}$ and $\overline{L}^{-1}$ are immediately obtained from (6) and (7) simply by removing the negative signs in the first row and first column. To prove that the 00-element of $\overline{\overline{L}} \times L^{-1} \times \overline{L}^{-1}$ equals 1, multiply the first row of $\overline{\overline{L}}$ by each of the columns of $L^{-1}$ to get the $1 \times 4$ row vector
\begin{equation}
\begin{pmatrix}
\overline{\overline{\gamma}} \gamma (1 - \overline{\overline{\beta}}_1 \beta_1) \ \ & \overline{\overline{\gamma}} \gamma (\beta_1 - \overline{\overline{\beta}}_1) \ \ & -\overline{\overline{\gamma}} \ \overline{\overline{\beta}}_2 \ \ & 0
\end{pmatrix}
\end{equation}
and then multiply this row vector by the first column of the matrix $\overline{L}^{-1}$ to get
\begin{equation*}
\gamma \ \overline{\gamma} \ \overline{\overline{\gamma}}(1 - \overline{\overline{\beta}}_1 \beta_1 + \overline{\beta}_1 \beta_1 - \overline{\beta}_1 \overline{\overline{\beta}}_1) - \overline{\gamma} \ \overline{\overline{\gamma}} \ \overline{\beta}_2 \overline{\overline{\beta}}_2
\end{equation*}
\begin{equation*}
= \frac{(1 + \beta_1 \overline{\beta}_1)^2 - (\beta_1 + \overline{\beta}_1)^2 - \overline{\beta}_2^2(1 - \beta_1^2)}{\sqrt{1 - \beta_1^2}\sqrt{1 - \overline{\beta}_1^2 - \overline{\beta}_2^2}\sqrt{(1 + \beta_1 \overline{\beta}_1)^2 - (\beta_1 + \overline{\beta}_1)^2 - \overline{\beta}_2^2(1 - \beta_1^2)}} = 1
\end{equation*}
as required. To prove that the 01-element of $\overline{\overline{L}} \times L^{-1} \times \overline{L}^{-1}$ equals 0, multiply the row vector in (11) by the second column of $\overline{L}^{-1}$ to get
\begin{equation*}
\gamma \ \overline{\gamma} \ \overline{\overline{\gamma}}(\overline{\beta}_1 - \beta_1 \overline{\beta}_1 \overline{\overline{\beta}}_1) +
\bigg[1+(\overline{\gamma}-1) \frac{\overline{\beta}_1^2}{\overline{\beta}^2}\bigg]
\gamma \ \overline{\overline{\gamma}} \ (\beta_1 - \overline{\overline{\beta}}_1)
- \overline{\overline{\gamma}} \ \overline{\overline{\beta}}_2 \bigg[(\overline{\gamma}-1) \frac{\overline{\beta}_1\overline{\beta}_2}{\overline{\beta}^2}\bigg]
\end{equation*}
\begin{equation*}
= \frac{\overline{\beta}_1(1 - \beta_1^2)}{(1 - \beta_1^2)(1 - \overline{\beta}_1^2 - \overline{\beta}_2^2)} -
\frac{\overline{\beta}_1(\overline{\beta}_1^2 + \overline{\beta}_2^2)}{(\overline{\beta}_1^2 + \overline{\beta}_2^2)(1 - \overline{\beta}_1^2 - \overline{\beta}_2^2)} = 0
\end{equation*}
as required. To prove that the 02-element of $\overline{\overline{L}} \times L^{-1} \times \overline{L}^{-1}$ equals 0, multiply the row vector in (11) by the third column of $\overline{L}^{-1}$ to get
\begin{equation*}
\gamma \ \overline{\gamma} \ \overline{\overline{\gamma}}(\overline{\beta}_2 - \beta_1 \overline{\beta}_2 \overline{\overline{\beta}}_1) + \bigg[(\overline{\gamma}-1) \frac{\overline{\beta}_1\overline{\beta}_2}{\overline{\beta}^2}\bigg]\gamma \ \overline{\overline{\gamma}} \ (\beta_1 - \overline{\overline{\beta}}_1) -
\overline{\overline{\gamma}} \ \overline{\overline{\beta}}_2\bigg[1+(\overline{\gamma}-1) \frac{\overline{\beta}_2^2}{\overline{\beta}^2}\bigg]
\end{equation*}
\begin{equation*}
= \frac{\overline{\beta}_2(1 - \beta_1^2)}{(1 - \beta_1^2)(1 - \overline{\beta}_1^2 - \overline{\beta}_2^2)} -
\frac{\overline{\beta}_2(\overline{\beta}_1^2 + \overline{\beta}_2^2)}{(\overline{\beta}_1^2 + \overline{\beta}_2^2)(1 - \overline{\beta}_1^2 - \overline{\beta}_2^2)} = 0
\end{equation*}
as required. Finally, to prove that the 03-element of $\overline{\overline{L}} \times L^{-1} \times \overline{L}^{-1}$ equals 0, multiply the row vector in (11) by the fourth column of $\overline{L}^{-1}$. This equals 0 by inspection, so the relativistic rotation paradox is proved.
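Good's numerical confirmations are also easy to reproduce directly. The following NumPy sketch, with arbitrarily chosen velocities, computes $R$ from (9) and checks both that its first row is $(1 \ 0 \ 0 \ 0)$ and that its spatial $3 \times 3$ block is orthogonal; it is offered as an illustration, not as a substitute for the proof above.
\begin{verbatim}
import numpy as np

def boost(bx, by):
    # 4x4 Lorentz boost for velocity (bx, by, 0) in units of c,
    # following the pattern of the matrix in (7).
    b2 = bx**2 + by**2
    g = 1.0 / np.sqrt(1.0 - b2)
    B = np.eye(4)
    B[0, 0] = g
    B[0, 1] = B[1, 0] = -g * bx
    B[0, 2] = B[2, 0] = -g * by
    B[1, 1] = 1 + (g - 1) * bx**2 / b2
    B[2, 2] = 1 + (g - 1) * by**2 / b2
    B[1, 2] = B[2, 1] = (g - 1) * bx * by / b2
    return B

b1, bb1, bb2 = 0.6, 0.3, 0.5        # beta along x1; beta-bar in the plane
L, Lb = boost(b1, 0.0), boost(bb1, bb2)
d, g1 = 1 + bb1 * b1, 1.0 / np.sqrt(1 - b1**2)
Lbb = boost((bb1 + b1) / d, bb2 / (d * g1))   # composed velocity, (3)-(4)
R = Lbb @ np.linalg.inv(L) @ np.linalg.inv(Lb)
print(np.round(R[0], 12))                     # first row -> [1, 0, 0, 0]
print(np.allclose(R[1:, 1:] @ R[1:, 1:].T, np.eye(3)))  # orthogonal: True
\end{verbatim}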
\section{Introduction}
\ifdefined\isabstract\else Recently, \ac{PV} power production has grown significantly as a result of the countermeasures to fight global warming. For example, the worldwide production has grown from \SI{190}{\tera\watt\hour} in 2014 to \SI{720}{\tera\watt\hour} in 2019~\cite{iea2019}, i.e., the 2019 production was \SI{379}{\percent} of the 2014 level.\fi To ensure constant performance of the power plants, regular inspection is required, since modules might be damaged during manufacturing, transport or installation. This raises the need for fast, accurate and non-invasive inspection methods.
In the last years, \ac{EL} has been widely adopted by the community as a useful tool to conduct inspection of solar modules~\ifdefined\isabstract\cite{kontges2011crack,mayr2019weakly}\else\cite{kontges2011crack,paggi2016global,mayr2019weakly,stromer2019enhanced,deitsch2019automatic}\fi. It allows to identify many types of defects. In particular, disconnected parts of the solar module that to not contribute to the power production (inactive areas), clearly stand out~\cite{buerhop2018evolution}. Previous works have shown that\ifdefined\isabstract\else~the number of cracks is loosely correlated to the power loss~\cite{dubey2018site} and that\fi~the power loss of a module is proportional to the fraction of inactive area, as long as it remains small~\cite{schneller2018electroluminescence}. Recently, \etal{Hoffmann}~\cite{hoffmann2020deep} used deep learning to determine the module power from \ac{EL} measurements. They introduce a visualization technique that allows to quantify the power loss of individual defects or cells as predicted by the model.
\input{fig_compare_el_pl.tex}
However, \ac{EL} imaging comes at a price, since it requires to disconnect and power the string or module. Only recently, \ac{PL} imaging has become popular on an industrial scale. As opposed to \ac{EL}, the modules are excited by a light source and no external powering of modules is required. On the downside, inactive areas do not always show as black areas any more~(\cref{fig:example}). Instead, they appear with various different intensity levels. This has been previously reported by \etal{Doll}~\cite{doll2020contactless}.
In this work, we show that the deep learning-based approach~\cite{hoffmann2020deep} can be used to determine the power from \ac{PL} images of a module, too. To this end, we compile a dataset of \num{54} module \ac{PL} images along with measurements of the peak power $\ensuremath{P_{\text{mpp}}}\xspace$ and retrain the method using the new data. Furthermore, we investigate whether fine-tuning the models that have been released~\cite{hoffmann2020deep} can improve the performance even further. Finally, we show that the visualization technique using \acp{CAM} can be used to identify the inactive areas on \ac{PL} images.
\section{Methodology}\label{sec:methodology}
\input{fig_scatter.tex}
We aim to estimate the power at the maximum power point \ensuremath{P_{\text{mpp}}}\xspace under STC conditions. Here, we use the same approach proposed by \etal{Hoffmann}~\cite{hoffmann2020deep} and estimate \ensuremath{P_{\text{mpp}}}\xspace relative to the nominal power \ensuremath{P_{\text{nom}}}\xspace. This is a sensible approach, since it assures that the estimates \ensuremath{\hat{y}}\xspace are on a similar scale, independent of the nominal power of a module. Then, we obtain the absolute power as
\begin{equation}
\ensuremath{\hat{P}_{\text{mpp}}}\xspace = \ensuremath{\hat{y}}\xspace\cdot\ensuremath{P_{\text{nom}}}\xspace\,.
\end{equation}
In this work, \ensuremath{\hat{y}}\xspace is computed by linear regression from the embedding of a ResNet18\xspace~\cite{he2016identity}:
\begin{equation}
\ensuremath{\hat{y}}\xspace = \ensuremath{\weights_{\text{fc}}}\xspace\ensuremath{\myvector{f}_{\text{emb}}}\xspace\,,
\end{equation}
where $\ensuremath{\myvector{f}_{\text{emb}}}\xspace\in\mathbb{R}^{512}$ is given by $\ensuremath{\myvector{f}_{\text{emb}}}\xspace = f(\ensuremath{\myvector{x}}\xspace,\ensuremath{\mymatrix{W}}\xspace)$, $f$ represents the ResNet18\xspace, \ensuremath{\mymatrix{W}}\xspace the parameters of $f$ and \ensuremath{\weights_{\text{fc}}}\xspace is the linear regression.
We jointly optimize the parameters of the linear regression \ensuremath{\weights_{\text{fc}}}\xspace and the parameters of the ResNet18\xspace to minimize the mean squared error for all \ensuremath{N}\xspace samples in the training dataset, which is given by
\begin{equation}
L = \frac{1}{\ensuremath{N}\xspace}\sum_{i=1}^\ensuremath{N}\xspace(\ensuremath{\hat{y}}\xspace_i-\ensuremath{y}\xspace_i)^2\,.
\end{equation}
As common with deep learning approaches, this is done by batch gradient descent. Except for the weight decay \ensuremath{\lambda}\xspace, we use the same hyperparameters that have been reported in the prior work. We found that the network seriously overfits to the training data with the reported setting for \ensuremath{\lambda}\xspace. This is explained by the dataset size, which is much smaller compared to the PVPower dataset~\cite{juelich2020pvpowerdata} used with the reference approach. We heuristically set $\ensuremath{\lambda}\xspace = 0.1$, which consistently gives good results.
\subsection*{Regression maps}\label{subsec:regression-maps}
\etal{Hoffmann}~\cite{hoffmann2020deep} propose to use a modified variant of \acp{CAM} to compute regression maps that give rise to a localized quantification of power losses. In the conventional ResNet18\xspace, \ensuremath{\myvector{f}_{\text{emb}}}\xspace is computed by averaging each of the \num{512} feature maps over its spatial dimensions. This is commonly referred to as global average pooling. However, by averaging over the spatial dimensions of the feature maps, the spatial information, which is needed for the localized quantification of power losses, is lost. To this end, they propose to apply a $1\times1$ convolution to the feature maps, reducing the \num{512} maps to a single one, while preserving the spatial information.\ifdefined\isabstract~The resulting map is then used to compute \ensuremath{\hat{y}}\xspace during training and gives the localized power loss at test time.\else~Then, they compute the absolute value and multiply the result by $-1$. This ensures that the resulting regression map \ensuremath{\myvector{{f}_{\text{map}}}}\xspace is strictly negative. The relative power \ensuremath{\hat{y}}\xspace is then computed as
\begin{equation}
\ensuremath{\hat{y}}\xspace = 1 + \sum_{i,j \in \Omega_{\myvector{f}}} \underbrace{-\text{ReLU}(\myvector{f}_{i,j})}_{\ensuremath{\myvector{{f}_{\text{map}}}}\xspace}\,.
\end{equation}
This way, the network can be trained using only the relative power of the module as supervision signal, while the localized power loss is obtained in a weakly supervised manner. The power loss per cell is computed by integrating over the corresponding area of \ensuremath{\myvector{{f}_{\text{map}}}}\xspace. This approach has shown promising results on \ac{EL} images already. In the following, we show that it performs well on \ac{PL} images as well.\fi
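For concreteness, a minimal PyTorch sketch of such a regression head is given below. It follows the $-\text{ReLU}$ form of the equation above; all names are illustrative assumptions and this is not the released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class PowerRegressionHead(nn.Module):
    # A 1x1 convolution collapses the 512 ResNet18 feature maps to one
    # map; -ReLU makes it non-positive; y_hat = 1 + sum(map).
    def __init__(self, in_channels=512):
        super().__init__()
        self.collapse = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats):                       # feats: (B, 512, H, W)
        f_map = -torch.relu(self.collapse(feats))   # strictly non-positive
        y_hat = 1.0 + f_map.sum(dim=(1, 2, 3))      # relative power
        return y_hat, f_map                         # f_map localizes losses

head = PowerRegressionHead()
feats = torch.randn(2, 512, 7, 7)        # dummy backbone output
y_hat, f_map = head(feats)
print(y_hat.shape, f_map.shape)          # torch.Size([2]), (2, 1, 7, 7)
\end{verbatim}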
\section{Experiments}\label{sec:experiments}
\input{fig_cam.tex}
In our experiments, we focus on two aspects. First, we show that the power of a module can be estimated from a \ac{PL} image of a module despite the fact that inactive areas are not clearly visible from those images in any case. Second, we show that the regression maps can be used to locate disconnected areas on the module.
\subsection{Data}
For our experiments, we use a small dataset of only \num{54} \ac{PL} images. These images have been recorded under lab conditions with a front-illuminated Si camera with $2048^2$ pixels. Photo excitation has been conducted using our LED PL setup, which has been previously reported~\cite{doll2020contactless}.
The dataset covers \num{6} different types of modules, which we denote as \typea-\typef, with nominal powers ranging from \SIrange[]{230}{345}{\wattpeak} and maximum powers ranging from \SIrange[]{145}{327}{\wattpeak}. The modules of type \typea-\typeb and \typed-\typef feature \num{60} cells arranged in \num{10} columns, while \typec has \num{72} cells in \num{12} columns.
\ifdefined\isabstract
\else
Prior to processing by the network, images are preprocessed. Here, images are cropped and scaled to a common resolution. Furthermore, they are normalized such that the mean intensity $\mu$ over all images computes as $\mu=0$ and the standard deviation $\sigma$ is $\sigma=1$. During training, we apply online data augmentation similar to the reference method. This includes random horizontal and vertical flips as well as slight rotations of the images.
\fi
\subsection{Results}
\input{tab_results.tex}
Since the dataset is small, it is challenging to draw meaningful conclusions from the results. To overcome this issue, we conduct a three-fold \ac{CV} and join the results of all folds. Here, we perform a stratified split, such that the distribution of \ensuremath{y}\xspace is similar for all three folds.
We train two different variants of the model. First, we stick to the procedure from the reference method and initialize the network with weights computed by pretraining on ImageNet. We denote this variant \imagenet. Second, we use the weights that have been published with the reference implementation for initialization. Since this has first been trained on ImageNet and then finetuned on the PVPower dataset~\cite{juelich2020pvpowerdata}, we denote this variant \pvpower. The results are summarized by~\cref{tab:results} and~\cref{fig:scatter-all}. Here, we also include a baseline that is computed by calculating the mean of \ensuremath{y}\xspace over every sample of the respective training set and use the result as the prediction \ensuremath{\hat{y}}\xspace for every sample of the corresponding test set. This provides a naive baseline for the error.\ifdefined\isabstract\else~Every model that is better than weighted random predictions should beat this baseline.\fi~From~\cref{tab:results} we see that, despite the very small dataset, both variants perform much better than the baseline. Furthermore, we observe that pretraining on the PVPower dataset improves the results slightly.
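One illustrative way to realize such a stratified split for a continuous target is to bin \ensuremath{y}\xspace into quantiles and stratify on the bins; the Python sketch below (with hypothetical \ensuremath{y}\xspace values) is an assumption about the procedure, not necessarily our exact implementation.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
y = rng.uniform(0.5, 1.0, size=54)    # hypothetical relative powers

# Stratify a continuous target by binning it into terciles first.
strata = np.digitize(y, np.quantile(y, [1/3, 2/3]))

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(y, strata)):
    print(fold, np.round(y[test_idx].mean(), 3))  # similar mean per fold
\end{verbatim}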
Finally, we show an exemplary regression map in~\cref{fig:cam} and compare it to the \ac{EL} image of the same module. In summary, we see that the magnitude of predicted power loss per cell is consistent with the amount of inactive area as seen from the \ac{EL} image, although the inactive area is not always visible in the \ac{PL} image. For example, cells C1 and C2 have a similar appearance in the \ac{PL} image, although C2 is damaged more severely, which can only be seen from the \ac{EL} image.\ifdefined\isabstract\else~However, the model prediction is consistent with the \ac{EL} image, since C2 is predicted to have a higher power loss.\fi~Furthermore, we find that the model recognizes that inactive areas might appear as darker or brighter regions. This can be seen from cells B3 and C5. \ifdefined\isabstract\else~Although B3 is mostly dark in the \ac{PL} image, whereas C5 has only few dark spots, they are predicted to have a similar power loss.\fi
\ifdefined\isabstract
\section{Discussion of the Significance of this Work for the Field}
The determination of the power of a module is of significant interest, since it is vital to the efficient operation of \ac{PV} power plants. Only recently, \ac{EL} imaging has been used to determine the power of a module~\cite{hoffmann2020deep}. However, this requires to disconnect every single module or string. \ac{PL} imaging provides efficient means to speed up measurements, since an external light source is used for excitation and no disconnection of modules is required any more. On the downside, \ac{PL} images complicate the interpretation, since inactive areas are not easily visible any more~\cite{doll2020contactless}. We show that power prediction can be performed using \ac{PL} imaging despite the challenging interpretability. Furthermore, we depict that deep learning is capable of learning relevant features using a relatively small dataset. Finally, we show that these features can be used to identify inactive areas as well.
\fi
\section{Summary}
We experimentally show that \ac{PL} images of solar modules can be used to determine the power of a module with a MAE of \resultsmaew{pvpower}, although inactive areas are not well represented by this modality. To this end, we compile a dataset of \num{54} \ac{PL} images along with their powers and train a deep neural network to predict the module power. Furthermore, we apply the approach by \etal{Hoffmann}~\cite{hoffmann2020deep} to compute regression maps that allow to quantify the localized power loss. Using these maps, we qualitatively show that the network learns the weakly supervised localization of inactive areas and that the results are consistent to reference \ac{EL} images.
We are confident that the quantitative results will become better, if a larger training dataset is used. Further, we believe that these preliminary results will amplify research in the field of \ac{PL} imaging for solar module inspection. For example, they can help to perform root cause analysis for damaged modules using \ac{PL} images only.
\section{Introduction}
In Part-I {\cite{RoseMian16_1}} of this two-paper set we defined a signaling model and developed an
information theoretic framework for evaluating the capacity and efficiency of channels which use
molecules (or ``tokens'') as information carriers. Here in Part-II we provide some necessary
undergirding results which are interesting in their own right. In particular, we consider a timing
channel similar but not identical to Anantharam's and Verd\'u's ``Bits Through Queues'' channel
\cite{bits-Qs,sundaresan1, sundaresan2} wherein a mean launch time constraint is replaced by a
launch deadline constraint. We derive closed forms for the optimizing distribution and the channel
capacity for this older timing channel and then apply the results to the molecular communication
problem. We also derive analytic expressions and bounds for a key quantity in our analysis -- the
{\em ordering entropy} $H(\Omega|\vec{{\bf S}}, {\bf T})$ -- first generally and then specifically for
exponential first-passage time distribution. These results support the capacity bounds of Part-I
{\cite{RoseMian16_1}}, provide capacity results for a timing channel with an emission deadline
under exponential first-passage, and also establish that unlike the mean-constrained timing
channel, the worst case corruption is {\em not} exponential first-passage. Our analysis ends with
the derivation of an upper bound on the token timing channel capacity.
\section{Brief Problem Description}
\label{sect:brief}
A detailed discussion of the underlying molecular communication problem and its importance in
both biology and engineering is provided in Part-I {\cite{RoseMian16_1}}. Here we assume basic
familiarity with the concepts and provide only the mathematical description of the system. As a reader aid,
key quantities are provided in TABLE~\ref{table:glossary} and in an identical table in Part-I.
\begin{table}[h]
\begin{tabular}{p{2.0cm}|p{5.9cm}} \hline
{\bf \em Token} & {\small A unit released by the transmitter and captured by the receiver}\\ \hline
{\bf \em Payload} & {\small Physical information ({inscribed matter}) carried by a token}\\ \hline
{\bf \em ${\lambda}$} & {\small The average rate at which tokens are released/launched into the channel}\\ \hline
{\bf ${\bf T}$} & {\small A vector of token release/launch times}\\ \hline
{\bf \em First-Passage} & {\small The time between token release/launch and token capture at the receiver}\\ \hline
{\bf ${\bf D}$} & {\small A vector of first-passage times associated with launch times ${\bf T}$}\\ \hline
{\bf $G(\cdot)$} & {\small The cumulative distribution function for first-passage random variable $D$}\\ \hline
{\bf $1/{\mu}$} & {\small Average/mean first-passage time}\\ \hline
{\bf $\rho$} & {\small ${\lambda}/{\mu}$, a measure of system token ``load'' }\\ \hline
{\bf ${\bf S}$} & {\small A vector of token arrival times, ${\bf S} = {\bf T} + {\bf D}$} \\ \hline
{\bf $P_k({\bf x})$} & {\small A permutation operator which rearranges the order of elements in vector ${\bf x}$}\\ \hline
{\bf $\Omega$} & {\small The ``sorting index'' which produces $\vec{{\bf S}}$ from ${\bf S}$, {\em i.e.}, $\vec{{\bf S}} = P_{\Omega}({\bf S})$}\\ \hline
{\bf $\vec{{\bf S}}$} & {\small An {\em ordered} vector of arrival times obtained by sorting the elements of ${\bf S}$ (note, the receiver only sees $\vec{{\bf S}}$ not ${\bf S}$)}\\ \hline
{\bf $I({\bf S};{\bf T})$} & {\small The mutual information between the launch times (input) and the arrival times (output)}\\ \hline
{\bf $I(\vec{{\bf S}};{\bf T})$} & {\small The mutual information between the launch times (input) and the {\em ordered} arrival times (output)}\\ \hline
{\bf $h({\bf S})$} & {\small The differential entropy of the arrival vector ${\bf S}$}\\ \hline
{\bf $H(\Omega|\vec{{\bf S}},{\bf T})$} & {\small The {\em ordering entropy} given the input ${\bf T}$ and the output $\vec{{\bf S}}$} \\ \hline
{\bf ${H^{\uparrow}}({\bf T})$} & {\small An upper bound for $H(\Omega|\vec{{\bf S}},{\bf T})$}\\ \hline
{\bf $C_q$ {\rm and } $C_t$} & {\small The asymptotic per token and per unit time capacity between input and output}\\ \hline
\end{tabular}
$\mbox{ }$\\
\caption{Glossary of useful terms}
\label{table:glossary}
\end{table}
Thus, consider a communication system in which $M$ identical tokens are released/launched at times
$T_1, T_2, \cdots, T_M$ with no assumption that the $T_m$ are ordered in time. The duration of each
token's journey from transmitter to receiver is a random variable $D_m$ so that token $m$ arrives at
time $S_m = T_m + D_m$. The $D_m$ are assumed independent and identically distributed (i.i.d.). In
vector notation, we have ${\bf S} = {\bf T} + {\bf D}$. We denote the density of each $D_m$ as
$f_{D_m}(d) = g(d)$, $d \ge 0$, and the cumulative distribution function (CDF) as $F_{D_m}(d) =
G(d)$. Likewise, the complementary CDF (CCDF) is ${\bar{G}}(\cdot)$. The channel output is the
time-sorted version of the $\{ S_m \}$, which we denote as $\{ \vec{S}_i \}$, $\vec{S}_i \le
\vec{S}_{i+1}$.
However, since the tokens are identical and their transit times are random, the receiver cannot
unequivocally know which arrival $\vec{S}_i$ corresponds to which transmission $T_m$. That is,
$\vec{{\bf S}}$, the ordered arrival times, are related to ${\bf S}$ through a permutation operation,
$P_{\Omega}(\vec{{\bf S}}) = {\bf S}$ and from the receiver's perspective, $\Omega$ is a random variable,
$\Omega = 1, 2, \cdots, M!$.
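A few lines of Python make the receiver's predicament concrete (illustrative only, with exponential first-passage assumed for the simulation): the receiver observes the sorted arrivals but not the realized sorting index $\Omega$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, mu = 5, 1.0
T = rng.uniform(0, 10, M)            # launch times (unordered)
S = T + rng.exponential(1/mu, M)     # arrivals, S = T + D
order = np.argsort(S)                # the realized sorting index Omega
S_sorted = S[order]                  # receiver sees only S_sorted
print(T[order])                      # launches in arrival order -- in
                                     # general not the sorted launch times
\end{verbatim}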
In the next section, we provide a sampling of results from Part-I upon which we will expand here in
Part-II.
\section{Key Results from the Companion Paper \cite{RoseMian16_1}}
A good deal of effort was expended in Part-I quantifying the relationships between ${\bf T}$, ${\bf D}$,
${\bf S}$ and $\vec{{\bf S}}$, and in developing a signaling discipline wherein the measure of communication
efficacy is determined by the mutual information between $\vec{{\bf S}}$ and ${\bf T}$, $I(\vec{{\bf S}};{\bf T})$. That
is, we took care to make sure that channel coding theorem results \cite[(chapt 8 \& 10)]{cover}
could be applied by deriving a model in which channel uses were (asymptotically) independent.
Specifically, we assume sequential finite signaling intervals/epochs of duration $\tau$ and then
define the token intensity as ${\lambda} = \frac{M}{\tau}$ as a proxy for transmitter power (each
emission ``costs'' some fixed energy). In addition, we assume that the {\em mean first-passage
time} exists with $E[D] = 1/{\mu}$ so that tokens always (eventually) arrive at the receiver. It
is important to note that finite first-passage time is important for information-theoretic patency
of the analysis. As shown in Part-I \cite{RoseMian16_1}, finite first-passage allows sequential
signaling intervals (channel uses) to be derived which are, in the limit of long intervals,
asymptotically independent. Infinite first-passage {\em does not allow such asymptotically
independent sequential intervals to be constructed} so that mutual information $I(\vec{{\bf S}};{\bf T})$ is
not necessarily the proper measure of information carriage for the system.
We note that transport processes such as free-space diffusion do {\em not} have finite
first-passage. However, any physical system is limited in extent and therefore does have finite
(though perhaps long) first-passage under an ergodic transport model. So, the analysis holds for
situations where tokens eventually arrive at the receiver. Of course, as discussed in Part-I, there
are situations where a token might never arrive at {\em any} time. Such situations include channels
where the token ``denatures'' and becomes unrecognizable by the receiver or is ``gettered'' by
agents in the channel which remove the token from circulation before detection \cite{farsad_isit16,
farsadIT16}. Such tokens do not contribute to intersymbol interference (earlier tokens
corrupting a subsequent interval) so it is possible that slightly different first-passage time
distributions could be used which still preserve asymptotically independent channel uses. However,
since any such model produces a first passage density, $g(d)$ with singularities, the specific
analysis used in Part-I is not immediately applicable. The implications (and shortcomings) of the
finite first-passage assumption are discussed more carefully in the Discussion \& Conclusion section
of Part-I.
Now, as a prelude to deriving channel capacity, we recall from Part-I \cite{RoseMian16_1} that if
$Q({\bf x})$ is a hypersymmetric function, $Q({\bf x}) = Q(P_k({\bf x}))$ $\forall k$ where $P_k(\cdot)$ is a
permutation operator and ${\bf X}$ is a hypersymmetric random vector whose PDF obeys $f_{{\bf X}}({\bf x}) =
f_{{\bf X}}(P_k({\bf x}))$, then when $\vec{{\bf X}}$ is the ordered version of random vector ${\bf X}$ we
have
\begin{IEEEeqnarray}{c}
\label{eq:hyperexpect}
E_{\vec{{\bf X}}} \left [Q(\vec{{\bf X}}) \right ]
=
E_{{\bf X}} \left [Q({\bf X}) \right ]
\end{IEEEeqnarray}
This expression (Theorem 1 from Part-I) allows us to avoid deriving order distributions on
potentially correlated random variables.
Next, the mutual information between the input ${\bf T}$ and the output $\vec{{\bf S}}$ of the token timing
channel is given by
\begin{IEEEeqnarray}{c}
I(\vec{{\bf S}}; {\bf T})
=
h(\vec{{\bf S}}) - h(\vec{{\bf S}}|{\bf T})
\label{eq:ISvvT}
\end{IEEEeqnarray}
Then, if we assume that $g(\cdot)$ does not contain singularities, we observe that the set of all ${\bf S}$ for which two or more elements are equal is of zero measure, which allows us to ``fold'' the distribution $f_{{\bf S}}(\cdot)$ to obtain a distribution on the ordered $\vec{{\bf S}}$. If we then in addition
assume hypersymmetric ${\bf X}$, we can write \equat{ISvvT} as
\begin{IEEEeqnarray}{c}
I(\vec{{\bf S}}; {\bf T})
=
h({\bf S}) - \log M! - h(\vec{{\bf S}}|{\bf T})
\label{eq:IST_SvvT}
\end{IEEEeqnarray}
Hypersymmetry of ${\bf X}$ and the absence of singularities in $g(\cdot)$ imply that we can ignore
situations where one or more of the $S_i$ are equal, which then implies an equivalence
\begin{IEEEeqnarray}{c}
\{ \vec{{\bf S}}, \Omega \} \Leftrightarrow {\bf S}
\label{eq:equivalence}
\end{IEEEeqnarray}
which leads to
\begin{IEEEeqnarray}{c}
h({\bf S}|{\bf T}) = h(\vec{{\bf S}}, \Omega|{\bf T}) = H(\Omega|\vec{{\bf S}},{\bf T}) + h(\vec{{\bf S}}|{\bf T})
\label{eq:hSvvTequiv}
\end{IEEEeqnarray}
where $H(\Omega|\vec{{\bf S}},{\bf T})$ is the {\em ordering entropy}, a measure of the uncertainty about which
$S_m$ correspond to which $\vec{S}_i$. \Equat{hSvvTequiv} allows us to write the \equat{IST_SvvT} as
\begin{equation}
\label{eq:orderedMI_decomp}
I(\vec{{\bf S}}; {\bf T})
=
I({\bf S};{\bf T})
-
\left ( \log M! - H(\Omega|\vec{{\bf S}},{\bf T}) \right )
\end{equation}
And since we know asymptotically independent channel uses can be assured (Part-I Theorem 2,
\cite{RoseMian16_1}), the channel capacity in bits/nats per channel use is
\begin{IEEEeqnarray*}{c}
C
=
\max_{f_{{\bf T}}(\cdot)}
\left [
I({\bf S};{\bf T})
-
\left ( \log M! - H(\Omega|\vec{{\bf S}},{\bf T}) \right )
\right ]
\end{IEEEeqnarray*}
We then derived an upper bound for the ordering entropy $H(\Omega|\vec{{\bf S}},{\bf T})$ in Part-I as
\begin{IEEEeqnarray}{c}
\label{eq:HOHupineq}
H(\Omega|\vec{{\bf S}},{\bf t}) \le {H^{\uparrow}}({\bf t})
\end{IEEEeqnarray}
and derived/defined ${H^{\uparrow}}(\cdot)$ as
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{{H^{\uparrow}}({\bf t}) = \sum_{\ell=1}^{M-1} \log(1 + \ell)} \nonumber \\ \quad
& \times &
{\sum_{m=\ell}^{M-1}}
{ \sum_{|\bar{{\bf x}}| = \ell}} { \prod_{j=1}^m
{\bar{G}}^{\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j)
G^{1 -\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j)} \IEEEeqnarraynumspace
\label{eq:Hexpurg3}
\end{IEEEeqnarray}
with $\vec{\tv}$ the size-ordered version of ${\bf t}$ and $\bar{{\bf x}}$ a binary $m$-vector with $|\bar{{\bf x}}|$
defined as its number of non-zero entries. Then, through hypersymmetry arguments as in
\equat{hyperexpect}, we showed that
\begin{IEEEeqnarray}{c}
H(\Omega|\vec{{\bf S}},{\bf T}) = E_{{\bf t}} \left [ H(\Omega|\vec{{\bf S}},{\bf t}) \right ] \le E_{\vec{\tv}} \left [ {H^{\uparrow}}({\bf t}) \right ] = {H^{\uparrow}}(
{\bf T}) \IEEEeqnarraynumspace
\label{eq:HOinequality}
\end{IEEEeqnarray}
with equality {\bf iff} the first passage time is exponential (Theorem 8, Part-I \cite{RoseMian16_1}).
Based on asymptotically independent channel uses, two key measures of channel capacity were
derived. The first, $C_q$, is the asymptotic per token capacity:
\begin{IEEEeqnarray}{c}
C_q
=
\lim_{M \rightarrow \infty}
\frac{1}{M} I(\vec{{\bf S}};{\bf T})
\label{eq:Cqdef}
\end{IEEEeqnarray}
and the second is $C_t = {\lambda} C_q$, the asymptotic per unit time capacity
(Theorem 4, Part-I \cite{RoseMian16_1}).
In what follows we will seek to maximize $h({\bf S})$ under the deadline constraints on the ${\bf T}$,
derive a variety of expressions for ${H^{\uparrow}}({\bf T})$ for general and then for exponential first-passage
under both a deadline and also the mean launch constraint considered in ``Bits Through Queues''
\cite{bits-Qs} and elsewhere \cite{sundaresan1, sundaresan2}. We follow with asymptotic results for
${H^{\uparrow}}({\bf T})/M$ as $M \rightarrow \infty$ again assuming exponential first-passage and close by
providing upper bounds for $C_q$ and $C_t$.
\section{``Bits Through Queues'' With a Deadline Constraint}
\label{sect:minmaxhS}
\subsection{Preliminaries}
The award-winning paper ``Bits Through Queues'' \cite{bits-Qs} and others
\cite{sundaresan1,sundaresan2} derived capacity results for a timing channel under a mean launch
time constraint. In this section we derive results for a similar single-token timing channel where
instead of a mean constraint, the launch time $T$ is limited to $[0,\tau]$ \cite{isit11,SonRos13}
and first-passage is exponential with parameter ${\mu}$. Here we provide closed forms for both the
capacity and for the capacity-achieving input density. However, unlike in \cite{bits-Qs} we show
that exponential first-passage is {\em not} the worst case corruption for the launch
deadline-constrained channel.
Since $T$ is independent of $D$, the density of $S=T+D$ is given by
\begin{IEEEeqnarray*}{c}
f_{S}(s) = \int_0^s f_T(t) f_D(s-t) dt \quad 0 \leq s
\end{IEEEeqnarray*}
and because $T$ is constrained to $[0,\tau]$, we can divide $f_S(s)$ into two regions:
region $I$ where $s \in [0,\tau]$ and region $II$ where $s \in (\tau,\infty)$. We then have
\begin{IEEEeqnarray}{c}
f_S(s)= \left\{
\begin{array}{l l}
\sigma f_{S|I}(s) & 0 \leq s \leq \tau\\
(1-\sigma) f_{S|II}(s) & s > \tau\\
\end{array} \right.
\label{eq:pdf_S1S2}
\end{IEEEeqnarray}
where
\begin{IEEEeqnarray*}{c}
\sigma
=
\int_{0}^{\tau} f_{S}(s) ds
\end{IEEEeqnarray*}
with
\begin{IEEEeqnarray}{c}
\sigma f_{S|I}(s)
=
\int_0^s f_T(t) f_D(s-t) dt
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray}{c}
(1- \sigma) f_{S|II}(s)
=
\int_{0}^{\tau} f_T(t) f_D(s-t) dt
\end{IEEEeqnarray}
For $D$ exponential with parameter ${\mu}$ we have
\begin{IEEEeqnarray}{c}
\label{eq:pdf_S}
f_{S}(s) = \int_0^s f_T(t) {\mu} e^{-{\mu}(s-t)} dt \quad 0 \leq s
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray}{c}
\sigma f_{S|I}(s)
=
\int_0^s f_T(t) {\mu} e^{-{\mu}(s-t)} dt
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray}{c}
\label{eq:fSII}
(1- \sigma) f_{S|II}(s)
=
e^{-{\mu} s}
\int_{0}^{\tau} f_T(t) {\mu} e^{{\mu} t} dt
\end{IEEEeqnarray}
The entropy of $S$ is then
\begin{IEEEeqnarray}{c}
\label{eq:entropy_S}
\begin{split}
h(S) & = -\int_0^{\infty} f_{S}(s) \log f_{S}(s) ds \\
& = -\int_0^{\tau} \sigma f_{S|I}(s) \log \left ( \sigma f_{S|I}(s) \right ) ds\\
& - \int_{\tau}^{\infty} (1-\sigma )f_{S|II}(s) \log \left ( (1-\sigma )f_{S|II}(s)\right ) ds\\
& = \sigma h(S|I) + (1-\sigma) h(S|II) + H_B (\sigma) \\
\end{split}
\end{IEEEeqnarray}
where $H_B(\cdot)$ is the binary entropy function. Notice that no particular care
has to be taken with the integrals at $s=\tau$ because $f_S(s)$ cannot contain
singularities -- it is obtained by the convolution of two densities, one of
which, $f_D(\cdot) = g(\cdot)$, contains no singularities.
\subsection{Maximization of $h(S)$}
\label{sect:optimize}
We observe from \equat{fSII} that the {\em shape} of the
conditional density for $s>\tau$ is completely determined -- an exponential with
parameter ${\mu}$ as depicted in FIGURE~\ref{fig:step0}.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{step0_mod.png}
\caption{The shapes associated with $f_{S}(s)$: We assume arbitrary shape in region I and the requisite exponential shape in region II.}
\label{fig:step0}
\end{figure}
Thus, selection of $f_T(\cdot)$ does not affect $f_{S|II}(\cdot)$ and
we must have $h(S|II) = 1 - \log {\mu}$.
This observation suggests a three-step approach to maximizing $h(S)$. In the
first two steps, we completely ignore $f_T(\cdot)$ and find the {\em shape} $f_{S|I}(\cdot)$
and value of $\sigma$ which maximize \equat{entropy_S}. In step three, we determine that
there indeed exists a density $f_T(\cdot)$ which produces the optimizing $f_S(\cdot)$.
{\bf Step 1:} For fixed $\sigma$ we see from \equat{entropy_S} that $h(S)$ is
maximized solely by our choice of $f_{S|I}(\cdot)$. The uniform density maximizes
entropy on a finite interval \cite{Cover06}. Thus, $f_{S|I}(s) = \frac{1}{\tau}$
and $h(S|I) = \log \tau$ as depicted in FIGURE~\ref{fig:step1}.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{step1_mod.png}
\caption{The updated shape of $f_{S}(s)$ after step 1: $f_{S|I}(s)$ is chosen as $\frac{1}{\tau}$. }
\label{fig:step1}
\end{figure}
{\bf Step 2:} Since for any $\sigma$, $h(S|I) = \log \tau$, we have
\begin{equation}
h(S)
=
\sigma \log \tau +
(1-\sigma) (1 - \log {\mu}) + H_B (\sigma) \\
\label{eq:HSstep1}
\end{equation}
Taking the derivative of \equat{HSstep1} with respect to $\sigma$ yields
\begin{IEEEeqnarray*}{c}
\log \tau
-
(1-\log {\mu})
-
(1+\log \sigma)
+
(1+\log (1-\sigma))
\end{IEEEeqnarray*}
which we set to zero to obtain
\begin{IEEEeqnarray*}{c}
\log {\mu} \tau
-
\log \frac{\sigma}{1-\sigma}
-
1
=
0
\end{IEEEeqnarray*}
We rearrange to obtain
\begin{IEEEeqnarray*}{c}
{\mu} \tau
=
\frac{e\sigma}{1-\sigma}
\end{IEEEeqnarray*}
from which we deduce that the optimal $\sigma$ is
\begin{IEEEeqnarray}{c}
\label{eq:sigmastar}
\sigma^*
=
\frac{{\mu} \tau}{e + {\mu} \tau}
\end{IEEEeqnarray}
Returning to the entropy maximization we have
\begin{IEEEeqnarray*}{c}
\max_{f_T(\cdot)} h(S)
\le
\sigma^* \log \tau
+
(1-\sigma^*) (1 - \log {\mu}) + H_B (\sigma^*)
\end{IEEEeqnarray*}
which through substitution of $\sigma^*$ according to \equat{sigmastar}
yields
\begin{IEEEeqnarray}{c}
\max_{f_T(\cdot)} h(S)
\le
\log \left ( \frac{e + {\mu} \tau}{{\mu}} \right )
\label{eq:hSmax}
\end{IEEEeqnarray}
with equality when
\begin{IEEEeqnarray}{c}
f_S(s)
=
\twodef{\frac{{\mu}}{e + {\mu} \tau} }{0 \le s < \tau}{\frac{e}{e+{\mu} \tau} {\mu} e^{- {\mu} (s-\tau)}}{s \ge \tau}
\label{eq:optimumS}
\end{IEEEeqnarray}
\blankout{
as depicted in FIGURE~\ref{fig:step2}.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{step1_mod.png}
\caption{The optimizing $f_S(s)$.}
\label{fig:step2}
\end{figure}}
{\bf Step 3:} All that remains is to ascertain whether $\exists f_T(\cdot)$ which
can generate the $f_S(s)$ of \equat{optimumS}. Since $f_S(\cdot)$ is the convolution of
$f_D(\cdot)$ and $f_T(\cdot)$ we can use Fourier transforms to obtain a candidate solution for $f_T(\cdot)$. That is, the Fourier transform of $f_D(\cdot)$ is $\frac{{\mu}}{{\mu} + j 2 \pi f}$ so the Fourier transform of $f_T(\cdot)$ is
\begin{IEEEeqnarray*}{c}
{\cal F} \left \{ f_T(\cdot) \right \}
=
{\cal F} \left \{ f_S(\cdot) \right \}
\left (
\frac{j 2 \pi f}{{\mu}} + 1
\right )
\end{IEEEeqnarray*}
Multiplication by $j 2 \pi f$ implies differentiation so we must have
\begin{IEEEeqnarray*}{c}
f_T(t)
=
\frac{1}{{\mu}}
\frac{d}{dt} f_S(t)
+
f_S(t)
\end{IEEEeqnarray*}
which implies via \equat{optimumS} that
\begin{IEEEeqnarray}{c}
\label{eq:optimalfT}
f_T(t)
=
\twodef{\frac{{\mu}}{e+{\mu} \tau}}{0<t<\tau}{\delta(t) \frac{1}{e + {\mu} \tau} + \delta(t-\tau) \frac{e-1}{e + {\mu} \tau}}{\mbox{o.w.}}
\end{IEEEeqnarray}
-- a valid probability density function.
We can now state the maximum mutual information (capacity in bits per channel use) as
\begin{IEEEeqnarray}{c}
\label{eq:maxI}
\max_{f_T(\cdot)}
I(S;T)
=
\log \left ( \frac{e + {\mu} \tau}{{\mu}} \right ) - (1- \log {\mu})
=
\log \left ( 1 + \frac{{\mu} \tau}{e} \right ) \IEEEeqnarraynumspace
\end{IEEEeqnarray}
which is achieved using the emission time density of \equat{optimalfT}.
We summarize the result as a theorem:
{\em \begin{theorem} {\bf Maximum \boldmath $I(S;T)$ Under a Deadline Constraint:}
\thmlabel{fTopthSopt}
If $S=T+D$ where $D$ is an exponential random variable with parameter ${\mu}$ and $T \in [0,\tau]$, then the
mutual information between $S$ and $T$ obeys
\begin{IEEEeqnarray*}{c}
I(S;T) \le \log \left ( 1 + \frac{{\mu} \tau}{e} \right )
\end{IEEEeqnarray*}
with equality when
\begin{IEEEeqnarray*}{c}
f_T(t)
=
\twodef{\frac{{\mu}}{e+{\mu} \tau}}{0<t<\tau}{\delta(t) \frac{1}{e + {\mu} \tau} + \delta(t-\tau) \frac{e-1}{e + {\mu} \tau}}{\mbox{o.w.}}
\end{IEEEeqnarray*}
and
\begin{IEEEeqnarray*}{c}
f_S(s)
=
\twodef{\frac{{\mu}}{e + {\mu} \tau} }{0 \le s < \tau}{\frac{e}{e+{\mu} \tau} {\mu} e^{- {\mu} (s-\tau)}}{s \ge \tau}
\end{IEEEeqnarray*}
\end{theorem}}
\begin{Proof}{Theorem \thmref{fTopthSopt}}
See the development leading to the statement of \equat{maxI}.
\end{Proof}
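The mixed optimizing density of \Thmref{fTopthSopt} is easy to sample, and a quick Monte Carlo sketch (with arbitrary parameter values) confirms the mass split $\sigma^*$ of \equat{sigmastar} and evaluates the capacity of \equat{maxI} in nats; this is an illustrative check, not part of the proof.
\begin{verbatim}
import numpy as np

mu, tau, n = 2.0, 3.0, 1_000_000
rng = np.random.default_rng(1)

# f_T: atoms at t = 0 and t = tau plus a uniform component on (0, tau).
p0 = 1 / (np.e + mu * tau)                 # weight of delta at 0
pu = mu * tau / (np.e + mu * tau)          # weight of uniform part
pt = (np.e - 1) / (np.e + mu * tau)        # weight of delta at tau
comp = rng.choice(3, size=n, p=[p0, pu, pt])
T = np.where(comp == 0, 0.0,
             np.where(comp == 2, tau, rng.uniform(0, tau, n)))
S = T + rng.exponential(1 / mu, n)         # exponential first-passage

print(np.mean(S <= tau), mu * tau / (np.e + mu * tau))  # both ~ 0.688
print(np.log(1 + mu * tau / np.e))         # capacity in nats
\end{verbatim}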
The only remaining question is whether for interval-limited inputs, the exponential first-passage
time density, to quote \cite{bits-Qs} ``plays the same role ... that Gaussian noise plays in
additive noise channels.'' Unfortunately the answer is no, a result we state as a theorem:
{\em \begin{theorem} {\bf \boldmath For $T$ Constrained to $[0,\tau]$, the Minmax Mutual Information First-Passage Density Is NOT Exponential:}
\thmlabel{minmax}
If $g(\cdot)$ is a first passage density with mean $1/\mu$ and
$f_T(\cdot)$ can be nonzero only on $[0,\tau]$, then
\begin{IEEEeqnarray*}{c}
\operatorname*{arg\,min}_{g(\cdot)}
\left [ \max_{f_T(\cdot)}
I(S;T)
\right ]
=
g^*(s)
\ne
\mu e^{-\mu s} u(s)
\end{IEEEeqnarray*}
where $u(\cdot)$ is the unit step function.
\end{theorem}}
\begin{Proof}{Theorem \thmref{minmax}}
Consider that
\begin{IEEEeqnarray}{c}
I(S;T) = \int \int f_T(t) g(s-t) \log \frac{g(s-t)}{f_S(s)} \, dt \, ds
\label{eq:mutualinfodef}
\end{IEEEeqnarray}
is convex in $g(\cdot)$ \cite{cover,gallagerit}. Since we constrain $g(\cdot)$ to be non-negative with mean
$1/\mu$ and unit integral, we can apply Euler-Lagrange variational techniques \cite{hild}. That is, we
set $q(x) = g(x) + \epsilon \eta(x)$ where $\eta(x)$ is any function defined on $[0,\infty)$, and
look for the stationary point
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{\frac{d}{d\epsilon} \left [ \int \int f_T(t) q(s-t) \log \frac{ q(s-t) }{\int f_T(x) q(s-x) dx}
dt ds \right .} \nonumber \\
& + & \left . a \left ( \int s q(s) ds - \frac{1}{\mu} \right )
+ b \left (\int q(s) ds - 1 \right )
\right ]_{\epsilon = 0}
=0 \IEEEeqnarraynumspace
\label{eq:eulerlagrange}
\end{IEEEeqnarray}
where $a$ and $b$ are (Lagrange) multipliers.
Satisfaction of \equat{eulerlagrange} for any possible $\eta(\cdot)$ requires (after expansion and a change of coordinate systems in the double integral) that
\begin{IEEEeqnarray}{c}
\log g(s)
=
\int_0^\tau f_T(t) \log f_S(s+t) dt + a s + b
\label{eq:variationalg}
\end{IEEEeqnarray}
for the $g(\cdot)$ that minimizes \equat{mutualinfodef}.
Now, from \Thmref{fTopthSopt} we know the form of the optimizing $f_T(t)$, $t \in [0,\tau]$ and the
resulting $f_S(s)$ were $g(\cdot)$ exponential with mean $1/\mu$. We also know that $I(S;T)$ is
concave in $f_T(t)$\cite{cover,gallagerit}. Thus, were exponential $g(\cdot)$ to minimize the
maximum mutual information, the left hand side of \equat{variationalg} would be a linear function of
$s$. Thus, the integral term on the right would also need to be a linear function of $s$ given
$f_S(s)$ as in \equat{optimumS} and $f_T(t)$ as in \equat{optimalfT}.
For $s \ge \tau$ we have $f_S(s+t) = \frac{\mu e}{e + \mu \tau} e^{ - \mu (s+t-\tau)}$ and thence
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{\int_0^\tau f_T(t) \log f_S(s+t) dt} \nonumber \\ \quad
& =&
\int_{0}^{\tau}
\frac{
\left (
\delta(t)
+
\mu
+
\delta(t-\tau)(e-1)
\right )
(-\mu (s+t-\tau))}{e + \mu \tau} dt \nonumber \\
& + & \log\frac{\mu e}{e + \mu \tau} \nonumber \\
& = &
\frac{\mu \tau - \mu s e}{e + \mu \tau}
-
\frac{\mu^2}{e + \mu \tau}
\left . \frac{(s+t - \tau)^2}{2} \right |_0^\tau
+ \log\frac{\mu e}{e + \mu \tau} \nonumber \\
& = & \log\frac{\mu e}{e + \mu \tau}
+\frac{\mu \tau - \mu s e}{e + \mu \tau}
-
\frac{\mu^2}{e + \mu \tau}
\left ( s \tau - \frac{\tau^2}{2} \right )
\end{IEEEeqnarray}
which is indeed a linear function of $s$.
However, when $0 \le s < \tau$ we obtain
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{\int_0^\tau f_T(t) \log f_S(s+t) dt} \nonumber \\ \quad
& =&
\int_0^{\tau - s}
\left (\frac{\delta(t)}{e + \mu\tau}
+ \frac{\mu}{e + \mu \tau} \right )
\log \frac{\mu}{e + \mu \tau}
dt \nonumber \\
& + &
\int_{\tau - s}^{\tau}
\left ( \frac{\mu}{e + \mu \tau}
+ \frac{\delta(t-\tau)(e-1)}{e + \mu \tau}
\right )
\log \frac{\mu e}{e + \mu \tau} dt \nonumber \\
& - &
\int_{\tau - s}^{\tau}
\left ( \frac{\mu}{e + \mu \tau}
+ \frac{\delta(t-\tau)(e-1)}{e + \mu \tau}
\right )
\mu (s+t-\tau) dt \nonumber \\
& =&
\frac{1 + \mu (\tau - s)}{e + \mu \tau}
\log \frac{\mu}{e + \mu \tau}
+
\frac{\mu s +e - 1}{e + \mu \tau}
\log \frac{\mu e}{e + \mu \tau} \nonumber \\
& - &
\frac{\mu s (e-1)}{e + \mu \tau}
-
\frac{\mu^2}{e + \mu \tau}
\left . \frac{(s+t-\tau)^2}{2}\right |_{\tau-s}^\tau \nonumber \\
& = &
\log \frac{\mu}{e + \mu \tau}
+
\frac{2\mu s + e(1- \mu s) - 1}{e + \mu \tau}
-
\frac{\mu^2}{e + \mu \tau}
\frac{s^2}{2} \IEEEeqnarraynumspace
\end{IEEEeqnarray}
which does not have the requisite form owing to the term in $s^2$.
Therefore, for $T$ constrained to $[0,\tau]$, the minmax $I(S;T)$ first-passage density, $g(\cdot)$, is not
exponential.
\end{Proof}
It is important to note that owing to a faulty proof \cite{isit11}, exponential first passage was
previously claimed to maximally suppress capacity of the constrained-launch channel.
\Thmref{minmax} corrects this error.
\section{Ordering Entropy, $H(\Omega|\vec{{\bf S}},{\bf T})$}
In this section we derive a number of results for the ordering entropy, $H(\Omega|\vec{{\bf S}},{\bf T})$, both
generally and for exponential first-passage. As a prelude, we recall from section~\ref{sect:brief}
and TABLE~\ref{table:glossary} that $\vec{\tv} = \{\vec{t}_1, \vec{t}_2, \cdots, \vec{t}_M \}$ is the ordered
version of ${\bf t} = \{t_1, t_2, \cdots, t_M\}$, the launch times, and that $G(\cdot)$ is the
cumulative distribution function (CDF) for the first-passage time $D$ (with ${\bar{G}}(\cdot)$ its
complementary cumulative distribution function (CCDF)). We recall from Part-I \cite{RoseMian16_1}
that $H(\Omega|\vec{{\bf S}},{\bf t}) \le {H^{\uparrow}}({\bf t})$ from \equat{HOHupineq} with ${H^{\uparrow}}({\bf t})$ defined as in
\equat{Hexpurg3}. We also recall from Part-I that $\bar{{\bf x}}$ is a binary vector of dimension $m$
and $\sum_{|\bar{{\bf x}}| = \ell}$ is a sum over all $\bar{{\bf x}}$ containing exactly $\ell$ $1$'s. The
inequality in \equat{HOinequality} is an equality {\bf iff} first-passage is exponential with ${\bar{G}}(x) =
e^{-{\mu} x} u(x)$, where $u(x)$ is the unit step function (Theorem 9, Part-I).
\subsection{General Calculation of ${H^{\uparrow}}({\bf t})$}
\label{app:Hup}
To calculate ${H^{\uparrow}}({\bf t})$ we first define
\begin{IEEEeqnarray}{c}
\label{eq:thetadef}
\Theta_{m,\ell}(\vec{\tv})
\equiv
\sum_{|\bar{{\bf x}}| = \ell}
\prod_{j=1}^m
\bar{G}^{\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j)
G^{1 -\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j) \IEEEeqnarraynumspace
\end{IEEEeqnarray}
which implies via \equat{Hexpurg3} that
\begin{IEEEeqnarray}{c}
\label{eq:Hexpurg4}
{H^{\uparrow}}({\bf t})
=
\sum_{\ell=1}^{M-1}
\log(1 + \ell)
\sum_{m=\ell}^{M-1}
\Theta_{m,\ell}(\vec{\tv})
\end{IEEEeqnarray}
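In words, $\Theta_{m,\ell}(\vec{\tv})$ is the probability that exactly $\ell$ of the
first $m$ launched tokens are still in transit at the $(m+1)^{\mbox{st}}$ launch time
$\vec{t}_{m+1}$, since a token launched at $\vec{t}_j$ remains in transit at
$\vec{t}_{m+1}$ with probability ${\bar{G}}(\vec{t}_{m+1} - \vec{t}_j)$, independently of
the others.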
In principle, we could derive ${H^{\uparrow}}({\bf T})$ by taking the expectation of \equat{Hexpurg4} with
respect to ordered emission times, $\vec{\tv}$. However, direct {\em analytic} evaluation of ${H^{\uparrow}}({\bf T})$
requires we derive joint order densities on the underlying ${\bf T}$, a difficult task in general when
the individual $\{T_m \}$ are not necessarily independent.
So, we take a different approach. The sum over all permutations of binary vector $\bar{{\bf x}}$ in the
definition of $\Theta_{m,\ell}(\vec{\tv})$ (\equat{thetadef}) renders it hypersymmetric in $\vec{t}_1 ,
\cdots , \vec{t}_m$ given the $(m+1)^{\mbox{st}}$ smallest emission time $\vec{t}_{m+1}$. That is,
$\Theta_{m,\ell}(\vec{\tv}) = \Theta_{m,\ell}(P_k(\vec{t}_1, \cdots,\vec{t}_m), \vec{t}_{m+1})$ for any
permutation function $k$ so long as $\vec{t}_{m+1}$ is fixed. In what follows we therefore drop the
over-vector notation for the $t_1, t_2, \cdots, t_m$ and assume all are less than $\vec{t}_{m+1}$.
Therefore, by \equat{hyperexpect} we can define $E[\Theta_{m,\ell}] = \bar{\Theta}_{m,\ell}$ as
\begin{IEEEeqnarray}{c}
E_{{\vec{T}}_{m+1}} \left [
E_{T_1,\cdots,T_m|{\vec{T}}_{m+1}}
\left [
\Theta_{m,\ell}(T_1,\cdots,T_m,{\vec{T}}_{m+1})
\right ]
\right ] \IEEEeqnarraynumspace
\label{eq:Hhyper}
\end{IEEEeqnarray}
Then, the CDF, $F_{\vec{T}_{m+1}} (t_{m+1})$, of the $(m+1)^{\mbox{st}}$ smallest emission time is
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{F_{\vec{T}_{m+1}} (t_{m+1})} \nonumber \\ \quad
& = &
1
-
\displaystyle \sum_{k=0}^{m}
{M \choose k}
\underbrace{
\int_{{\bf 0}}^{\bf t_{m+1}}
}_{\mbox{$k$}}
\underbrace{
\int_{\bf t_{m+1}}^{{\boldsymbol{\infty}}}
}_{\mbox{$M-k$}}
f_{{\bf T}}({\bf t})
dt_M
\cdots
dt_{1} \IEEEeqnarraynumspace
\label{eq:ftvm}
\end{IEEEeqnarray}
and likewise, the CDF, $F_{T_1,\cdots,T_m|{\vec{T}}_{m+1}} (t_1,\cdots,t_m|\vec{t}_{m+1})$, of the $m$ smallest ({\em unordered}) emission times $T_1,\cdots,T_m$ given $\vec{T}_{m+1}$ is
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{F_{T_1,\cdots,T_m|{\vec{T}}_{m+1}} (t_1,\cdots,t_m|\vec{t}_{m+1})} \nonumber \\ \quad
& = & \frac{F_{T_1,\cdots,T_m} (t_1,\cdots,t_m)}{F_{T_1,\cdots,T_m} (\vec{t}_{m+1},\cdots,\vec{t}_{m+1})}
\end{IEEEeqnarray}
$\forall t_j \le \vec{t}_{m+1}$ where $j=1,\cdots,m$.
Therefore, by the hypersymmetry of ${\Theta}_{m,\ell}$ in $t_1,\cdots,t_m$ we may write $\bar{\Theta}_{m,\ell}$ as
\begin{IEEEeqnarray}{c}
\int_0^{\infty}
{\bf \int}_{{\bf 0}}^{{\bf t_{m+1}}}
\frac{f_{\vec{T}_{m+1}}(t_{m+1}) f_{{\bf T_m}}({\bf t_m})B(m,\ell,{\bf t})}{F_{{\bf T_m}}(t_{m+1},\cdots,t_{m+1})}
d{\bf t_m}
dt_{m+1} \IEEEeqnarraynumspace
\label{eq:Thbardef}
\end{IEEEeqnarray}
where ${\bf T_m} = \{T_1, \cdots, T_m \}$, ${\bf t_m} = \{t_1, \cdots, t_m \}$ and
\begin{IEEEeqnarray}{c}
\label{eq:Bdef}
B(m,\ell,{\bf t})
\equiv
{m \choose \ell}
\!\prod_{j=1}^{\ell}
\!\bar{G}(t_{m+1} \!-\! t_j)\!\!\prod_{k=\ell+1}^m
\!G(t_{m+1} \!-\! t_k) \IEEEeqnarraynumspace
\end{IEEEeqnarray}
and thence
\begin{equation}
\label{eq:He1}
{H^{\uparrow}}({\bf T})
=
\sum_{\ell=1}^{M-1}
\log(1 + \ell)
\sum_{m=\ell}^{M-1}
\bar{\Theta}_{m,\ell}
\end{equation}
In addition, if we define
\begin{equation}
\label{eq:Gamma}
\Gamma_{M,\ell}
=
\sum_{m=\ell}^{M-1}
\bar{\Theta}_{m,\ell}
\end{equation}
and
\begin{equation}
\label{eq:DeltaGamma}
\Delta\Gamma_{M,\ell}
=
\Gamma_{M,\ell} - \Gamma_{M,\ell+1}
\end{equation}
then we can also express ${H^{\uparrow}}({\bf T})$ as
\begin{equation}
\label{eq:He2}
{H^{\uparrow}}({\bf T})
=
\sum_{\ell=1}^{M-1}
\Delta\Gamma_{M,\ell}
\log (\ell + 1)!
\end{equation}
The development starting in section~\ref{app:Hup} proves the following theorem:
{\em \begin{theorem}{\bf The General Form of ${H^{\uparrow}}({\bf T})$:}
\thmlabel{HupGeneral}
If we define
$$
\Gamma_{M,\ell}
=
\sum_{m=\ell}^{M-1}
\bar{\Theta}_{m,\ell}
$$
and
$$
\Delta\Gamma_{M,\ell}
=
\Gamma_{M,\ell} - \Gamma_{M,\ell+1}
$$
where $\bar{\Theta}_{m,\ell}$ is as defined by \equat{Thbardef} and \equat{Bdef},
then we can express ${H^{\uparrow}}({\bf T})$ as
\begin{IEEEeqnarray*}{c}
{H^{\uparrow}}({\bf T})
=
\sum_{\ell=1}^{M-1}
\Delta\Gamma_{M,\ell}
\log (\ell + 1)!
\end{IEEEeqnarray*}
\end{theorem}}
\begin{Proof}{Theorem \thmref{HupGeneral}}
See the development starting in section~\ref{app:Hup} leading to the statement of \Thmref{HupGeneral}.
\end{Proof}
This concludes our calculation of ${H^{\uparrow}}({\bf T})$ for general input distributions $f_{{\bf T}}(\cdot)$. The
key utility of our formulation is that it does not require joint order distributions for the $\{
T_m\}$, only the more easily calculable $m^{\mbox{th}}$ order distribution for ${\vec{T}}_m$. We now turn to
the case where the ${\bf T}$ are i.i.d. -- important because i.i.d. ${\bf T}$ increases entropy
$h({\bf S})$.
\subsection{$H(\Omega|\vec{{\bf S}},{\bf T})$ for General IID ${\bf T}$}
\label{app:HupIID}
With i.i.d. ${\bf T}$, we can use the definition of $\Theta_{m,\ell}(\cdot)$ in \equat{thetadef} and the hypersymmetric result
of \equat{Hhyper} to obtain
\begin{IEEEeqnarray}{c}
\bar{\Theta}_{m, \ell}
=
E_{{\vec{T}}_{m+1}} \left [
\begin{array}{c}
{m \choose \ell}
E_{T \le {\vec{T}}_{m+1}}^{\ell} \left [
\bar{G}({\vec{T}}_{m+1} - T)
\right ]\\
{\mbox{\large \boldmath $\times$} }\\
E_{T \le {\vec{T}}_{m+1}}^{m-\ell} \left [
(1 - \bar{G}({\vec{T}}_{m+1} - T))
\right ]
\end{array}
\right ] \IEEEeqnarraynumspace
\label{eq:thetabariid}
\end{IEEEeqnarray}
From the definition of $F_{{\vec{T}}_{m+1}}(\cdot)$ in \equat{ftvm} we obtain
\begin{IEEEeqnarray}{c}
f_{{{\vec{T}}}_{m+1}}(t)
=
\frac{d}{dt}
\left [
1
-
\sum_{k=0}^{m}
{M \choose k}
F_T^k(t)(1-F_T(t))^{M-k}
\right ] \IEEEeqnarraynumspace
\end{IEEEeqnarray}
which after rearranging as a telescoping sum simplifies to
\begin{IEEEeqnarray}{c}
\sum_{k=0}^{m}
(M-k)
{M \choose k}
f_T(t)F_T^k(t)(1-F_T(t))^{M-k-1} \nonumber\\
- \displaystyle \sum_{k=0}^{m-1}
(k+1)
{M \choose {k+1}}
f_T(t)F_T^{k}(t)(1-F_T(t))^{M-k-1} \IEEEeqnarraynumspace
\end{IEEEeqnarray}
which further simplifies to
\begin{IEEEeqnarray}{c}
(m+1)
{M \choose {m+1}}
f_T(t)F_T^m(t)(1-F_T(t))^{M-m-1} \IEEEeqnarraynumspace
\end{IEEEeqnarray}
We then define
\begin{IEEEeqnarray}{c}
\label{eq:phidef}
\phi(t)
=
\int_0^t
f_T(x) \bar{G}(t-x)
dx
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray*}{c}
\int_0^t
f_T(x) (1- \bar{G}(t-x))
dx
=
F_T(t) - \phi(t)
\end{IEEEeqnarray*}
which allows us to write
\begin{IEEEeqnarray*}{c}
\phi({\vec{T}}_{m+1}) = E_{T \le {\vec{T}}_{m+1}} \left [
\bar{G}({\vec{T}}_{m+1} - T)
\right ]
\end{IEEEeqnarray*}
and
\begin{IEEEeqnarray*}{c}
F_T({\vec{T}}_{m+1}) - \phi({\vec{T}}_{m+1}) = E_{T \le {\vec{T}}_{m+1}} \left [
1 - \bar{G}({\vec{T}}_{m+1} - T)
\right ]
\end{IEEEeqnarray*}
which upon substitution into \equat{thetabariid} allows us to write $\bar{\Theta}_{m,\ell}$ as
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{\bar{\Theta}_{m,\ell} = (m+1) {M \choose {m+1}}
{m \choose {\ell}} } \nonumber \\ \quad
& \times &
\displaystyle \int_0^{\infty}
\left [
\begin{array}{c}
f_T(t)(1-F_T(t))^{M-m-1}
\phi^{\ell} (t)\\
{\mbox{\large \boldmath $\times$} }\\
(F_T(t)-\phi(t))^{m-\ell}
\end{array}
\right ]
dt
\end{IEEEeqnarray}
and then as
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{\bar{\Theta}_{m,\ell} = M
{{M -1} \choose \ell}
{{M - \ell - 1} \choose {m- \ell}}
} \nonumber \\ \quad
& \times &
\begin{array}{c}
\displaystyle \int_0^{\infty}
\left [
\begin{array}{c}
f_T(t)(1-F_T(t))^{M-m-1}
\phi^{\ell} (t)\\
{\mbox{\large \boldmath $\times$} }\\
(F_T(t)-\phi(t))^{m-\ell}
\end{array}
\right ]
dt
\end{array} \IEEEeqnarraynumspace
\label{eq:thetaml}
\end{IEEEeqnarray}
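For clarity, the passage from the first form of $\bar{\Theta}_{m,\ell}$ to the second
uses the standard binomial identities
\begin{IEEEeqnarray*}{c}
(m+1)
{M \choose {m+1}}
=
M
{{M-1} \choose m}
\qquad \mbox{and} \qquad
{{M-1} \choose m}
{m \choose \ell}
=
{{M -1} \choose \ell}
{{M - \ell - 1} \choose {m- \ell}}
\end{IEEEeqnarray*}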
To evaluate ${H^{\uparrow}}({\bf T})$ in \equat{He2} we must first compute $\Gamma_{M,\ell} =\sum_{m=\ell}^{M-1}
\bar{\Theta}_{m,\ell}$ as
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{\Gamma_{M,\ell} =
M {{M -1} \choose \ell} } \nonumber \\ \quad
& \times &
\int_0^{\infty}
f_T(t)
\left [
\begin{array}{c}
(1- F_T(t))^{M-1}
\left ( \frac{\phi(t)}{F_T(t) - \phi(t)} \right )^{\ell}\\
{\mbox{\large \boldmath $\times$} }\\
{\displaystyle \sum_{m=\ell}^{M-1}}
{{M - \ell - 1} \choose {m- \ell}}
\left ( \frac{F_T(t)-\phi(t)}{1 - F_T(t)} \right )^{m}
\end{array}
\right ]
dt
\IEEEeqnarraynumspace
\end{IEEEeqnarray}
which we rewrite as
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{\Gamma_{M,\ell} =
M {{M \!-\!1} \choose \ell} } \nonumber \\ \quad
& \times &
\int_0^{\infty}
\! \!f_T(t)
\left [
\begin{array}{c}
(1\!-\! F_T(t))^{M\!-\!1}
\left ( \frac{\phi(t)}{F_T(t) - \phi(t)} \right )^{\ell}\\
{\mbox{\large \boldmath $\times$} }\\
{\displaystyle\sum_{m=0}^{M\!-\!1\!-\!\ell}}
{{M \!- \!\ell \!- \!1} \choose {m}}
\left ( \frac{F_T(t)-\phi(t)}{1 - F_T(t)} \right )^{m\!+\!\ell}
\end{array}
\right ]
dt
\IEEEeqnarraynumspace
\end{IEEEeqnarray}
We consolidate the binomial sum to obtain
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{\Gamma_{M,\ell} =
M {{M \!-\!1} \choose \ell} } \nonumber \\ \quad
& \times & \int_0^{\infty}
\!\!f_T(t)
\left [
\begin{array}{c}
(1\!-\! F_T(t))^{M\!-1}
\left ( \frac{\phi(t)}{F_T(t) \!-\! \phi(t)} \right )^{\ell}\\
{\mbox{\large \boldmath $\times$} }\\
\left ( \frac{F_T(t)-\phi(t)}{1\! -\! F_T(t)} \right )^{\ell}
\left ( \frac{1-\phi(t)}{1 \!-\! F_T(t)} \right )^{M\!-1\!-\!\ell}
\end{array}
\right ]
dt \IEEEeqnarraynumspace
\end{IEEEeqnarray}
which reduces to
\begin{IEEEeqnarray}{c}
\label{eq:gammaML}
\Gamma_{M,\ell}
=
\int_0^{\infty}
\!\!M
{{M\!-\!1} \choose \ell}
f_T(t)
\phi^{\ell}(t)
\left ( 1 \!- \!\phi(t) \right )^{M\!-1\!-\ell}
dt \IEEEeqnarraynumspace
\end{IEEEeqnarray}
for $\ell = 1,2,\cdots,M-1$.
Now consider the integrand of the difference $\Gamma_{M,\ell} -
\Gamma_{M,\ell+1}$ where we drop the $t$ dependence for notational convenience
\begin{IEEEeqnarray*}{c}
\Gamma_{M,\ell} -
\Gamma_{M,\ell+1}
=
\left [
\begin{array}{c}
M
{{M-1} \choose \ell}
\phi^{\ell}
\left ( 1 -\phi \right )^{M-\ell-1}\\
{\mbox{\large \boldmath $-$} }\\
M
{{M\!-\!1} \choose {\ell\!+\!1}}
\phi^{\ell+1}
\left ( 1 \!-\!\phi \right )^{M\!-\!\ell-2}
\end{array}
\right ]
\end{IEEEeqnarray*}
We can rewrite this expression as
\begin{IEEEeqnarray*}{c}
M
\phi^{\ell}
\left [
{{M\!-\!1} \choose \ell}
\!+\!
\sum_{r=1}^{M\!-\!\ell\! -\!1}
\!\!(-1)^r
\phi^r
\left [
\begin{array}{c}
{{M-1} \choose \ell}
{{M\!-\!\ell\! -\! 1} \choose r} \\
{\mbox{\large \boldmath $+$} }\\
{{M\!-\!1} \choose {\ell\!+\!1}}
{{M\!-\!\ell\! - \!2} \choose {r\!-\!1}}
\end{array}
\right ]
\right ]
\end{IEEEeqnarray*}
which after consolidating terms becomes
\begin{IEEEeqnarray*}{c}
M
\phi^{\ell}
\left [
\begin{array}{c}
{{M-1} \choose \ell}\\
\mbox{ }\\
{\mbox{\large \boldmath $+$}} \\
\frac{1}{M}
{{M} \choose {\ell+1}}
{\displaystyle \sum_{r=1}^{M-\ell -1}}
(-1)^r
{{M-\ell -1} \choose {r}}
(\ell+r+1)
\phi^r
\end{array}
\right ]
\end{IEEEeqnarray*}
Noting that $M {{M-1} \choose \ell} = (\ell+1){{M} \choose {\ell+1}}$ is precisely the $r=0$ term, we can absorb the leading term into the sum by extending it to $r=0$, producing
\begin{IEEEeqnarray*}{c}
{{M} \choose {\ell+1}}
\sum_{r=0}^{M-\ell -1}
(-1)^r
{{M-\ell -1} \choose {r}}
(\ell+r+1)
\phi^{r+\ell}
\end{IEEEeqnarray*}
which can be recognized as
\begin{IEEEeqnarray*}{c}
\frac{d}{d\phi}
\left [
{{M} \choose {\ell+1}}
\sum_{r=0}^{M-\ell -1}
(-1)^r
{{M-\ell -1} \choose {r}}
\phi^{r+\ell+1}
\right ]
\end{IEEEeqnarray*}
and then reduced to
\begin{IEEEeqnarray*}{c}
{{M} \choose {\ell+1}}
\frac{d}{d\phi}
\left [
\phi^{\ell+1} (1-\phi)^{M-\ell - 1}
\right ]
\end{IEEEeqnarray*}
so that we have $\Delta\Gamma_{M,\ell}$ as
\begin{IEEEeqnarray}{c}
\label{eq:Gammadiffiid}
{{M} \choose {\ell\!+\!1}}
\sum_{r=0}^{M\!-\!\ell\! -\!1}
\!\!(-1)^r
{{M\!-\!\ell\! -\!1} \choose {r}}
(\ell\!+\!r\!+\!1)
E \left [
\phi^{r\!+\!\ell}(t)
\right ] \IEEEeqnarraynumspace
\end{IEEEeqnarray}
where $E[\cdot]$ is the expectation using $f_T(t)$.
The previous development of section~\ref{app:HupIID} proves the following theorem:
{\em \begin{theorem} {\bf \boldmath An Upper Bound for Ordering Entropy $H(\Omega|\vec{{\bf S}},{\bf T})$ with I.I.D. ${\bf T}$:}
\thmlabel{DeltaGamma}
If ${\bf T}$ is i.i.d., then we can write $\Delta\Gamma_{M,\ell}$ as
\begin{IEEEeqnarray*}{c}
{{M} \choose {\ell\!+\!1}}
\sum_{r=0}^{M\!-\!\ell\! -\!1}
\!\!(-1)^r
{{M\!-\!\ell\! -\!1} \choose {r}}
(\ell\!+\!r\!+\!1)
E \left [
\phi^{r\!+\!\ell}(t)
\right ]
\end{IEEEeqnarray*}
where
\begin{IEEEeqnarray*}{c}
\phi(t)
=
\int_0^t
f_T(x) \bar{G}(t-x)
dx
\end{IEEEeqnarray*}
so that
\begin{IEEEeqnarray*}{c}
H(\Omega|\vec{{\bf S}},{\bf T})
\le
{H^{\uparrow}}({\bf T})
=
\sum_{\ell=1}^{M-1}
\Delta\Gamma_{M,\ell}
\log (\ell + 1)!
\end{IEEEeqnarray*}
\end{theorem}}
\begin{Proof}{Theorem \thmref{DeltaGamma}}
See the development of section~\ref{app:HupIID} leading to the statement of \Thmref{DeltaGamma}.
\end{Proof}
\subsection{$H(\Omega|\vec{{\bf S}},{\bf T})$ for Special Cases of IID ${\bf T}$}
\label{sect:specialcaseHO}
Here we derive expressions for $H(\Omega|\vec{{\bf S}},{\bf T})$ when the i.i.d. input distribution is that which maximizes
$I({\bf S};{\bf T})$. We consider the following cases:
\begin{itemize}
\item
Exponential first-passage with $E[T] = \tau$
\item
Exponential first-passage with emission deadline, $\tau$
\end{itemize}
\subsubsection{Exponential Transit Times with a Mean Constraint}
For exponential first-passage times with mean $1/{\mu}$, the probability density of
${\bf T}$ that maximizes $h({\bf S})$ subject to a mean constraint $E[\sum_m T_m]
\le M\tau$ is i.i.d. with marginal
\begin{IEEEeqnarray}{c}
\label{eq:fThSoptmean}
f_{T_m}(t)
=
a\delta(t)
+
{\mu} a(1-a)
e^{-{{\mu}}a t}
u(t)
\end{IEEEeqnarray}
where
$a = 1/({\mu} \tau + 1)$ and $u(t)$ is the unit step function \cite{bits-Qs}. For
exponential transit we have
\begin{IEEEeqnarray*}{c}
\bar{G}(t)
=
e^{-{\mu} t} u(t)
\end{IEEEeqnarray*}
and thereby
\begin{IEEEeqnarray*}{c}
\phi(t)
=
\int_0^t
f_T(x) \bar{G}(t-x) dx
=
a e^{-{\mu} a t} u(t)
\end{IEEEeqnarray*}
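As a check, splitting the convolution into its impulsive and continuous parts gives,
for $t \ge 0$,
\begin{IEEEeqnarray*}{c}
\phi(t)
=
a e^{-{\mu} t}
+
{\mu} a (1-a) e^{-{\mu} t}
\int_0^t e^{{\mu}(1-a)x} dx
=
a e^{-{\mu} t}
+
a \left ( e^{-{\mu} a t} - e^{-{\mu} t} \right )
=
a e^{-{\mu} a t}
\end{IEEEeqnarray*}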
We then require an expression for $E_T[\phi^k(T)]$.
Remembering that $\int_{0^-}^{0^+} \delta (t) u^k(t) dt = \frac{1}{k+1}$ we
obtain
\begin{IEEEeqnarray*}{c}
E_T[\phi^k(T)]
=
\int_0^{\infty} f_T(t) a^k e^{-k{\mu} a t} u^k(t) dt
=
\frac{a^k}{k+1}
\end{IEEEeqnarray*}
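In more detail, the impulse at the origin contributes $\frac{a^{k+1}}{k+1}$ (via the
$\delta$--$u^k$ convention above) while the continuous part of $f_T(\cdot)$ contributes
\begin{IEEEeqnarray*}{c}
{\mu} a (1-a) a^k
\int_0^{\infty} e^{-(k+1){\mu} a t} dt
=
\frac{a^k (1-a)}{k+1}
\end{IEEEeqnarray*}
and the two contributions sum to $\frac{a^k}{k+1}$ as claimed.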
so that \equat{Gammadiffiid} becomes
\begin{IEEEeqnarray*}{c}
\Delta\Gamma_{M,\ell}
=
{{M} \choose {\ell+1}}
\sum_{r=0}^{M-\ell -1}
(-1)^r
{{M-\ell -1} \choose {r}}
a^{r+\ell}
\end{IEEEeqnarray*}
which reduces to
\begin{IEEEeqnarray}{c}
\label{eq:Gammadiffexp}
\Delta\Gamma_{M,\ell}
=
{M \choose \ell+1}
a^\ell
(1-a)^{M-\ell - 1}
\end{IEEEeqnarray}
for $\ell = 1,2,\cdots,M-1$.
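Substituting \equat{Gammadiffexp} into \equat{He2} and reindexing with $k = \ell + 1$
(the $k=0$ and $k=1$ terms contribute nothing since $\log k! = 0$) gives
\begin{IEEEeqnarray*}{c}
{H^{\uparrow}}({\bf T})
=
\sum_{\ell=1}^{M-1}
{M \choose {\ell+1}}
a^{\ell}
(1-a)^{M-\ell-1}
\log(\ell+1)!
=
\frac{1}{a}
\sum_{k=0}^{M}
{M \choose k}
a^{k}
(1-a)^{M-k}
\log k!
\end{IEEEeqnarray*}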
With $a = \frac{1}{{\mu} \tau + 1}$ we can write $H(\Omega|\vec{{\bf S}},{\bf T})$ as
\begin{IEEEeqnarray}{c}
\label{eq:HOexpgeomform}
({\mu} \tau + 1)
\sum_{k=0}^M
\log(k!)
{M \choose k}
\left (\frac{{\mu} \tau}{{\mu} \tau + 1} \right )^{M-k}
\left (\frac{1}{{\mu} \tau + 1} \right )^k \IEEEeqnarraynumspace
\end{IEEEeqnarray}
which is the expectation of $({\mu} \tau + 1) \log K!$ for a binomial random
variable $K$ with parameters $M$ and $\frac{1}{{\mu} \tau + 1}$, or
\begin{IEEEeqnarray}{c}
\label{eq:HOexpgeomformA}
H(\Omega|\vec{{\bf S}},{\bf T})
=
({\mu} \tau + 1)
E_K \left [
\log K!
\right ]
\end{IEEEeqnarray}
We restate this result as a theorem:
{\em \begin{theorem}{\bf \boldmath $H(\Omega|\vec{{\bf S}},{\bf T})$ for Exponential First-Passage with a Mean Constraint ($E[T] = \tau$):}
\thmlabel{HOmaxmean}
For $T$ distributed as \equat{fThSoptmean} and exponential first-passage with parameter ${\mu}$, we have
\begin{IEEEeqnarray*}{c}
H(\Omega|\vec{{\bf S}},{\bf T})
=
({\mu} \tau + 1)
E_K \left [
\log K!
\right ]
\end{IEEEeqnarray*}
where $K$ is a binomial random variable with parameters $M$ and $\frac{1}{1+{\mu} \tau}$.
\end{theorem}}
\begin{Proof}{Theorem \thmref{HOmaxmean}}
See the development leading to the statement of \Thmref{HOmaxmean} and direct application of
\Thmref{HupGeneral}.
\end{Proof}
\subsubsection{Exponential Transit Times with a Deadline}
\Thmref{fTopthSopt} states that if $T$ is constrained to $[0,\tau]$ then the $f_T(t)$ that maximizes $h(S)$
(and therefore $h({\bf S})$ when in i.i.d. form) is
\begin{IEEEeqnarray}{c}
\label{eq:fThSoptdeadline}
f_{T}(t)
=
{\frac{1}{e + {\mu} \tau}
\delta(t)
+
\frac{{\mu}}{e + {\mu} \tau}
+
\frac{e-1}{e + {\mu} \tau}
\delta(t-\tau)}
\end{IEEEeqnarray}
for $t \in [0,\tau]$ and zero otherwise.
To obtain the corresponding $H(\Omega|\vec{{\bf S}},{\bf T}) = {H^{\uparrow}}({\bf T})$ we calculate $\phi(t)$ as
\begin{IEEEeqnarray}{c}
\label{eq:phiISalmostfs}
\int_0^{t}
f_T(x) e^{-{\mu} (t-x)} dx
=
\threedef{
{\frac{1}
{e + {\mu} \tau}}}{0\le t \le \tau}
{\frac{e}{e + {\mu} \tau}
e^{-{\mu} (t - \tau)}}{t > \tau}{0}{\mbox{o.w.}}
\end{IEEEeqnarray}
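The constancy of $\phi(\cdot)$ on $[0,\tau]$ can be verified directly: for $0 \le t < \tau$,
\begin{IEEEeqnarray*}{c}
\phi(t)
=
\frac{e^{-{\mu} t}}{e + {\mu} \tau}
+
\frac{{\mu}}{e + {\mu} \tau}
\int_0^t e^{-{\mu}(t-x)} dx
=
\frac{e^{-{\mu} t} + 1 - e^{-{\mu} t}}{e + {\mu} \tau}
=
\frac{1}{e + {\mu} \tau}
\end{IEEEeqnarray*}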
Once again, we require an expression for the integral $\int_0^{\infty} f_T(t)
\phi^k(t)dt$, and again remembering that
$\int_{0^-}^{0^+} \delta (t) u^k(t) dt = \frac{1}{k+1}$ we obtain $E_T \left [ \phi^k(T) \right ]$ as
\begin{IEEEeqnarray*}{c}
\left (\frac{1}{e+{\mu} \tau} \right )^{k+1}
{\displaystyle \int_{0^-}^{0^+} }
\delta(t)
u^k(t)
dt \\
+ \\
{\mu}\left (\frac{1}{e+{\mu} \tau} \right )^{k+1}
\int_0^{\tau}
dt\\
+\\
(e\!-\!1)\left ( \frac{1}{e\!+\!{\mu} \tau} \right )^{k\!+\!1}
\!\!{\displaystyle \int_{\tau^-}^{\tau^+} }
\!\!\delta(t-\tau)
\left ( 1\! +\! (e\!-\!1)u(t\!-\!\tau) \right )^k
dt
\end{IEEEeqnarray*}
which reduces to
\begin{IEEEeqnarray*}{c}
\left (\frac{1}{e+{\mu} \tau} \right )^{k+1}
\left [
\frac{1}{k+1} + {\mu} \tau
+
\sum_{r=0}^k
{k \choose r}
\frac{1}{r+1}
(e-1)^{r+1}
\right ]
\end{IEEEeqnarray*}
which further reduces to
\begin{IEEEeqnarray*}{c}
\left (\frac{1}{e+{\mu} \tau} \right )^{k+1}
\left [
\frac{1}{k+1} + {\mu} \tau
+
\frac{e^{k+1}}{k+1}
-
\frac{1}{k+1}
\right]
\end{IEEEeqnarray*}
and then finally,
\begin{IEEEeqnarray*}{c}
E_T \left [ \phi^k(T) \right ]
=
\left (\frac{1}{e+{\mu} \tau} \right )^{k+1}
\left [
{\mu} \tau
+
\frac{e^{k+1}}{k+1}
\right]
\end{IEEEeqnarray*}
so that $\Delta\Gamma_{M,\ell}$ in \equat{Gammadiffiid} becomes
\begin{IEEEeqnarray*}{c}
{{M} \choose {\ell\!+\!1}}
\displaystyle \sum_{r=0}^{M\!-\!\ell\! -\!1}
\left [
\begin{array}{c}
(-1)^r
{{M-\ell -1} \choose {r}}\\
\times \\
\!\!(\ell\!+\!r\!+\!1)
\left (\frac{1}{e+{\mu} \tau} \right )^{r\!+\!\ell\!+\!1}
\!\!\left [
{\mu} \tau
\!+\!
\frac{e^{r+\ell+1}}{r+\ell+1}
\right]
\end{array}
\right ]
\end{IEEEeqnarray*}
which reduces to
\begin{IEEEeqnarray*}{c}
{{M} \choose {\ell+1}}
\left (\frac{e}{e+{\mu} \tau} \right )^{\ell+1}
\left (\frac{{\mu} \tau}{e+{\mu} \tau} \right )^{M-\ell-1}\\
{\mbox{\large \boldmath $+$}}\\
{\mu} \tau
{{M} \choose {\ell+1}}
\displaystyle \sum_{r=0}^{M-\ell -1}
\left [
\begin{array}{c}
(-1)^r
{{M-\ell -1} \choose {r}}\\
\times\\
(\ell+r+1)
\left (\frac{1}{e+{\mu} \tau} \right )^{r+\ell+1}
\end{array}
\right ]
\end{IEEEeqnarray*}
and then to
\begin{IEEEeqnarray*}{c}
{{M} \choose {\ell+1}}
\left (\frac{e}{e+{\mu} \tau} \right )^{\ell+1}
\left (\frac{{\mu} \tau}{e+{\mu} \tau} \right )^{M-\ell-1}\\
{\mbox{\large \boldmath $+$}}\\
\left [
\begin{array}{c}
{\mu} \tau
{{M} \choose {\ell+1}}
\left (1 - \frac{1}{e+{\mu} \tau} \right )^{M-\ell -2}\\
\times \\
\left (\frac{1}{e+{\mu} \tau} \right )^{\ell +1}
\left ( \ell + 1 -\frac{M}{e+{\mu} \tau} \right )
\end{array}
\right ]
\end{IEEEeqnarray*}
If we define $k = \ell +1$ and then
\begin{IEEEeqnarray*}{c}
p_1 = \frac{e}{e+{\mu} \tau}
\end{IEEEeqnarray*}
and
\begin{IEEEeqnarray*}{c}
p_2 = \frac{1}{e+{\mu} \tau}
\end{IEEEeqnarray*}
we can then write
\begin{IEEEeqnarray}{c}
\label{eq:Gammadiffuniform}
\Delta\Gamma_{M,k-1}
=
\left [
\begin{array}{c}
{M \choose k}
p_1^k (1-p_1)^{M-k}\\
{\mbox{\large \boldmath $+$}}\\
\frac{{\mu} \tau }{1-p_2}
\left [
k
-
\frac{M}{{\mu} \tau + e}
\right ]
{M \choose k}
p_2^k (1-p_2)^{M-k}
\end{array}
\right ] \IEEEeqnarraynumspace
\end{IEEEeqnarray}
Now if we define random variables $K_i$ to be binomial with parameters $M$ and
$p_i$, the following theorem results from direct application of \Thmref{HupGeneral}:
{\em \begin{theorem}{\bf \boldmath $H(\Omega|\vec{{\bf S}},{\bf T})$ for Exponential First-Passage with a Launch Deadline (${\bf T} \in [0,\tau]^M$):}
\thmlabel{HOmaxdeadline}
For $T$ distributed as \equat{fThSoptdeadline} we have
\begin{IEEEeqnarray}{c}
\label{eq:HOexpgeomform2}
H(\Omega|\vec{{\bf S}},{\bf T})
=
\!\left [
\!\!\begin{array}{c}
E_{K_1} \left [
\log K_1!
\right ]
\!+\!
\frac{{\mu} \tau}{1-p_2}
E_{K_2} \left [
K_2 \log K_2!
\right ]\\
{\mbox{\large \boldmath $-$}}\\
\frac{{\mu} \tau M }{(1-p_2)({\mu} \tau + e)}
E_{K_2} \left [
\log K_2!
\right ]
\end{array}
\! \!\right ] \IEEEeqnarraynumspace
\end{IEEEeqnarray}
where $K_1$ is a binomial random variable with parameters $M$ and $\frac{e}{e+{\mu} \tau}$
and $K_2$ is a binomial random variable with parameters $M$ and $\frac{1}{e+{\mu} \tau}$.
\end{theorem}}
\begin{Proof}{Theorem \thmref{HOmaxdeadline}}
See the development leading to the statement of \Thmref{HOmaxdeadline} and direct application of
\Thmref{HupGeneral}.
\end{Proof}
\subsection{Asymptotic $H(\Omega|\vec{{\bf S}},{\bf T})/M$ For Exponential First-Passage}
We are interested in asymptotic values of $H(\Omega|\vec{{\bf S}},{\bf T})/M$ owing to our definition of
capacity per token in \equat{Cqdef} (see also in Part-I \cite{RoseMian16_1}). To that end, recall
that ${\lambda} \tau = M$ and we define $\rho = {\lambda}/{\mu}$, a measure of system token
``load'' (also a proxy for power expenditure in units of energy per passage time), so that
\begin{IEEEeqnarray*}{c}
\frac{1}{1 + {\mu} M/{\lambda}}
=
\frac{1}{1 + M/\rho}
\end{IEEEeqnarray*}
and likewise
\begin{IEEEeqnarray*}{c}
\frac{e}{e + {\mu} M/{\lambda}}
=
\frac{e}{e + M/\rho}
\end{IEEEeqnarray*}
and
\begin{IEEEeqnarray*}{c}
\frac{1}{e + {\mu} M/{\lambda}}
=
\frac{1}{e + M/\rho}
\end{IEEEeqnarray*}
Now, remember the binomial distribution for fixed $k$ and large $M$ is approximated by
\begin{IEEEeqnarray*}{c}
{M \choose k} p^k (1-p)^{M-k}
\approx
\frac{M^k}{k!}
p^k (1-p)^{M-k}
\end{IEEEeqnarray*}
So, for any finite $k$ it is easily seen that for $M \rightarrow \infty$
\begin{IEEEeqnarray}{c}
\label{eq:limit1}
{M \choose k}
\left ( \frac{1}{1 + \frac{M}{\rho}} \right )^k
\left (1 - \frac{1}{1 + \frac{M}{\rho}} \right )^{M-k}
\rightarrow
e^{-\rho}
\frac{1}{k!}
\rho^k \IEEEeqnarraynumspace
\end{IEEEeqnarray}
\begin{IEEEeqnarray}{c}
\label{eq:limit2}
{M \choose k}
\left ( \frac{1}{e + \frac{M}{\rho}} \right )^k
\left (1 - \frac{1}{e + \frac{M}{\rho}} \right )^{M-k}
\rightarrow
e^{-\rho}
\frac{1}{k!}
\rho^k \IEEEeqnarraynumspace
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray}{c}
\label{eq:limit3}
{M \choose k}
\left ( \frac{e}{e + \frac{M}{\rho}} \right )^k
\left (1 - \frac{e}{e + \frac{M}{\rho}} \right )^{M-k}
\rightarrow
e^{-\rho e}
\frac{1}{k!}
\rho^k e^k \IEEEeqnarraynumspace
\end{IEEEeqnarray}
and we note that all these limiting distributions are Poisson.
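Each limit follows from the same calculation: with $p \rightarrow 0$ and $Mp$ convergent,
\begin{IEEEeqnarray*}{c}
{M \choose k} p^k (1-p)^{M-k}
\approx
\frac{(Mp)^k}{k!}
(1-p)^{M-k}
\rightarrow
\frac{(Mp)^k}{k!}
e^{-\lim_{M \rightarrow \infty} M p}
\end{IEEEeqnarray*}
where $Mp \rightarrow \rho$ in \equat{limit1} and \equat{limit2}, and $Mp \rightarrow \rho e$
in \equat{limit3}.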
\Equat{HOexpgeomformA} and \equat{HOexpgeomform2} can then be combined with \equat{limit1},
\equat{limit2} and \equat{limit3} to produce the following two theorems:
{\em \begin{theorem}{\bf \boldmath Asymptotic $H(\Omega|\vec{{\bf S}},{\bf T})/M$ for Exponential First-Passage with a Mean Constraint ($E[T] = \tau$):}
\thmlabel{HOasymptmean}
For exponential first-passage with $E[T] = \tau$ and $f_T(\cdot)$ as given in \equat{fThSoptmean}, the asymptotic ordering entropy per token is given by
\begin{IEEEeqnarray}{c}
\label{eq:HOiidmean}
\lim_{M \rightarrow \infty} \frac{H(\Omega|\vec{{\bf S}},{\bf T})}{M}
=
e^{-\rho}
\sum_{k=2}^{\infty}
\rho^{k-1}
\frac{\log k!}{k!}
=
E[\frac{\log k!}{\rho}] \IEEEeqnarraynumspace
\end{IEEEeqnarray}
where the final expectation is for $k$ a Poisson random variable with parameter $\rho$.
\end{theorem}}
\begin{Proof}{Theorem \thmref{HOasymptmean}}
See \Thmref{HOmaxmean} and the development leading up to the statement of \Thmref{HOasymptmean}.
\end{Proof}
{\em \begin{theorem}{\bf \boldmath Asymptotic $H(\Omega|\vec{{\bf S}},{\bf T})/M$ for Exponential First-Passage with a Deadline Constraint ($T \in [0,\tau]$):}
\thmlabel{HOasymptdeadline}
For exponential first-passage with ${\bf T} \in [0,M/\rho]^M$ and $f_T(\cdot)$ as given in \equat{fThSoptdeadline},
the asymptotic ordering entropy per token is given by
\begin{IEEEeqnarray}{c}
\label{eq:HOiiddeadline}
\lim_{M \rightarrow \infty} \frac{H(\Omega|\vec{{\bf S}},{\bf T})}{M}
=
E[ (\frac{k}{\rho} - 1) \log k!] \IEEEeqnarraynumspace
\end{IEEEeqnarray}
where the final expectation is for $k$ a Poisson random variable with parameter $\rho$.
\end{theorem}}
\begin{Proof}{Theorem \thmref{HOasymptdeadline}}
See \Thmref{HOmaxdeadline} and the development leading up to the statement of \Thmref{HOasymptdeadline}.
\end{Proof}
\section{Upper Bound for $I(\vec{{\bf S}};{\bf T})$}
\label{sect:upperI}
With analytic bounds for $H(\Omega|\vec{{\bf S}},{\bf T})$, we can now consider bounds on the mutual information,
$I(\vec{{\bf S}};{\bf T})$. In Part-I (using results from this paper, Part-II), lower bounds were derived. Here we consider an upper bound. To begin, however, we must find an upper bound for $H(\Omega|\vec{{\bf S}},{\bf T})$.
\subsection{A Useful Upper Bound On $H(\Omega|\vec{{\bf S}},{\bf T})$}
\label{sect:upperHO}
We state the bound as a theorem with proof.
{\em \begin{theorem} {\bf \boldmath An Upper Bound for $H(\Omega|\vec{{\bf S}},{\bf T})$:}
\thmlabel{HOgamma}
Given
\begin{IEEEeqnarray}{c}
Q(\cdot) = {\bar{G}}(|\cdot|)
\end{IEEEeqnarray}
where ${\bar{G}}(\cdot)$ is the CCDF of the passage time, and defining
\begin{IEEEeqnarray}{c}
\label{eq:gamma}
\gamma_T = E_{{\bf T}} \left [ Q(T_1 - T_2) \right ]
\end{IEEEeqnarray}
we have
\begin{IEEEeqnarray}{c}
\label{eq:Homega_bound}
H(\Omega|\vec{{\bf S}},{\bf T})
\le
E_{{\bf T}}\left [ {H^{\uparrow}}({\bf T}) \right ]
\le
M \log \left ( 1 + \frac{M-1}{2} \gamma_T \right ) \IEEEeqnarraynumspace
\end{IEEEeqnarray}
\end{theorem}}
\begin{Proof}{Theorem \thmref{HOgamma}}
${H^{\uparrow}}({\bf t})$, defined in \equat{Hexpurg3} and derived in Part-I \cite{RoseMian16_1, SonRos13} is an
upper bound for $H(\Omega|\vec{{\bf S}},\vec{\tv})$. The bound is satisfied with equality {\bf iff} the
first-passage density is exponential \cite{RoseMian16_1,SonRos13}. For a given $m$, let us define
${\bar{G}}_k = {\bar{G}}(\vec{t}_{m+1} - \vec{t}_k)$ and $G_k$ in a corresponding way. Then, consider the sum of
the following $2^m$ terms
\begin{IEEEeqnarray*}{c}
{\bar{G}}_{m} {\bar{G}}_{m-1} {\bar{G}}_{m-2} \cdots {\bar{G}}_3 {\bar{G}}_2 {\bar{G}}_1\\
+\\
{\bar{G}}_{m} {\bar{G}}_{m-1} {\bar{G}}_{m-2} \cdots {\bar{G}}_3 {\bar{G}}_2 G_1\\
+\\
\vdots\\
+\\
G_{m} G_{m-1} G_{m-2} \cdots G_3 G_2 {\bar{G}}_1\\
+\\
G_{m} G_{m-1} G_{m-2} \cdots G_3 G_2 G_1
\end{IEEEeqnarray*}
Taken pairwise it is easy to see that this sum telescopes to $1$ since ${\bar{G}}_i + G_i = 1$ so that the
ensemble of terms is a PMF. Furthermore, since $m = 0,1, \cdots, M-1$, the complete ensemble of the
terms, $\prod_{j=1}^m {\bar{G}}^{\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j) G^{1-\bar{x}_j}(\vec{t}_{m+1} -
\vec{t}_j)$, $m=0,1,\cdots, M-1$, sums to $M$. So, we can define
\begin{IEEEeqnarray}{c}
\label{eq:pellm}
p_{\ell|\vec{\tv},m}
=
\sum_{|\bar{{\bf x}}| = \ell}
\prod_{j=1}^m
{{\bar{G}}}^{\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j)
G^{1 -\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j) \IEEEeqnarraynumspace
\end{IEEEeqnarray}
and then
\begin{IEEEeqnarray}{c}
\label{eq:pell}
p_{\ell|\vec{\tv}}
=
\sum_{m=\ell}^{M-1}
\sum_{|\bar{{\bf x}}| = \ell}
\prod_{j=1}^m
\frac{{{\bar{G}}}^{\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j)
G^{1 -\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j)}{M} \IEEEeqnarraynumspace
\end{IEEEeqnarray}
for $\ell = 0, 1, \cdots, M-1$. We can use Jensen's inequality to write
\begin{IEEEeqnarray}{c}
\label{eq:hup}
{H^{\uparrow}}({\bf t}) =
M E_{\ell|\vec{\tv}} \left [\log(1+\ell) \right ]
\le M \log (E[\ell|\vec{\tv}] + 1)
\end{IEEEeqnarray}
Now consider that
\begin{IEEEeqnarray*}{c}
E[\ell|\vec{\tv}]
=
\sum_{m=0}^{M-1}
\frac{1}{M}
E[\ell|\vec{\tv},m]
\end{IEEEeqnarray*}
and the explicit expansion of $E[\ell|\vec{\tv},m]$ is
\begin{IEEEeqnarray}{c}
\label{eq:El}
\sum_{\ell=0}^m \ell
\mbox{\small $
\left (
{\displaystyle \sum_{|\bar{{\bf x}}| = \ell}
\prod_{j=1}^m}
{{{\bar{G}}}^{\bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j)
{G}^{1- \bar{x}_j}(\vec{t}_{m+1} - \vec{t}_j)}
\right )$} \IEEEeqnarraynumspace
\end{IEEEeqnarray}
Then consider that $E[\ell| \vec{\tv},m]$ has the terms
\begin{IEEEeqnarray*}{c}
\label{eq:Elgroup}
\left .
0 \times
\left [
\begin{array}{c}
G_{m} G_{m-1} G_{m-2} \cdots G_3 G_2 G_1 \nonumber \\
\end{array}
\right ] \right \} {\mbox{$1$ term}}
\end{IEEEeqnarray*}
\begin{IEEEeqnarray*}{c}
\left .
1 \times
\left [
\begin{array}{c}
G_{m} G_{m-1} G_{m-2} \cdots G_3 G_2 {\bar{G}}_1\nonumber \\
+\nonumber \\
G_{m} G_{m-1} G_{m-2} \cdots G_3 {\bar{G}}_2 G_1\nonumber \\
+\nonumber \\
\vdots\nonumber \\
+\nonumber \\
{\bar{G}}_{m} G_{m-1} G_{m-2} \cdots G_3 G_2 G_1
\end{array}
\right ] \right \} {\mbox{$m$ terms}}
\end{IEEEeqnarray*}
\begin{IEEEeqnarray*}{c}
\left .
2 \times
\left [
\begin{array}{c}
G_{m} G_{m-1} G_{m-2} \cdots G_3 {\bar{G}}_2 {\bar{G}}_1\nonumber \\
+\nonumber \\
\vdots\nonumber \\
+\nonumber \\
{\bar{G}}_{m} {\bar{G}}_{m-1} G_{m-2} \cdots G_3 G_2 G_1
\end{array}
\right ] \right \}{\mbox{${m \choose 2}$ terms}}
\end{IEEEeqnarray*}
with final term
\begin{IEEEeqnarray*}{c}
\left .
m \times
\left [
\begin{array}{c}
{\bar{G}}_{m} {\bar{G}}_{m-1} {\bar{G}}_{m-2} \cdots {\bar{G}}_{3} {\bar{G}}_{2}{\bar{G}}_1 \\
\end{array}
\right ] \right \}{\mbox{$1$ term}}
\end{IEEEeqnarray*}
Then consider the term $G_{m} G_{m-1} G_{m-2} \cdots G_2 {\bar{G}}_1$ and group
together the other $2^{m-1} - 1$ different terms that contain ${\bar{G}}_1$. The sum of
all these terms is ${\bar{G}}_1$. We can do a corresponding
grouping for each of the $m$ terms in which ${\bar{G}}_i$ appears exactly once.
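(Equivalently, since $\ell = \sum_{j=1}^m \bar{x}_j$ where each indicator $\bar{x}_j$
independently equals $1$ with probability ${\bar{G}}_j$, the result below also follows in
one step from linearity of expectation.)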
Thus, by expanding and regrouping the inner product terms of \equat{El} we can show that
\begin{IEEEeqnarray*}{c}
E[\ell|\vec{\tv},m]
=
\sum_{j=1}^m {{\bar{G}}}(\vec{t}_{m+1} - \vec{t}_j)
\end{IEEEeqnarray*}
which results in
\begin{IEEEeqnarray*}{c}
{H^{\uparrow}}({\bf t})
\le
M \log \left (
1 + \frac{1}{M} \sum_{m=1}^{M-1} \sum_{j=1}^m {{\bar{G}}}(\vec{t}_{m+1} - \vec{t}_j)
\right )
\end{IEEEeqnarray*}
via \equat{pell} and \equat{hup}, remembering that $E[\ell|\vec{\tv},m=0] = 0$. Taking the expectation in
$\vec{{\bf T}}$ yields
\begin{IEEEeqnarray}{c}
\label{eq:EHup}
{H^{\uparrow}}({\bf T})
\le
M \log \left (
1 + \sum_{m=1}^{M-1} \sum_{j=1}^m
\frac{E \left[ {{\bar{G}}}({\vec{T}}_{m+1} - {\vec{T}}_j) \right ]}{M}
\right ) \IEEEeqnarraynumspace
\end{IEEEeqnarray}
We then note that all ordered differences between the $T_i$ are
accounted for in \equat{EHup}. For any given ${\bf T}$ there are $\frac{M(M-1)}{2}$ ordered
terms. Thus, we can rewrite \equat{EHup} as
\begin{IEEEeqnarray}{c}
\label{eq:EHup2}
E_{\vec{{\bf T}}}\left [
{H^{\uparrow}}({\bf T})
\right ]
\le
M \log \left (
1 + \sum_{i,j, i \ne j}^M
\frac{E \left[ {{\bar{G}}}(\left |T_i - T_j \right |) \right ]}{2M}
\right ) \IEEEeqnarraynumspace
\end{IEEEeqnarray}
where the factor of $\frac{1}{2}$ is introduced to account for terms
$T_i < T_j$ which would not appear in the ordered case of \equat{EHup}.
Finally, hypersymmetry of ${\bf T}$ requires that $E \left[ {{\bar{G}}}(\left |T_i - T_j \right
|) \right ] = \gamma_T$, a constant for $i\ne j$ so that
\begin{IEEEeqnarray*}{c}
H(\Omega|\vec{{\bf S}},{\bf T})
\le
E_{\vec{{\bf T}}}\left [
{H^{\uparrow}}({\bf T})
\right ]
\le
M \log \left (
1 + \frac{M-1}{2} \gamma_T
\right )
\end{IEEEeqnarray*}
which matches the result stated in Theorem \thmref{HOgamma} and thus proves the theorem.
\end{Proof}
\subsection{Maximizing $h({\bf S}) + M \log \left ( 1 + \gamma_S (M-1) \right )$}
We now have the rudiments of an upper bound for $I(\vec{{\bf S}};{\bf T})$ in
\begin{IEEEeqnarray}{rCl}
\IEEEeqnarraymulticol{3}{l}{\max_{f_{{\bf T}}(\cdot)} \frac{I(\vec{{\bf S}};{\bf T})}{M}} \nonumber \\ \quad
& \le &
\frac{h({\bf S})}{M}
+
\log \left ( 1 + \gamma_S (M-1) \right )
- \frac{\log M!}{M} - h(D) \IEEEeqnarraynumspace
\end{IEEEeqnarray}
However, the upper bound \equat{Homega_bound} is in terms of $f_{{\bf T}}(\cdot)$ whereas $h({\bf S})$
is a function(al) of $f_{{\bf S}}(\cdot)$. Therefore, we must develop a relationship between
$\gamma_T = E \left [ Q(T_1 - T_2) \right ]$ and $\gamma_S = E \left [ Q(S_1 - S_2) \right ]$. This
relationship allows us to fix $\gamma_S$ and maximize $h({\bf S})$ while still maintaining an upper
bound on $H(\Omega|\vec{{\bf S}},\vec{{\bf T}})$. From here onward we assume exponential first-passage of tokens.
{\em \begin{theorem} {\bf \boldmath $\gamma_T$ versus $\gamma_S$ for Exponential First-Passage:}
\thmlabel{EQTS}
If the first-passage density $f_D(\cdot)$ is exponential then
\begin{IEEEeqnarray*}{c}
E \left [ Q(S_1 - S_2) \right ]
\ge
\frac{1}{2} E \left [ Q(T_1 - T_2) \right ]
\end{IEEEeqnarray*}
or
\begin{IEEEeqnarray*}{c}
\gamma_S
\ge
\frac{1}{2}
\gamma_T
\end{IEEEeqnarray*}
\end{theorem}}
\begin{Proof}{Theorem \thmref{EQTS}}
Let $\Delta = T_1 - T_2$ and ${\cal D} = D_2 - D_1$. Then $\Delta+{\cal D} =
S_1 - S_2$. For the i.i.d. $D_i$ exponential we have ${{\bar{G}}}(d) = e^{-{\mu} d}$,
$d \ge 0$. Thus, $Q(\cdot)= e^{-{\mu} |\cdot|}$. We then note that $|a+b| \le
|a| + |b|$ so that
\begin{IEEEeqnarray*}{rCl}
E[Q(\Delta+{\cal D})] & = & E[e^{-{\mu} | \Delta+{\cal D} |}]\\
& \ge & E[e^{-{\mu} |\Delta| -{\mu} |{\cal D} |}]\\
& = & E[Q(\Delta)]E[Q({\cal D})]
\end{IEEEeqnarray*}
because $\Delta$ and ${\cal D}$ are independent. Then consider that the density of ${\cal D}$
is $f_{\cal D}({\cdot}) = \frac{{\mu}}{2}e^{-{\mu} |{\cdot}|}$ so that
$E[Q({\cal D})] = \int_{-\infty}^\infty \frac{{\mu}}{2}e^{-{\mu} |z|} e^{-{\mu}|z|} dz =
\frac{1}{2}$ which completes the proof.
\end{Proof}
Now, suppose we fix $E \left [ Q(S_1 - S_2) \right ] = \gamma_S$. Then, owing to hypersymmetry we have
$E \left [ Q(S_i - S_j) \right ] = \gamma_S$ $\forall i,j,i \ne j$. Using
standard Euler-Lagrange optimization \cite{hild}, we can find the density
$f_{{\bf S}}$ which maximizes $h({\bf S})$ as
\begin{IEEEeqnarray}{c}
f_{{\bf S}}^* ({\bf s}) = \frac{1}{A(\beta)} e^{\beta \sum_{\stackrel{i,j}{i \ne j}}Q(s_i - s_j)}
\end{IEEEeqnarray}
where
\begin{IEEEeqnarray}{c}
A(\beta)
=
\int e^{\beta \sum_{\stackrel{i,j}{i\ne j}}Q(s_i - s_j)} d{\bf s}
\end{IEEEeqnarray}
and $\beta$ is a constant chosen to satisfy $E[Q(S_1 - S_2)] = \gamma_S$.
The entropy of ${\bf S}$ is then
\begin{IEEEeqnarray}{c}
h({\bf S})
=
\log A(\beta) - \beta M(M-1) \gamma_S
\end{IEEEeqnarray}
We note that for $\beta=0$, $f_{{\bf S}}(\cdot)$ is uniform. Increasing $\beta$ makes
$f_{{\bf S}}(\cdot)$ more ``peaky'' in regions where $s_i \approx s_j$ since $Q(0)=1$
and $Q(\cdot)$ is monotonically decreasing away from zero. Likewise, decreasing
$\beta$ reduces $f_{{\bf S}}(\cdot)$ in the vicinity of $s_i \approx s_j$. Thus,
$\gamma_S$ increases monotonically with $\beta$; that is, $\gamma_S^{\prime}(\cdot)$ is non-negative.
More formally, we have from the definition of $\gamma_S(\beta)$ that
\begin{IEEEeqnarray*}{c}
M(M-1)\gamma_S(\beta)
=
E\left [\sum_{\stackrel{i,j}{i\ne j}}Q(s_i - s_j) \right ]
\equiv
\Gamma_S(\beta)
\end{IEEEeqnarray*}
Then
\begin{IEEEeqnarray}{c}
\label{eq:variance}
\Gamma_S^{\prime}(\beta)
\!=\!
E\!\!\left [ \!\!\left ( \! \sum_{\stackrel{i,j}{i\ne j}}\!Q(s_i\! - \!s_j) \!\right )^{\!2} \right ]
\!-\!
E^2 \!\!\left [\sum_{\stackrel{i,j}{i\ne j}}\!Q(s_i\! -\! s_j) \! \right ] \IEEEeqnarraynumspace
\end{IEEEeqnarray}
which is a variance and therefore greater than or equal to zero. Thus,
$\gamma_S^{\prime}(\beta) \ge 0$. And since $0 \le \gamma_S(\beta) \le 1$, we
must also have $\gamma_S^{\prime}(\beta) \rightarrow 0$ in the limits $\beta
\rightarrow \pm \infty$.
Now, consider all terms as functions of $\beta$ as in
\begin{IEEEeqnarray}{c}
\label{eq:Ibeta}
\begin{array}{rcl}
I(\vec{{\bf S}};{\bf T}) & \le & \log A(\beta) - \beta M(M-1)\gamma_S(\beta) \\
& + & M \log \left ( 1 + \gamma_S(\beta)(M-1) \right )\\
& - & h({\bf S}|{\bf T}) - \log M!
\end{array}
\end{IEEEeqnarray}
We can find extremal points by differentiating \equat{Ibeta} with respect to $\beta$ to obtain
the first derivative
\begin{IEEEeqnarray*}{c}
M(M-1) \gamma^{\prime}_S (\beta) \left (
-\beta
+
\frac{1}{1 + \gamma_S(\beta)(M-1)}
\right )
\end{IEEEeqnarray*}
and the second derivative
\begin{IEEEeqnarray*}{c}
\begin{array}{c}
M(M-1)\gamma^{\prime\prime}_S(\beta)
\left (
-\beta
+
\frac{1}{1 + \gamma_S(\beta)(M-1)}
\right )\\
+\\
-M(M-1) \gamma^{\prime}_S (\beta) \left (
1 + (M-1)\frac{\gamma^{\prime}_S(\beta)}{\left ( 1 + \gamma_S(\beta)(M-1) \right )^2}
\right )
\end{array}
\end{IEEEeqnarray*}
which when the first derivative is zero reduces to
\begin{IEEEeqnarray*}{c}
-M(M-1) \gamma^{\prime}_S (\beta) \left (
1 + (M-1)
\begin{array}{c}
\frac{\gamma^{\prime}_S(\beta)}{\left ( 1 + \gamma_S(\beta)(M-1) \right )^2}
\end{array}
\right ) \le 0 \IEEEeqnarraynumspace
\end{IEEEeqnarray*}
We then have
\begin{IEEEeqnarray}{c}
\label{eq:betastar}
\gamma_S^* = \gamma_S({\beta^*})
=
\frac{1 - {\beta^*}}{(M-1){\beta^*}}
\end{IEEEeqnarray}
and note that \equat{betastar} requires $\frac{1}{M} \le {\beta^*} \le 1$ since $0
\le \gamma_S(\beta) \le 1$. In addition, there is at most one solution to
\equat{betastar} since $\frac{1 - {\beta^*}}{(M-1){\beta^*}}$ monotonically
decreases in $\beta$ while $\gamma_S(\beta)$ monotonically increases in $\beta$.
Since the second derivative at the extremal is non-positive, the unique point
defined by \equat{betastar} is a maximum.
Unfortunately, solutions to \equat{betastar} have no closed form and numerical
solutions for asymptotically large $M$ are impractical. Nonetheless, the constraints
on ${\beta^*}$ will allow an oblique approach to deriving a bound.
We note again that $\Gamma_S^{\prime}(\beta)$ is the variance of $\sum_{i\ne j}Q(s_i
- s_j)$ and must decrease monotonically in $\beta$ since as previously
discussed, increased $\beta$ concentrates $f_{{\bf S}}(\cdot)$ around larger values of
$\sum_{i\ne j}Q(s_i - s_j)$. Thus,
\begin{IEEEeqnarray}{c}
\Gamma_S^{\prime}(\beta) \le \Gamma_S^{\prime}(0)
\end{IEEEeqnarray}
$\forall \beta > 0$ which in turn implies
\begin{IEEEeqnarray}{c}
\label{eq:diffeq}
\Gamma_S(\beta) \le \beta \Gamma_S^{\prime}(0) + \Gamma_S(0)
\end{IEEEeqnarray}
$\forall \beta \in (0,1]$.
Assuming exponential first-passage, $Q(x) = e^{-\mu|x|}$ and remembering that
$\Gamma_S(\beta) = M(M-1)\gamma_S(\beta)$, we can calculate both $\gamma_S(0)$
and $\gamma_S^{\prime}(0)$ in closed form as
\begin{IEEEeqnarray}{c}
\label{eq:Gamma0}
\gamma_S(0)
=
Z(\mu \tau)
\equiv
\frac{2}{(\mu \tau)^2} \left ( \mu \tau + e^{-\mu \tau} - 1 \right )
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray}{c}
\label{eq:Gammaprime0}
\gamma_S^{\prime}(0)
=
\left [
\begin{array}{c}
(M-2)(M-3) \gamma_S^2(0) + 2Z(2 \mu\tau)\\
+ \\
24\frac{M-2}{(\mu \tau)^3} \left ( \mu \tau - 2 + e^{-\mu \tau} (2 + \mu \tau) \right ) \\
-\\
M(M-1) \gamma_S^2(0)
\end{array}
\right ]
\end{IEEEeqnarray}
respectively. Defining $M = {\lambda} \tau$ and taking the limit for large $M$ yields
\begin{IEEEeqnarray}{c}
\label{eq:limgamma}
\lim_{M \rightarrow \infty} M \gamma_S(0) = \frac{2{\lambda}}{{\mu}} = 2\rho
\end{IEEEeqnarray}
and
\begin{IEEEeqnarray}{c}
\label{eq:limgammaprime}
\lim_{M \rightarrow \infty}
(M-1)\gamma_S^{\prime}(0)
=
8 \frac{{\lambda}^2}{\mu^2}
+
2 \frac{{\lambda}}{\mu}
=
8\rho^2 + 2\rho
\end{IEEEeqnarray}
where once again $\rho = \frac{{\lambda}}{{\mu}}$.
Remembering that $\Gamma_S(\beta) = M(M-1)\gamma_S(\beta)$ and utilizing \equat{diffeq} we have
\begin{IEEEeqnarray}{c}
\label{eq:gammaprimemax}
\gamma_S(0)
\le
\gamma_S({\beta^*})
\le
\gamma_S^{\prime}(0) {\beta^*} + \gamma_S(0)
\end{IEEEeqnarray}
Thus, the value of $\gamma_S$ at the intersection of the monotonically decreasing $\frac{1-\beta}{(M-1)\beta}$
with the right hand side of \equat{gammaprimemax} must be at least as large as
$\gamma_S({\beta^*})$. To solve for this intersection we set
\begin{IEEEeqnarray}{rCl}
\frac{1-{\tilde{\beta}}}{(M-1){\tilde{\beta}}} & = & \gamma_S^{\prime}(0) {\tilde{\beta}} + \gamma_S(0) \nonumber \\
& = &
\frac{1}{M-1} {\tilde{\beta}} \left ( {8\rho^2} + {2\rho} \right ) + {2\rho}\frac{1}{M} \IEEEeqnarraynumspace
\end{IEEEeqnarray}
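This is a quadratic in ${\tilde{\beta}}$: multiplying through by $(M-1){\tilde{\beta}}$ and
letting $M \rightarrow \infty$ gives
\begin{IEEEeqnarray*}{c}
\left ( {8\rho^2} + {2\rho} \right ) {\tilde{\beta}}^2
+
\left ( 1 + {2\rho} \right ) {\tilde{\beta}}
-
1
=
0
\end{IEEEeqnarray*}
whose discriminant $(1+2\rho)^2 + 4({8\rho^2}+{2\rho}) = (1+6\rho)^2$ is a perfect square,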
so that in the limit of large $M$ we have
\begin{IEEEeqnarray*}{c}
{\tilde{\beta}}
=
\frac{\sqrt{1 + {12 \rho} +36 {\rho^2}} - (1 + {2\rho})}{{16 \rho^2} + {4\rho}}
=
\frac{1}{4 \rho + 1}
\end{IEEEeqnarray*}
which results in
\begin{IEEEeqnarray}{c}
\label{eq:tgamma}
(M-1) \gamma_S({\beta^*})
\le
\frac{1}{4 \rho + 1}
\left (8 \rho^2 + 2 \rho \right ) + {2\rho}
=
4 \rho
\end{IEEEeqnarray}
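The final equality in \equat{tgamma} follows since
\begin{IEEEeqnarray*}{c}
\frac{8 \rho^2 + 2 \rho}{4 \rho + 1}
+
2 \rho
=
\frac{8 \rho^2 + 2 \rho + 2\rho(4 \rho + 1)}{4 \rho + 1}
=
\frac{16 \rho^2 + 4 \rho}{4 \rho + 1}
=
4 \rho
\end{IEEEeqnarray*}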
so that for large $M$ we have
\begin{IEEEeqnarray}{rCl}
I(\vec{{\bf S}};{\bf T}) & \le & \log A({\beta^*}) - {\beta^*} M (M-1)\gamma_S({\beta^*}) \nonumber\\
& + & M \log \left ( 1 + 4 \rho \right )
- h({\bf S}|{\bf T}) - \log M!
\label{eq:Ibetamax1}
\end{IEEEeqnarray}
To complete the mutual information bound, we could then derive upper bounds on
$\log A({\beta^*}) - {\beta^*} M (M-1)\gamma_S({\beta^*})$. However, in the limit of large
$M = {\lambda}\tau$, the density on ${\bf S}$ is effectively constrained to $({\bf 0},
{\boldsymbol{\tau}})$ \cite{isit13, RoseMian16_1} which constrains $h({\bf S}) \le M \log \tau$. Then, since
$h({\bf S}|{\bf T}) = M (1-\log \mu)$ for exponential first-passage, \equat{Ibetamax1} produces
mutual information per token
\begin{IEEEeqnarray}{c}
\label{eq:IupperoverM}
\frac{I(\vec{{\bf S}};{\bf T})}{M} \le \log \tau - (1-\log \mu)
+ \log \left (1 + {4 \rho} \right )
- \frac{\log M!}{M} \IEEEeqnarraynumspace
\end{IEEEeqnarray}
Application of Stirling's approximation for large $M$
\begin{IEEEeqnarray}{c}
\frac{\log M!}{M}
\approx
\log M - 1
\label{eq:stirling}
\end{IEEEeqnarray}
in combination with \equat{IupperoverM} produces our main theorem:
{\em \begin{theorem}{\bf An Upper Bound on the Asymptotic Capacity per Token, $C_q$:}
{\thmlabel{IupperoverM}}
For exponential passage with mean first-passage time $1/\mu$ and token emission intensity
${\lambda}$, an upper bound for the asymptotic capacity per token is given by
\begin{IEEEeqnarray}{c}
C_q
=
\max_{f_{\bf T}(\cdot)}
\lim_{M \rightarrow \infty}
\frac{1}{M} I(\vec{{\bf S}};{\bf T})
\le
\log \left (\frac{1}{\rho} + 4 \right )
\end{IEEEeqnarray}
where $\rho = \frac{{\lambda}}{{\mu}}$.
\end{theorem}}
\begin{Proof}{\Thmref{IupperoverM}}
Substitution of \equat{stirling} and $\tau = M/{\lambda}$ into \equat{IupperoverM} completes the proof.
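Explicitly,
\begin{IEEEeqnarray*}{c}
\log \tau - (1 - \log \mu) + \log \left ( 1 + 4\rho \right ) - \left ( \log M - 1 \right )
=
\log \frac{{\mu} \tau}{M} + \log \left ( 1 + 4\rho \right )
=
\log \left ( \frac{1}{\rho} + 4 \right )
\end{IEEEeqnarray*}
since ${\mu} \tau / M = 1/\rho$.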
\end{Proof}
\section{Discussion \& Conclusion}
The timing channel \cite{bits-Qs, sundaresan1, sundaresan2, moewin16} is a building block upon which
the information theory of the identical molecule/token timing channel is built. In this paper we
considered a version of the timing channel where a single emission is restricted to an interval
$[0,\tau]$ and we derived closed form expressions for the channel capacity under exponential
first-passage as well as the optimal input (emission) distribution. We also established that unlike
for the mean-constrained channel, exponential first-passage is {\em not} the worst case corruption.
Building block though the single emission channel is, the identical molecule timing channel differs
from previous models in that which emission corresponds to which arrival is ambiguous, precisely
because travel time from sender to receiver is random and the molecules are identical. This
ambiguity is captured by a quantity we define as the ``ordering entropy'' $H(\Omega|\vec{{\bf S}},{\bf T})$ and
understanding its properties is critical to understanding the capacity of not only the molecular
timing channel, but also channels where tokens/molecules may themselves carry information payloads
-- portions of messages to be strung together at the receiver \cite{RoseMian16_1}.
In the Part-I companion to this paper \cite{RoseMian16_1}, we carefully explored the information
theory formulation of the problem to establish that the usual information $I({\bf S};{\bf T})$ is indeed
the proper measure of information flow over this channel and its relationship to
$H(\Omega|\vec{{\bf S}},{\bf T})$. In this paper, Part-II, we carefully explored the properties of
$H(\Omega|\vec{{\bf S}},{\bf T})$, showing how it can be calculated without deriving full order distributions
and deriving closed form expressions for cases where the emission times ${\bf T}$ are i.i.d. random
variables. We then derived closed form expressions for the special cases of the input distribution
being that which achieves capacity for the mean-constrained and the deadline-constrained timing
channel with exponential first-passage and the asymptotic behavior of $\lim_{M \rightarrow \infty}
H(\Omega|\vec{{\bf S}},{\bf T})/M$. Our understanding of $H(\Omega|\vec{{\bf S}},{\bf T})$ then allowed derivation of lower
bounds on timing channel capacity for exponential first passage (Part-I, Theorem 14) and here in
Part-II, an upper bound for the molecular timing channel capacity.
Although the machinery necessary to consider a mean-constrained version of the identical token
timing channel was derived, capacity results were not pursued owing to our inability to derive an
appropriate sequential channel use model with asymptotic independence. However, if physically
parallel channels were used (so as to avoid corruption of one channel by arrivals from another), the
results of \cite{bits-Qs} combined with \Thmref{HOmaxmean} might be used to derive upper and lower
bounds analogous to those provided here and in Part-I \cite{RoseMian16_1}. This might prove
interesting since the mean-constraint seems analytically simpler than the deadline constraint with
respect to both the single-token entropy and capacity as well as the ordering entropy.
\section*{Acknowledgments}
Profound thanks are owed to A. Eckford, N. Farsad, S. Verd\'u and V. Poor for useful discussions and
guidance. We are also extremely grateful to the editorial staff and the raft of especially careful
and helpful anonymous reviewers. This work was supported in part by NSF Grant CDI-0835592.
\bibliographystyle{unsrt}
\section{Introduction}
Bars are a very common feature of disc galaxies. In a sample of $186$
spirals drawn from the Ohio State University Bright Spiral Galaxy
Survey, Eskridge {\it et al.} (\cite{esk00}) find that $56\%$ of the
galaxies in the near infrared are strongly barred, while an additional
$6\%$ are weakly barred. A large fraction of barred galaxies show two
clearly defined spiral arms (e.g. Elmegreen \& Elmegreen \cite{elm82}),
often departing from the end of the bar at nearly right angles. This is
the case for instance in NGC~1300, NGC~1365 and NGC~7552. Deep exposures show
that these arms wind around the bar structure and extend to large distances
from the centre (see for instance Sandage \& Bedke \cite{san94}). Almost all
researchers agree that spiral arms and rings are driven by the gravitational
field of the galaxy (see Toomre \cite{too77} and Athanassoula \cite{ath84},
for reviews). In particular, spirals are believed to be density waves in
a disc galaxy (Lindblad \cite{lind63}). Toomre (\cite{too69}) found that
the spiral waves propagate towards the principal Lindblad resonances of the
galaxy, where they damp down, and thus concluded that long-lived spirals need
some replenishment. There are essentially three different possibilities for a
spiral wave to be replenished. First, it can be driven by a companion or
satellite galaxy. A direct, relatively slow, and close passage of another
galaxy can form trailing shapes (e.g. Toomre \cite{too69}; Toomre \& Toomre
\cite{too72}; Goldreich \& Tremaine \cite{gol78}, \cite{gol79}; Toomre
\cite{too81} and references therein). Second, spirals can be excited by the
presence of a bar. Several studies have shown that a rotating bar or oval can
drive spirals (e.g. Lindblad \cite{lind60}; Toomre \cite{too69}; Sanders
\& Huntley \cite{san76}; Schwarz \cite{sch79}, \cite{sch81}; Huntley
\cite{hun80}). The third alternative, proposed by Toomre (\cite{too81}), is
the swing amplification feedback cycle. This starts with a leading wave
propagating from the centre towards corotation. In doing so, it unwinds and
then winds in the trailing sense, while being very strongly amplified.
This trailing wave will propagate towards the centre, while a further trailing
wave is emitted at corotation and propagates outwards, where it is dissipated
at the Outer Lindblad Resonance. The inwards propagating trailing wave, when
reaching the centre will reflect into a leading spiral, which will propagate
outwards towards corotation, thus closing the feedback cycle.
Danby (\cite{dan65}) argued that orbits in the gravitational potential
of a bar play an important role in the formation of arms. He noted that orbits
departing from the vicinity of the equilibrium points located at the ends of
the bar describe loci with the shape of spiral arms and can be responsible for
the transport of stars from within to outside corotation, and vice versa.
Unfortunately, he did not set his work in a rigorous theoretical context, so
that it remained purely phenomenological. He also investigated whether orbits
can be responsible for ring-like structures, but in this case, he did not
consider orbits departing from the ends of the bar as he previously did when
accounting for the spiral arms.
Strongly barred galaxies can also show prominent and spectacular rings or
partial rings. The origin of such morphologies has been studied by Schwarz
(\cite{sch81}, \cite{sch84}, \cite{sch85}), who followed the response of a
gaseous disc galaxy to a bar perturbation. He proposed that ring-like patterns
are associated with the principal orbital resonances, namely the inner Lindblad resonance (ILR), corotation (CR), and the outer Lindblad resonance (OLR).
There are different types of outer rings. Buta (\cite{but95}) classified them
according to the relative orientation of the ring and bar major axes. If these
two axes are perpendicular, the outer ring is classified as $R_1$. If the two
axes are parallel, the outer ring is classified as $R_2$. Finally, if both
types of rings are present in the galaxy, the outer ring is classified as $R_1R_2$.
In Romero-G\'omez {\it et al.} (\cite{rom06,rom07}), we propose that rings
and spiral arms are the result of the orbital motion driven by the invariant
manifolds associated to periodic orbits around unstable equilibrium points.
In Romero-G\'omez {\it et al.} (\cite{rom06}), we fix a barred galaxy
potential and we study the dynamics around the unstable equilibrium points.
We give a detailed definition of the invariant manifold associated to a
periodic orbit. For the model considered, the invariant manifolds delineate
well the loci of an $rR_1$ ring structure. In Romero-G\'omez {\it et al.}
(\cite{rom07}), we construct families of models based on simple, yet realistic,
barred galaxy potentials. In each family, we vary one of the free parameters of
the potential and keep the remaining fixed. For each model, we numerically
compute the orbital structure associated to the invariant manifolds.
In this way, we are able to study the influence of each model parameter on the
global morphologies delineated by the invariant manifolds.
In Sect. \ref{sec:mod}, we first present the equations of motion and the
galactic models used in the computations. In Sect. \ref{sec:inv}, we give a
brief description of the dynamics around the unstable equilibrium points.
In Sect. \ref{sec:res}, we show the different morphologies that result from
the computations. In Sect. \ref{sec:dis}, we compare our results with some
observational features and conclude.
\section{Equations of motion and description of the model}
\label{sec:mod}
We model the potential of a barred galaxy as the superposition of three
components, two of them axisymmetric and the third bar-like. The
last component rotates anti-clockwise with angular velocity ${\bf
\Omega_p}=\Omega_p{\bf z}$, where $\Omega_p$ is a constant pattern
speed\, \footnote{Bold letters denote vector notation. The vector {\bf z} is a unit
vector.}. The equations of motion in
this potential in a frame rotating with angular speed ${\bf \Omega_p}$ in vector form are
\begin{equation}\label{eq-motvec}
{\bf
\ddot{r}=-\nabla \Phi} -2{\bf (\Omega_p \times \dot{r})- \Omega_p \times
(\Omega_p\times r)},
\end{equation}
where the terms $-2 {\bf \Omega_p\times \dot{r}}$ and $-{\bf \Omega_p \times
(\Omega_p\times r)}$ represent the Coriolis and the centrifugal
forces, respectively, and ${\bf r}$ is the position vector. We define an
effective potential $\Phi_{\hbox{\scriptsize eff}}=\Phi-\frac{1}{2}\Omega_p^2\,
(x^2+y^2),$ then Eq. (\ref{eq-motvec}) becomes ${\bf \ddot{r}=-\nabla \Phi_{\hbox{\scriptsize eff}}} -2{\bf (\Omega_p \times \dot{r})},$ and the Jacobi constant is
\begin{equation}\label{eq-energy}
E_J = \frac{1}{2} {\bf\mid \dot{r}\mid} ^2 + \Phi_{\hbox{\scriptsize eff}},
\end{equation}
which, being constant in time, can be considered as the energy in the
rotating frame.
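As a minimal numerical sketch of these equations (our illustration: the axisymmetric logarithmic potential, its parameters, and the code units are assumptions, not the models introduced below), one can integrate Eq. (\ref{eq-motvec}) in the rotating frame and verify that the Jacobi constant of Eq. (\ref{eq-energy}) is conserved along the orbit:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Omega_p = 1.0              # pattern speed (code units, assumed)
v0, rc = 1.0, 0.1          # toy logarithmic-potential parameters (assumed)

def phi_eff(x, y):
    # effective potential: axisymmetric part minus the centrifugal term
    return 0.5*v0**2*np.log(rc**2 + x**2 + y**2) \
           - 0.5*Omega_p**2*(x**2 + y**2)

def rhs(t, w, h=1e-6):
    x, y, vx, vy = w
    gx = (phi_eff(x+h, y) - phi_eff(x-h, y))/(2*h)  # numerical gradient
    gy = (phi_eff(x, y+h) - phi_eff(x, y-h))/(2*h)
    # Coriolis term -2 Omega_p x rdot, with Omega_p along the z axis
    return [vx, vy, -gx + 2*Omega_p*vy, -gy - 2*Omega_p*vx]

def jacobi(w):
    x, y, vx, vy = w
    return 0.5*(vx**2 + vy**2) + phi_eff(x, y)

w0 = np.array([0.8, 0.0, 0.0, 0.3])
sol = solve_ivp(rhs, (0.0, 50.0), w0, rtol=1e-10, atol=1e-12)
print("E_J drift:", jacobi(sol.y[:, -1]) - jacobi(w0))
\end{verbatim}
The bar models of the next paragraphs enter simply through a different choice of \texttt{phi\_eff}.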
The axisymmetric component consists of the superposition of a disc
and a spheroid. The disc is modelled as a Kuzmin-Toomre disc
(Kuzmin \cite{kuz56}; Toomre \cite{too63}) of surface density $\sigma(r)$ and
the spheroid is modelled using a density distribution of the form $\rho(r)$:
\begin{equation}\label{eq:kuz-sph}
\sigma(r) = \frac{V_d^2}{2\pi r_d}\left(1+\frac{r^2}{r_d^2}\right)^{-3/2}, \qquad
\rho(r)=\rho_b\left(1+\frac{r^2}{r_b^2}\right)^{-3/2}.
\end{equation}
The parameters $V_d$ and $r_d$ set the scales of the velocities and radii of the
disc, respectively, and $\rho_b$ and $r_b$ determine the concentration and
scale-length of the spheroid. In our models, we use three bar potentials to compare
the results obtained. The first bar potential is described by a Ferrers
(\cite{fer77}) ellipsoid whose density distribution is:
\begin{equation}
\rho(m)=\left\{\begin{array}{lr}
\rho_0(1-m^2)^n & m\le 1\\
0 & m\ge 1,
\end{array}\right.
\label{eq:Ferden}
\end{equation}
where $m^2=x^2/a^2+y^2/b^2$. The values of $a$ and $b$ determine the shape of
the bar, $a$ being the length of the semi-major axis, which is placed along
the $x$ coordinate axis, and $b$ being the length of the semi-minor axis. The
parameter $n$ measures the degree of concentration of the bar and $\rho_0$
represents the bar central density. We also use two ad-hoc potentials, namely
a Dehnen's bar type, $\Phi_1$, (Dehnen \cite{deh00}) and a Barbanis-Woltjer
(BW) bar type, $\Phi_2$, (Barbanis \& Woltjer \cite{bar67}):
\begin{equation}
\Phi_1(r,\theta)=-\frac{1}{2}\epsilon v_0^2\cos(2\theta)\left\{{\begin{array}{ll}
\displaystyle 2-\left(\frac{r}{\alpha}\right)^n, & r\le \alpha\rule[-.5cm]{0cm}{1.cm}\\
\displaystyle \left(\frac{\alpha}{r}\right)^n, & r\ge \alpha.\rule[-.5cm]{0cm}{1.cm}
\end{array}}\right. \qquad \Phi_2(r,\theta)=\hat{\epsilon}\sqrt{r}(r_1-r)\cos(2\theta)
\label{eq:adhoc}
\end{equation}
The parameter $\alpha$ is a characteristic length scale of the Dehnen's type bar
potential, and $v_0$ is a characteristic circular velocity. The parameter
$\epsilon$ is related to the bar strength. The parameter $r_1$ is a characteristic
scale length of the BW bar potential and the parameter $\hat{\epsilon}$ is related
to the bar strength.
\section{Dynamics around the $L_1$ and $L_2$ equilibrium points}
\label{sec:inv}
For our calculations we place ourselves in a frame of
reference corotating with the bar, and the bar semi-major axis is located
along the $x$ axis. In this rotating frame we have five equilibrium
Lagrangian points (see left panel of Fig.~\ref{myfig1}). Three of these points
are stable, namely $L_3$, which is placed at the centre of the system, and
$L_4$ and $L_5$, which are located symmetrically on the $y$ axis. $L_1$ and $L_2$
are unstable and are located symmetrically on the $x$ axis. The surface
$\Phi_{\hbox{\scriptsize eff}}=E_J$ ($E_J$ defined as in Eq. (\ref{eq-energy}))
is called the zero velocity surface, and its intersection with the $z=0$ plane
gives the zero velocity curve. All regions in which
$\Phi_{\hbox{\scriptsize eff}}>E_J$ are forbidden to a star with this
energy, and are thus called forbidden regions. The zero velocity curve also
defines two different regions, namely, an exterior region and an interior one
that contains the bar. The interior and exterior regions are connected via the
equilibrium points (see middle panel of Fig.~\ref{myfig1}). Around the equilibrium
points there exist families of periodic orbits, e.g. around the central
equilibrium point the well-known $x_1$ family of periodic orbits that is
responsible for the bar structure.
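Numerically, $L_1$ and $L_2$ can be located as zeros of $\partial\Phi_{\hbox{\scriptsize eff}}/\partial x$ along the bar major axis. The sketch below is our illustration under simplifying assumptions: a logarithmic axisymmetric part stands in for the disc and spheroid of Sect. \ref{sec:mod}, while the bar term is the outer branch of the Dehnen-type bar of Eq. (\ref{eq:adhoc}) with $n=2$ and assumed parameters:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

Omega_p, v0, rc = 1.0, 1.0, 0.1   # code units (assumed)
eps, alpha = 0.1, 0.5             # Dehnen-bar strength and scale (assumed)

def phi_eff_x(x):
    # effective potential on the positive x axis, where cos(2 theta) = 1;
    # the bar term is the r >= alpha branch of the Dehnen bar with n = 2
    phi = 0.5*v0**2*np.log(rc**2 + x**2) - 0.5*eps*v0**2*(alpha/x)**2
    return phi - 0.5*Omega_p**2*x**2

def dphi_dx(x, h=1e-7):
    return (phi_eff_x(x+h) - phi_eff_x(x-h))/(2*h)

xL1 = brentq(dphi_dx, 0.5, 2.0)   # bracket chosen by inspection
print("L1 at x = %.4f; L2 at x = %.4f by symmetry" % (xL1, -xL1))
\end{verbatim}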
The dynamics around the unstable equilibrium points is described in detail in
Romero-G\'omez {\it et al.} (\cite{rom06}); here we give a brief summary.
Around each unstable equilibrium point there also exists a family of periodic
orbits, known as the family of Lyapunov orbits (Lyapunov \cite{lya49}). For a
given energy level, two stable and two unstable sets of asymptotic orbits emanate
from the periodic orbit, and they are known as the stable and unstable invariant
manifolds, respectively. We denote by $W^s_{\gamma_i}$ the stable invariant
manifold associated to the periodic orbit $\gamma$ around the equilibrium
point $L_i,\, i=1,2$. This stable invariant manifold is the set of
orbits that tend to the periodic orbit asymptotically. In the same way we
denote by $W^u_{\gamma_i}$ the unstable invariant manifold associated to the
periodic orbit $\gamma$ around the equilibrium point $L_i,\, i=1,2$. This unstable
invariant manifold is the set of orbits that depart asymptotically from the
periodic orbit (i.e. orbits that tend to the Lyapunov orbits when the time
tends to minus infinity), see right panel of Fig.~\ref{myfig1}. Since the
invariant manifolds extend well beyond the neighbourhood of the equilibrium
points, they can be responsible for global structures.
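In practice, a branch of $W^u_{\gamma_1}$ can be approximated, in the limit of a vanishingly small Lyapunov orbit, by displacing the initial condition from $L_1$ along the unstable eigendirection of the linearized flow and integrating forward in time. The following sketch (our illustration; same toy potential and assumed parameters as in the previous sketch) shows the idea:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Omega_p, v0, rc = 1.0, 1.0, 0.1
eps, alpha = 0.1, 0.5             # assumed Dehnen-bar parameters

def phi_eff(x, y):
    r2 = x**2 + y**2
    phi = 0.5*v0**2*np.log(rc**2 + r2)
    phi -= 0.5*eps*v0**2*alpha**2*(x**2 - y**2)/r2**2  # cos(2t)(a/r)^2
    return phi - 0.5*Omega_p**2*r2

def rhs(t, w, h=1e-6):
    x, y, vx, vy = w
    gx = (phi_eff(x+h, y) - phi_eff(x-h, y))/(2*h)
    gy = (phi_eff(x, y+h) - phi_eff(x, y-h))/(2*h)
    return np.array([vx, vy, -gx + 2*Omega_p*vy, -gy - 2*Omega_p*vx])

xL1 = brentq(lambda x: (phi_eff(x+1e-7, 0.0)
                        - phi_eff(x-1e-7, 0.0))/2e-7, 0.5, 2.0)
wL1 = np.array([xL1, 0.0, 0.0, 0.0])

J = np.zeros((4, 4))              # Jacobian of the flow at L1
for j in range(4):
    e = np.zeros(4); e[j] = 1e-6
    J[:, j] = (rhs(0, wL1 + e) - rhs(0, wL1 - e))/2e-6

evals, evecs = np.linalg.eig(J)
vu = np.real(evecs[:, np.argmax(evals.real)])  # unstable direction
vu /= np.linalg.norm(vu)

for s in (+1, -1):                # the two branches of W^u
    sol = solve_ivp(rhs, (0, 40), wL1 + s*1e-4*vu,
                    rtol=1e-10, atol=1e-12)
    print("branch %+d reaches (x, y) = (%.3f, %.3f)"
          % (s, sol.y[0, -1], sol.y[1, -1]))
\end{verbatim}
Refining this seed set over the actual Lyapunov orbit, rather than over the equilibrium point itself, yields the tube-like manifolds plotted in the right panel of Fig.~\ref{myfig1}.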
In Romero-G\'omez {\it et al.} (\cite{rom07}), we give a detailed description
of the role invariant manifolds play in global structures, in particular, in
the transfer of matter. Simply speaking, the transfer of matter is characterised
by the presence of homoclinic, heteroclinic, and transit orbits.
\begin{figure}[h]
\centering
\includegraphics[width=5cm,angle=-90.]{romerogomez_png_fig1.ps}\hspace{0.25cm}
\includegraphics[width=4.65cm,angle=-90.]{romerogomez_png_fig2.ps}
\caption{Dynamics around the $L_1$ and $L_2$ equilibrium points. {\it Left
panel:} location of the equilibrium points and outline of the bar.
{\it Middle panel:} Zero velocity curves and Lyapunov orbits around $L_1$
and $L_2$. {\it Right panel:} Stable, $W^s_{\gamma_1}$ in green, and unstable,
$W^u_{\gamma_1}$ in red, invariant manifolds of a periodic orbit around $L_1$.}
\label{myfig1}
\end{figure}
Homoclinic orbits correspond to asymptotic trajectories, $\psi$, such that
$\psi\in W^u_{\gamma_i}\cap W^s_{\gamma_i},\,i=1,2$. Thus, a homoclinic orbit
departs asymptotically from the unstable Lyapunov periodic orbit $\gamma$ around
$L_i$ and returns asymptotically to it (see Fig.~\ref{myfig2}a).
Heteroclinic orbits are asymptotic trajectories, $\psi^\prime$, such that
$\psi^\prime\in W^u_{\gamma_i}\cap W^s_{\gamma_j},\, i\ne j,\,i,j=1,2$. Thus,
a heteroclinic orbit departs asymptotically from the periodic orbit
$\gamma$ around $L_i$ and asymptotically approaches the corresponding Lyapunov
periodic orbit with the same energy around the Lagrangian point at the opposite
end of the bar $L_j$, $i\ne j$ (see Fig.~\ref{myfig2}b). There also exist
trajectories that spiral out from the region of the unstable periodic orbit, and
we refer to them as transit orbits (see Fig.~\ref{myfig2}c).
\begin{figure}[h]
\centering
\includegraphics[width=5.cm,angle=-90.]{romerogomez_png_fig3.ps}
\caption{Homoclinic {\bf (a)}, heteroclinic {\bf (b)} and escaping
{\bf (c)} orbits (black thick lines) in the configuration space. In
red lines, we plot the unstable invariant manifolds associated to the
periodic orbits, while in green we plot the corresponding stable invariant
manifolds. In dashed lines, we give the outline of the bar and, in
{\bf (b)} and {\bf (c)}, we plot the zero velocity curves in dot-dashed lines.}
\label{myfig2}
\end{figure}
\section{Results}
\label{sec:res}
Here we describe the main results obtained when we vary the parameters of the models
introduced in Sect. \ref{sec:mod}. In order to best see the influence of each
parameter separately, we make families of models in which only one of the
free parameters is varied, while the others are kept fixed. Our results in
Romero-G\'omez {\it et al.} (\cite{rom07}) show that only the bar pattern
speed and the bar strength have an influence on the shape of the invariant
manifolds, and thus, on the morphology of the galaxy. We have also studied the
influence of the shape of the rotation curve. We make models with either rising,
flat, or falling rotation curves in the outer parts.
Our results also show that the morphologies obtained do not depend on the bar
potential we use, but on the presence of homoclinic or heteroclinic orbits.
Thus, if the model has neither heteroclinic nor homoclinic orbits and
only transit orbits are present, the barred galaxy will present two spiral arms
emanating from the ends of the bar. The outer branches of the unstable invariant
manifolds will spiral out from the ends of the bar and they will not return to
its vicinity. If the transit orbits associated to the $W^u_{\gamma_1}$ intersect
in configuration space with the transit orbits associated to $W^u_{\gamma_2}$, then
they form the characteristic shape of $R_2$ rings. That is, the trajectories
outline an outer ring whose principal axis is parallel to the bar major axis. If
heteroclinic orbits exist, then the ring of the galaxy is classified as $rR_1$.
The inner branches of the invariant manifolds associated to $\gamma_1$ and
$\gamma_2$ outline a nearly circular inner ring that encircles the bar. The outer
branches of the same invariant manifolds form an outer ring whose principal
axis is perpendicular to the bar major axis. The last possibility is if only
homoclinic orbits exist. In this case, the inner branches of the invariant
manifolds form an inner ring, while the outer branches outline both types of
outer rings, thus the barred galaxy presents an $R_1R_2$ ring morphology.
\section{Discussion}
\label{sec:dis}
The family of Lyapunov orbits is unstable and becomes stable only at high
energy levels (Skokos {\it et al.} \cite{sko02}). We compute the
invariant manifolds of Lyapunov orbits in the range of energies in which
they are unstable. In the right panel of Fig. \ref{myfig3}, we plot
the invariant manifolds for two different energy levels of a given model.
We find that the locus of the invariant manifolds is independent of the
energy. As the energy increases, however, the size of the Lyapunov orbits
also increases, and so does the size of the invariant manifolds. Nevertheless,
as we consider more energy levels, we find that the density in the central
part is higher (see left panel of Fig. \ref{myfig3}). Therefore, the
thickness of the ring observed is smaller than the thickness of the
invariant manifold of higher energies. We also compute the radial and
tangential velocities on the galactic plane and in a non-rotating reference
frame along the ring and we observe that they are small perturbations of the
circular velocity. The maximum deviation from the typical circular velocity
of $200 \rm{km}\,\rm{s}^{-1}$ is $\pm 20 \rm{km}\,\rm{s}^{-1}$ (Athanassoula
{\it et al.} \cite{ath07}).
In the case of the spiral arms, we compute the density profile at two different
angles, namely one near the beginning of the arm and one at its end. We find
that the first cut has a narrow and high density profile, while the second cut
has a wide and low density profile. This is the typical behaviour of the grand
design spiral arms, namely they are denser and brighter near the bar ends and
then they become more diffuse (Athanassoula {\it et al.} \cite{ath07}).
\begin{figure}[h]
\centering
\hspace{1.cm}
\includegraphics[scale=0.3,angle=-90.]{romerogomez_png_fig4.ps}\hspace{0.55cm}
\includegraphics[scale=0.3,angle=-90.]{romerogomez_png_fig5.ps}
\caption{{\it Left panel:} Density profile on a cut across the ring.
{\it Right panel:} Two unstable invariant manifolds for different
values of $E_J$. Note how similar the regions they delineate are.}
\label{myfig3}
\end{figure}
To summarise, our results show that invariant manifolds describe well the loci
of the different types of rings and spiral arms. They are formed by a bundle of
trajectories linked to the unstable regions around the $L_1/L_2$ equilibrium
points. The study of the influence of one model parameter on the shape of
the invariant manifolds in the outer parts reveals that only the pattern
speed and the bar strength affect the galaxy morphology. The study also shows
that the different ring types and spirals are obtained when we vary the
model parameters.
We have compared our results with some observational data. Regarding the
photometry, the density profiles across radial cuts in rings and spiral
arms agree with the ones obtained from observations. The velocities along
the ring also show that these are only a small perturbation of the circular
velocity.
\begin{acknowledgements} I wish to thank my collaborators E. Athanassoula,
J.J. Masdemont, and C. Garc\'ia-G\'omez.
\end{acknowledgements}
\section{Introduction}
The propagation of light in optical media is a subject as old as optics itself. In recent years, the possibility of engineering novel metamaterials has opened the door to the so-called transformation optics~\cite{leonhardt2009transformation}, a field promising to enhance existing devices and create novel ones. At the basis of this revolution is the fact that, in the geometric optics limit -- and neglecting dispersion --, light rays propagate in media following the geodesics of an effective Lorentzian metric, the so-called optical metric~\cite{gordon1923lichtfortpflanzung}. This has also led to the investigation of light in optical media as an analogue gravity model, i.e., a model in which field perturbations propagate \textit{as if} in a curved spacetime background, particularly useful in the investigation of kinematic effects of quantum field theory in curved spacetime, like the Hawking radiation and cosmological particle production~\cite{barcelo2011analogue,philbin2008fiber,rubino2011experimental}. When also the effect of dispersion is considered, the metric description can be cast aside for a more powerful Hamiltonian formalism, giving rise to the so-called ray-optical structures~\cite{1975A&A....44..389B,perlick2000ray}.
This analogy between optical media and curved spacetimes can be pushed even further by showing that Maxwell equations in vacuum, curved spacetime are equivalent to flat-spacetime Maxwell equations in the presence of a bi-anisotropic moving medium whose dielectric permittivity and magnetic permeability are determined entirely by the space-time metric~\cite{plebanski1960electromagnetic}. Spacetime itself can then be described as an optical medium at the level of full electromagnetism. It is then natural to wonder what would happen if light were to propagate in an optical medium placed in a curved spacetime. Far from being a far-fetched situation, this is exactly the case for light propagating in media on Earth due to the non-vanishing, albeit weak, gravitational field of our planet. In this work, we are interested in exactly this situation. In particular, while at the geometric optics level the formalism of ray-optical structures can be used, we aim here at a description, analogous to the one in~\cite{plebanski1960electromagnetic}, at the level of full Maxwell equations. Indeed, such a description allows for the modelling of the propagation of intense pulses in situations of physical interest, like soliton propagation in optical fibers, taking into account the effect of a weak gravitational field.
We show that light propagation in a medium in curved spacetime is equivalent to propagation in an effective medium in flat spacetime.
We then use this formalism to investigate the propagation of intense light pulses in non-linear media, giving rise to optical solitons. Solitons, and more in general propagating pulses, in optical fibers are at the basis of several communication protocols. Given that fibers on Earth are \textit{de facto} in a curved spacetime due to our planet's gravitational field, it is relevant to analyze how gravity influences light-pulses propagation. Our result allows us to set up a framework for the analysis of the effect of acceleration and curvature on the propagation of pulses in optical fibers in curved spacetimes. We numerically investigate some of these effects for the simple case of 1D propagation in the weak-field limit.
\section{An effective ``spacetime medium''}
While light in media can propagate as in a curved spacetime, curved spacetime can also be seen as an effective medium with non-trivial permeability and permittivity~\cite{plebanski1960electromagnetic,de1971gravitational}. It is not difficult to generalize the derivations in~\cite{plebanski1960electromagnetic,de1971gravitational} to the case in which light propagates in an optical medium placed in curved spacetime. Also in this case it can be shown that Maxwell's equations are equivalent to Maxwell's equations in flat spacetime for an effective medium whose properties encode both the ones of the physical medium and of curved spacetime.
Indeed, consider a dielectric and permeable medium in curved spacetime characterized by a Lorentzian metric $g_{\mu\nu}$ with mostly plus signature. We follow here the notation of~\cite{perlick2000ray}, also reported in the Supplemental Material~\cite{SM}.
Maxwell's equations
in the absence of free charges and currents
are given by
\begin{align}
\nabla_k F^{*\,ik}=0\\
\nabla_k G^{ik}=0
\end{align}
where $F^*$ is the Hodge dual of the electromagnetic tensor $F$, and $G$ and $F$ are related by the constitutive equations of the material. Choosing an observer field $u^i$, the electric and magnetic field strengths can be defined with respect to it as
\begin{align}
& B_a=-\frac{1}{2}\eta_{abcd}u^b F^{cd};\,\,E_i=F_{ij}u^j\\
& H_a=-\frac{1}{2}\eta_{abcd}u^b G^{cd};\,\,D_i=G_{ij}u^j\\
&F_{ab}=-\eta^{cd}_{ab}u_d B_c+2u_{[a}E_{b]}\\
&G_{ab}=-\eta^{cd}_{ab}u_d H_c+2u_{[a}D_{b]},
\end{align}
in the reference frame of the observer in which the medium is assumed to be at rest. Here $\eta_{ijkl}=\sqrt{-g}\delta_{ijkl}$ is the Levi-Civita tensor and $T_{[abc\dots]}$ denotes the antisymmetrization of the tensor with respect to the indices in square brackets.
As discussed in~\cite{SM}, choosing $u^{i}=\delta^i_0/\sqrt{-g_{00}}$, the projection of Maxwell's equations in 3-dimensional form leads to
\begin{align}
&\delta^{\alpha\beta\gamma}\partial_\beta\mathcal{H}_\gamma-\partial_0\mathcal{D}^\alpha=0;\,\,\partial_l\mathcal{D}^l=0\\
&\delta^{\alpha\beta\gamma}\partial_\beta\mathcal{E}_\gamma+\partial_0\mathcal{B}^\alpha=0;\,\,\partial_l\mathcal{B}^l=0,
\end{align}
where $\mathcal{E}_\alpha=\sqrt{-g_{00}}E_\alpha$, $\mathcal{H}_\alpha=\sqrt{-g_{00}}H_\alpha$, and
\begin{align}\label{Maxelleff}
& \mathcal{D}^\alpha=-\sqrt{-g}\frac{g^{\alpha\beta}}{g_{00}}\mathfrak{D}_\beta-\delta^{\alpha\beta\gamma}\frac{g_{0\gamma}}{g_{00}}\mathcal{H}_\beta\\
& \mathcal{B}^\alpha=-\sqrt{-g}\frac{g^{\alpha\beta}}{g_{00}}\mathfrak{B}_\beta+\delta^{\alpha\beta\gamma}\frac{g_{0\gamma}}{g_{00}}\mathcal{E}_\beta,
\end{align}
with $\mathfrak{B}_\alpha=\sqrt{-g_{00}}B_\alpha$, and $\mathfrak{D}_\alpha=\sqrt{-g_{00}}D_\alpha$.
These expressions are equivalent to Maxwell's equations in flat spacetime in the presence of an optical medium. In particular, for a non-dispersive medium characterized by constitutive relations $D_a=\varepsilon^b_a E_b$, and $B_a=\mu^b_a H_b$, the effective medium will be characterized by a dielectric and magnetic permeability given by the product of the material ones and the ones characterizing the curved spacetime~\cite{plebanski1960electromagnetic,de1971gravitational}. {Indeed, expressing $\mathcal{D}^\alpha=\tilde{\varepsilon}^{\alpha\beta}\mathcal{E}_\beta+\tilde{\gamma}^\beta_{\alpha}\mathcal{H}_\beta$ and correspondingly $\mathcal{B}^\alpha=\tilde{\mu}^{\alpha\beta}\mathcal{H}_\beta-\tilde{\gamma}^\beta_\alpha \mathcal{E}_\beta$, where $\tilde{\gamma}^\beta_\alpha$ encode magnetoelectric effects, we see that
}
\begin{align}\label{permten}
& \tilde{\mu}^{\alpha\beta}=-\sqrt{-g}\frac{g^{\alpha\gamma}}{g_{00}}\mu_\gamma^{\;\beta}\\
& \tilde{\varepsilon}^{\alpha\beta}=-\sqrt{-g}\frac{g^{\alpha\gamma}}{g_{00}}\varepsilon_\gamma^{\;\beta},
\end{align}
{and $\tilde{\gamma}^{\alpha\beta}=-\delta^{\alpha\beta\gamma}g_{0\gamma}/g_{00}$\footnote{Note that, in the case the material itself possesses magnetoelectric terms in the constitutive equations, i.e., $D_a=\varepsilon^b_a E_b+\gamma^b_{a}H_b$, and $B_a=\mu^b_a H_b-\gamma^b_a E_b$ then $\tilde{\gamma}^{\alpha\beta}=-\delta^{\alpha\beta\gamma}\frac{g_{0\gamma}}{g_{00}}-\sqrt{-g}\frac{g^{\alpha\delta}}{g_{00}}\gamma^{\beta}_\delta$}.}
As a direct consequence, whenever the refractive index of the \emph{effective medium} can be defined, it will also be the product of the material refractive index times the vacuum spacetime effective one. The same result can be easily obtained at the level of geometric optics.
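For a diagonal metric the dressing in the expressions above amounts to multiplying each spatial component of the material tensors by $-\sqrt{-g}\,g^{\alpha\alpha}/g_{00}$. The following sketch (our illustration; the silica-like value of $\varepsilon$ and the Earth-like radii are assumptions) makes this explicit:
\begin{verbatim}
import numpy as np

def effective_tensors(g_diag, eps, mu):
    # g_diag = (g00, g11, g22, g33); eps, mu are the relative, isotropic
    # material permittivity and permeability; returns the dressed tensors
    g = np.asarray(g_diag, dtype=float)
    sqrt_mg = np.sqrt(-np.prod(g))           # sqrt(-det g), g diagonal
    factor = -sqrt_mg*(1.0/g[1:])/g[0]       # -sqrt(-g) g^{aa}/g00
    return np.diag(factor*eps), np.diag(factor*mu)

rS, r = 8.87e-3, 6.371e6                     # Earth-like values in metres
A, B = 1 + rS/(4*r), 1 - rS/(4*r)
g = (-(B/A)**2, A**4, A**4, A**4)            # isotropic Schwarzschild form
eps_eff, mu_eff = effective_tensors(g, eps=2.1, mu=1.0)
print(eps_eff[0, 0]/2.1, A**3/B)             # both equal the factor A^3/B
\end{verbatim}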
Finally, we make two observations relevant for the study of the propagation of light pulses. Firstly, a non-magnetic material in curved spacetime corresponds to a magnetic effective medium in Minkowski due to the ``magnetic permeability'' of the background spacetime. Secondly, when considering a non-linear material, we see that the non-linearity will also be affected by the curvature of spacetime as well as the linear polarizability.
\section{Pulse propagation: Non-linear Schrödinger equation}
We next consider the propagation of light pulses in a Kerr non-linear, non-magnetic material in curved spacetime. In particular, we focus on the case in which the material is in a stationary orbit of Schwarzschild spacetime and use isotropic coordinates. This situation well-captures the cases of interest for optical communication and laboratory experiments like, e.g., optical fibers hanging still above Earth's surface.
In flat spacetime, the non-linear Schrödinger equation (NLSE) is often used when considering the propagation of light pulses whose amplitude is well-described by a scalar envelope slowly varying with respect to the light period and wavelength~\cite{agrawal2000nonlinear,boyd2020nonlinear}. In the case of a medium stationary in Schwarzschild spacetime, by employing the correspondence with an \emph{effective medium} in flat spacetime as described in the previous section, the usual derivation of the NLSE can be carried out. However, the effective medium will be inhomogeneous due to the curved spacetime contribution to the polarizability and permeability of the material medium. This gives rise to extra terms in the NLSE which are of purely gravitational origin. Furthermore, another source of inhomogeneity in the medium can be included when considering the effect of tidal forces on the material that, through photoelasticity, render the refractive index position-dependent.
Neglecting for the moment photoelasticity, i.e., considering a rigid dielectric, we can write Maxwell's equation in flat spacetime for the effective medium in the familiar notation, {using the fields and field strengths that we indicate with plain capital letters from now on,}
\begin{align}\label{Meq}
& \nabla\cdot B=0,\,\,\nabla\cdot D=0\\
& \nabla\times E=-\partial_t B,\,\, \nabla\times H=\partial_t D,
\end{align}
where $D=\tilde{\varepsilon} E$ and $H=B/\tilde{\mu}$. Here $\tilde{\mu}=\tilde{\mu}(r)$ and $\tilde{\varepsilon}=\tilde{\varepsilon}(E,r,\omega)$ in frequency space, allowing us to account for the effect of material dispersion, are the permeability and permittivity of the effective medium. Expressing the
Schwarzschild spacetime metric in isotropic coordinates as $ds^2=-\left(B(r)/A(r)\right)^2 dt^2+A^4(r)\delta_{\alpha\beta}dx^{\alpha}dx^{\beta}$, with $A(r)=1+r_S/4r$ and $B(r)=1-r_S/4r$, where $r_S$ is the Schwarzschild radius, we have
\begin{align}\label{eq:emu}
&\tilde{\varepsilon}(E,r,\omega)=\varepsilon_0\varepsilon_{\rm sp}\varepsilon=\varepsilon_0 \frac{A(r)^3}{B(r)}\left(1+\chi^{(1)}(\omega)+3\chi^{(3)}\frac{|E|^2}{\Omega}\right),\\
&\tilde{\mu}=\tilde{\mu}(r)= \mu_0\mu_{\rm sp}=\mu_0 A(r)^3B(r)^{-1},
\end{align}
with $\Omega=A(r)^{-4}$ the conformal factor relating the spacial part of the metric with the flat, Euclidean one\footnote{This conformal factor arises due to the fact that $E^aE_a$ in {curved} spacetime corresponds to $|E|^2/\Omega$ with $|E|^2=E^aE^b\delta_{ab}$ the flat spacetime norm squared of the electric strength field.}.
The explicit radial dependence in the linear part of these effective quantities comes from the curved spacetime optical properties encoded in {the diagonal terms $\sqrt{-g}g^{\alpha\alpha}/g_{00}$ (cf. eq.\eqref{permten}) that we define as} $\varepsilon_{\rm sp}=\mu_{\rm sp}=A(r)^3B(r)^{-1}$. The field dependency of $\tilde{\varepsilon}$ takes into account the non-linearity of the physical medium. Note also that dispersion implies that the dielectric permeability is a function of the physical frequency $\omega$ defined with respect to our stationary observer $u^\mu$.
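In the weak field of Earth this spacetime factor is minute; a short check (our illustration, taking $r_S\simeq 8.87\times 10^{-3}$~m for Earth) gives
\begin{verbatim}
def n_sp(r, rS=8.87e-3):
    # "spacetime refractive index" n_sp = A^3/B in isotropic coordinates
    A, B = 1 + rS/(4*r), 1 - rS/(4*r)
    return A**3/B

r_earth = 6.371e6                 # metres
print(n_sp(r_earth) - 1)          # ~ 1.4e-9, i.e. n_sp - 1 ~ r_S/r
print(-8.87e-3/r_earth**2)        # d(n_sp)/dr ~ -2e-16 per metre
\end{verbatim}
so that, over laboratory scales, $n_{\rm sp}$ is constant to very high accuracy.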
From eq.~\eqref{Meq}, and writing $D=\tilde{\varepsilon}_\ell E+P_{\rm NL}$, where {$\tilde{\varepsilon}_\ell=\varepsilon_0\varepsilon_{\rm sp}(1+\chi^{(1)}(\omega))$} is the linear part of the dielectric permeability {in eq.~\eqref{eq:emu}} and $P_{\rm NL}$ is the non-linear polarization, we can then obtain the wave equation, in frequency space,
\begin{equation}\label{start}
\nabla^2 E-\nabla(\nabla\cdot E)+\tilde{\mu}\tilde{\varepsilon}_\ell \nu^2E=-\tilde{\mu} \nu^2 P_{\rm NL}-(\nabla\log({\mu_{\rm sp}}))\times(\nabla\times E)\,.
\end{equation}
Here we indicate with $\nu$ the conjugate variable to the coordinate time $t$ in the flat spacetime of the effective medium.
Note that the homogeneous Maxwell equations imply that
\begin{align}\label{homoeq}
\nabla \cdot E = - (\nabla \log\tilde\varepsilon_\ell)\cdot E - \frac{1}{\tilde\varepsilon_\ell}\nabla \cdot P_{\rm NL},
\end{align}
and thus
\begin{align}\label{homo2}
- \nabla(\nabla\cdot E) &= (E\cdot\nabla)\nabla \log\tilde\varepsilon_\ell + \left((\nabla\log\tilde\varepsilon_\ell)\cdot \nabla\right)E \\ \nonumber
&+ (\nabla\log\tilde\varepsilon_\ell)\times(\nabla\times E)+\nabla\left(\frac{1}{\tilde\varepsilon_\ell} (\nabla\cdot P_{\rm NL})\right).
\end{align}
Eq.~\eqref{homoeq} makes evident that $\nabla \cdot E$ is of the same order as the non-linearities and inhomogeneities in the electric permittivity, which is also why it is usually safely neglected in derivations of the NLSE.
The wave equation in eq.~\eqref{start} is equivalent to Maxwell equations and, as such, presents the same level of complexity if analytical or numerical solutions are attempted.
The NLSE is a scalar propagation equation for the electric field's slowly varying amplitude that allows
one
to numerically simulate the pulse propagation. We thus want to write the electric field as the product of a slowly varying amplitude times a phase propagating along the propagation direction, that we will identify with the $z$ direction in the following.
In this context, notice that the dispersion relation of the physical medium, in its rest frame, {is given simply by} $n(\omega)=c{\kappa}/{\omega}$, with $\kappa$ the modulus of the spatial projection of the wave 4-vector. For the effective medium, this relation reads $\tilde{n}=c{\tilde{\kappa}}/{\nu}$, where $\tilde{n}=\sqrt{\varepsilon_{\rm sp}\mu_{\rm sp}}n$ is the product of the material refractive index and the ``spacetime refractive index'' $n_{\rm sp}=\sqrt{\varepsilon_{\rm sp}\mu_{\rm sp}}$. Moreover, since $\nu$ is the frequency defined with respect to Minkowski coordinate time, i.e., the conjugate Fourier variable to $t$, it is related to the physical frequency, {i.e., the one measured by a physical observer in curved spacetime,} by the gravitational redshift $\nu=\omega\sqrt{-g_{00}}$. From the equivalence of the dispersion relations, we see that $\tilde{\kappa}(r)=\kappa n_{\rm sp}(r)\sqrt{-g_{00}(r)}$.
We will thus write $E({\bf r},t)\propto \mathcal{E}({\bf r})e^{i(\tilde{\kappa}_0 z-\nu_0 t)}+cc.$, with $\tilde{\kappa}_0=\tilde{\kappa}(r,\nu_0)$ evaluated at a central frequency $\nu_0$.
In order to proceed with the derivation of the NLSE, and to further simplify our equations, we consider two separate situations of physical interest: (i) pulse propagation at approximately constant radius; (ii) pulse propagating radially.
\subsubsection{Horizontal motion at (almost) constant radius}
We assume the propagation direction of the light pulse to be the $z$ axis
taken to be perpendicular to the radial direction for horizontal motion, and consider linearly polarized light propagating in a medium stationary on Earth for concreteness. Then, for propagation distances much smaller than Earth's {radius ($r_\oplus$), i.e., $z\ll r_\oplus$, the horizontal motion can be considered as happening at constant radius.}
With these approximations, the spacetime permeability and permittivity are constants fixed by $r_\oplus$, $\mu_{\rm sp}=\varepsilon_{\rm sp}=A(r_\oplus)^3B(r_\oplus)^{-1}$, and the physical frequency does not change with $z$. Thus, we see that in eq.~\eqref{start} the last term on the right-hand side vanishes.
We follow the derivation in~\cite{philbin2008fiber} where the pulse propagation in a single-mode optical fiber was considered. Indeed, {for $\mu_{sp}\,\varepsilon_{\rm sp}$ constant,} eq.~\eqref{start} is formally equivalent to eq.~(S1) of~\cite{philbin2008fiber} {in frequency space}. We thus end up with an effective one dimensional problem for the slowly varying envelope, and the derivation of the NLSE is the textbook one~\cite{SM,boyd2020nonlinear}. In particular, recall that the slowly varying envelope approximation(s) (SVEA) consists in neglecting terms $\partial^2_z \mathcal{E}\ll\tilde{\kappa}_0\partial_z \mathcal{E}$ and $(\tilde{\kappa}_1/\tilde{\kappa}_0)\partial_t\ll 1$ on the basis that the envelope will contain many wavelengths and optical cycles. If we apply now the SVEA we end up with, in the time domain,
\begin{align}
&i(\partial_z +\tilde{\kappa}_1\partial_t)\mathcal{E}
-\frac{\tilde{\kappa}_2}{2} \partial^2_t \mathcal{E} ={-n_2\nu_0 n_{\rm sp}(r_\oplus) \varepsilon_0 \frac{|\mathcal{E}|^2}{\Omega} \mathcal{E}},\label{eq:soliton_diffeq_hor1}
\end{align}
where $\tilde{\kappa}_i(\nu_0)$ are the coefficients of the power series expansion $\tilde{\kappa}(\nu)= \sum_{n}\tilde{\kappa}_n(\nu_0)/n!\,(\nu-\nu_0)^n$ in $\nu-\nu_0$ and we are considering Kerr non-linear media for which the nonlinear index is $n_2=3\chi^{(3)}/(2n(\omega_0)c\varepsilon_0)$.
{Considering an anomalous dispersive material, i.e., $\kappa_2(\nu_0)<0$, an analytical solution of the NLSE can be found {(see, e.g.,~\cite{philbin2008fiber})}
and reads}
\begin{equation}\label{eq:soliton_horizontal_main}
\mathcal{E}(t,z)=\sqrt{\frac{\Omega | \tilde{\kappa}_2| }{\nu_0 n_2 n_{\rm sp}\varepsilon_0 T_0^2 }}\cosh \left(\frac{t-\tilde{\kappa}_1 z}{T_0}\right)^{-1}\exp \left(\frac{i z | \tilde{\kappa}_2| }{2 T_0^2}\right),
\end{equation}
where $T_0$ is the pulse length, and $1/\tilde{\kappa}_1$ is its speed of propagation. This reduces to the result from Philbin et al.\cite{philbin2008fiber} {-- eq.(S74) of the supplementary material in~\cite{philbin2008fiber} --} in the limit of $r_S\to 0$. From this expression, combined with the fact that $\tilde{\kappa}_1(\nu_0)=n_{\rm sp}\kappa_1(\omega_0)$, we can conclude that the {velocity of the horizontally propagating soliton in curved spacetime with respect to an observer comoving with the segment of the dielectric material\footnote{{Indeed note that proper length and proper time for an observer comoving with the segment of the dielectric material and in connection with coordinate quantities
are given by $\ell=A^2\,z$ and $\tau=t\,B/A$ so that $v\equiv \ell/\tau=A^3B^{-1}z/t=n_{\rm sp}\tilde{v}$. }}}
is given simply by $\kappa_1(\omega_0)^{-1}$.
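That eq.~\eqref{eq:soliton_horizontal_main} solves eq.~\eqref{eq:soliton_diffeq_hor1} can also be verified symbolically; in the sketch below (our illustration) \texttt{gamma} abbreviates the constant coefficient $n_2\nu_0 n_{\rm sp}\varepsilon_0/\Omega$, and anomalous dispersion is encoded by writing $\tilde{\kappa}_2=-\mathtt{k2a}$ with $\mathtt{k2a}>0$:
\begin{verbatim}
import sympy as sp

t, z = sp.symbols('t z', real=True)
k1, k2a, T0, gamma = sp.symbols('k1 k2a T0 gamma', positive=True)

u = (t - k1*z)/T0
E = sp.sqrt(k2a/(gamma*T0**2))/sp.cosh(u)*sp.exp(sp.I*z*k2a/(2*T0**2))

# |E|^2 written by hand, since the phase factor has unit modulus
absE2 = k2a/(gamma*T0**2)/sp.cosh(u)**2
lhs = sp.I*(sp.diff(E, z) + k1*sp.diff(E, t)) + k2a/2*sp.diff(E, t, 2)
residual = (lhs + gamma*absE2*E).rewrite(sp.exp)
print(sp.simplify(residual))      # prints 0
\end{verbatim}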
\begin{figure}
\centering
\includegraphics[scale=0.4]{vvsz29092022V5.pdf}
\caption{Velocity of the soliton along the fiber, {with respect to an observer comoving with the segment of the dielectric material where the (peak of the) soliton is located,} for $L=0.1$\,m, $r_s=10^{-3} r_\oplus$, and including photoelasticity. The red, dashed and blue, solid curves represent the analytical expression in eq.~\eqref{eq:analytical_v} including or in the absence, respectively, of photoelasticity. The red points and blue squares are obtained by numerical simulations and
agree
perfectly with the analytical formula of eq. \eqref{eq:analytical_v}. The inset shows the case with photoelasticity in which $r_s=10^{-2} r_\oplus$. This shows a deviation from a purely linear relation between the velocity and the propagation distance.
}
\label{fig:velocityofz}
\end{figure}
\subsubsection{Radial motion}
Let us now consider the case in which the light pulse propagates radially along the $z$ direction. Care is in order here, since now all the quantities appearing in the wave equation will change along the propagation direction, including the physical frequency that will be subject to gravitational redshift. Motivated by the symmetry of the problem, and in order to obtain a scalar, one-dimensional equation whose solution can be simulated, we assume that all the quantities entering the wave equation depend solely on $z$. This is tantamount to identifying the radial direction with the $z$-axis and working close to $x=y=0$, so that $r= r_\oplus+z$, which is a reasonable assumption since we are considering the vertical propagation of a well localized pulse. With this approximation, the wave equation~\eqref{start} reduces to a system of three decoupled equations~\cite{habib2013electromagnetic}
{\small
\begin{align}
\partial_z^2E_{x(y)}+\tilde{\mu}\tilde{\varepsilon}_\ell\nu^2 \label{eq:zdirection1} E_{x(y)}=&-\tilde{\mu}\nu^2P_{{\rm NL,} x(y)}+\left(\partial_z(\ln\tilde{\mu})\right)\partial_z E_{x(y)}
\end{align}
\begin{align}
\partial_z^2E_{z}+\tilde{\mu}\tilde{\varepsilon}_\ell\nu^2 E_{z}=&-\tilde{\mu}\nu^2P_{{\rm NL,} z}-\partial_z\left(\frac{1}{\tilde{\varepsilon}_\ell}\partial_z P_{{\rm NL},z}\right)\\ \nonumber
&-2(\partial_z\ln\tilde{\varepsilon}_\ell)\partial_zE_{z}-E_z\partial_z^2\ln\tilde{\varepsilon}_\ell
\end{align}
}
It is immediate to realize that $E_z=0$ is a solution of the corresponding equation so that we can consider the propagation of linearly polarized light (in a direction orthogonal to $z$) and we end up with a single equation of the form of eq.~\eqref{eq:zdirection1}.
Proceeding as before with substituting the ansatz $E(z,t)\propto \mathcal{E}(z,t)e^{i(\tilde{\kappa}_0(z) z-\nu_0 t)}+cc.$, expanding $\tilde{\kappa}(z,\nu)$ around $\nu_0$, and using the SVEA approximation(s) we obtain the NLSE given by
\begin{widetext}
{\small
\begin{align}
&i(\partial_z +\tilde{\kappa}_1\partial_t)\mathcal{E}-\frac{\tilde{\kappa}_2}{2} \partial^2_t \mathcal{E} +2i\frac{\partial_z\tilde{\kappa}_0}{2\tilde{\kappa}_0} \mathcal{E}+ 2i z\frac{\partial_z\tilde{\kappa}_0}{2\tilde{\kappa}_0}\partial_z \mathcal{E}+i z\frac{\partial_z^2\tilde{\kappa}_0}{{2\tilde{\kappa}_0}} \mathcal{E}-z\partial_z\tilde{\kappa}_0 \mathcal{E}-z^2\frac{(\partial_z\tilde{\kappa}_0)^2}{2\tilde{\kappa}_0} \mathcal{E}={-n_2\nu_0 n_{\rm sp}(r) \varepsilon_0 |\mathcal{E}|^2 \mathcal{E}/\Omega} +\frac{\partial_z\ln n_{\rm sp}}{2\tilde{\kappa}_0}\left(i\tilde{\kappa}_0\mathcal{E}+\partial_z \mathcal{E}+iz(\partial_z\tilde{\kappa}_0) \mathcal{E}\right).\label{eq:soliton_diffeq_vertical}
\end{align}
}
\end{widetext}
Eq.~\eqref{eq:soliton_diffeq_vertical} contains several additional terms with respect to the equation for the horizontal propagation due to the fact that now the wavevector $\tilde{\kappa}_0$ depends explicitly on the
coordinate along the
propagation direction and so does the refractive index, i.e., we are propagating in a gradient-index medium (GRIN)\footnote{{See also~\cite{PhysRevLett.37.693,chen1978nonlinear,herrera1984envelope} for early studies of soliton propagation in inhomogeneous media.}}.
All geometrical quantities appearing in the equation are evaluated at $r_\oplus+z$. Finally, consistently with the horizontal propagation case, upon setting $\tilde{\kappa}_0$ constant, we return to eq.~\eqref{eq:soliton_diffeq_hor1}.
\section{Including photoelasticity}
Up until now, we have considered rigid dielectrics, i.e., dielectric media in which the speed of sound is infinite. For realistic materials, this is of course never the case and the dielectric gets deformed by the action of forces, including the tidal ones in our set-up. Let us consider an optical fiber as a paradigmatic example. In this case, the deformation due to the action of gravity will be relevant only for the case of vertical propagation.
Deformations of a dielectric lead to a change in the relative permeability of the material, and thus of the refractive index, a phenomenon known as photoelasticity~\cite{chen2006foundations}. The contributions to this effect coming from the curvature of spacetime and the inertial acceleration of the fiber can be separately accounted for following the discussion in~\cite{ratzel2018frequency}. Consider a fiber of length $L$ hanging from a support located at $r_\oplus+L$. As long as the strain is within the elastic limit of the material, we can relate it to the stresses through a linear relation, i.e., Hooke's law. Thus, we write the strain tensor as $\mathcal{S}_{kl} =\frac{1}{Y} \sigma_{kl}$, where $Y$ is the Young's modulus of the material and $\sigma_{kl} = \frac{F_k}{A_l}$ is the stress tensor given by the ratio between the force $F_k$ in direction $\hat{e}_k$ and the cross-sectional area $A_l$ normal to $\hat{e}_l$ upon which the force acts. The photoelastic (or acousto-optic) effect consists in the change of the relative electric permeability by $\Delta(\bm{\varepsilon}_r)^{-1}_{kl} = \mathcal{P}_{kl\,mn}\mathcal{S}_{mn}$, where $\mathcal{P}$ is the photoelastic tensor. In the following, we limit ourselves to the case of isotropic materials and a diagonal stress tensor (see~\cite{SM} for the details of the computation).
It should be noted that photoelasticity is far from negligible in the case under investigation and becomes the dominant effect in the vertical propagation scenario, overwhelming
the effect related to the optical properties of the background spacetime.
While photoelasticity introduces a further radial dependence in the optical properties of the effective medium, this does not affect the form of eq.~\eqref{eq:soliton_diffeq_vertical}, which remains valid. The only difference is in the expressions for the quantities $\tilde{\kappa}_i$ and their derivatives, due to the fact that now the refractive index of the medium is given by $n(\omega)=\sqrt{1+\chi_1(\omega)+\Delta{\varepsilon}_r(\omega)}$~\cite{SM}.
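To get a feeling for the size of the effect, the sketch below (our illustration: the fused-silica numbers and the reduction of $\mathcal{P}$ to a single scalar coefficient $p_{12}$ are assumptions) estimates the index change along a hanging fiber, whose axial stress at height $z$ above the free lower end is $\sigma=\rho g z$:
\begin{verbatim}
rho, g, Y = 2.2e3, 9.81, 7.3e10  # density, gravity, Young modulus (SI)
p12, n = 0.27, 1.45              # photoelastic coefficient, index (assumed)
eps_r = n**2

def delta_n(z):
    strain = rho*g*z/Y           # S = sigma/Y, with sigma = rho g z
    d_eps = -eps_r**2*p12*strain # from Delta(1/eps_r) = p12 S
    return d_eps/(2*n)           # dn = d(eps_r)/(2 n)

L = 0.1
print("index change at the top of a %.1f m fiber: %.2e" % (L, delta_n(L)))
\end{verbatim}
Already for $L=0.1$\,m this is of order $10^{-8}$, roughly nine orders of magnitude larger than the variation of $n_{\rm sp}$ over the same height.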
\begin{figure}
\centering
\includegraphics[scale=0.45]{arrivaltimeV3.pdf}
\caption{Time of arrival of the soliton for the case of propagation in the gravitational field of Earth {for which we assume} $r_S = 9\times 10^{-3}$\,m. The main figure shows the difference in time of arrival, {with respect to an observer comoving with the segment of the dielectric material where the soliton is located,} between vertically and horizontally propagating solitons over the propagation coordinate length $z$. The inset shows the same in the case photoelasticity is neglected.
}
\label{fig:arrivaltime}
\end{figure}
\section{Numerical results}
While the wave equation in eq.~\eqref{start} gives us the full Maxwell equations, including possibly interesting effects related to the vectorial nature of the electric field, and thus to the interplay between gravity and the light polarization, its numerical investigation is beyond the scope of the current work, and it is left for future investigations. Here, we focus on the propagation of light pulses as described by the simplified eq.\eqref{eq:soliton_diffeq_vertical}, motivated by light propagation in optical fibers~\cite{philbin2008fiber}. Note that in the case of eq.~\eqref{eq:soliton_diffeq_hor1} an analytical solution was presented in eq.~\eqref{eq:soliton_horizontal_main}.
Equation \eqref{eq:soliton_diffeq_vertical} for the vertical propagation is solved numerically -- being a non-linear PDE with coordinate dependent coefficients -- using the split-step Fourier (SSF) method~\cite{agrawal2000nonlinear} and taking into account also the effect of the fiber deformation.
For this purpose, we utilize the same fiber parameters as in~\cite{philbin2008fiber} (see also table~\ref{tab:parameters} in~\cite{SM}) and initialize the temporal profile at $z=0$ as the one of the input pulse in the same reference.
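For orientation, the core of the SSF scheme in the constant-coefficient limit of eq.~\eqref{eq:soliton_diffeq_hor1} -- written in a co-moving time frame and in normalized units ($\tilde{\kappa}_2=-1$, unit non-linear coefficient, $T_0=1$), all of which are simplifying assumptions of this sketch -- reads as follows; the $z$-dependent coefficients of eq.~\eqref{eq:soliton_diffeq_vertical} enter by updating the two exponential operators at every step:
\begin{verbatim}
import numpy as np

N, Tmax = 2**12, 40.0
T = np.linspace(-Tmax, Tmax, N, endpoint=False)
w = 2*np.pi*np.fft.fftfreq(N, d=T[1] - T[0])

beta2, gam, h, nsteps = -1.0, 1.0, 1e-3, 5000
A = 1/np.cosh(T)                         # fundamental-soliton input

half_D = np.exp(1j*beta2*w**2*h/4)       # half dispersion step (Fourier)
for _ in range(nsteps):                  # symmetric (Strang) splitting
    A = np.fft.ifft(half_D*np.fft.fft(A))
    A = A*np.exp(1j*gam*np.abs(A)**2*h)  # non-linear step
    A = np.fft.ifft(half_D*np.fft.fft(A))

print("soliton peak after z = 5:", np.abs(A).max())   # stays ~ 1
\end{verbatim}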
The intuition based on the SSF method -- where the propagation equation~\eqref{eq:soliton_diffeq_vertical} is rewritten in the form $\partial_z \mathcal{E}= \left(\hat{D}+\hat{N}\right)\mathcal{E}$ with the dispersive dynamics enclosed in the operator $\hat{D}=\hat{D}(z,\partial_t)$~\cite{SM} -- allows us to formulate the educated guess
that the propagation speed of the soliton, in the effective flat spacetime, is given by
\begin{equation}\label{eq:analytical_v}
\tilde{v}=\frac{1+z\, \tilde{\kappa}_0'(z)/\tilde{\kappa}_0(z)}{\tilde{\kappa}_1(z)}.
\end{equation}
Indeed, this appears as {(the real part of) the inverse of} the coefficient of the time derivative {in $\hat{D}(z,\partial_t)$}. Then, in order to translate this result {into the speed measured by an observer comoving with the segment of the dielectric material where the soliton peak is located,} we need to just multiply eq.~\eqref{eq:analytical_v} by the spacetime refractive index. That this intuition is indeed correct is verified by the numerical simulations reported in Fig.~\ref{fig:velocityofz}.
We see that the $z$-dependence of the propagation velocity is strongly enhanced by the effects of mechanical deformation of the fiber {with respect to the case in which photoelasticity is ignored.}
{The $z$-dependence of the vertical propagation velocity without photoelasticity is weak, and the velocity is close to the one of the horizontal case}. To quantify the latter statement, in Fig.~\ref{fig:arrivaltime} we show the difference in the (proper) time of arrival of the soliton for the case of propagation in the gravitational field of Earth, corresponding to
a Schwarzschild radius that we take as
$r_S= 9\times 10^{-3}$~m. The main figure shows
{\begin{equation}
\delta \tau=\left|z\left(\sqrt{-g_{00}(r_\oplus+z)}\, \tilde{v}_{\uparrow}^{-1}- \sqrt{-g_{00}(r_\oplus)}\, \tilde{v}_{\rightarrow}^{-1}\right)\right|,
\end{equation}
with $\tilde{v}_{\uparrow}$ and $\tilde{v}_{\rightarrow}$}
the propagation velocities, in the effective flat spacetime, for vertical and horizontal propagation. The inset shows instead the case in which for the vertical propagation the photoelasticity is neglected, showing a much weaker dependence.
{Finally, in Fig.~\ref{fig:avgvelocity} we show the deviation of the average velocity along the vertical direction $v_{\rm av}(r_S)$ with respect to the constant velocity at $r_S=0$ as a function of the dimensionless ratio $r_S/r_\oplus$. The average velocity is obtained numerically from the simulations as the ratio of the total length $L$ and the propagation time of the soliton and transformed into the frame of the {observer comoving with the fiber at its upper end-point} -- i.e., multiplied by $n_{\rm sp}(r_\oplus+L)$. Analytically, we use $v_{\rm av}=(\int_{0}^{L}v\,{\rm d}z)/L$ with $v=n_{\rm sp}\tilde{v}$ and $\tilde{v}$ given in eq.~\eqref{eq:analytical_v}. Fig.~\ref{fig:avgvelocity} shows once again the agreement between the simulated data and our analytical ansatz and it also shows that the photoelasticity is the main effect that allows one to have a sizable difference between the flat and curved spacetime propagation.}
{Another quantity characterizing the propagating pulse is its temporal width. In the horizontal propagation case, the duration of the pulse is constant. The same is not, in general, true when considering the vertical propagation. In the Supplemental Material~\cite{SM}, we report the evolution of the temporal width along the fiber. In particular, our simulations show a focusing of the pulse which is however sizable only in the presence of photoelasticity.}
\begin{figure}
\includegraphics[scale=0.45]{vavarage29092022MathematicaV6.pdf}
\caption{Change in average velocity ($v_{\rm av}$) of the soliton in the fiber -- {with respect to the observer comoving with the dielectric --}
{compared to the case with $r_S=0$}. Orange, square points corresponds to the case of a $L=1$\,m propagation with photoelasticity. Blue, round points correspond to the case of a $L=0.1$\,m propagation with photoelasticity. Green, diamonds correspond to the case of a $L=0.1$\,m propagation without photoelasticity. The lines correspond to the analytical result that fits perfectly the different sets of data.
}
\label{fig:avgvelocity}
\end{figure}
\section{Conclusions}
We have considered the propagation of light pulses in non-linear, non-magnetic media stationary in curved spacetime. Taking some intuition from the seminal work of Plebanski~\cite{plebanski1960electromagnetic}, we
showed that light propagation in such media can be equivalently described as the propagation in an effective medium in flat spacetime whose electric and magnetic properties acquire a multiplicative factor encoding the spacetime structure. Having done that, eq.~\eqref{start} describes the propagation of light in the effective medium.
It is interesting to note, even though we did not investigate it in this work, that the vectorial nature of this equation encodes the interplay between the light polarization and the gravitational field. Such interplay should be expected on the basis of the fact that the effective medium is an inhomogeneous, gradient-index medium for which it is well known that the propagation of light is influenced by its own polarization~\cite{bliokh2009geometrodynamics,liberman1992spin,bliokh2015spin}. Furthermore, the effect of polarization
on the propagation of light in curved, vacuum spacetime has been extensively considered in the literature and shown to take place also
for static spacetimes~\cite{gosselin2007spin,oancea2020gravitational}.
Neglecting the aforementioned effects, which would be undoubtedly small, by virtue of approximations we have been able to derive a scalar NLSE describing the propagation of a light pulse. It is important to notice that, when solving the NLSE employing the SSF method, we are implicitly considering a unidirectional equation and {ignoring any possible back-propagating field in the boundary conditions imposed, for all times, at $z=0$. This means that backscattered light from the pulse is assumed negligible relative to the
pulse itself, a condition common to all unidirectional envelope propagation equations~\cite{PhysRevLett.89.283902}}.
While this is not a problem for the horizontal propagation, in which case only the weak non-linearity could give rise to back-reflection, in the case of the vertical propagation light is effectively propagating in a gradient-index medium with the refractive index slowly varying in the propagation direction. This by itself can give rise to back-propagating fields, and effectively limits the validity of our treatment to regimes in which the photoelasticity allows one to employ a unidirectional equation.
Luckily, the regime of validity of the equation -- which depends on the parameter chosen for the physical medium -- can be readily estimated by following the discussion in~\cite{PhysRevA.81.013819} as we detail in~\cite{SM}.
Given these caveats, the NLSE that we have derived shows that an optical pulse propagating radially in a Kerr non-linear medium stationary in Schwarzschild spacetime experiences a change in its propagation velocity captured by eq.~\eqref{eq:analytical_v}. This effect is mostly due to photoelasticity, which overwhelms the purely spatiotemporal effects encoded in $n_{\rm sp}$. The difference in propagation velocity between the vertically and horizontally propagating pulses results, in turn, in a difference of the time of arrival of two pulses of the order of hundreds of femtoseconds in Earth's gravitational field, a fact that puts this difference within the reach of current technologies (see \cite{lee2010time,fortier201920,caldwell2022time} and references therein).
\section*{Acknowledgements}
The authors thank Francesco Marino for interesting discussions. A.~Belenchia and D.~Braun acknowledge support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) project number BR 5221/4-1. {D.~R\"{a}tzel acknowledges funding by the Federal Ministry of Education and Research of Germany in the project “Open6GHub” (grant number: 16KISK016) and support through the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2123 QuantumFrontiers – 390837967, the Research Training Group 1620 “Models of Gravity” and the TerraQ initiative from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 434617780 – SFB 1464.}
\section{Introduction}
In 1978 Avramov \cite{Av2} introduced and studied small homomorphisms of local rings, and a year later, Levin \cite{Lev} introduced large homomorphisms of local rings as a dual notion of the small homomorphisms. A surjective local homomorphism $f:R\to S$ is called \emph{small} if the induced homomorphism of graded algebras $f_*:\operatorname{Tor}^R_*(k,k) \to \operatorname{Tor}^S_*(k,k)$ is injective, and it is called \emph{large} if $f_*$ is surjective. Levin proved that a local homomorphism $R \to S$ is large if and only if for every finitely generated $S$-module $M$ there is an equality of Poincar\'{e} series $\operatorname{P}^R_M(t)=\operatorname{P}^R_S(t)\cdot \operatorname{P}^S_M(t)$. This result makes large homomorphisms a very useful tool for understanding Betti numbers and computing Poincar\'{e} series.
Small homomorphisms are closely related to another class of local homomorphisms namely \emph{Golod homomorphisms}; see Definition \ref{GH}. In fact, each Golod homomorphism is small, and these homomorphisms have been studied very well in several articles; see for example \cite{Av2}, \cite{Lev2}, \cite{Levin}, and \cite{S}. On the other hand, we are only aware of relatively few results about the large homomorphisms. The goal of this paper is not only to prove new results, but also to collect known facts and ideas about large homomorphisms that have been used in many articles without stating them.
The organization of the paper is as follows.
In section 2, after some preliminaries and examples, we give various conditions under which a surjective homomorphism $R \to S$ is large, specifically when $R$ is a complete intersection; see Theorem \ref{CI}.
In section 3, over a local ring $(R,\mathfrak{m})$ with an ideal $I$, we give a necessary and sufficient condition under which $R\to R/I$ is large and $R\to R/\mathfrak{m} I$ is small simultaneously; see Theorem \ref{Tor}. As a result, we show that a surjective local homomorphism $R \to S$ where $S$ is a Koszul $R$-module is large; see Corollary \ref{exlin}. Then we provide a sufficient condition for large homomorphisms over Golod rings; see Proposition \ref{Massey}.
\section{Large homomorphisms over complete intersections}
Throughout $(R,\mathfrak{m},k)$ is a commutative Noetherian local ring with maximal ideal $\mathfrak{m}$ and residue field $k$.
We start this section by recalling some definitions and preliminaries.
A \emph{differential graded algebra} (DG algebra) $A$ is a complex $(A,\partial)$ equipped with an algebra structure such that $ A_0$ is a commutative unitary ring, $A_i=0$ for $i<0$, and the product satisfies the Leibniz rule $\partial(ab)=\partial(a)b+(-1)^{|a|}a \partial(b)$, for $a,b \in A$.
\begin{chunk}{\bf Acyclic Closure.}\label{AC} Let $I$ be an ideal of $R$. An \emph{acyclic closure} $R\langle X_i |\ {i\geq 1} \rangle$ of the augmentation $R\to R/I$ obtained from the Tate construction is a DG algebra resolution of $R/I$ over $R$, where $X_i$ is a set of exterior variables when $i$ is odd, and a set of divided variables when $i$ is even. Note that the $X_i$ are added in a given homological degree $i$ in such a way that their images under the differential minimally generate the $(i-1)$th homology. For $i\geq 1$, we set $\pi_i(R)$ to be the $k$-vector space with basis $X_i$. Following the notation of \cite{Av}, we write $\varepsilon_i=\dim_k \pi_i(R)$.

An acyclic closure $R\langle X_i \rangle$ of $R/I$ is not a minimal resolution in general, but if $I=\mathfrak{m}$, then it is a minimal resolution by Gulliksen's theorem; see \cite[Theorem 1.6.2]{GL}.
\end{chunk}
Let $U$ be a DG algebra resolution of $k$ over $R$ described in \ref{AC}, and let $\mathfrak{a}$ be an ideal of $R$. Then $A=U\otimes_R R/\mathfrak{a} $ is a DG algebra, and hence, $\operatorname{Tor}^R(k,R/\mathfrak{a})$ also has the graded algebra structure induced from $A$.
Let $M$ be a finitely generated $R$-module. The \emph{Poincar\'{e} series} $\operatorname{P}^R_M(t)$ of $M$ over $R$ is defined to be the formal power series $\operatorname{P}^R_M(t)=\sum^\infty_{i=0}\dim_k\operatorname{Tor}^R_i(M,k)t^i$.
\begin{defn}\cite[Theorem 1.1]{Lev}\label{D1} Let $(R,\mathfrak{m},k)$ be a local ring, and let $f:R\rightarrow S$ be a surjective local homomorphism. Then $f$ is called \emph{large} if one of the following equivalent conditions holds.
\begin{enumerate}
\item The induced map $f_*:\operatorname{Tor}^R_*(k,k)\rightarrow \operatorname{Tor}^S_*(k,k)$ of graded algebras is surjective.
\item The induced map $\varphi_*: \operatorname{Tor}^R_*(S,k)\rightarrow \operatorname{Tor}^R_*(k,k)$ of graded algebras is injective.
\item $\operatorname{P}^R_k(t)=\operatorname{P}^R_S(t)\cdot \operatorname{P}^S_k(t)$.
\end{enumerate}
\end{defn}
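To make condition (3) concrete, consider the complete intersection $R=k[\![x,y]\!]/(x^2,y^2)$ and the ideal $I=(x)$, so that $S=R/I\cong k[\![y]\!]/(y^2)$. Since $(0:_Rx)=(x)$, the complex $\cdots \overset{x}\longrightarrow R \overset{x}\longrightarrow R \to S \to 0$ is a minimal free resolution of $S$ over $R$, and a direct computation gives
$$\operatorname{P}^R_S(t)=\frac{1}{1-t}, \qquad \operatorname{P}^S_k(t)=\frac{1+t}{1-t^2}=\frac{1}{1-t}, \qquad \operatorname{P}^R_k(t)=\frac{(1+t)^2}{(1-t^2)^2}=\frac{1}{(1-t)^2},$$
so that $\operatorname{P}^R_k(t)=\operatorname{P}^R_S(t)\cdot \operatorname{P}^S_k(t)$ and $R\to S$ is large. This is consistent with Example \ref{E1}(3) below, since $S$ is a complete intersection.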
\begin{chunk}{\bf Necessary Condition.}\label{NC} Suppose $R\rightarrow R/I$ is a large homomorphism. Then by Definition \ref{D1}(2) the map $\operatorname{Tor}^R_1(R/I,k) \to \operatorname{Tor}^R_1(k,k)$ is injective, which tells us in particular that a minimal generating set of $I$ can be completed to a minimal generating set of $\mathfrak{m}$, or equivalently that $I\cap \mathfrak{m}^2 = \mathfrak{m} I$. We will need this condition in most of our results.
\end{chunk}
In the following we list some well-known examples of large homomorphisms from the literature.
\begin{exam}\label{E1} Let $(R,\mathfrak{m},k)$ be a local ring, and let $I$ be an ideal such that $I\cap \mathfrak{m}^2 = \mathfrak{m} I$. In each of the following cases, the natural map $R\rightarrow R/I$ is a large homomorphism.
\begin{enumerate}
\item $\mathrm{pd}_R (I)<\infty$.
\item $(0:_RI)=\mathfrak{m}$.
\item The ring $R/I$ is a complete intersection.
\item The map $R \to R/I$ is a quasi-complete intersection homomorphism.
\item The ring $R$ is Cohen-Macaulay and non-Gorenstein, $\mathfrak{m}^2\subseteq I$, and $\operatorname{G-dim}_R(I)<\infty$.
\item $\mathfrak{m}=I\oplus J$.
\end{enumerate}
\end{exam}
\begin{proof} (1) We show that if $\mathrm{pd}_R(I)<\infty$ then $I$ is generated by a regular sequence.
The claim is obvious if $I$ is a principal ideal. Hence we may assume $I=(x_1,\dots,x_n)$ such that $n\geq 2$ and $x_i\in \mathfrak{m} \setminus \mathfrak{m}^2$ for all $i$. Since $\mathrm{pd}_R(I)<\infty$, we have $\operatorname{grade}_R(I)>0$ by \cite[Corollary 1.4.6]{BH}. By prime avoidance, one can choose a non zero-divisor $x\in I \setminus \mathfrak{m}^2$. It follows from \cite[Theorem 2.2.3]{Av} that $\mathrm{pd}_{R/(x)}R/I<\infty$. Since $I/(x)$ is generated by $n-1$ elements, by induction it is generated by a regular sequence over $R/(x)$. Hence $I$ is generated by a regular sequence over $R$. Therefore $R\rightarrow R/I$ is a large homomorphism by \cite[Theorem 2.2]{Lev}.
(2) also follows by \cite[Theorem 2.2]{Lev}.
See \cite[Theorem 2.4]{Lev} for (3), \cite[Theorem 6.2]{AHS} for (4), and \cite[Theorem 1.2]{GT} for (5).
To see (6), consider the exact sequence $$0\to R \to R/I\oplus R/J \to k \to 0.$$ Applying $-\otimes_Rk$, we see that the induced map $\operatorname{Tor}^R_i(R/I,k)\to \operatorname{Tor}^R_i(k,k)$ is injective for all $i\geq 0$.
\end{proof}
The following provides more examples of large homomorphisms.
\begin{rem}\label{E3} Let $R$ be a local ring and let $I$ be an ideal of $R$. If there exists a local ring $Q$ and a surjective local homomorphism $Q \to R$ such that the composition map $Q \to R/I$ is a large homomorphism then $R \to R/I$ is large. Indeed, since the composition of the maps $Q \to R \to R/I$ is large, the induced composition of homomorphisms $\operatorname{Tor}^{Q}_*(k,k) \to \operatorname{Tor}^{R}_*(k,k) \to \operatorname{Tor}^{R/I}_*(k,k)$ is surjective. Hence $\operatorname{Tor}^{R}_*(k,k) \to \operatorname{Tor}^{R/I}_*(k,k)$ is surjective.
\end{rem}
By using Example \ref{E1}(6) and Remark \ref{E3} we get the following.
\begin{exam} Let $R=S \#_k T$ be the connected sum of local rings $S$ and $T$; see \cite{AAM}. Then $R \to S$ is a large homomorphism.
To see that, let $Q=S\times_kT$ be the fiber product of $S$ and $T$. Then the composition of homomorphisms $Q\to R\to S$ is large by Example \ref{E1}(6).
Hence $R\to S$ is large by Remark \ref{E3}.
\end{exam}
\begin{chunk}{\bf Change of ring spectral sequence.}\label{DGS} Let $f:R\to S$ be a local homomorphism. Consider the change of ring spectral sequence $$\operatorname{E}^2_{p,q}\cong\operatorname{Tor}^S_p(k,k)\otimes \operatorname{Tor}^R_q(S,k)\Longrightarrow \operatorname{Tor}^R_{p+q}(k,k).$$
The spectral sequence is derived from the double complex $C_{p,q}=G_p\otimes_RF_q$ where $F$ and $G$ are free resolutions of $k$ over $R$ and $S$, respectively.
The edge homomorphisms $\operatorname{Tor}^R_p(k,k)\rightarrow \operatorname{Tor}^S_p(k,k)\cong \operatorname{E}^2_{p,0}$ and $\operatorname{E}^2_{0,q}\cong \operatorname{Tor}^R_q(S,k)\rightarrow \operatorname{Tor}^R_q(k,k)$ are the homomorphisms induced by $f: R\to S$ and $\varphi: S\to k$, respectively; see \cite[page 348]{CE}.
\end{chunk}
\begin{chunk}{\bf Koszul homology.} Let $(R,\mathfrak{m},k)$ be a local ring and let $I$ be an ideal of $R$. We denote by $\operatorname{K}(I)$ the Koszul complex on a minimal generating set of $I$, and set $\operatorname{H}_i(I):=\operatorname{H}_i(\operatorname{K}(I))$. If $I=\mathfrak{m}$ we write $\operatorname{K}(R):=\operatorname{K}(\mathfrak{m})$ and $\operatorname{H}_i(R):=\operatorname{H}_i(\mathfrak{m})$. If $I\cap \mathfrak{m}^2=\mathfrak{m} I$ then a minimal generating set of $I$ can be completed to one for $\mathfrak{m}$. In this case, $\operatorname{K}(I)$ is a subcomplex of $\operatorname{K}(R)$, and the inclusion map induces a natural homomorphism $\operatorname{H}_*(I) \to \operatorname{H}_*(R)$ of homology algebras.
\end{chunk}
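For example, let $R=k[\![x,y]\!]/(x^2,xy,y^2)$, so that $\mathfrak{m}=(x,y)$ and $\mathfrak{m}^2=0$. The Koszul complex is $0\to R \to R^2 \to R\to 0$ with $\partial_1(e_i)$ the $i$th variable, and a short computation shows that the cycles in degree one form $\mathfrak{m}\oplus\mathfrak{m}$ while the boundaries are spanned by $(-y,x)$; hence $\dim_k \operatorname{H}_1(R)=3$. Moreover $\operatorname{H}_2(R)=(0:_R\mathfrak{m})=\mathfrak{m}$, so $\dim_k \operatorname{H}_2(R)=2$. We will use these numbers again when we discuss Golod rings in the next section.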
Large homomorphisms are well understood over complete intersection local rings due to Gulliksen and Levin \cite{GL}. It is worthwhile to record here the conditions which characterize large homomorphisms over complete intersection local rings. First we need the following lemmas.
\begin{lem}\label{KH} Let $(R,\mathfrak{m},k)$ be a local ring and let $I$ be an ideal of $R$ such that $I\cap \mathfrak{m}^2=\mathfrak{m} I$. Then the induced map \emph{$\operatorname{H}_1(R)\rightarrow \operatorname{H}_1(R/I)$} is surjective.
\end{lem}
\begin{proof} Set $S=R/I$. The assumption $I\cap \mathfrak{m}^2=\mathfrak{m} I$ implies that $\operatorname{Tor}^R_1(S,k) \to \operatorname{Tor}^R_1(k,k)$ is injective. Hence by \ref{DGS}, $d^2_{2,0}=0$ and therefore the induced map $\operatorname{Tor}^R_i(k,k)\rightarrow \operatorname{Tor}^S_i(k,k)$ is surjective for $i\leq 2$. Let $\operatorname{K}(R)$ and $\operatorname{K}(S)$ respectively be the Koszul complexes of $R$ and $S$. Let $\sigma_1,\dots, \sigma_r\in \operatorname{K}(R)$ and $\delta_1,\dots,\delta_s\in \operatorname{K}_1(S)$ be cycles whose classes are basis for the vector spaces $\operatorname{H}_1(R)$ and $\operatorname{H}_1(S)$, respectively. Then by the Tate construction, there exists a commutative diagram
$$
\begin{CD}
(\oplus_{i<j\leq n}Re_i\wedge e_j)\oplus(\oplus_{\ell=1}^rRT_\ell) @>>> \oplus_{i=1}^n Re_i @>>> R @>>> k @>>> 0 \\
@VVf_2V @VVf_1V @VVf_0V @VV=V \\
(\oplus_{i<j\leq m}Se_i\wedge e_j)\oplus(\oplus_{\ell=1}^sS\tau_\ell) @>>> \oplus_{i=1}^m Se_i @>>> S @>>> k @>>> 0,
\end{CD}
$$
where the rows are the beginnings of free resolutions of $k$ over $R$ and $S$, and $T_1,\dots,T_r$ and $\tau_1,\dots,\tau_s$ are divided variables of homological degree $2$ with $\partial T_i=\sigma_i$ and $\partial \tau_i=\delta_i$; see \ref{AC}. Since $\operatorname{Tor}^R_2(k,k)\rightarrow \operatorname{Tor}^S_2(k,k)$ is surjective, $f_2$ is surjective too, and hence $\tau_j=f_2(\eta_j)$ for some $\eta_j \in F_2$. Thus $\delta_j=f_1(\partial\eta_j)$. Since the map $\operatorname{H}_1(R)\rightarrow \operatorname{H}_1(S)$ is induced by $f_1$, we are done.
\end{proof}
\begin{lem}\label{inj} Let $I$ be an ideal of $R$ such that $I\cap \mathfrak{m}^2=\mathfrak{m} I$. If the natural map $\operatorname{H}_1(I)\otimes_R k \to \operatorname{H}_1(R)$ is injective, then the induced map $\operatorname{Tor}^R_2(R/I,k) \to \operatorname{Tor}^R_2(k,k)$ is injective.
\end{lem}
\begin{proof} Let $\mathfrak{m}=(x_1,\dots,x_n)$ and $I=(x_1,\dots,x_l)$ with $l \leq n$. Let $z_1,\dots, z_q\in \operatorname{K}_1(I)$ and $z'_1,\dots,z'_r\in \operatorname{K}_1(R)$ be cycles whose classes minimally generate $\operatorname{H}_1({I})$ and $\operatorname{H}_1(R)$, respectively. Then there exists a commutative diagram
$$
\begin{CD}
F_2=(\oplus_{i<j\leq l}Re_i\wedge e_j)\oplus(\oplus_{\ell=1}^qRS_\ell) @>>> F_1=\oplus_{i=1}^l Re_i @>>> R @>>> R/I @>>> 0 \\
@VV\phi_2V @VV\phi_1V @VV=V @VV\pi V \\
G_2=(\oplus_{i<j\leq n}Re_i\wedge e_j)\oplus(\oplus_{\ell=1}^rRT_\ell) @>>> G_1=\oplus_{i=1}^n Re_i @>>> R @>>> k @>>> 0,
\end{CD}
$$ where the rows are the beginnings of the Tate resolutions of $R/I$ and $k$, $\phi_1$ is the natural inclusion, and $S_1,\dots,S_q$ and $T_1,\dots,T_r$ are divided variables of homological degree $2$ with $\partial S_i=z_i$ and $\partial T_i=z'_i$.
Note that $\operatorname{H}_1(I) \to \operatorname{H}_1(R)$ is induced by $\phi_1$, and the restriction of $\phi_2$ to $\oplus_{i<j\leq l}Re_i\wedge e_j$ is the natural injection $\operatorname{K}_2(I)\to \operatorname{K}_2(R)$. Assume $\operatorname{H}_1(I)\otimes_R k \to \operatorname{H}_1(R)$ is injective and let $\bar{\phi_1}$ be the induced map $\operatorname{H}_1(I)\to \operatorname{H}_1(R)$.
Then for any choice of $\lambda_1,\dots,\lambda_q$ with $\lambda_j \in R\setminus \mathfrak{m}$ for some $j$, the element $\bar{\phi_1}(\lambda_1\bar{z_1}+\cdots+\lambda_q\bar{z_q})$ is nonzero in $\operatorname{H}_1(R)$, where $\bar{z_i}$ is the class of $z_i$ in $\operatorname{H}_1(I)$. This means $\phi_1(\lambda_1z_1+\cdots+\lambda_qz_q)\notin \operatorname{Im} \partial^{K(R)}_2$ and therefore $\phi_2(\lambda_1S_1+\cdots+\lambda_qS_q)\notin (\oplus_{i<j\leq n}Re_i\wedge e_j)\oplus(\oplus_{\ell=1}^r\mathfrak{m} T_\ell)$.
This implies that $\phi_2$ is split injective and hence the induced map $\operatorname{Tor}^R_2(R/I,k) \to \operatorname{Tor}^R_2(k,k)$ is injective.
\end{proof}
\begin{thm}\label{CI} Let $(R,\mathfrak{m},k)$ be a complete intersection local ring, and let $I$ be an ideal of $R$ with $I\cap \mathfrak{m}^2=\mathfrak{m} I$.
Let $S=R/I$. Then the following are equivalent.
\begin{enumerate}
\item The homomorphism $R\rightarrow S$ is large.
\item The ring $S$ is a complete intersection.
\item The induced map $\operatorname{Tor}^R_2(S,k)\rightarrow \operatorname{Tor}^R_2(k,k)$ is injective.
\item The induced map $\operatorname{Tor}^R_3(k,k)\rightarrow \operatorname{Tor}^S_3(k,k)$ is surjective.
\item The induced map $\operatorname{H}_1(I)\otimes_Rk \to \operatorname{H}_1(R)$ is injective.
\item The induced map $\operatorname{H}_2(R)\rightarrow \operatorname{H}_2(S)$ is surjective.
\end{enumerate}
\end{thm}
\begin{proof} (1)$\Rightarrow$(2) Since $R$ is a complete intersection, we have $\varepsilon_3(R)=0$ by \cite[Theorem 7.3.3]{Av}. Since $R\rightarrow S$ is large, we get $\varepsilon_3(S)=0$. Therefore $S$ is a complete intersection by \cite[Theorem 7.3.3]{Av}.
(2)$\Rightarrow$(1) This follows from Example \ref{E1}(3).
(2)$\Rightarrow$(5) Assume $S$ is a complete intersection. Then the map $R\rightarrow S$ is a quasi-complete intersection homomorphism by \cite[Proposition 7.7]{AHS}. By \cite[Theorem 5.3]{AHS} there exists an exact sequence
$0 \to \operatorname{H}_1(I)\otimes_Rk \to \pi_2(R)$ of $k$-vector spaces. Since $\pi_2(R)\cong \operatorname{H}_1(R)$, the induced map $\operatorname{H}_1(I)\otimes_Rk \to \operatorname{H}_1(R)$ is injective; see \ref{AC}.
(5)$\Rightarrow$(3) This follows from Lemma \ref{inj}.
(3)$\Leftrightarrow$(4) This follows from the change of rings spectral sequence; see \ref{DGS}.
(4)$\Rightarrow$(2) The same argument as in (1)$\Rightarrow$(2) applies here as well.

(2)$\Leftrightarrow$(6) Since $I\cap \mathfrak{m}^2=\mathfrak{m} I$, the map $\operatorname{H}_1(R)\to \operatorname{H}_1(S)$ is surjective by Lemma \ref{KH}. Since $R$ is a complete intersection, by the Tate-Assmus Theorem \cite[Theorem 2.3.11]{BH} we have $\operatorname{H}_2(R)=\operatorname{H}_1(R)^2$. Therefore $S$ is a complete intersection if and only if $\operatorname{H}_2(S)=\operatorname{H}_1(S)^2$, if and only if $\operatorname{H}_2(R)\rightarrow \operatorname{H}_2(S)$ is surjective.
\end{proof}
\begin{rem}\label{P2} Let $f:R\rightarrow S$ be a surjective homomorphism of local rings. Then $f$ is large if $f_i:\operatorname{Tor}^R_i(k,k) \rightarrow \operatorname{Tor}^S_i(k,k)$ is surjective for all $i\gg 0$. Indeed, this is easy to see when $S$ is a regular ring; in this case, for example, the surjectivity of $\operatorname{K}(R)\to \operatorname{K}(S)$ implies that $f$ is large. Suppose $S$ is singular.
One has that $f_i$ is surjective if and only if $f^i:\operatorname{Ext}^i_S(k,k) \rightarrow \operatorname{Ext}^i_R(k,k)$ is injective. It is well known that $\operatorname{Ext}^*_S(k,k)$ is the universal enveloping algebra of the homotopy Lie algebra $\pi^*(S)$; see \cite[Theorem 10.2.1]{Av}. Then by \cite[Lemma 5.1.7]{AV}, any element $\chi \in \pi^2(S)$ (and hence any power of $\chi$) is a nonzerodivisor on $\operatorname{Ext}^*_S(k,k)$. Let $\alpha \in \ker{f^j}$ for some $j$. One has $\chi^i \alpha \in \ker f^{2i+j}$. Since $f^i$ is injective for all $i\gg 0$, we have $\chi^i \alpha=0$ and hence $\alpha =0$.
\end{rem}
Regarding Remark \ref{P2}, one may ask whether $R\to S$ is large if the induced map $\operatorname{Tor}^R_i(S,k) \to \operatorname{Tor}^R_i(k,k)$ is injective for all $i \gg 0$. This is not true in general. For example, if $I\subseteq \mathfrak{m}^2$ is any nonzero ideal of finite projective dimension then $R\to R/I$ is not large, while $\operatorname{Tor}^R_i(R/I,k) \to \operatorname{Tor}^R_i(k,k)$ is injective for all $i \gg 0$. We do not know whether $R\to R/I$ is large if $\mathrm{pd}_R(I)=\infty$ and $\operatorname{Tor}^R_i(R/I,k) \to \operatorname{Tor}^R_i(k,k)$ is injective for all $i \gg 0$. However, we are able to prove the following.
\begin{prop}\label{nonzero} Let $(R,\mathfrak{m},k)$ be a local ring and let $I$ be an ideal of $R$ with $I\cap \mathfrak{m}^2=\mathfrak{m} I$. Assume the induced map $\operatorname{H}_1(I)\to \operatorname{H}_1(R)$ is nonzero. If $\varphi_i:\operatorname{Tor}^R_i(R/I,k) \to \operatorname{Tor}^R_i(k,k)$ is injective for all $i \gg 0$ then $R\to R/I$ is large.
\end{prop}
\begin{proof} Let $\zeta$ be an element of $\operatorname{H}_1(I)$ whose image under the natural map $\operatorname{H}_1(I) \to \operatorname{H}_1(R)$ is nonzero, and let $z\in \operatorname{K}_1(I)$ be a cycle whose class in $\operatorname{H}_1(I)$ is $\zeta$. By using the natural injection $\operatorname{K}_1(I)\hookrightarrow \operatorname{K}_1(R)$ we consider $z$ as an element of $\operatorname{K}_1(R)$. Then one checks that $z=z'+\partial(a)$ for some $z'\in I\operatorname{K}_1(R)$ and $a\in \operatorname{K}_2(R)$. Therefore $[z]=[z']$ in $\operatorname{H}_1(R)$. Let $U=R\langle X_i\rangle_{i\geq 1}$ be an acyclic closure of $k$ over $R$, and let ${X}\in U$ be a divided variable of degree 2 such that $\partial({X})=z$; see \ref{AC}. By \cite[Lemma 6.3.3]{Av}, there exists a chain $\Gamma$-derivation $d:U\to U$ of degree $-2$ which is trivial on $\operatorname{K}(R)$ and satisfies $d({X}^{(i)})={X}^{(i-1)}$.
Set $S=R/I$ and $A=U\otimes_RS$. Then ${X}\otimes 1_S$ is a cycle in $A$ whose class $[{X}\otimes 1_S]$ is nonzero in $\operatorname{Tor}^R_2(S,k)$ (because $\varphi_2([{X}\otimes 1_S])={X}\otimes 1_k\neq 0$). Let $\alpha\otimes 1_S \in A$ be such that $[\alpha \otimes 1_S]\in \operatorname{Tor}^R_j(R/I,k)$ and $\varphi_j([\alpha \otimes 1_S])=0$. Since $\varphi_*$ is a homomorphism of graded algebras, we have $\varphi_{j+2i}([{X}^{(i)}\alpha \otimes 1_S])=0$. Thus by assumption, $[{X}^{(i)}\alpha \otimes 1_S]=0$ for $i\gg0$. If $[d(\alpha)\otimes 1_S]=0$ then we have $[\alpha\otimes 1_S] = [d^i({X}^{(i)}\alpha) \otimes 1_S] =0$ and we are done.
Suppose $[d(\alpha)\otimes 1_S]\neq 0$, and let $r$ be the largest integer such that $[d^r(\alpha)\otimes 1_S]\neq 0$. Since $\varphi_j([\alpha \otimes 1_S])=0$ and $d$ commutes with the differentials, one has $\varphi_{j-2r}([d^r(\alpha)\otimes 1_S])=0$. By replacing $[\alpha \otimes 1_S]$ with $[d^r(\alpha)\otimes 1_S]$, the same argument as above shows that $[d^r(\alpha)\otimes 1_S]=0$, which is a contradiction.
\end{proof}
\section{Large homomorphisms over Koszul rings and Golod rings}
Let $(R,\mathfrak{m},k)$ be a local ring and let $M$ be a finitely generated $R$-module. Let $F$ be a minimal free resolution of $M$ over $R$ and let $\operatorname{lin}^R(F)$ be the associated graded complex of $F$; see \cite[\S 1]{HI} and \cite[\S 2]{S} for more details. The \emph{linearity defect} $\operatorname{ld}_R(M)$ of $M$ over $R$ is defined by $$\operatorname{ld}_R(M)=\sup\{i|\ \operatorname{H}_i(\operatorname{lin}^R(F))\neq 0 \}.$$
\begin{chunk}{\bf Koszul Modules.} An $R$-module $M$ is called a \emph{Koszul module} (or is said to have a \emph{linear resolution}) if $\operatorname{lin}^R(F)$ is acyclic, equivalently $\operatorname{ld}_R(M)=0$. $R$ is called a \emph{Koszul ring} if $k$ is a Koszul $R$-module.
Let $\operatorname{gr}_{\mathfrak{m}}(R)=\oplus_{i\geq 0}\mathfrak{m}^i/\mathfrak{m}^{i+1}$ be the associated graded ring of $R$ and $\operatorname{gr}_\mathfrak{m}(M)=\oplus_{i\geq 0}\mathfrak{m}^i M/\mathfrak{m}^{i+1}M$ be the associated graded module of $M$. If $M$ is a Koszul module then $\operatorname{lin}^R(F)$ is a minimal free resolution of $\operatorname{gr}_\mathfrak{m}(M)$ over $\operatorname{gr}_{\mathfrak{m}}(R)$;
see \cite[Proposition 1.5]{HI}.
The \emph{regularity} $\operatorname{reg}_R(M)$ is defined to be the regularity of the graded module $\operatorname{gr}_\mathfrak{m}(M)$ over the graded ring $\operatorname{gr}_\mathfrak{m}(R)$; see \cite{S}. It follows that $M$ is Koszul if and only if $\operatorname{reg}_R(M)=0$.
\end{chunk}
\begin{defn} Let $k$ be a field, and let $A$ be a DG algebra with $\operatorname{H}_0(A)\cong k$. The algebra $A$ is said to admit a \emph{trivial Massey operation}
if for some $k$-basis ${\bf h}=\{h_\lambda\}_{\lambda \in \Lambda}$ of $\operatorname{H}_{\geq 1}(A)$ there exists $\mu : \coprod_{i\geq 1}{\bf h}^i \to A$ such that
\begin{enumerate}
\item $\mu(h_\lambda)=z_\lambda$ where $z_\lambda$ is a cycle in $A$ with class $[z_\lambda]=h_\lambda \in \operatorname{H}_{\geq 1}(A)$, and
\item $\partial \mu(h_{\lambda_1},\dots,h_{\lambda_n})= \displaystyle\sum^{n-1}_{i=1}\overline{\mu(h_{\lambda_1},\dots,h_{\lambda_i})}\mu(h_{\lambda_{i+1}},\dots,h_{\lambda_n})$, where $\overline{a}=(-1)^{|a|+1}a$.
\end{enumerate}
\end{defn}
\begin{defn}\label{GH} A surjective local homomorphism $f: R \to S$ is called \emph{Golod} if $$\operatorname{P}^S_k(t)=\frac{\operatorname{P}^R_k(t)}{1-t(\operatorname{P}^R_S(t)-1)},$$ or equivalently, the DG algebra $A = U\otimes_R S$ admits a trivial Massey operation, where $U$ is a minimal DG algebra resolution of $k$ over $R$; see \cite[Theorem 1.5]{Lev2}.
\end{defn}
\begin{thm}\label{Tor} Let $(R,\mathfrak{m},k)$ be a local ring, and let $I$ be an ideal of $R$ such that $I\cap \mathfrak{m}^2=\mathfrak{m} I$. Then the following conditions are equivalent.
\begin{enumerate}
\item The map \emph{$\operatorname{Tor}^R_i(\mathfrak{m} I,k)\rightarrow \operatorname{Tor}^R_i(I,k)$} induced by the inclusion $\mathfrak{m} I\hookrightarrow I$ is zero for all $i\geq 0$.
\item The homomorphism $\phi: R\rightarrow R/I$ is large, and the homomorphism $\psi: R\rightarrow R/\mathfrak{m} I$ is small.
\end{enumerate}
Moreover, under these equivalent conditions $\psi$ is a Golod homomorphism.
\end{thm}
\begin{proof}
The assumption $I\cap \mathfrak{m}^2=\mathfrak{m} I$ implies an exact sequence $0\rightarrow I/\mathfrak{m} I \overset{\iota}\rightarrow \mathfrak{m}/\mathfrak{m}^2 \rightarrow \mathfrak{m}/(\mathfrak{m}^2+I) \rightarrow 0$ of $k$-vector spaces. By applying $-\otimes_Rk$ to the commutative diagram
$$
\begin{CD}
0 @>>> \mathfrak{m} I @>>> I @>>> I/\mathfrak{m} I @>>> 0 \\
@. @V VV @VVV @V\iota VV \\
0 @>>> \mathfrak{m}^2 @>>> \mathfrak{m} @>>> \mathfrak{m}/\mathfrak{m}^2 @>>> 0,
\end{CD}
$$
we get a commutative diagram
$$
\begin{CD}
\operatorname{Tor}^R_i(\mathfrak{m} I,k) @>f_i>> \operatorname{Tor}^R_i(I,k) @>\gamma_i>> \operatorname{Tor}^R_i(I/\mathfrak{m} I,k) \\
@VVV @Vg_iVV @V\iota_iVV \\
\operatorname{Tor}^R_i(\mathfrak{m}^2,k) @>>> \operatorname{Tor}^R_i(\mathfrak{m},k) @>>> \operatorname{Tor}^R_i(\mathfrak{m}/\mathfrak{m}^2,k)
\end{CD}
$$ with exact rows, where $\iota_i$ is injective.
(1)$\Rightarrow$(2): Since $f_i$ is the zero map and $\iota_i$ is a split injection, the composition $\iota_i \circ \gamma_i$ is injective.
Hence the map $g_i$ is injective for all $i\geq 0$, and therefore $\operatorname{Tor}^R_i(R/I,k)\rightarrow \operatorname{Tor}^R_i(k,k)$ is injective for all $i \geq 0$.
Next, we show that $\psi$ is Golod and hence small; see \cite[Definition 1.1]{Levin}. The argument is similar to the proofs of \cite[Lemma 1.2]{RS} and \cite[Lemma 2.1]{Ah}, but we include it for the reader's convenience. Let $U\rightarrow k$ be a minimal DG algebra resolution of $k$ over $R$, and set $A=U\otimes_RR/\mathfrak{m} I$. Then $\operatorname{H}_i(A)\cong \operatorname{Tor}^R_i(R/\mathfrak{m} I, k)$. The map $\operatorname{Tor}^R_i(R/\mathfrak{m} I,k)\rightarrow \operatorname{Tor}^R_i(R/I,k)$ is identified with the natural map $\operatorname{H}_i(A)\rightarrow \operatorname{H}_i(A/IA)$ for all $i$. Since $f_i=0$, every element in $\operatorname{H}_{i>0}(A)$ can be represented by $[x]$ for some $x\in IA$. Thus for any choice of $[z], [w]\in\operatorname{H}_{>0}(A)$ one has $z\cdot w \in I^2 A = 0$. Hence $A$ admits a trivial Massey operation; see \cite[Lemma 1.2]{Lev2}.

(2)$\Rightarrow$(1): Suppose the map $\phi:R\rightarrow R/I$ is large and the map $\psi: R\rightarrow R/\mathfrak{m} I$ is small. There exists a commutative diagram
$$
\begin{CD}
\operatorname{Tor}^R_i(R/\mathfrak{m} I,k) @>h_i>> \operatorname{Tor}^R_i(k,k) @>>> \operatorname{Tor}^R_{i-1}(\mathfrak{m}/\mathfrak{m} I,k) \\
&& @V\psi_iVV @VVV \\
&& \operatorname{Tor}^{R/\mathfrak{m} I}_i(k,k) @>\cong>> \operatorname{Tor}^{R/\mathfrak{m} I}_{i-1}(\mathfrak{m}/\mathfrak{m} I,k),
\end{CD}
$$
for all $i\geq 1$. Since $\psi_i$ is injective for all $i\geq 0$, the commutative diagram implies that $h_i=0$ for all $i\geq 1$.
Therefore, in the commutative diagram above, we have $g_i\circ f_i=0$ for all $i \geq 0$. Since $\phi$ is large, $g_i$ is injective and therefore $f_i=0$ as desired.
\end{proof}
The following is an immediate consequence of Theorem \ref{Tor}.
\begin{cor}\label{exlin} Let $(R,\mathfrak{m},k)$ be a local ring, and let $I$ be an ideal of $R$ with $I\cap \mathfrak{m}^2 =\mathfrak{m} I$. If $R/I$ is a Koszul $R$-module then $R\rightarrow R/I$ is a large homomorphism and $R \to R/\mathfrak{m} I$ is a Golod homomorphism.
\end{cor}
\begin{proof} Since $I\cap \mathfrak{m}^2 =\mathfrak{m} I$ and $R/I$ is a Koszul $R$-module, $\operatorname{reg}_R(R/I)=0$. Therefore the maps $\operatorname{Tor}^R_i(\mathfrak{m} I,k)\rightarrow \operatorname{Tor}^R_i(I,k)$ are zero for all $i\geq 0$ by \cite[Theorem 3.2]{S}. Now the result follows from Theorem \ref{Tor}.
\end{proof}
\begin{cor}\label{GK} Let $(R,\mathfrak{m},k)$ be a graded Koszul local ring and let $I$ be a homogeneous ideal of $R$. If the map $R\to R/I$ is large, then $R\to R/\mathfrak{m} I$ is a Golod homomorphism and $R/\mathfrak{m} I$ is a Koszul ring.
\end{cor}
\begin{proof} Set $S=R/\mathfrak{m} I$. Since $R$ is Koszul, we have $\operatorname{reg}_R(k)=0$, and since $R \to R/I$ is large, the induced map $\operatorname{Tor}^R_*(R/I,k) \to \operatorname{Tor}^R_*(k,k)$ is injective. This implies that $\operatorname{reg}_R(R/I)=0$ and hence $R/I$ is a Koszul $R$-module.
Therefore the map $R \to S$ is a Golod homomorphism by Corollary \ref{exlin}.
Next, we show that $S$ is Koszul. Since $R\to R/I$ is large, we have $I\cap \mathfrak{m}^2=\mathfrak{m} I$ by \ref{NC}. Then we may assume
$\mathfrak{m}=(x_1,\dots,x_r,x_{r+1},\dots,x_n)$ such that $I=(x_1,\dots, x_r)$. Let $J=(x_{r+1},\dots,x_n)$, and let $\bar{I}$ and $\bar{J}$ be ideals of $S$ generated by images of $I$ and $J$, respectively. Then one easily checks that $\bar{I}\cap\bar{J}=0$ and therefore $S$ is the fiber product of $S/\bar{I}$ and $S/\bar{J}$. We have $S/\bar{I} \cong R/I$. Since $R$ is Koszul and $R\to R/I$ is large, the surjectivity of $\operatorname{Tor}^R_*(k,k)\to \operatorname{Tor}^{R/I}_*(k,k)$ shows that $R/I$ is Koszul.
On the other hand, $S/\bar{J}$ is isomorphic to a graded local ring $(T,\mathfrak{n})$ with $\mathfrak{n}^2=0$ which is Koszul.
Now, by \cite[Proposition 3.11]{M}, $S$ is Koszul.
\end{proof}
A Koszul local ring $R$ is called \emph{absolutely Koszul} if $\operatorname{ld}_R(M)<\infty$ for every finitely generated $R$-module $M$.
\begin{cor}\label{AK} Let $(R,\mathfrak{m},k)$ be a graded Koszul complete intersection local ring and let $I$ be an ideal of $R$. If $R\to R/I$ is large then $R/\mathfrak{m} I$ is absolutely Koszul.
\end{cor}
\begin{proof} This follows by Corollary \ref{GK} and \cite[Theorem 5.9]{HI}.
\end{proof}
\begin{rem}\label{short} Let $(R,\mathfrak{m},k)$ be a local Gorenstein ring with $\mathfrak{m}^3=0$. Let $I$ be a nonzero ideal of $R$ such that $I\nsubseteq \mathfrak{m}^2$. Then by \cite[Corollary 4.7]{AIS}, $R/I$ is a Koszul $R$-module. Therefore the map $R\rightarrow R/I$ is large and $R\to R/(0:_R\mathfrak{m})$ is Golod by Corollary \ref{exlin}. The latter has been proved for all Artinian Gorenstein rings; see \cite[Theorem 2]{AL}.
Note that $R \to R/I$ may not be large if $R$ is not Gorenstein; see \cite[Example 3.12]{GT}.
\end{rem}
\begin{exam}\label{E2} Let $R=k[x,y,z]/(x^2,y^2,z^2)$, where $k$ is a field.
Then $R$ is an Artinian Koszul complete intersection with $\mathfrak{m}^4=0$. Consider the ideal $I=(x+y+z)$ of $R$. Then $R/I\cong k[y,z]/(y^2,z^2,yz)$, which is not a complete intersection. Therefore $R\rightarrow R/I$ is not large by Theorem \ref{CI}.
If $k$ has characteristic different from $2$ then $\mathfrak{m} I=(xy,xz,yz)$ and therefore $R/\mathfrak{m} I=R/\mathfrak{m}^2$. This shows that the converse of Corollary \ref{AK} is not true. Also, one checks that $R\to R/\mathfrak{m} I$ is a Golod homomorphism. Therefore in Theorem \ref{Tor}, Golodness of $R\to R/\mathfrak{m} I$ does not guarantee that $R\to R/I$ is large.
\end{exam}
\begin{prop} Let $(R,\mathfrak{m},k)$ be a local ring and let $I$ be an ideal of $R$. Assume $\operatorname{ld}_R(R/I)=l<\infty$. If the induced maps $\operatorname{H}_1(I) \to \operatorname{H}_1(R)$ and $\operatorname{Tor}^R_l(R/I,k) \to \operatorname{Tor}^R_l(k,k)$ are nonzero and injective, respectively, then the map $R\to R/I$ is large.
\end{prop}
\begin{proof} Let $F$ and $G$ respectively be minimal free resolutions of $R/I$ and $k$ over $R$.
Let $f: F \to G$ be the comparison homomorphism induced by $R/I \to k$. We show by induction that $f_i$ is split injective for all $i\geq l$. The injectivity of $\operatorname{Tor}^R_l(R/I,k) \to \operatorname{Tor}^R_l(k,k)$ settles the case $i=l$.
Let $n> l$ and suppose $f_i$ is split injective for $l\leq i < n$. Consider the commutative diagram
$$
\begin{CD}
\dots @>>> F_n @>\partial^F_n>> F_{n-1} @>>> \dots\\
@. @V f_n VV @V f_{n-1} VV \\
\dots @>>> G_n @>\partial^G_n>> G_{n-1} @>>> \dots
\end{CD}
$$
and let $e\in F_n \setminus \mathfrak{m} F_n$. If $f_n(e) \in \mathfrak{m} G_n$ then $f_{n-1} \partial^F_n(e) \in \mathfrak{m}^2 G_{n-1}$. Since $f_{n-1}$ is split injective, we have $\partial^F_n(e) \in \mathfrak{m}^2 F_{n-1}$. As $\operatorname{ld}_R(R/I)<n$, this is a contradiction, and hence $f_n(e) \in G_n\setminus \mathfrak{m} G_n$.
Therefore the induced map $\operatorname{Tor}^R_i(R/I,k) \to \operatorname{Tor}^R_i(k,k)$ is injective for all $i\geq l$. Now by using Proposition \ref{nonzero}, the induced map $R\to R/I$ is large.
\end{proof}
\begin{defn} A local ring $(R,\mathfrak{m},k)$ is called \emph{Golod} if the Poincar\'{e} series of $k$ over $R$ has the presentation
$$\operatorname{P}^R_k(t)=\frac{(1+t)^{\nu_R(\mathfrak{m})}}{1-\sum_{i\geq 1}\dim_k(\operatorname{H}_i(R))t^{i+1}},$$
where $\nu_R(\mathfrak{m})$ is the minimal number of generators of $\mathfrak{m}$.
\end{defn}
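As an illustration, take $R=k[\![x,y]\!]/(x^2,xy,y^2)$, the ring from the example in Section 2: $\nu_R(\mathfrak{m})=2$, $\dim_k\operatorname{H}_1(R)=3$ and $\dim_k\operatorname{H}_2(R)=2$, so the presentation above reads
$$\operatorname{P}^R_k(t)=\frac{(1+t)^2}{1-3t^2-2t^3}=\frac{(1+t)^2}{(1+t)^2(1-2t)}=\frac{1}{1-2t},$$
which is indeed the Poincar\'{e} series of a local ring with $\mathfrak{m}^2=0$ and embedding dimension $2$. Hence $R$ is Golod; this also follows from \cite[Theorem 4.2.6]{GL}, since $R=Q/\mathfrak{n}^2$ for the regular local ring $Q=k[\![x,y]\!]$.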
The following gives a sufficient condition for large homomorphisms over Golod rings by using the Koszul homologies.
\begin{prop}\label{Massey} Let $(R,\mathfrak{m}, k)$ be a Golod local ring, and let $f:R\to S$ be a surjective local homomorphism.
If the natural map \emph{$\operatorname{H}_i(R)\rightarrow \operatorname{H}_i(S)$} is surjective for all $i\geq 1$, then $f$ is a large homomorphism.
\end{prop}
\begin{proof} By \cite[Theorem 5.2.2]{Av} and \cite[Corollary 4.2.4]{GL}, a local ring is Golod if and only if its Koszul complex admits a trivial Massey operation. Since $R$ is Golod, $\operatorname{K}(R)$ admits a trivial Massey operation. The surjectivity of $\operatorname{H}_i(R)\rightarrow \operatorname{H}_i(S)$ implies that $\operatorname{K}(S)$ admits a trivial Massey operation too, and therefore $S$ is a Golod ring. Let $F$ and $G$ respectively be minimal free resolutions of $k$ over $R$ and $S$ as described in \cite[Theorem 5.2.2]{Av}. Then there exists a natural chain map $f:F\rightarrow G$ which only depends on the choice of $k$-bases of $\operatorname{H}_{i\geq 1}(R)$ and $\operatorname{H}_{i\geq 1}(S)$. Since $\operatorname{H}_i(R)\rightarrow \operatorname{H}_i(S)$ is surjective, $f$ is surjective as well. Therefore the induced map $\operatorname{Tor}_{*}^R(k,k)\rightarrow \operatorname{Tor}^S_*(k,k)$ is surjective, which finishes the proof.
\end{proof}
Recently, Gupta proved that if $R$ is a Golod local ring and a homomorphism $f: R\to S$ is large, then $S$ is Golod; see \cite[Theorem 1.5]{G}. This result provides a useful tool to detect Golod rings by using large homomorphisms.
\begin{exam} Let $R=\displaystyle\frac{k[x,y,z]}{(x^2,xy,xz,y^2,z^2)}$. Then $R$ is an Artinian local ring with the maximal ideal $\mathfrak{m}=(x,y,z)$.
The quotient $R/(x)\cong k[y,z]/(y^2,z^2)$ is a complete intersection of codimension two. Therefore $R\rightarrow R/(x)$ is large by Example \ref{E1}(3), and $R/(x)$ is not Golod. Hence $R$ is not Golod.
\end{exam}
\begin{lem}\label{power} Let $(Q,\mathfrak{n},k)$ be a regular local ring, $R=Q/\mathfrak{n}^p$ with $p\geq 2$, and let $\mathfrak{m}$ be the maximal ideal of $R$. Let $I$ be an ideal of $R$ such that $I\cap \mathfrak{m}^2=\mathfrak{m} I$. Then the natural map \emph{$\mathfrak{m}^{p-1}\operatorname{K}_i(I)\rightarrow \operatorname{H}_i(I)$} is surjective and therefore splits for all $i \geq 1$.
\end{lem}
\begin{proof} Let $x_1,\dots,x_n$ be a minimal generating set of $\mathfrak{n}$, and let $\overline{x}_i$ be the image of $x_i$ in $R$.
We may assume that $I=(\overline{x}_1,\dots,\overline{x}_r)$ for some $1\leq r \leq n$. If $r=1$ then $I=(\overline{x}_1)$ and we have $\operatorname{H}_1(I)=(0:_R\overline{x}_1)\operatorname{K}_1(I)=\mathfrak{m}^{p-1}\operatorname{K}_1(I)$.
Assume $r>1$ and set $J=(\overline{x}_1,\dots,\overline{x}_{r-1})$. There exists an exact sequence $$\dots \rightarrow\operatorname{H}_{i}(J)\overset{\overline{x}_r}\longrightarrow \operatorname{H}_i(J)\longrightarrow \operatorname{H}_i(I)\longrightarrow \operatorname{H}_{i-1}(J)\overset{\overline{x}_{r}}\longrightarrow \dots.$$
By the induction hypothesis $f_i: \mathfrak{m}^{p-1}\operatorname{K}_i(J)\rightarrow \operatorname{H}_{i}(J)$ is surjective, and so $\operatorname{H}_{i}(J)$ is a $k$-vector space for all $i\geq 1$. Thus multiplication by $\overline{x}_r$ is the zero map when $i\geq 1$. Therefore for $i\geq 2$ there is a commutative diagram
$$
\begin{CD}
0 @>>> \mathfrak{m}^{p-1}\operatorname{K}_i(J) @>>> \mathfrak{m}^{p-1}\operatorname{K}_i(I) @>>> \mathfrak{m}^{p-1}\operatorname{K}_{i-1}(J) @>>> 0\\
@. @Vf_i VV @Vg_iVV @Vf_{i-1} VV \\
0 @>>> \operatorname{H}_{i}(J) @>>> \operatorname{H}_i(I) @>>> \operatorname{H}_{i-1}(J) @>>> 0 .
\end{CD}
$$
By induction $f_{i-1}$ and $f_i$ are surjective and therefore $g_i$ is surjective for $i\geq 2$. When $i=1$ the last diagram turns to
$$
\begin{CD}
0 @>>> \mathfrak{m}^{p-1}\operatorname{K}_1(J) @>>> \mathfrak{m}^{p-1}\operatorname{K}_1(I) @>>> \mathfrak{m}^{p-1} @>>> 0\\
@. @Vf_1 VV @Vg_1VV @Vf_{0} VV \\
0 @>>> \operatorname{H}_{1}(J) @>>> \operatorname{H}_1(I) @>>> (0:_{R/J}\overline{x}_r) @>>> 0.
\end{CD}
$$
Note that $R/J\cong Q'/\mathfrak{n}'^p$ where $Q'=Q/(x_1,\dots,x_{r-1})$ is a regular local ring whose maximal ideal is $\mathfrak{n}'=(\overline{x}_r,\dots,\overline{x}_n)$. One observes that $f_0$ is a natural map of $k$-vector spaces such that $f_0(\overline{x}_r^{\alpha_r}\cdots\overline{x}_n^{\alpha_n})=\overline{x}_r^{\alpha_r}\cdots \overline{x}_n^{\alpha_n}$ where $\alpha_r+\dots+\alpha_n=p-1$. Therefore $f_0$ is surjective and since $f_1$ is surjective by induction, $g_1$ is surjective as well.
\end{proof}
\begin{cor} Let $(Q,\mathfrak{n},k)$ be a regular local ring. Let $R=Q/\mathfrak{n}^p$ where $p\geq 2$, and let $\mathfrak{m}$ be the maximal ideal of $R$. Then for any ideal $I$ of $R$ such that $I\cap \mathfrak{m}^2=\mathfrak{m} I$ one has $R\rightarrow R/I$ is a large homomorphism.
\end{cor}
\begin{proof} It is well known that $R$ is a Golod ring; see \cite[Theorem 4.2.6]{GL}. Set $S=R/I$ and let $\overline{\mathfrak{m}}$ be the maximal ideal of $S$. By Lemma \ref{power}, the maps $ \mathfrak{m}^{p-1}\operatorname{K}_i(R)\rightarrow \operatorname{H}_i(R)$ and $\overline{\mathfrak{m}}^{p-1}\operatorname{K}_i(S)\rightarrow \operatorname{H}_i(S)$ are surjective for all $i\geq 1$. Since $\mathfrak{m}^{p-1}\operatorname{K}_i(R)\rightarrow \overline{\mathfrak{m}}^{p-1}\operatorname{K}_i(S)$ is also surjective, the commutative diagram
$$
\begin{CD}
\mathfrak{m}^{p-1}\operatorname{K}_i(R) @>>> \operatorname{H}_i(R) \\
@ VVV @ VVV \\
\overline{\mathfrak{m}}^{p-1}\operatorname{K}_i(S) @>>> \operatorname{H}_i(S)
\end{CD}
$$
shows that the induced map $\operatorname{H}_i(R)\rightarrow \operatorname{H}_i(S)$ is surjective for all $i\geq 1$. Now the result follows from Proposition \ref{Massey}.
\end{proof}
\begin{ac}
The authors thank Rasoul Ahangari Maleki, David Jorgensen, and Liana \c{S}ega for their valuable suggestions and comments on the manuscript. The authors also thank the referee for providing very useful comments and suggestions that improved this article.
\end{ac}
\newtheorem{thm}{Theorem}
\newtheorem{lem}[thm]{Lemma}
\renewcommand{\P}{\mathbb{P}}
\renewcommand{\L}{\mathfrak{L}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\mathfrak{F}}{\mathfrak{F}}
\newcommand{\mathfrak{B}}{\mathfrak{B}}
\newcommand{\mathfrak{H}}{\mathfrak{H}}
\newcommand{\makeatletter @\makeatother}{\makeatletter @\makeatother}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\mathbf{x}}{\mathbf{x}}
\newcommand{\mathbf{v}}{\mathbf{v}}
\newcommand{{\boldsymbol\theta}}{{\boldsymbol\theta}}
\renewcommand{\u}{\mathbf{u}}
\renewcommand{\r}{\mathbf{r}}
\newcommand{\mathbf{s}}{\mathbf{s}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\textrm{d}}{\textrm{d}}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\argmax}{arg\,max}
\makeatletter
\providecommand*{\diff}%
{\@ifnextchar^{\DIfF}{\DIfF^{}}}
\def\DIfF^#1{%
\mathop{\mathrm{\mathstrut d}}%
\nolimits^{#1}\gobblespace}
\def\gobblespace{%
\futurelet\diffarg\opspace}
\def\opspace{%
\let\DiffSpace\!%
\ifx\diffarg(%
\let\DiffSpace\relax
\else
\ifx\diffarg[%
\let\DiffSpace\relax
\else
\ifx\diffarg\{%
\let\DiffSpace\relax
\fi\fi\fi\DiffSpace}
\let\oldnl\nl
\newcommand{\nonl}{\renewcommand{\nl}{\let\nl\oldnl}}
\newcommand{\ensuremath{^\circ}}{\ensuremath{^\circ}}
\newcommand{\supe}[1]{^{\smash{\mathrlap{#1}}}}
\newcommand{\indi}[1]{_{\smash{\mathrlap{#1}}}}
\newcommand{\suin}[2]{\supe{#1}\indi{#2}}
\journal{Computational and Applied Mathematics}
\begin{document}
\begin{frontmatter}
\title{Algorithm for the reconstruction of dynamic objects in CT-scanning using optical flow \tnoteref{tnote1}}
\author[UA]{Koen Ruymbeek\corref{cor1}\fnref{label1}}
\ead{[email protected]}
\cortext[cor1]{Corresponding author}
\fntext[label1]{Present adress: Department of Computer Science, KU Leuven, Celestijnenlaan 200A, 3001 Leuven, Belgium}
\author[UA]{Wim Vanroose}
\tnotetext[tnote1]{This research did not receive any specific grant from funding agencies in the public, commercial, or
not-for-profit sectors.}
\address[UA]{Department of Mathematics and Computer Science, University of Antwerp, Middelheimlaan 1, 2020 Antwerp, Belgium}
\begin{abstract}
Computed Tomography is a powerful imaging technique that allows non-destructive visualization of the interior of physical objects in different scientific areas. In traditional reconstruction techniques the object of interest is mostly considered to be static, which gives artefacts if the object is moving during the data acquisition. In this paper we present a method that, given only the results of multiple successive scans, can estimate the motion and correct the CT-images for it, using optical flow under the assumption that the motion field is smooth over the complete domain. The proposed method is validated on simulated scan data. The main contribution is to show that the optical flow technique from imaging can be used to correct CT-scan images for motion.
\end{abstract}
\begin{keyword}
Computed tomography \sep dynamic inverse problems \sep optical flow
\end{keyword}
\end{frontmatter}
\section{Introduction}
\paragraph{CT} Computed Tomography (CT) is a powerful imaging technique that allows
non-destructive visualization of the interior of physical objects in
medical applications, bio-mechanical research, material science,
geology, etc. In current applications, a certain imaging resource
and detector, e.g. an X-ray source and detector, are used to acquire
two-dimensional projection images of an object, each measured from different
directions. From these projections, a three-dimensional virtual
reconstruction can then be computed. We refer to
\cite{webb1990watching} for a review on the origin of computed
tomography.
\paragraph{Reconstruction techniques} In practice, the most commonly used analytical method for CT
reconstruction is filtered back projection \cite{pan2009commercial}. A major drawback of this method is its inflexibility to different
experimental set-ups and its inability to include reconstruction
constraints, which can be used to exploit possible prior information
to improve the reconstruction of the object. Iterative Algebraic Reconstruction Techniques
(ARTs) are a powerful alternative to the aforementioned
analytical method by describing the reconstruction
problem as a system of linear equations. Algebraic reconstruction
methods include SIRT \cite{gregor2008computational},
SART \cite{andersen1984simultaneous} and DART \cite{batenburg2011dart}
and the general class of Krylov solvers such as CG, BiCGStab, GMRES,
CGLS and LSQR (of which the latter two are applicable to non-square
systems), an overview of which can be found in
\cite{simoncini2007recent}.
\paragraph{Time-dependent CT}
In both the analytical and algebraic methods, the object of interest
is traditionally considered to be static. However, when the object is moving or changing form during the data
acquisition process, the flow (direction of movement) also needs to be
reconstructed. This requires the calculation of a full 3D
reconstruction of both the object and the flow field from the
projection measurements over a period of time, yielding a 4D
tomographic reconstruction problem, i.e. 3D in space plus 1D for time.
In recent years, significant progress has been made in accounting for
motion during the acquisition process. A first approach is to model
the motion in the reconstruction model \cite{mooser2013estimation, van2012combined, li2005motion, van2014region}. A
second method sorts the data in subsets such that each subset
contains data acquired from a static object. This technique is used, for example, when there
is periodic motion such as breathing \cite{lu2006comparison} or
heartbeats. Within each subset, where the object does not change, a
reconstruction is performed. Finally, if the motion is known upfront, it is possible to compensate for the motion of the object in the CT-scanning \cite{hahn2014efficient} by using the method of the approximate inverse \cite{sota_4}.
\paragraph{Optical flow}
Throughout this paper, we extract the motion from the scan data using optical flow. This is a widely used technique in imaging (see \cite{bardow2016simultaneous, rol_shut}) and it exploits the differences between the images to identify patterns of motion. Let $f(x,y,t)$ and $f(x,y,t+\Delta t)$ be two pictures taken with a small time difference $\Delta t$ (we consider 2D images for convenience). Assuming that for every $(x,y)$ it holds that $f(x,y,t) = f(x+\Delta x, y+ \Delta y, t+ \Delta t)$ for some $\Delta x$ and $\Delta y$ (this is \emph{the brightness constancy constraint}), we can deduce the linear equation $\frac{\partial f}{\partial x} v_x+\frac{\partial f}{\partial y} v_y = -\frac{\partial f}{\partial t}$ using Taylor series. The unknowns $v_x$ and $v_y$ are the $x$ and $y$ components of the optical flow, or the velocity components of the object. There are many ways to use optical flow to estimate the motion from a sequence of images. The most widely used methods are the differential methods, which can be classified into local methods like the Lucas-Kanade technique \cite{LK}, global methods like the Horn-Schunck approach \cite{HS}, and some extensions \cite{HS_TV}. Local methods calculate the flow per point whereas global methods solve one equation for the entire domain. Recently, methods were developed that combine a local and a global method \cite{Bruhn2005}, and Newton-Krylov methods with regularization have also been applied to this problem \cite{mang2015inexact}.
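As a minimal illustration of the local approach, the following Python sketch estimates a single velocity vector for an image patch by solving the optical flow equation in the least-squares sense; the function name, the NumPy-based finite differences, and the forward difference in time are illustrative choices of ours, not part of any of the cited methods.
\begin{verbatim}
import numpy as np

def flow_least_squares(f0, f1, dt=1.0):
    """Least-squares estimate of one constant velocity (v_x, v_y)
    for a patch, from  f_x v_x + f_y v_y = -f_t  at every pixel."""
    fx = np.gradient(f0, axis=1)   # df/dx (x = column index)
    fy = np.gradient(f0, axis=0)   # df/dy (y = row index)
    ft = (f1 - f0) / dt            # forward difference in time
    A = np.column_stack([fx.ravel(), fy.ravel()])
    b = -ft.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                       # array([v_x, v_y])
\end{verbatim}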
\paragraph{Outline} The paper is structured as follows. Section \ref{sec:notation} introduces the general notation and basic concepts used throughout this article. In section~\ref{sec:mot_est} we look at how we can retrieve the motion given just the scan results, from a theoretical as well as from a practical point of view. In section~\ref{sec:cor_images} we present our method to correct CT-scan images for the motion. Numerical results on both the motion estimation and the correction of the images are shown in section~\ref{sec:numer_results}. Finally, we draw some conclusions and review some research possibilities in section \ref{sec:conclusion}. The main contribution is that motion-detection techniques from imaging can also be used to detect motion in CT-scan images. The numerical results show that this approach also gives the desired results in practice.
We denote discretisations of analytic variables by bold lowercase letters and matrices by bold uppercase letters.
\section{Notation and key concepts} \label{sec:notation}
We describe everything for 2D CT-scanning, but all definitions can easily be expanded to 3D.
\subsection{Modelling scan data}
Let $f$ be a time-dependent object, i.e. a function $f: \mathbb{R}^2 \times [0,T_f] \rightarrow \mathbb{R}$ where $[0, T_f]$ is the time interval. For notational purposes, let $f_t: \mathbb{R}^2 \rightarrow \mathbb{R}$ be the object $f$ at time $t$. We assume, for all $t$, that $f_t$ is an element of $L^2(\mathbb{R}^2)$ and that it is zero outside a square domain $\Omega \subset \mathbb{R}^2$ around the center.
We first define the used scan model for a stationary object (independent of $t$). In the next paragraph we extend this to time-dependent objects.
The scan data per X-ray are modelled by the so-called \emph{Radon transformation}. The Radon transform for a specific angle $\alpha \in [0, 2 \pi]$ and shift $u \in \mathbb{R}$ is defined as the integral
\begin{equation}
\mathfrak{R} f(\alpha,u) = \int_{L(\alpha,u)} f(x,y)\diff x \diff y \label{eq:radon}
\end{equation}
where $L(\alpha, u) = \{ (x,y) \in \mathbb{R}^2 | x \cos(\alpha)+y
\sin(\alpha) = u\}$. This integration area is the line
perpendicular to the direction $(\cos(\alpha), \sin(\alpha))$ at a
shift $u$ of the origin. An illustration of this definition can be
found in figure~\ref{intro_fig_CT_scan}. A full scan consists of projection data over angles in an interval of size $\pi$ and shifts $u$ such that the X-rays cover the whole domain $\Omega$. The set of all projection data is called a \emph{sinogram} and is thus defined as
\begin{equation}
\mathfrak{R} f: [0, \pi[ \times \mathbb{R} \rightarrow \mathbb{R}: (\alpha,u) \mapsto \mathfrak{R} f(\alpha,u) \label{eq:sinogram}
\end{equation}
for a stationary object. An example is given in figure \ref{intro_vb}.
\begin{figure}[H]
\begin{center}
\input{geogebra/ctr_3_intro}
\caption{Schematic representation of the 2D Radon transformation. The grey box is the domain $\Omega$ of the object $f$} \label{intro_fig_CT_scan}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[scale=0.5, clip]{Afbeeldingen/ua_logo-cropped.pdf}
\caption{}
\end{subfigure} \hspace{0.25cm}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[scale=0.47,clip]{Afbeeldingen/sino_UA-cropped.pdf}
\caption{}
\end{subfigure}
\caption{(a) The image $f$ on a $256 \times 256$ pixel grid at time point $0$, where black represents the value 0 and white represents the value 1. (b) Sinogram of the example on the left. Given this sinogram, we want to retrieve the object on the left. The x-axis and y-axis represent the different angles (between 0 and $\pi$) and the different detectors, respectively. The color shows the value of the detected X-ray.} \label{intro_vb}
\end{figure}
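A discrete sinogram such as the one in figure \ref{intro_vb} can be simulated in a few lines. The following Python sketch uses the \texttt{radon} routine of scikit-image purely for illustration (any discrete Radon transform would do); the phantom and the grid size are arbitrary choices of ours.
\begin{verbatim}
import numpy as np
from skimage.transform import radon

# toy 256 x 256 object: a bright rectangle on a dark background
f = np.zeros((256, 256))
f[96:160, 80:144] = 1.0

# 180 angles uniformly distributed over [0, pi), as in the text
angles = np.linspace(0.0, 180.0, num=180, endpoint=False)

# rows of `sino` correspond to shifts u, columns to angles alpha
sino = radon(f, theta=angles, circle=False)
\end{verbatim}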
If the object moves or changes form, we need to generalise the aforementioned definitions. We assume that all projection data for an angle $\alpha$ (so for all shifts $u$) is recorded instantaneously, but that each angle has a different recording time $t$. Each complete scan has a total scanning time $\Delta t$, and we assume we scan $m$ times consecutively, so the total scan time is $m \Delta t$. The relation between the angle and the time at which we acquire data for this angle is given by
\begin{align*}
T: [0, m \pi[ \rightarrow [0, m \Delta t[: \alpha \mapsto \dfrac{\alpha}{\pi} \Delta t.
\end{align*}
This means the Radon transform \eqref{eq:radon} is generalised as
\begin{equation}
\mathfrak{R}_{T(\alpha)} f(\alpha,u) = \int_{L(\alpha,u)} f(x,y; T(\alpha))\diff x \diff y.
\end{equation}
For notational purposes, we define $$t_i := T(i \pi), \quad i = 0, \ldots, m,$$
as the end of the $i$-th scan and the start of the $(i+1)$-th scan. We construct sinograms continuously, so we define the sinogram at time $t \in [ \dfrac{\Delta t}{2}, m \Delta t - \dfrac{\Delta t}{2}[$ as $$\mathfrak{R}_t^{\delta t} f: [ \dfrac{t \pi}{\Delta t} -\pi/2, \dfrac{t \pi}{\Delta t} + \pi/2[ \times \mathbb{R} \rightarrow \mathbb{R}: (\alpha, u) \mapsto \mathfrak{R}_{T(\alpha)+ \delta t} f( \alpha, u) $$
where the constant $\delta t$ is the time shift. Usually $\delta t$ is equal to zero, in which case we simply write $\mathfrak{R}_t f$ instead of $\mathfrak{R}_t^0 f$.
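The following Python sketch mimics this acquisition model for a rigidly translating object: each angle $\alpha$ is measured at its own time $T(\alpha)$, so consecutive columns of the sinogram see slightly different objects. The constant-velocity motion, the interpolation order and the helper names are illustrative assumptions of ours.
\begin{verbatim}
import numpy as np
from scipy.ndimage import shift as translate
from skimage.transform import radon

def dynamic_sinogram(f0, v, n_angles=180, dt=1.0):
    """Sinogram of an object translating with constant velocity v;
    angle alpha is acquired at time T(alpha) = alpha / pi * dt."""
    angles = np.linspace(0.0, 180.0, num=n_angles, endpoint=False)
    cols = []
    for i, a in enumerate(angles):
        t = (i / n_angles) * dt                    # acquisition time
        f_t = translate(f0, (v[1] * t, v[0] * t), order=1)
        cols.append(radon(f_t, theta=[a], circle=False)[:, 0])
    return np.stack(cols, axis=1)                  # (n_shifts, n_angles)
\end{verbatim}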
For the reconstruction of the data, we make use of \emph{the filtered backprojection theorem}; see for example \cite{math_ct}. The \emph{backprojection} of a sinogram $\mathfrak{R} f$ of a time-independent function $f \in L^2(\mathbb{R}^2)$ is defined by
\begin{align*}
f^\text{rec}(x,y) = \mathfrak{B} \mathfrak{R} f(x,y) & := \int^{\pi}_{0} \int^{\infty}_{-\infty} |\rho| \mathfrak{F}_1\big( \mathfrak{R} f(\alpha,u) \big) \big( \rho \big) \exp \left( 2\pi i \rho \left( x \cos(\alpha) + y \sin(\alpha) \right)\right) \diff \rho \diff \alpha \\
& = \int^{\pi}_{0} \mathfrak{F}_1^{-1} \Big( |\rho| \mathfrak{F}_1\big( \mathfrak{R} f(\alpha, u)\big) \big( \rho \big) \Big)\Big(x \cos(\alpha) + y \sin(\alpha) \Big) \diff \alpha
\end{align*}
where $\mathfrak{F}_1$ is the 1-dimensional Fourier transform.
Note that the domain of integration for $\alpha$ just needs to be an interval of length $\pi$ and that the interval itself depends on the angles for which we have projection data.
For a time-dependent object $f$, we define the reconstructed object $f^{\text{rec}}_t$ at time $t$ as
\begin{align}
f_t^{\text{rec}}(x,y) & = \mathfrak{B} \mathfrak{R}_t f(x,y) = \int_{\frac{t \pi}{\Delta t} - \frac{\pi}{2}}^{\frac{t \pi}{\Delta t} + \frac{\pi}{2}} \mathfrak{F}_1^{-1} \Big( |\rho| \mathfrak{F}_1\big( \mathfrak{R}_t f(\alpha, u)\big) \big( \rho \big) \Big)\Big(x \cos(\alpha) + y \sin(\alpha) \Big) \diff \alpha. \label{eqn:f_rec}
\end{align}
This is in fact the filtered backprojection theorem applied to the time-dependent sinogram $\mathfrak{R}_t f$. Note that the operator $\mathfrak{R}_t$ has a non-trivial null space $\aleph \mathfrak{R}_t$, which means that $\mathfrak{B} \mathfrak{R}_t (.)$ has a non-trivial null space too. This means that, in theory, it is possible that a reconstruction is not unique. In practice, the motion is small enough to assume that this does not cause any problems. An illustration of these definitions can be found in figure \ref{fig:sinogram}.
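For completeness, a bare-bones filtered backprojection in Python (ramp filter in the discrete Fourier domain, linear interpolation in the backprojection) might look as follows. The normalisation constant and the boundary handling are deliberately simple choices of ours, not the scheme used in our experiments.
\begin{verbatim}
import numpy as np

def fbp(sino, angles_deg):
    """Filtered backprojection of a parallel-beam sinogram
    (rows: shifts u, columns: angles alpha, in degrees)."""
    n_u, n_ang = sino.shape
    # ramp filter |rho|, applied column-wise in the Fourier domain
    ramp = np.abs(np.fft.fftfreq(n_u))[:, None]
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=0) * ramp,
                                   axis=0))
    # backproject along x cos(alpha) + y sin(alpha) = u
    c = (n_u - 1) / 2.0
    xs = np.arange(n_u) - c
    X, Y = np.meshgrid(xs, xs)
    rec = np.zeros((n_u, n_u))
    for j, a in enumerate(np.deg2rad(angles_deg)):
        u = X * np.cos(a) + Y * np.sin(a) + c
        u0 = np.clip(np.floor(u).astype(int), 0, n_u - 2)
        w = np.clip(u - u0, 0.0, 1.0)  # linear interpolation weight
        rec += (1 - w) * filtered[u0, j] + w * filtered[u0 + 1, j]
    return rec * np.pi / n_ang
\end{verbatim}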
\begin{figure}[H]
\begin{tikzpicture}[scale = 0.55]
\draw (0,0) -- (25,0);
\draw (4*5,5pt) -- (4*5,-5pt) node[anchor = north east] {$t\indi{m-1}$};
\draw (5*5,5pt) -- (5*5,-5pt) node[anchor = north] { $t\indi{m}$};
\draw (0,5pt) -- (0,-5pt) node[anchor = north] { $0$};
\foreach \mathbf{x} in {1,2,3}
\draw (5*\mathbf{x},5pt ) -- (5*\mathbf{x} ,-5pt ) node[anchor = north] {$t\indi{\mathbf{x}}$};
\node at (3*5+2.5,-0.7cm) {$\hdots$};
\node at (3*5,0.7cm) {$\hdots$};
\draw [thick,decoration={brace,mirror, raise=0.6cm},decorate] (5*4,0) -- (5*5-0.1,0)
node [midway,anchor = north, yshift=-0.7cm] {$f^{\text{rec}}_{t_{m-1} + \Delta t/2}$};
\draw [thick,decoration={brace,mirror, raise=0.6cm},decorate] (0,0) -- (5-0.1,0)
node [midway,anchor = north, yshift=-0.7cm] {$f^{\text{rec}}_{\Delta t/2}$};
\foreach \mathbf{x} in {1,2}
\draw [thick,decoration={brace,mirror,raise=0.6cm},decorate] (\mathbf{x}*5,0) -- (\mathbf{x}*5+5-0.1,0)
node [pos=0.5,anchor=north,yshift=-0.7cm] {$f^{\text{rec}}_{t_{\mathbf{x}}+ \Delta t/2}$};
\draw [thick,decoration={brace,raise=0.3cm},decorate] (5*4-2.5,0) -- (5*4+5-2.6,0)
node [pos=0.5,yshift=0.7cm] {$f^{\text{rec}}_{t_{m-1}}$};
\foreach \mathbf{x} in {1,2}
\draw [thick,decoration={brace,raise=0.3cm},decorate] (\mathbf{x}*5 - 2.5,0) -- (\mathbf{x}*5+5-2.6,0)
node [pos=0.5,yshift=0.7cm] {$f^{\text{rec}}_{t_{\mathbf{x}}}$};
\end{tikzpicture}
\caption{Schematic representation of the notation for some time points. The range of the brace represents the time period the data is measured for the reconstruction of $f^{\text{rec}}_t$. }\label{fig:sinogram}
\end{figure}
\subsection{Discretization of Radon transformations} \label{sec:discr_radon}
A discrete sinogram for stationary objects \eqref{eq:sinogram} can be constructed by dividing the volume of the unknown object into pixels. With each pixel center $(x_i,y_j)$ we associate an unknown value $f_{ij}$, and we assume that every pixel is a square. In each scan we take projection data for 180 angles uniformly distributed over $[0,\pi[$. The line integral of a single Radon transform (see \eqref{eq:radon}) can now be approximated by a weighted sum over these pixels. These weights are the lengths of the line segments of the projection direction through the pixels (as the sum needs to approximate an integral) and are entered in a large sparse matrix to form a linear algebra problem $\mathbf{A} \mathbf{x} = \mathbf{b}$. In this matrix $\mathbf{A}$ each row contains the weights of one X-ray, and the vectors $\mathbf{x}$ and $\mathbf{b}$ are the unknown pixel values and the discrete measurements, respectively. The solution of this linear system is in fact the discretized version of the reconstructed image $f^\text{rec}$. In this context, the matrix $\mathbf{A}$ is called the \emph{projection operator}. There are multiple ways to discretize Radon transformations and to construct this matrix $\mathbf{A}$; in this paper we use the projection scheme described by Joseph in \cite{joseph1982improved}. In practice we generate this matrix using the Astra Toolbox, see \cite{palenstijn2011performance}. In section \ref{sec:cor_images} we explain how we can adapt this matrix $\mathbf{A}$ for the motion.
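To make the construction of $\mathbf{A}$ explicit, the following Python sketch assembles a deliberately crude sparse projection operator by sampling points along each ray and accumulating the sampling step into the pixel hit by each sample. It is a simplified stand-in for Joseph's scheme and for what the Astra Toolbox does far more accurately and efficiently.
\begin{verbatim}
import numpy as np
from scipy.sparse import lil_matrix

def projection_matrix(n, angles, oversample=4):
    """Crude A on an n x n grid: one row per (angle, shift) pair;
    each line integral is approximated by point sampling along
    the ray x cos(a) + y sin(a) = u."""
    A = lil_matrix((len(angles) * n, n * n))
    c = (n - 1) / 2.0
    s = np.linspace(-n, n, oversample * n)       # parameter along ray
    ds = s[1] - s[0]
    for ia, a in enumerate(angles):
        nrm = np.array([np.cos(a), np.sin(a)])   # normal of the line
        tng = np.array([-np.sin(a), np.cos(a)])  # direction of line
        for iu in range(n):
            u = iu - c
            pts = u * nrm + s[:, None] * tng     # sample points
            ix = np.rint(pts[:, 0] + c).astype(int)
            iy = np.rint(pts[:, 1] + c).astype(int)
            ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
            for px, py in zip(ix[ok], iy[ok]):
                A[ia * n + iu, py * n + px] += ds
    return A.tocsr()
\end{verbatim}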
\subsection{Evolution} \label{sec:evolution}
We assume that each pixel moves in a 2D flow field
$v(x,y)$ that is yet unknown and that needs to be extracted from
the measurements.
In this paper we assume that the evolution of the object $f$ itself can be modelled by the \emph{optical flow PDE}
\begin{equation}\label{eq:evolution}
\dfrac{\partial f(x,y;t)}{\partial t} + v(x,y) \cdot \nabla f(x,y;t) = 0 \quad \text{for all} \quad x,y \in \Omega,
\end{equation}
where the flow field $v(x,y)$ represents the motion of $f$ on a time interval of size $\Delta t$. The operator $\nabla$ represents the gradient. If we model the flow of $f$ by this equation, we accept that the \emph{brightness constancy constraint}
\begin{equation} \forall x,y,t \quad \exists v_x, v_y: f(x,y,t) = f(x + \Delta t v_x, y+\Delta t v_y, t + \Delta t) \label{eq:const_constr} \end{equation} holds for a given time period $\Delta t$. In our case this $\Delta t$ is the scan time of one complete scan.
For a specific $x$ and $y$, \eqref{eq:evolution} now follows from the Taylor expansion
$$f(x+ \Delta t v_x, y + \Delta t v_y, t + \Delta t) \approx f(x,y,t) + \Delta t \dfrac{\partial f(x,y;t)}{\partial t} + \Delta t v_x \dfrac{\partial f(x,y;t)}{\partial x} + \Delta t v_y \dfrac{\partial f(x,y;t)}{\partial y}$$
and \eqref{eq:const_constr}, if we define $v(x,y) = ( v_x, v_y)$.
This flow field is unknown and needs to be extracted from the measurements. We assume that this flow field is, for short time horizons, independent of the time $t$.
This assumption can be made because a CT-scan is acquired quickly, while changes in the flow happen smoothly at a slower time scale (so changes are small over short time periods).
Here we describe the method to estimate the motion between objects given at different time points; in section~\ref{sec:mot_est} we describe how to use this method when only scan results are available.
To estimate the flow field $v(x,y)$, we use the Horn-Schunck method \cite{HS}.
This method is a global method (one equation for the entire domain) that prefers flow fields that are smooth.
Here we minimize the following expression with respect to the flow field $v(x,y)$,
\begin{equation} E(v(x,y)) = \int \int_{\Omega} \left( \Delta t \dfrac{\partial f(x,y;t)}{\partial t} + v(x,y) \cdot \nabla f(x,y;t) \right)^2 + \lambda \left \| \nabla v(x,y) \right \|_2^2 \diff x \diff y \label{optflow_HS} \end{equation}
with $\lambda > 0$ a regularisation parameter.
The first part of the integrand is the optical flow expression (see~\eqref{eq:evolution}) and the second part is a regularisation term. This means we are searching for a solution $v(x,y)$ that satisfies the optical flow equation and is itself smooth. The extent to which the solution needs to be smooth is controlled by the parameter $\lambda$.
Denote by $v_x$ and $v_y$ the first and second components of $v(x,y)$, respectively.
If we minimize \eqref{optflow_HS} analytically by the Euler-Lagrange equations, we get
\begin{equation}
\left\{ \begin{aligned}
\frac{\partial f}{\partial x}\left(\frac{\partial f}{\partial x} v_x +\frac{\partial f}{\partial y} v_y+\Delta t\frac{\partial f}{\partial t}\right) - \lambda \left( \frac{\partial^2 v_x}{\partial x^2} + \frac{\partial^2 v_x}{\partial y^2} \right)= 0 \\
\frac{\partial f}{\partial y}\left(\frac{\partial f}{\partial x} v_x +\frac{\partial f}{\partial y} v_y+ \Delta t \frac{\partial f}{\partial t}\right) - \lambda \left(\frac{\partial^2 v_y}{\partial x^2} + \frac{\partial^2 v_y}{\partial y^2}\right) = 0. \\
\end{aligned}\right. \label{eqn:solution_HS}
\end{equation}
A proof for \eqref{eqn:solution_HS} can be found in \cite{HS_conv}.
We discretize \eqref{eqn:solution_HS} in order to solve it numerically. We use second-order central formulas for all derivatives and impose Neumann boundary conditions on our domain. Since the central difference in time involves $f$ at $t-\Delta t$ and $t+\Delta t$, we need three images to estimate the applied motion.
We denote by $\mathbf{D}_i$ and $\mathbf{D}^2_i$ the matrices we use to estimate the first and the second derivative in the $i$th direction, respectively. Further we define $\mathbf{z}_1 \odot \mathbf{z}_2$ as the pointwise multiplication and $\mathbf{z}^2_\odot$ as the pointwise multiplication of $\mathbf{z}$ with itself. Similar to the discretisation of an object, we associate with each pixel centre the horizontal and vertical components $\mathbf{v}_x(x_i,y_j)$ and $\mathbf{v}_y(x_i,y_j)$ of the discretised flow field $\mathbf{v}(x_i, y_j)$. If we denote by $\mathbf{D}_t \mathbf{f}$ the approximated time derivative with time step $\Delta t$, one can verify that the discretisation of \eqref{eqn:solution_HS} leads to a system
\begin{equation} \mathbf{A}_{HS} \begin{bmatrix}
\text{vec}( \mathbf{v}_x)\\
\text{vec}( \mathbf{v}_y)
\end{bmatrix} = \mathbf{b}_{HS} \label{eqn::HS} \end{equation}
with
$$\mathbf{A}_{HS} = \begin{bmatrix}
\text{diag}\left(\left(\mathbf{D}_x \mathbf{f}_t \right)_\odot^2\right) -\lambda \left( \mathbf{D}_x^2 + \mathbf{D}_y^2 \right) & \text{diag}\left(\mathbf{D}_x \mathbf{f}_t \odot \mathbf{D}_y \mathbf{f}_t\right) \\
\text{diag}\left(\mathbf{D}_x \mathbf{f}_t \odot \mathbf{D}_y \mathbf{f}_t \right) & \text{diag}\left(\left(\mathbf{D}_y \mathbf{f}_t \right)_\odot^2\right) -\lambda \left( \mathbf{D}_x^2 + \mathbf{D}_y^2 \right)
\end{bmatrix} \in \mathbb{R}^{2 n^2 \times 2 n^2} \quad \text{and}$$
$$\mathbf{b}_{HS} = \begin{bmatrix}
-\Delta t \mathbf{D}_x \mathbf{f}_t \odot \mathbf{D}_t \mathbf{f}\\
-\Delta t \mathbf{D}_y \mathbf{f}_t \odot \mathbf{D}_t \mathbf{f}
\end{bmatrix} \in \mathbb{R}^{2 n^2 \times 1}.$$
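A minimal sketch of assembling this system with sparse matrices is given below, assuming unit grid spacing, row-major vectorisation and first-order one-sided stencils at the boundary (an illustration, not our exact implementation):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def hs_system(f_t, dt_f, dt, lam):
    """Build A_HS and b_HS from an image f_t (n x n) and a time
    derivative estimate dt_f (n x n)."""
    n = f_t.shape[0]
    # central first derivative, one-sided at the edges
    D1 = sp.diags([-0.5, 0.5], [-1, 1], shape=(n, n), format='lil')
    D1[0, 0:2] = [-1.0, 1.0]
    D1[n-1, n-2:n] = [-1.0, 1.0]
    # second derivative with Neumann (zero-flux) boundary conditions
    D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='lil')
    D2[0, 0] = -1.0
    D2[n-1, n-1] = -1.0
    I = sp.identity(n)
    Dx, Dy = sp.kron(I, D1.tocsr()), sp.kron(D1.tocsr(), I)  # row-major vec
    Lap = sp.kron(I, D2.tocsr()) + sp.kron(D2.tocsr(), I)    # D_x^2 + D_y^2
    fx, fy = Dx @ f_t.ravel(), Dy @ f_t.ravel()
    A = sp.bmat([[sp.diags(fx * fx) - lam * Lap, sp.diags(fx * fy)],
                 [sp.diags(fx * fy), sp.diags(fy * fy) - lam * Lap]])
    b = -dt * np.concatenate([fx * dt_f.ravel(), fy * dt_f.ravel()])
    return A.tocsr(), b
\end{verbatim}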
We denote by $\mathbf{v}$ the exact motion and by $\hat{\mathbf{v}}$ the estimated motion.
In this work, we set the parameter $\lambda$ equal to 1; in practice this choice gives good reconstructions. In future work it could be set automatically.
\section{Motion estimation from scan results} \label{sec:mot_est}
\subsection{Theoretical derivation}
In section \ref{sec:evolution} we have seen how we can estimate the motion from three successive images. In practice, we do not have images but only scan results, so we need to estimate the motion starting from the reconstructed images $f_t^\text{rec}$ (see \eqref{eqn:f_rec}). Before we can prove the optical flow equation for the reconstructions (see theorem \ref{the:rec_optic_flow}), we first need some lemmas.
\begin{lem} \label{lem:dRdrho}
For a continuously differentiable $f \in L^2(\mathbb{R}^2)$, it holds for all angles $\alpha$ and all $u$ that
$$ \dfrac{\partial \mathfrak{R} f}{\partial u}(\alpha, u) = \cos(\alpha) \mathfrak{R} \frac{\partial f}{\partial x}(\alpha, u) + \sin(\alpha) \mathfrak{R} \frac{\partial f}{\partial y}(\alpha, u).$$
\begin{proof}
We make use of the following transformation and its inverse in the next calculations.
\begin{equation} \begin{bmatrix}
u \\
w
\end{bmatrix} = \begin{bmatrix}
\cos(\alpha) & \sin(\alpha) \\
-\sin(\alpha) & \cos(\alpha)
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix} \quad \text{ and } \quad \begin{bmatrix}
x \\
y
\end{bmatrix} = \begin{bmatrix}
\cos(\alpha) & -\sin(\alpha) \\
\sin(\alpha) & \cos(\alpha)
\end{bmatrix}
\begin{bmatrix}
u \\
w
\end{bmatrix} \label{anrec_eqn7}. \end{equation}
We obtain that
\begin{align*}
\dfrac{\partial \mathfrak{R} f}{\partial u}(\alpha, u) &= \dfrac{\partial }{\partial u} \int_{L(\alpha, u)} f(x,y) \diff x \diff y \\
& = \dfrac{\partial }{\partial u} \int_{-\infty}^\infty f\big( \cos(\alpha) u - \sin(\alpha)w, \sin(\alpha) u + \cos(\alpha) w\big) \diff w \\
& = \int_{-\infty}^\infty \dfrac{\partial }{\partial u} f\big( \cos(\alpha) u - \sin(\alpha)w, \sin(\alpha) u + \cos(\alpha) w\big) \diff w \\
& = \int_{-\infty}^\infty \cos(\alpha) \dfrac{\partial f}{\partial x}\big( \cos(\alpha)u - \sin(\alpha)w, \sin(\alpha)u + \cos(\alpha)w \big) + \\
& \qquad \qquad \sin(\alpha) \dfrac{\partial f}{\partial y}\big( \cos(\alpha)u- \sin(\alpha)w, \sin(\alpha)u + \cos(\alpha)w \big) \diff w \\
& = \cos(\alpha) \int_{L(\alpha, u)} \dfrac{\partial f}{\partial x} \diff x \diff y + \sin(\alpha) \int_{L(\alpha, u)} \dfrac{\partial f}{\partial y} \diff x \diff y.
\end{align*}
\end{proof}
\end{lem}
We can use this result in the next lemma which gives us a link between the derivatives of the reconstructions $f_t^\text{rec}$ and the derivatives of the object $f_t$.
\begin{lem} \label{lem:dfrecdx}
For the reconstruction of the object $f$ at time $t \in [\Delta t/2, m \Delta t - \Delta t/2]$, it holds that
$$\frac{\partial f^{\text{rec}}_t}{\partial x} = \mathfrak{B} \mathfrak{R}_t \dfrac{\partial f}{\partial x} + \mathfrak{B} \sin(\alpha) \left( \cos(\alpha) \mathfrak{R}_t \dfrac{\partial f}{\partial y} - \sin(\alpha) \mathfrak{R}_t \dfrac{\partial f}{\partial x} \right) $$ and
$$\frac{\partial f^{\text{rec}}_t}{\partial y} = \mathfrak{B} \mathfrak{R}_t \dfrac{\partial f}{\partial y} - \mathfrak{B} \cos(\alpha) \left( \cos(\alpha) \mathfrak{R}_t \dfrac{\partial f}{\partial y} - \sin(\alpha) \mathfrak{R}_t \dfrac{\partial f}{\partial x} \right).$$
\begin{proof}
For every $(x,y) \in \Omega$ it holds that
\begin{align}
\frac{\partial f^{\text{rec}}_t}{\partial x}(x,y) & = \int_{\frac{t \pi}{\Delta t} - \frac{\pi}{2}}^{\frac{t \pi}{\Delta t} + \frac{\pi}{2}} \int_{-\infty}^{\infty} |\rho|\mathfrak{F}_1 \big( \mathfrak{R}_t f(\alpha, u) \big)\big(\rho \big) \frac{\partial }{\partial x} \exp \big( 2 \pi i \rho \left(x \cos(\alpha) + y \sin(\alpha) \right)\big) \diff \rho \diff \alpha \nonumber \\
& = \int_{\frac{t \pi}{\Delta t} - \frac{\pi}{2}}^{\frac{t \pi}{\Delta t} + \frac{\pi}{2}} \int_{-\infty}^{\infty} \cos(\alpha) |\rho| 2 \pi i \rho \mathfrak{F}_1 \big( \mathfrak{R}_t f(\alpha, u) \big)\big(\rho \big) \exp \big( 2 \pi i \rho \left(x \cos(\alpha) + y \sin(\alpha) \right)\big) \diff \rho \diff \alpha \nonumber \\
& = \int_{\frac{t \pi}{\Delta t} - \frac{\pi}{2}}^{\frac{t \pi}{\Delta t} + \frac{\pi}{2}} \int_{-\infty}^{\infty} \cos(\alpha) |\rho| \mathfrak{F}_1 \Big( \dfrac{\partial \mathfrak{R}_t f(\alpha,u)}{\partial u}\Big)\big(\rho \big) \exp \big( 2 \pi i \rho \left(x \cos(\alpha) + y \sin(\alpha) \right)\big) \diff \rho \diff \alpha \nonumber \\
& = \mathfrak{B} \bigg( \cos(\alpha) \dfrac{ \partial \mathfrak{R}_t f(\alpha,u)}{\partial u} \bigg)(x,y) \nonumber \\
& = \mathfrak{B} \bigg( \mathfrak{R}_t \frac{\partial f}{\partial x}(\alpha, u) \cos^2(\alpha) \bigg)(x,y) + \mathfrak{B} \bigg( \mathfrak{R}_t \frac{\partial f}{\partial y}(\alpha, u) \sin(\alpha) \cos(\alpha)\bigg)(x,y) \label{eqn:lem31} \\
& = \mathfrak{B} \bigg( \mathfrak{R}_t \frac{\partial f}{\partial x}(\alpha, u)\bigg)(x,y) - \mathfrak{B} \bigg( \sin^2(\alpha) \mathfrak{R}_t \frac{\partial f}{\partial x}(\alpha, u)\bigg)(x,y) + \nonumber \\
& \quad \quad \mathfrak{B} \bigg(\sin(\alpha) \cos(\alpha) \mathfrak{R}_t \frac{\partial f}{\partial y}(\alpha, u) \bigg)(x,y) \nonumber
\end{align}
where we have used lemma \ref{lem:dRdrho} in \eqref{eqn:lem31} and $\cos^2(\alpha) = 1 - \sin^2(\alpha)$ in the last step.
Similarly we can prove that
\begin{align*}
\frac{\partial f^{\text{rec}}_t}{\partial y} & = \mathfrak{B} \sin(\alpha)\dfrac{\partial \mathfrak{R}_t f(\alpha,u)} {\partial u} \\
& = \mathfrak{B} \mathfrak{R}_t \dfrac{\partial f}{\partial y} - \mathfrak{B} \cos(\alpha) \left( \cos(\alpha) \mathfrak{R}_t \dfrac{\partial f}{\partial y} - \sin(\alpha) \mathfrak{R}_t \dfrac{\partial f}{\partial x} \right).
\end{align*}
\end{proof}
\end{lem}
In our main theorem \ref{the:rec_optic_flow} we encounter the term $ \cos(\alpha) \mathfrak{R}_t \dfrac{\partial f}{\partial y}(\alpha,u) - \sin(\alpha) \mathfrak{R}_t\dfrac{\partial f}{\partial x}(\alpha,u)$. It is a consequence of the following lemma that this term is equal to zero.
\begin{lem} \label{lem:extra_term}
For every angle $\alpha$ it holds, for all $u$, that
$$ \cos(\alpha) \mathfrak{R} \dfrac{\partial f}{\partial y}(\alpha,u) - \sin(\alpha) \mathfrak{R} \dfrac{\partial f}{\partial x} (\alpha, u) = 0.$$
\begin{proof}
We use the same transformation \eqref{anrec_eqn7} as in lemma \ref{lem:dRdrho}.
For a particular angle $\alpha$ and $u \in \mathbb{R}$, we obtain
\begin{align}
& \cos(\alpha) \mathfrak{R} \dfrac{\partial f}{\partial y}(\alpha,u) - \sin(\alpha) \mathfrak{R} \dfrac{\partial f}{\partial x} (\alpha, u) \nonumber \\
= & \int_{L(\alpha,u)} \dfrac{\partial f}{\partial y} \cos(\alpha) - \dfrac{\partial f}{\partial x} \sin(\alpha) \diff x \diff y \nonumber \\
= & \int_{-\infty}^{\infty} \dfrac{\partial f}{\partial y} \big( \cos(\alpha) u - \sin(\alpha) w, \sin(\alpha) u + \cos(\alpha) w \big) \cos(\alpha) - \nonumber \\
& \quad \dfrac{\partial f}{\partial x}\big(\cos(\alpha) u - \sin(\alpha) w, \sin(\alpha) u + \cos(\alpha)w \big) \sin(\alpha) \diff w \nonumber \\
= & \int_{-\infty}^{\infty} \dfrac{\partial f}{\partial w}\big(\cos(\alpha) u - \sin(\alpha) w, \sin(\alpha) u + \cos(\alpha)w\big) \diff w \nonumber \\
= & [f(\cos(\alpha) u - \sin(\alpha) w, \sin(\alpha) u + \cos(\alpha) w )]^{\infty}_{-\infty} \nonumber \\
= & 0 \label{eqn:iszero}
\end{align}
where \eqref{eqn:iszero} follows from the fact that $f$ vanishes outside the bounded region $\Omega$.
\end{proof}
\end{lem}
The following lemma gives us the necessary relation between the object and its reconstruction.
\begin{lem} \label{lem:verband_frec_f}
It holds for every $t$ and $z \in \mathbb{Z}$ that
$$ \mathfrak{B} \mathfrak{R}_t^{z \Delta t} f = f_{t+z \Delta t}^{\text{rec}}.$$
\begin{proof}
For every $z \in \mathbb{Z}$ it holds that
\begin{align*}
& \mathfrak{B} \mathfrak{R}_t^{z \Delta t} f \\
= & \int_{\frac{\pi t}{\Delta t} - \frac{\pi}{2}}^{\frac{\pi t}{\Delta t} + \frac{\pi}{2}} \int_{-\infty}^{\infty} |\rho| \mathfrak{F}_1 \Big( \mathfrak{R}_t^{z \Delta t} f(\alpha,u) \Big) \Big( \rho \Big) \exp \big( 2 \pi i \rho \left( x \cos(\alpha) + y \sin(\alpha) \right) \big) \diff \rho \diff \alpha \\
= & \int_{\frac{\pi t}{\Delta t} + z \pi - \frac{\pi}{2}}^{\frac{\pi t}{\Delta t} + z \pi + \frac{\pi}{2}} \int_{-\infty}^{\infty} |\rho| \mathfrak{F}_1 \Big( \mathfrak{R}_t^{z \Delta t} f(\alpha - z \pi,u) \Big) \Big( \rho \Big) \exp \big( 2 \pi i \rho \left( x \cos(\alpha-z\pi) + y \sin(\alpha-z\pi) \right) \big) \diff \rho \diff \alpha \\
= & \int_{\frac{\pi t}{\Delta t} + z\pi- \frac{\pi}{2}}^{\frac{\pi t}{\Delta t} + z \pi + \frac{\pi}{2}} \int_{-\infty}^{\infty} |\rho| \mathfrak{F}_1 \Big( \mathfrak{R}_{t+z\Delta t} f(\alpha,(-1)^z u) \Big) \Big(\rho \Big) \exp \big( 2 \pi i \left( (-1)^z \rho \right) \left( x \cos(\alpha) + y \sin(\alpha) \right) \big) \diff \rho \diff \alpha \\
= & \int_{\frac{\pi t}{\Delta t} + z \pi - \frac{\pi}{2}}^{\frac{\pi t}{\Delta t} + z \pi + \frac{\pi}{2}} \int_{-\infty}^{\infty} |(-1)^z \rho| \mathfrak{F}_1 \Big( \mathfrak{R}_{t+z \Delta t} f(\alpha,u) \Big) \Big((-1)^z \rho \Big) \exp \big( 2 \pi i \left( (-1)^z \rho \right) \left( x \cos(\alpha) + y \sin(\alpha) \right) \big) \diff \rho \diff \alpha \\
= & f_{t+z \Delta t}^{\text{rec}}
\end{align*}
\end{proof}
\end{lem}
We now prove an alternative to the optical flow equation underlying \eqref{eqn::HS}. Instead of $\frac{\partial f}{\partial t}$ we take the second-order central difference approximation $\dfrac{ f_{t+ \Delta t} - f_{t-\Delta t} }{2 \Delta t}$.
\begin{thm} (Optical flow equation for the reconstructions)\label{the:rec_optic_flow}
\begin{enumerate}
\item If for all $t \in [\Delta t, (m-1) \Delta t]$
\begin{align}
v_x \frac{\partial f_t}{\partial x} + v_y \frac{\partial f_t}{\partial y} + \dfrac{f_{t+\Delta t} - f_{t-\Delta t}}{2 } & = 0 \label{eqn:optflow_1}
\end{align}
then it holds for all $t \in [3 \frac{\Delta t}{2}, (m - \frac{3}{2})\Delta t]$ that
\begin{align*}
v_x \frac{\partial f^{\text{rec}}_t}{\partial x} + v_y \frac{\partial f^{\text{rec}}_t}{\partial y} + \dfrac{f^{\text{rec}}_{t+\Delta t} - f^{\text{rec}}_{t-\Delta t}}{2 } & = 0.
\end{align*}
\item If for all $t \in [3 \frac{\Delta t}{2},(m - \frac{3}{2}) \Delta t]$
\begin{align*}
v_x \frac{\partial f^{\text{rec}}_t}{\partial x} + v_y \frac{\partial f^{\text{rec}}_t}{\partial y} + \dfrac{f^{\text{rec}}_{t+\Delta t} - f^{\text{rec}}_{t-\Delta t}}{2 } & =0 \nonumber
\end{align*}
then it holds for $t \in [\Delta t, (m-1) \Delta t]$ that
$$v_x \frac{\partial f_t}{\partial x} + v_y \frac{\partial f_t}{\partial y} + \dfrac{f_{t+\Delta t} - f_{t-\Delta t}}{2 } = g $$
for a certain $g \in \aleph \mathfrak{B} \mathfrak{R}_t$ (the null space of the operator $\mathfrak{B} \mathfrak{R}_t$).
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item
If we apply the operator $\mathfrak{B} \mathfrak{R}_t$ to \eqref{eqn:optflow_1} we obtain
\begin{align}
\mathfrak{B} \mathfrak{R}_t \left( v_x \dfrac{\partial f_t}{\partial x} + v_y \dfrac{\partial f_t}{\partial y} + \dfrac{f_{t+\Delta t} - f_{t-\Delta t}}{2 } \right) & = 0 \nonumber \\
& \Updownarrow \nonumber \\
v_x \mathfrak{B} \mathfrak{R}_t \dfrac{\partial f_t}{\partial x} + v_y \mathfrak{B} \mathfrak{R}_t \dfrac{\partial f_t}{\partial y} + \dfrac{f^\text{rec}_{t+\Delta t} - f^\text{rec}_{t-\Delta t}}{2 } & = 0 \label{eqn:verband_frec_f} \\
& \Updownarrow \nonumber \\
v_x \dfrac{\partial f^\text{rec}_t}{\partial x} + v_y \dfrac{\partial f^\text{rec}_t}{\partial y} + \dfrac{f^\text{rec}_{t+\Delta t} - f^\text{rec}_{t-\Delta t}}{2 } & = v_x \dfrac{\partial f^\text{rec}_t}{\partial x} - v_x \mathfrak{B} \mathfrak{R}_t \dfrac{ \partial f_t}{\partial x} + v_y \dfrac{\partial f^\text{rec}_t}{\partial y} - v_y \mathfrak{B} \mathfrak{R}_t \dfrac{ \partial f_t}{\partial y} \nonumber \\
& \Updownarrow \nonumber \\
v_x \dfrac{\partial f^\text{rec}_t}{\partial x} + v_y \dfrac{\partial f^\text{rec}_t}{\partial y} + \dfrac{f^\text{rec}_{t+\Delta t} - f^\text{rec}_{t-\Delta t}}{2 } & = v_x \mathfrak{B} \sin(\alpha) \left( \cos(\alpha) \mathfrak{R}_t \dfrac{\partial f_t}{\partial y} - \sin(\alpha) \mathfrak{R}_t \dfrac{\partial f_t}{\partial x} \right) - \label{eqn:dfrecdx}\\
& \quad \quad v_y \mathfrak{B} \cos(\alpha) \left( \cos(\alpha) \mathfrak{R}_t \dfrac{\partial f_t}{\partial y} - \sin(\alpha) \mathfrak{R}_t \dfrac{\partial f_t}{\partial x} \right) \nonumber \\
& \Updownarrow \nonumber \\
v_x \dfrac{\partial f^\text{rec}_t}{\partial x} + v_y \dfrac{\partial f^\text{rec}_t}{\partial y} + \dfrac{f^\text{rec}_{t+\Delta t} - f^\text{rec}_{t-\Delta t}}{2} & = 0 \label{eqn:extra_term}
\end{align}
where \eqref{eqn:verband_frec_f}, \eqref{eqn:dfrecdx} and \eqref{eqn:extra_term} follow from lemmas \ref{lem:verband_frec_f}, \ref{lem:dfrecdx} and \ref{lem:extra_term}, respectively.
\item
This follows from
\begin{align}
\mathfrak{B} \mathfrak{R}_t \left( v_x \dfrac{\partial f}{\partial x} + v_y \dfrac{\partial f}{\partial y} \right) & = v_x \frac{\partial f_t^{\text{rec}}}{\partial x} + v_y \frac{\partial f_t^{\text{rec}}}{\partial y} \nonumber \\
& = -\dfrac{ f_{t+ \Delta t}^{\text{rec}} - f_{t-\Delta t}^{\text{rec}}}{2} \nonumber \\
& = -\mathfrak{B} \mathfrak{R}_t \left( \dfrac{ f_{t+\Delta t} - f_{t-\Delta t}}{2} \right), \nonumber
\end{align}
where the first equality uses lemmas \ref{lem:dfrecdx} and \ref{lem:extra_term}, the second the hypothesis, and the third lemma \ref{lem:verband_frec_f}. Hence $v_x \frac{\partial f_t}{\partial x} + v_y \frac{\partial f_t}{\partial y} + \frac{f_{t+\Delta t} - f_{t-\Delta t}}{2}$ lies in the null space of $\mathfrak{B} \mathfrak{R}_t$.
\end{enumerate}
\end{proof}
\end{thm}
It is possible to achieve a similar result by approximating the time derivative with, for example, first-order formulas, as long as the time mesh width is $\Delta t$ (the duration of one scan); however, tests showed that the reconstruction had a significantly lower quality in that case.
The function $g \in \aleph \mathfrak{B} \mathfrak{R}_t$ in the previous theorem is considered to be negligible in practice.
\subsection{Implementation} \label{sec:implem}
Because the optical flow equation is still valid for the reconstructions (see the previous theorem), we can use the method of Horn-Schunck (see section \ref{sec:evolution}) to estimate the flow from the reconstructions $f^{\text{rec}}_{t_i-\Delta t/2}, i = 1, \hdots, m$. This means we calculate one reconstruction per scan. In fact, we are not limited to the boundaries of a scan: we can use data coming partly from the $i\,$th scan and partly from the $(i+1)\,$th scan. For example, we can reconstruct new images $g_i, i = 1, \hdots, m-1$ for which the data for angles in $[\pi/2 , \pi[$ comes from the $(i+1)\,$th scan and the data for angles in $[0 , \pi/2[$ comes from the $i\,$th scan.
However, tests showed that using more images does not add significant value and is computationally more expensive.
From section~\ref{sec:evolution}, we can retrieve systems $$\mathbf{A}_{HS}^i \begin{bmatrix} \text{vec}( \mathbf{v}_x)\\
\text{vec}( \mathbf{v}_y)
\end{bmatrix} = \mathbf{b}_{HS}^i, \quad i = 2, 3,\hdots, m-1$$ between three successive reconstructions.
Because we assume the motion is constant over time, the motion between successive reconstructions is equal. This means we can view the unknown motion $\mathbf{v}$ as the solution of the big system
\begin{equation} \begin{bmatrix}
\mathbf{A}_{HS}^2\\
\mathbf{A}_{HS}^3\\
\vdots\\
\mathbf{A}_{HS}^{m-1}
\end{bmatrix} \begin{bmatrix}
\text{vec}( \mathbf{v}_x)\\
\text{vec}( \mathbf{v}_y)
\end{bmatrix} = \begin{bmatrix}
\mathbf{b}_{HS}^2\\
\mathbf{b}_{HS}^3\\
\vdots\\
\mathbf{b}_{HS}^{m-1}
\end{bmatrix}. \label{ctr_eqn1} \end{equation}
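A sketch of assembling and solving this stacked system in least squares, reusing the \texttt{hs\_system} helper sketched in section \ref{sec:evolution} (the solver choice and names are our assumptions):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def solve_stacked(recs, dt, lam):
    """recs: list of m reconstructions; returns [vec(v_x); vec(v_y)]."""
    blocks, rhs = [], []
    for i in range(1, len(recs) - 1):        # triples centred at i = 2..m-1
        dt_f = (recs[i+1] - recs[i-1]) / (2 * dt)
        A, b = hs_system(recs[i], dt_f, dt, lam)
        blocks.append(A)
        rhs.append(b)
    return lsqr(sp.vstack(blocks), np.concatenate(rhs))[0]
\end{verbatim}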
By using a single system, we make use of all information available in the scans. It is known in the literature that the Horn-Schunck method estimates small motions well, but cannot recover large displacements. We deal with this by applying a coarse-to-fine scheme, in which we calculate the motion on a lower resolution and use it as an initial solution when calculating the motion on a higher resolution, see for example \cite{anandan1989computational}. This results in the following algorithm, where the upper index represents the resolution of the image used.
We assume the reconstructed images are objects on a $2^p \times 2^p$ grid for a certain $p \in \mathbb{N}$. The variable $d$ determines the factor by which we reduce the resolution in the first estimation of the motion. For the examples in section \ref{sec:numer_results} we always take $d$ equal to $3$. This means we first estimate the motion on a $\frac{2^p}{2^3} \times \frac{2^p}{2^3}$ grid.
\begin{algorithm}[H]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{\begin{enumerate}
\item $d = $ depth: defines the resolution of the image in the first iteration
\item $\mathbf{f}^{\text{rec}}_{t_j-\Delta t/2}$ = reconstruction of the $j$th scan on a $2^p \times 2^p$ pixel grid, $j = 1,\hdots, m$
\end{enumerate}}
\Output{$\mathbf{v} = \mathbf{v}^{2^p} $: calculated motion on a $2^p \times 2^p$ pixel grid}
$\mathbf{v}_{\text{old}}^{2^p/2^d}$ = 0 \\
\For{$i = d:-1:0$}{
$\left( \mathbf{f}^{\text{rec}}_{t_j-\Delta t/2}\right)^{2^p/2^i}, j = 1, \hdots, m$ = reduce the resolution of the images $\mathbf{f}^{\text{rec}}_{t_j-\Delta t/2}$ with \\
\qquad a factor $2^i$ \\
$\mathbf{v}^{2^p/2^i}$= calculated motion using \eqref{ctr_eqn1}
with initial solution $\mathbf{v}_{\text{old}}^{2^p/2^i}$ \\
\If{$i > 0$}{
$\mathbf{v}_{\text{old}}^{2^p/2^{i-1}}$ = Interpolate $\mathbf{v}^{2^p/2^i}$ to a motion on a $2^p/2^{i-1} \times 2^p/2^{i-1}$ grid
}
}
\caption{Coarse-to-fine algorithm to estimate the motion $\mathbf{v}$ in scan reconstructions $\mathbf{f}^{\text{rec}}_{t_j-\Delta t/2}$. For simplicity we only consider images on a $2^p \times 2^p$ grid. Upper indices on the motion $\mathbf{v}$ and the reconstructed objects $ \mathbf{f}^{\text{rec}}_{t_j-\Delta t/2}$ represent the resolution. If the resolution $s$ of the image cannot be written as $2^{n_1}, n_1 \in \mathbb{N}$, then we adapt it to the nearest resolution that can be written as $2^{n_1}$; practically, adapting the resolution is done via interpolation. } \label{algo:motion}
\end{algorithm}
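A compact sketch of this coarse-to-fine loop (assumed helper names; \texttt{solve\_hs} stands for solving \eqref{ctr_eqn1} with the given initial solution, and flow vectors measured in pixels are doubled when the grid is refined, a convention of this sketch):
\begin{verbatim}
import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine(recs, d, solve_hs):
    """recs: reconstructions on a 2^p x 2^p grid; returns the flow v."""
    h = recs[0].shape[0] // 2**d
    v = np.zeros((2, h, h))                       # v_old at coarsest level
    for i in range(d, -1, -1):
        coarse = [zoom(f, 1.0 / 2**i, order=1) for f in recs]
        v = solve_hs(coarse, v)                   # warm start with v_old
        if i > 0:                                 # interpolate to finer grid
            v = 2.0 * np.stack([zoom(c, 2.0, order=1) for c in v])
    return v
\end{verbatim}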
\section{Correcting images for motion} \label{sec:cor_images}
In this section we present a method to correct the reconstruction of a CT-scan for motion when we know the deformation $v(x,y)$ (or an estimate of it) that occurs during one scan. For this we only make use of one scan. The strategy is to move the data recorded at different time steps to a single reference time point and execute the reconstruction there. In effect, we look at how the X-rays are deformed by the motion over time. A consequence is that we also need an estimate of the motion on shorter time intervals. We choose to interpolate linearly in time, since no other information is available about the motion on shorter time intervals. The quality of the reconstruction therefore depends on the extent to which the linear approximation corresponds with the real deformation.
In the next calculations we choose to take the middle of the scan time interval (= $\Delta t/2$) as reference time point, but alternative choices are also possible. It will be clear that the proposed method can easily be adapted such that it reconstructs at another time point.
To move the data to the middle of the scan, we perform the following calculations for all angles. For the projection under an angle $\alpha$ performed at time $T(\alpha)$ it now holds, for all $u$, that
\begin{align}
\mathfrak{R} f_{T(\alpha)} (\alpha,u) \nonumber & = \int_{L(\alpha,u)} f_{T(\alpha)}(x,y)\diff x \diff y \nonumber \\
& \approx \int_{L(\alpha,u)} f_{\Delta t/2} \left(x - \dfrac{-\Delta t/2+T(\alpha)}{\Delta t}v_x(x,y),y- \dfrac{-\Delta t/2+T(\alpha)}{\Delta t}v_y(x,y)\right)\diff x \diff y. \label{eqn:deform_int}
\end{align}
So instead of integrating over the path $L(\alpha, u)$, we integrate over the path
$$L_{\text{moved}}(\alpha,u) := \left \{ \left(x - \dfrac{-\Delta t/2+T(\alpha)}{\Delta t}v_x(x,y), y- \dfrac{-\Delta t/2+T(\alpha)}{\Delta t}v_y(x,y) \right) \mid (x, y) \in L(\alpha,u) \right \}.$$
We thus obtain that the integral in \eqref{eqn:deform_int} is equal to
\begin{equation} \int_{L_{\text{moved}}(\alpha,u)} f_{\Delta t/2} \left(x,y\right)\diff x \diff y. \label{eqn:new_int}\end{equation}
The objective is now to approximate the integral in \eqref{eqn:new_int} by a matrix-vector product $\mathbf{A}_{\text{moved}} \mathbf{f}$ where $\mathbf{f}$ is the discretised and vectorised image at time $\Delta t/2$.
Just as the weights on each row of the projection operator $\mathbf{A}$ (see section \ref{sec:discr_radon}) are determined by the lengths of the segments of the projection line $L(\alpha,u)$ through the pixels, the weights on each row of $\mathbf{A}_{\text{moved}}$ are the lengths of the segments of the adapted projection line $L_{\text{moved}}(\alpha, u)$ through the pixels. In practice we determine the values in the matrix $\mathbf{A}_{\text{moved}}$ with the following algorithm for a projection angle $\alpha$ and $u \in \mathbb{R}$. Let $\Omega_\delta$ be the domain $\Omega$ with an extra border, chosen such that all adapted projection lines start and finish outside $\Omega$.
\begin{algorithm}[H]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{\begin{enumerate}
\item $\Omega_\delta$: Domain $\Omega$ with an extra border
\item $L(\alpha, u)$: projection line for angle $\alpha$ and shift $u$
\item $T(\alpha)$: the time at which the projection under angle $\alpha$ is performed
\item $\Delta t$: Duration time of one scan
\item $\mathbf{v}$: Estimated or exact motion
\end{enumerate}}
\Output{$\mathbf{A}_{\text{moved}}$: projection matrix corrected for the motion $\mathbf{v}$}
\nonl We describe the method to determine a row in the matrix $\mathbf{A}_{\text{moved}}$ corresponding with a projection with angle $\alpha$ and shift $u$. This algorithm is repeated for all angles $\alpha$ and shifts $u$. \\
Find the intersection points $s$ of the projection lines $L(\alpha,u)$ with the pixels of the domain $\Omega_\delta$.
Calculate the mid points $m$ between every 2 successive intersection points.
As we only have values for the motion at the pixel centres, we estimate the motion $v(m)$ at the points $m$ from $\mathbf{v}$ using interpolation. Apply the motion $- \dfrac{-\Delta t/2+T(\alpha)}{\Delta t} v(m)$ to $m$ and call the resulting points $m_{\text{moved}}$.
Define the path $L_{\text{moved}}(\alpha,u)$ determined by the points $m_{\text{moved}}$ where we interpolate linearly between the points.
Find the intersection points $s_{\text{moved}}$ of the adapted projection line $L_{\text{moved}}(\alpha, u)$ with the pixels of the domain $\Omega$.
Determine the values in the matrix $\mathbf{A}_{\text{moved}}$ by calculating the distance between 2 successive points $s_{\text{moved}}$.
\caption{Method to correct CT-scan images for the motion. Note that we need to perform this algorithm on every projection line but it is possible to calculate this in parallel as each projection is independent. Furthermore the first two steps do not depend on the motion $\mathbf{v}$ so they can be calculated in advance. } \label{algo:correcting}
\end{algorithm}
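The geometric core of this algorithm for a single projection line can be sketched as follows (the interpolator \texttt{v\_interp} and the point ordering are assumptions; the matrix entries then follow from the lengths of the polyline segments inside each pixel):
\begin{verbatim}
import numpy as np

def moved_polyline(s, T_alpha, dt, v_interp):
    """s: (k,2) intersection points of L(alpha,u) with the pixel grid,
    ordered along the line; v_interp(p) interpolates the flow at points p.
    Returns the nodes of the deformed polyline L_moved(alpha, u)."""
    m = 0.5 * (s[:-1] + s[1:])               # midpoints (step 2)
    shift = -(-dt / 2 + T_alpha) / dt        # linear-in-time motion factor
    return m + shift * v_interp(m)           # m_moved (step 3)
\end{verbatim}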
After performing this algorithm we obtain a new linear system $$\mathbf{A}_{\text{moved}} \mathbf{x} = \mathbf{b}$$ with $\mathbf{x}$ the vectorized version of $f_{\Delta t/2}$. We use $f_{\Delta t/2}^{\text{corr}}$ to refer to the solution of this system computed with the LSQR algorithm.
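A minimal sketch of this final solve with SciPy's LSQR (variable names are assumptions):
\begin{verbatim}
from scipy.sparse.linalg import lsqr

x = lsqr(A_moved, b, iter_lim=200)[0]   # vectorised f_{Delta t/2}^corr
f_corr = x.reshape(n, n)
\end{verbatim}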
\begin{figure}[H]
\begin{subfigure}[b]{0.5\textwidth}
\input{geogebra/ctr_2_1}
\caption{}
\end{subfigure}
\hspace{0.25cm}
\begin{subfigure}[b]{0.5\textwidth}
\input{geogebra/ctr_2_2}
\caption{}
\end{subfigure} \\
\begin{subfigure}[b]{0.5\textwidth}
\input{geogebra/ctr_2_3}
\caption{}
\end{subfigure} \hspace{0.25cm}
\begin{subfigure}[b]{0.5\textwidth}
\input{geogebra/ctr_2_4}
\caption{}
\end{subfigure}
\caption{An illustration of the previous algorithm, where the grey zone is the border and each square corresponds to a pixel. We use a clockwise rotation around the center point as motion. For this particular example, the adapted projection lines $L_{\text{moved}}(\alpha, u)$ are straight lines, but this is usually not the case. (a) step 1: The orange balls are the intersection points $s$ of some projection lines $L(\alpha,u)$ with the pixel edges. (b) step 2: The green balls are the mid points $m$ of 2 successive intersection points $s$. (c) step 3: The motion $- \dfrac{-\Delta t/2+T(\alpha)}{\Delta t} v(m)$ (the blue arrows) applied to the mid points $m$; the obtained points (the red balls) are called $m_{\text{moved}}$. (d) step 4: The path $L_{\text{moved}}(\alpha,u)$ determined by the points $m_{\text{moved}}$, where we interpolate linearly between the points. }\label{ctr_fig}
\end{figure}
The accuracy of the algorithm can be further improved by taking more intermediate points between successive intersection points. When doing this, bear in mind that the computational time is much higher.
\section{Numerical results} \label{sec:numer_results}
In this section, the algorithms for motion estimation and for correcting CT-scan images are tested and validated on the examples in figure \ref{fig:examples}.
The object in all our simulated examples is the logo of the University of Antwerp, see figure \ref{intro_vb} (a), on a $256 \times 256$ pixel grid. We choose a shift (see figure \ref{fig:examples} (a)-(c)), a rotation (see figure \ref{fig:examples} (d)-(f)) and a motion which is not an affine transformation (see figure \ref{fig:examples} (g)-(i)) as the applied motions.
For every motion field we simulated 10 successive scans of the moving object, where per scan we acquire projection data for 180 angles uniformly distributed over $[0,\pi[$.
In the first subsection we discuss the results for the motion estimation based on algorithm \ref{algo:motion} described in section \ref{sec:implem}. To validate the proposed algorithm \ref{algo:correcting} of section \ref{sec:cor_images}, we correct the images for both the exact motion and the estimated motion.
\begin{figure}[H]
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/bveld_1-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/ua_logo_tr1-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/ua_logo_tr-cropped.pdf}
\caption{}
\end{subfigure} \\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/bveld_2-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/ua_rot_1-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/ua_rot_2-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/bveld_3-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/ua_onrot_1-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/ua_onrot_2-cropped.pdf}
\caption{}
\end{subfigure}
\caption{A representation of all the applied motions. Each row represents a different motion. The first column is the applied motion field, presented on the scale of one complete scan. The second column is the object $f$ at time $5 \Delta t$, so after 5 successive scans. The last column is the object $f$ at time $10 \Delta t$, so after 10 successive scans. (a)-(c): The motion is a shift of 1 pixel per scan in both the horizontal and the vertical direction. (d)-(f): The motion is a clockwise rotation around the origin of 3 degrees per scan. (g)-(i): The applied motion is $\mathbf{v}_x(x,y) = -\left( (\cos(3)-1)x - \sin(3)y \right), \mathbf{v}_y(x,y) = \sin(3) x + (\cos(3)-1)y$.}\label{fig:examples}
\end{figure}
\subsection{Estimation of motion} \label{sec:motion_est_num}
In this section, we check the quality of the motion estimate. Because the error of the motion estimate is less relevant in the regions where the information comes only from the regularisation, we calculate the error only for points in the set
\begin{equation}
\mathfrak{A} := \Big \{(x,y) \in \Omega | \exists t \in [0,m \Delta t[: \left| \frac{\partial f^{\text{rec}}_t}{\partial x} \right| > \beta \text{ or } \left| \frac{\partial f^{\text{rec}}_t}{\partial y} \right| > \beta \Big \}. \label{eqn:set_A}
\end{equation}
This is the set of points for which, at some moment in the time interval, a derivative is greater in absolute value than a small threshold $\beta > 0$. These are the points for which we have information about the motion. This set is represented in figure \ref{fig:RMSE_A} for each example. We define the error $\text{RMSE}_\mathfrak{A}$ as
\begin{equation} \text{RMSE}_\mathfrak{A} := \sqrt{\sum_{(x,y) \in \mathfrak{A}} \frac{ \left( v_x(x,y) - \hat{v}_x(x,y) \right)^2 + \left( v_y(x,y) - \hat{v}_y(x,y)\right)^2}{n} } \label{eqn:RMSE}
\end{equation} with $n$ the number of elements in the set $\mathfrak{A}$ and $\hat{v}(x,y)$ the estimated motion from algorithm \ref{algo:motion}.
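Both the mask $\mathfrak{A}$ and the masked error can be computed directly (a sketch; \texttt{np.gradient} stands in for the image derivatives, and the flow arrays are stacked as $(v_x, v_y)$ by our convention):
\begin{verbatim}
import numpy as np

def support_mask(recs, beta):
    """Pixels where some reconstruction has |df/dx| or |df/dy| > beta."""
    tests = [(np.abs(np.gradient(f)[1]) > beta) |
             (np.abs(np.gradient(f)[0]) > beta) for f in recs]
    return np.any(tests, axis=0)

def rmse_masked(v, v_hat, mask):
    d2 = (v[0] - v_hat[0])**2 + (v[1] - v_hat[1])**2
    return np.sqrt(d2[mask].sum() / mask.sum())
\end{verbatim}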
\begin{figure}[H]
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/RMSE_1-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/RMSE_2-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/RMSE_3-cropped.pdf}
\caption{}
\end{subfigure}
\caption{For every example in figure \ref{fig:examples} the area $\mathfrak{A}$ (see \eqref{eqn:set_A}) is shown in white. These are the points for which we have information about the motion; only these points are taken into account when computing the error $\text{RMSE}_\mathfrak{A}$. We have used $\beta$ equal to $0.15$. (a) Motion 1: shift. (b) Motion 2: rotation. (c) Motion 3. } \label{fig:RMSE_A}
\end{figure}
In figure \ref{fig:motion_est} the estimated motion is presented together with a plot of the error with respect to the exact motion. In table \ref{table:motion_est} we calculate the error $\text{RMSE}_\mathfrak{A}$ for all three examples, for the exact data and for data to which we add normally distributed noise with mean $0$ and standard deviation $2$. We calculate it using depth $d = 0$ (so without changing the resolution) and depth $d = 3$ (we start algorithm \ref{algo:motion} by estimating the motion on a resolution which is $8 (= 2^3)$ times smaller).
\begin{table}[H]
\begin{center}
\begin{tabular}{ccccc}
\hline
& depth $d = 0$ & depth $d = 3$ & depth $d = 0$ & depth $d = 3$ \\
& no noise added & no noise added & with noise & with noise \\
\hline
Motion 1: Shift & $0.3994$ & $0.6664$ & $0.7873$ & $0.6640$ \\
Motion 2: Rotation & $3.1863$& $1.2257$ & $3.4893$ & $1.2315$ \\
Motion 3& $2.6076$ & $0.4679$ & $3.5171$ & $0.4658$ \\
\hline
\end{tabular}
\caption{The $\text{RMSE}_\mathfrak{A}$ \eqref{eqn:RMSE} when estimating the motion using algorithm \ref{algo:motion} with depth $d$ equal to $0$ (first and third columns) and depth equal to $3$ (second and last columns). The noise is normally distributed with mean zero and standard deviation $2$. We can derive from this table that the coarse-to-fine algorithm \ref{algo:motion} improves the performance for large motions, but that for smaller motions it is better to use depth $d = 0$. We see that when adding noise to the data, it is preferable to choose the depth $d$ equal to $3$.} \label{table:motion_est}
\end{center}
\end{table}
\begin{figure}[H]
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/motion_1_est-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/motion_1_ex-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/error_motion_1-cropped.pdf}
\caption{}
\end{subfigure}
\caption{In the first column we plot the motion field of the estimated motion $\hat{\mathbf{v}}$ for the first example from figure \ref{fig:examples}, where we used depth $d$ equal to 3 in algorithm \ref{algo:motion} and did not add noise to the data. In the second column we plot the exact motion field of the same example. In the last column we plot the error $\sqrt{ \left( v_x(x,y) - \hat{v}_x(x,y)\right)^2 + \left( v_y(x,y) - \hat{v}_y(x,y)\right)^2}$. We can derive from the last picture that at the places where we have information about the motion (see the areas $\mathfrak{A}$ in figure \ref{fig:RMSE_A}), the motion is estimated far better than at the places where the only information comes from the regularisation.}\label{fig:motion_est}
\end{figure}
\subsection{Correcting CT-images}
In this section we validate algorithm \ref{algo:correcting} of section \ref{sec:cor_images} by applying it to the examples from figure \ref{fig:examples}. For this we assume we have access to the exact motion $\mathbf{v}$ or the approximation $\hat{ \mathbf{v}}$ made in section \ref{sec:motion_est_num}. We assume we have the exact scan data; in the next section we add noise to our data. As mentioned before, we make our reconstructions at the middle of the first scan, i.e. at time $\Delta t/2$. For this we only need the sinogram of the first scan.
In table \ref{table:rec_ex} we compare the errors with respect to the exact object $f_{\Delta t/2}$ of the following reconstructions: a) the reconstruction $f^{\text{ex}}_{\Delta t/2}$ of object $f_{\Delta t/2}$, computed as if the object were stationary in this state during the data acquisition,
b) the reconstruction $f_{\Delta t/2}^{\text{corr}}$ obtained by correcting the image for the exact motion $\mathbf{v}$, c) the reconstruction $\hat{f}_{\Delta t/2}^{\text{corr}}$ obtained by correcting the image for the estimated motion $\hat{ \mathbf{v}}$, and d) a reconstruction $f^\text{rec}_{\Delta t/2}$ where we do not correct for the motion. Since it is not possible to do better than the reconstruction in the absence of motion, we can conclude our method works if the motion-corrected reconstruction has a similar error. We conclude from this table that the error the algorithm makes (column 2) is indeed similar to the error we make when there is no motion (column 1), so the algorithm to correct for motion works. Comparing the last column with the other columns, we see that correcting the system for the motion has a significant effect on the quality of the reconstruction. This can also be seen in figure \ref{fig:correcting}.
\begin{table}[H]
\begin{center}
\begin{tabular}{ccccc}
\hline
& $\left \| f^{\text{ex}}_{\Delta t/2} - f_{\Delta t/2} \right \|_2$ & $\left \| f_{\Delta t/2}^{\text{corr}} - f_{\Delta t/2} \right \|_2$ & $\left \| \hat{f}_{\Delta t/2}^{\text{corr}} - f_{\Delta t/2}\right \|_2$ & $\left \| f_{\Delta t/2}^{\text{rec}} - f_{\Delta t/2} \right \|_2$\\
\hline
Motion 1: Shift & $1.3589$ & $2.7775$ & $4.0105$ & $5.4246$ \\
Motion 2: Rotation & $2.6729$ & $2.0863$ & $3.2873$ & $12.8958$ \\
Motion 3& $2.5758$ & $2.0819$ & $2.9457$ & $13.0636$ \\
\hline
\end{tabular}
\caption{The error with respect to the exact solution $f_{\Delta t/2}$. The first column is the error of the reconstruction if object $f$ is stationary in the state at time $\Delta t/2$. The second column is the error when we correct for the exact motion. The third column is the error when we correct for the estimated motion. In the last column we put the error when we do not correct for the motion. We see that the errors in the first two columns are similar, which means our method works. When using the estimated motion, we still obtain a far more precise solution than when we do not correct at all.} \label{table:rec_ex}
\end{center}
\end{table}
\begin{figure}[H]
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_ex_shift-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_est_shift-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/frec_shift-cropped.pdf}
\caption{}
\end{subfigure} \\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_ex_rot-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_est_rot-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/frec_rot-cropped.pdf}
\caption{}
\end{subfigure} \\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_ex_onrot-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_est_onrot-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/frec_onrot-cropped.pdf}
\caption{}
\end{subfigure}
\caption{Reconstructions for the 3 different examples from figure \ref{fig:examples}. (a)-(d)-(g) Reconstruction $f_{\Delta t/2}^{\text{corr}}$ for each example. (b)-(e)-(h) Reconstruction $\hat{f}_{\Delta t/2}^{\text{corr}}$ for each example. (c)-(f)-(i) Reconstruction $f^{\text{rec}}_{\Delta t/2}$ without correcting for motion. We can see that the reconstruction with the exact motion works best (first column).}\label{fig:correcting}
\end{figure}
\subsection{Correcting CT-images with noise}
We have seen that our algorithm works given exact scan data. The question arises how it performs with noise added to the data. We apply the same noise as in section~\ref{sec:motion_est_num}. We first check the quality of the reconstruction when we correct for the exact motion using the noisy data, and then we test how the method performs when the motion is estimated from the noisy data and the reconstruction is corrected for this estimated motion.
The results of the reconstructions can be found in table~\ref{table:rec_ex_noise}; we use the same notation as in table \ref{table:rec_ex}. We see that adding normally distributed noise does not much affect the quality of the reconstruction. The error with the exact motion (second column of table~\ref{table:rec_ex_noise}) is also in this case comparable with the error we make when the object is stationary (first column of table~\ref{table:rec_ex_noise}). The same conclusions can be drawn from the images of the reconstructions (see figure~\ref{fig:correcting_noise}).
\begin{table}[H]
\begin{center}
\begin{tabular}{ccccc}
\hline
& $\left \| f^{\text{ex}}_{\Delta t/2} - f_{\Delta t/2} \right \|_2$ & $\left \| f_{\Delta t/2}^{\text{corr}} - f_{\Delta t/2} \right \|_2$ & $\left \| \hat{f}_{\Delta t/2}^{\text{corr}} - f_{\Delta t/2}\right \|_2$ & $\left \| f_{\Delta t/2}^{\text{rec}} - f_{\Delta t/2} \right \|_2$\\
\hline
Motion 1: Shift & $3.3296$ & $4.4059$ & $5.0106$ & $6.1845$ \\
Motion 2: Rotation & $3.5557$ & $3.4496$ & $4.2850$ & $13.386$ \\
Motion 3& $3.4983$ & $3.4416$ & $3.6837$ & $13.506$ \\
\hline
\end{tabular}
\caption{The error with respect to the exact solution $f_{\Delta t/2}$ when noise is added to the images. The first column is the error of the reconstruction if object $f$ is stationary in the state at time $\Delta t/2$. The second column is the error when we correct for the exact motion. The third column is the error when we correct for the estimated motion. The last column shows the error in absence of motion correction. We see that the errors in the first two columns are similar, which means our method works. When using the estimated motion, we get an error of similar magnitude.} \label{table:rec_ex_noise}
\end{center}
\end{table}
\begin{figure}[H]
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_ex_shift_noise-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_est_shift_noise-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/frec_shift_noise-cropped.pdf}
\caption{}
\end{subfigure} \\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_ex_rot_noise-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_est_rot_noise-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/frec_rot_noise-cropped.pdf}
\caption{}
\end{subfigure} \\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_ex_onrot_noise-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/fcor_est_onrot_noise-cropped.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[scale=0.366, clip]{Afbeeldingen/frec_onrot_noise-cropped.pdf}
\caption{}
\end{subfigure}
\caption{Reconstructions for the 3 different examples from figure \ref{fig:examples} with noisy data. Each row represents a different example. (a)-(d)-(g) Reconstruction $f_{\Delta t/2}^{\text{corr}}$ for each example. (b)-(e)-(h) Reconstruction $\hat{f}_{\Delta t/2}^{\text{corr}}$ for each example. (c)-(f)-(i) Reconstruction $f^{\text{rec}}_{\Delta t/2}$ without correcting for motion. We can see that the reconstruction with the exact motion works best (first column).}\label{fig:correcting_noise}
\end{figure}
We see that adding normally distributed noise has no large impact on the quality of the reconstructions. This can be seen in the table as well as in the figure above.
\section{Conclusion} \label{sec:conclusion}
We have shown in this paper that the motion can be determined starting from CT-scan data and that we can use this to correct CT-scan images. In fact, we have shown that techniques from imaging such as optical flow can be used for the dynamic CT problem. Furthermore, the correction for the motion can be done very efficiently because all X-rays can be processed in parallel. Although the results look promising, there is still work to do to refine the proposed method. First of all, we have only tested the presented algorithms on simulated data, so it would be very interesting to see how they perform on real data. Secondly, we calculate the motion in every pixel, although in certain areas we can see that there is no motion present. Certainly when extending our methods to the 3D case, it is no longer feasible to calculate the motion in every single pixel. We have already done some experiments on splitting sinogram data into parts where motion is present and parts where it is not. In the areas where we know that the object stays stationary, we obviously do not need to calculate the motion; moreover, we only need to reconstruct this data once. The question arises how to adapt the methods if we only need to perform everything on a small area. The automatic determination of the regularisation parameter for the motion estimation also remains to be done; if successful, this could significantly improve our methods.
\bibliographystyle{elsarticle-harv}
To extract precise cosmological information from high-sensitivity
cosmic microwave background (CMB) data, the Galactic foregrounds must
be accurately known over a wide frequency range. Galactic radiation,
which comprises 3 well-known components (synchrotron, free-free and
vibrational dust), is the most important on angular scales $\gtrsim
1\arcdeg$. However, many authors have detected an additional
``anomalous'' component in the frequency range $\sim 10-60$~GHz whose
origin is still not understood. The anomalous emission, first detected
in the COBE-DMR maps at 31- and 53-GHz, was initially thought to be
due to free-free emission owing to its correlation with FIR maps and
its spectral index \citep{Kogut96a,Kogut96b}. It soon became clear
that free-free emission could not account for this ``excess'' due to
the lack of \text{H}$\alpha$~emission. Observations of the NCP region at 14.5- and
32-GHz \citep{Leitch97} clearly show the anomalous emission and the
tight correlation with $100~\mu$m maps. Since then, anomalous emission
has been detected by a number of authors \citep{deOliveira-Costa02,
deOliveira-Costa04, Banday03, Lagache03, Finkbeiner04a, Finkbeiner04b,
Casassus04, Watson05, Davies06} but still little is known about the
physical mechanism that produces it. Candidates include spinning dust
grains \citep{Draine98a,Draine98b}, magnetic dust emission
\citep{Draine99}, flat-spectrum synchrotron \citep{Bennett03b} and
free-free emission from very hot electrons \citep{Leitch97}.
The first targeted search for spinning dust emission from individual
objects was carried out by \cite{Finkbeiner02}. Using the Green Bank
140ft telescope, with a resolution of $6 \arcmin$, they made scans of
10 dust clouds to look for the spectral signature of spinning dust
grains i.e. a sharp rise from low frequencies up to a peak at $\sim
20$~GHz. They saw a rise in the flux density over the frequency range
$5-10$ GHz for 2 objects: [LPH96]201.663+1.643 (henceforth LPH96) and
LDN1622, both strongly correlated with FIR maps. The spectrum of
LDN1622 appears to fit a spinning dust model remarkably well, a result
that has recently been confirmed with CBI data at 31~GHz
\citep{Casassus06}. LPH96 is the brighter of the two clouds and is
classed as a diffuse \hii~region \citep{Lockman96}.
In \S\ref{sec:cbi} we present CBI total intensity and polarization
observations of LPH96 in the range $26-36$~GHz. The mapping capability
of CBI provides an angular resolution $\sim 6\arcmin$ (similar to
Green Bank observations). In \S\ref{sec:radio_maps} we compare low
frequency radio maps at 1.4 and 2.7~GHz with CBI data at 31~GHz, both
in the image and Fourier planes. \S\ref{sec:gb_fir} discusses the
Green Bank data of \cite{Finkbeiner02} and the correlations with the
$100~\mu$m dust template. Discussion and conclusions are given in
\S\ref{sec:discussion}.
\section{CBI data}
\label{sec:cbi}
The Cosmic Background Imager (CBI) is a 13-element interferometer
situated on the high altitude Chajnantor site in Chile. In a compact
configuration, baseline lengths range from 1.0m to 4.0m corresponding
to angular scales between $\sim 6\arcmin$ and $\sim 30\arcmin$. The
CBI covers the frequency range $26-36$~GHz in 10 1~GHz
channels. Observations of LPH96 (RA(J2000)=06:36:40,
DEC(J2000)=+10:46:28) were taken on the nights of 15,17 and 19
November 2002 with a combined integration time of $\sim 3000$~s. Each
receiver measures either left (L) or right (R) circular polarization,
thus each baseline measures either total intensity (LL or RR) or
polarization (LR or RL), which are combined in the $u,v$-plane to give
Stokes $I,Q,U$. We assume that circular polarization, $V=0$. A longer
integration of $\sim 3600$~s was made in total intensity mode (all
receivers measuring L only) on 14 January 2003.
The data were reduced using similar routines to those used for CMB
data \citep{Pearson03,Readhead04a,Readhead04b}. Each 8-min integration
on source was accompanied by a trail field, separated by 8 mins in RA,
observed at the same hour angle for subtraction of
ground-spillover. The overall absolute calibration scale is tied to a
Jupiter temperature of $T_{\rm J}=(147.3 \pm 1.8)$~K
\citep{Readhead04a}. Secondary calibrators (Tau-A, Jupiter, 3C274)
were used to estimate a further uncertainty of $\sim 2\%$ in the gains
on a given night. We therefore assign a total calibration uncertainty
of $3\%$.
The final CBI CLEANed total intensity map is shown in
Fig.~\ref{fig:cbi_map}. The excellent $u,v$ coverage provides robust
mapping of both compact and extended emission on scales up to $\sim
30\arcmin$ within the primary beam of $45\arcmin.2$ FWHM at
31~GHz. The peak flux density at 31~GHz is $1.79$~Jy/beam centered on
LPH96 but with some extended emission, mainly to the NW with a
deconvolved angular size of $\sim 20\arcmin$. The noise level is
$\approx 10$~mJy/beam. Low level extended emission is also detected
outside the FWHM of the primary beam, particularly to the SE of
LPH96. Two elliptical Gaussians can account for the majority of the
flux in the CLEANed map with an integrated flux density (after
correcting for the primary beam) of $7.57 \pm 0.4$~Jy and residual rms
of $\approx 30$~mJy. The 2 components are 4.1~Jy centered on the peak
of LPH96 with a deconvolved angular size of $9\arcmin.7 \times
6\arcmin.3$ and an extended component with angular size $20\arcmin.2
\times 17\arcmin.0$ containing 3.5~Jy. By splitting the bands into low
and high frequencies, maps at 28.5 and 33.5~GHz were
produced. However, due to the different $u,v$ coverage, the maps have
different resolution and spatial scales making it difficult to compare
the flux densities of extended sources. Nonetheless, the peak flux
density in a restored $6\arcmin$ FWHM beam was 1.75 and 1.71 Jy/beam
at 28.5 and 33.5~GHz, respectively. This implies a flux spectral
index\footnote{The temperature spectral index $\beta$ is related to
the flux density spectral index $\alpha$ by $\beta = \alpha -2$ in the
Rayleigh-Jeans limit.} $\alpha =-0.14\pm 0.19$ over this range.
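For reference, the two-point power-law index for $S \propto \nu^{\alpha}$ follows directly from these half-band peak flux densities (purely illustrative; the quoted uncertainty also folds in the per-channel errors):
\begin{verbatim}
import numpy as np
alpha = np.log(1.71 / 1.75) / np.log(33.5 / 28.5)   # ~ -0.14
\end{verbatim}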
The polarization mapping capability of the CBI has been demonstrated
from deep CMB observations
\citep{Cartwright05,Readhead04b,Sievers05}. The Stokes $Q$ and $U$
images of LPH96 have a r.m.s. noise level of $\sim 14$~mJy/beam, with
a synthesized beam of $7\arcmin.9 \times 6\arcmin.5$ (FWHM). No
significant polarization is detected above the noise from this region
on these angular scales. From the noise-corrected polarization
intensity map we place a $3 \sigma$ upper limit on the polarization
intensity of 34~mJy. This corresponds to a polarization fraction upper
limit of $2\%$ of the peak. For the extended emission to the NW, with
a brightness of $\sim 0.2$~Jy/beam, the upper limit increases to
$\approx 10\%$.
\section{Comparison with low frequency radio maps}
\label{sec:radio_maps}
At lower frequencies, the all-sky surveys at 408~MHz (Haslam et
al. 1982), 1420~MHz (Reich \& Reich 1986) and 2326~MHz (Jonas et
al. 1998) do not have sufficient angular resolutions ($51\arcmin$,
$35\arcmin$ and $20\arcmin$ FWHM, respectively) to allow a reliable
comparison with CBI data. Instead we use data from the Effelsberg
100-m telescope at 1408~MHz \citep{Reich97} and 2695~MHz
\citep{Fuerst90}. The data\footnote{The Effelsberg survey data were
downloaded from the MPIfR sampler survey website:
http://www.mpifr-bonn.mpg.de/survey.html} are fully-sampled background
subtracted continuum maps with beams of FWHM $9\arcmin.4$ and
$4\arcmin.3$ respectively. The 1.4~GHz map has a peak brightness
temperature $T_{b}=6.88$~K and LPH96 is almost unresolved
(Fig.~\ref{fig:multi_maps}a). The 2.7~GHz map
(Fig.~\ref{fig:multi_maps}b) has a peak $T_{b}=3.38$~K and shows the
extension to the NW detected in the CBI image. After smoothing to a
common resolution of $9\arcmin.4$, the peak brightness at 2.7~GHz is
$T_{b}=1.86$~K, which corresponds to a temperature spectral index
$\beta_{1.4}^{2.7}=-2.02 \pm 0.11$ (we assume a calibration
uncertainty of 5\% in the Effelsberg data).
To allow a comparison with CBI data, the 2.7~GHz map is ``observed''
with the CBI $u,v$ coverage. This involves sampling the fourier
transform of the 2.7~GHz map after multiplying the image by the CBI
primary beam (and also making a correction for the smoothing due to
the Effelsberg beam). The CLEANed image of the 2.7~GHz simulated map
is shown as contours in Fig.~\ref{fig:multi_maps}b. The image is a
good match to the CBI map with a peak flux density of $2.06 \pm
0.10$~Jy/beam and an integrated flux density for the LPH96 region of
$7.6 \pm 0.4$~Jy. The peak flux densities at 2.7 and 31~GHz in the CBI
beam correspond to a flux density spectral index $\alpha=-0.06 \pm
0.03$.
We also compare data in the Fourier plane by simulating the CBI
observation given an input map (as in the image analysis), but
comparing the visibilities directly. This allows direct comparison of
CBI data with multi-frequency maps without the problems of
deconvolution and incomplete $u,v$ coverage. The noise in each
visibility can be treated as independent so we can simply compute the
slope of the best-fitting linear relationship between the visibilities
at the two frequencies. The Pearson's correlation coefficient,
$P=0.93$, indicates a strong correlation between CBI data and the
$2.7$~GHz map. The fitted slopes from 31~GHz to 1.4 and 2.7~GHz are
$2.06\pm 0.12$~mK/K and $7.13 \pm 0.42$~mK/K respectively. This
corresponds to spectral indices of $\beta_{1.4}^{31}=-2.00\pm 0.04$
and $\beta_{2.7}^{31}=-2.02\pm 0.04$. The slopes for each CBI channel
can be calculated in the same way, but are less useful since each
frequency samples different regions of the $u,v$-plane. Nevertheless,
for correlations with the 2.7~GHz data, we get $\beta=-2.19 \pm 0.14$
over the range $26-36$~GHz, in agreement with the image analysis.
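Converting a fitted visibility slope $T_{31}/T_{\nu}$ into a temperature spectral index is direct; the values below reproduce the indices quoted above (illustrative only):
\begin{verbatim}
import numpy as np
beta_1p4 = np.log(2.06e-3) / np.log(31.0 / 1.4)   # -> -2.00
beta_2p7 = np.log(7.13e-3) / np.log(31.0 / 2.7)   # -> -2.02
\end{verbatim}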
\section{Comparison with Green Bank data and FIR maps}
\label{sec:gb_fir}
The Green Bank data presented by \cite{Finkbeiner02} consist of
$48\arcmin$ long scans at 5,8.25 and 9.75 GHz, smoothed to a
resolution $6\arcmin$ FWHM. The scans were chopped at $12\arcmin$ and
a smooth baseline subtracted to remove ground (sidelobe) and
atmospheric contamination. This makes it difficult to obtain accurate
flux densities for extended structures.
However, \cite{Finkbeiner02} do find a tight correlation with the
\cite{Schlegel98} $100~\mu$m dust map which can be used to estimate
the flux density. If the emissivity is constant within a given region,
this allows comparison of data with different resolutions and/or
observing strategies. The temperature-corrected $100~\mu$m dust map is
shown in Fig.~\ref{fig:multi_maps}c with a peak brightness of
479~MJy/sr at $6\arcmin.1$ resolution. The averaged emissivities from
\cite{Finkbeiner02}, referenced to the $100~\mu$m map, are $2770.5 \pm
13.8$, $1361.0 \pm 48.0$ and $1262.5 \pm 49.5~\mu$K/(MJy/sr) at 5,
8.25 and 9.75~GHz, respectively. The Pearson's correlation was
$P=0.91$ indicating the overall similarity between the radio and
$100~\mu$m data. The dust map was scaled with these values and
observed with the CBI beam. For a restored CLEAN beam of $6\arcmin$
(FWHM), the peak flux densities were $1.97 \pm 0.23$, $2.64 \pm 0.10$,
$3.38 \pm 0.14$~Jy at 5, 8.25 and 9.75~GHz, respectively.
To test whether there is a significant variation in
$100~\mu$m-referenced emissivity between extended and compact emission
(which would invalidate the comparison), the correlation coefficient
was calculated within a restricted $u,v$-range corresponding to either
small or large angular scales. The fitted $100~\mu$m emissivity at
31~GHz was $41.6 \pm 1.2~\mu$K/(MJy/sr) and did not change significantly
($\lesssim 5\%$) for angular scales between $6\arcmin$ and
$30\arcmin$. Joint cross-correlations with multiple maps were not
attempted since the structure in the radio and FIR maps is similar and
would not allow a reliable separation of components.
\section{Discussion and conclusions}
\label{sec:discussion}
To quantify the contribution of dust emission at GHz frequencies, the
free-free emission must be accurately known. The analysis of
\cite{Finkbeiner02} relied on \text{H}$\alpha$\ data to place limits on the amount
of free-free radiation, which was found to be dominant at
5~GHz. However, the effects of dust extinction can clearly be seen in
the \text{H}$\alpha$\ map \citep{Gaustad01} shown in
Fig.~\ref{fig:multi_maps}d. LPH96 is visible at a level of $\sim
250$~R. For optically thin emission at 2.7~GHz, and assuming a typical
electron temperature of $T_{e}\sim 7000-10000$~K, the \text{H}$\alpha$-to-free-free
conversion is $\approx 1~$mK/R \citep{Dickinson03}. The measured
electron temperature from radio recombination lines (RRL) is
$T_{e}=9100$~K \citep{Shaver83}. This would predict 0.25~K of
free-free emission based on \text{H}$\alpha$\ intensities and would require an
absorption factor of $> 10$ to match the radio flux density. Predictions
based on \text{H}$\alpha$\ are therefore unreliable. Observations of
RRLs provide a complementary and clean method of separating the
free-free thermal emission, without the effects of dust
absorption. \cite{Lockman96} measure a H126$\alpha$ line temperature
for LPH96 of $T_{L}=(24 \pm 2.6)$~mK at $9\arcmin$
resolution. Assuming optically thin emission in local thermodynamic
equilibrium, for $T_{e}=9100$~K, this would predict $4.9\pm 0.5$~K of
free-free emission at 1.4~GHz in a $9\arcmin$ beam. This implies that
the emission remains optically thin down to at least
1.4~GHz, thus allowing reliable extrapolation from 2.7~GHz to higher
frequencies. There is low-level extended emission to the SE of LPH96
that has a similar spectrum suggesting that the emission in this
entire region is dominated by optically thin free-free emission. The
WMAP data\footnote{WMAP all-sky maps available from
http://lambda.gsfc.nasa.gov/.} \citep{Bennett03a}, smoothed to
$1\arcdeg$ resolution, have a spectral index $\alpha=-0.22 \pm 0.14$
over the range $20-40$~GHz. A detailed comparison of WMAP data and CBI
data is not attempted due to the mismatch of resolutions.
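As a back-of-envelope version of the extinction argument above (a sketch of ours; the full-resolution 2.7~GHz peak is used as the radio brightness, so the match to the $4\arcmin$ \text{H}$\alpha$\ beam is only approximate):
\begin{verbatim}
# Predicted free-free from H-alpha versus the observed radio brightness.
T_ff_pred = 250 * 1e-3    # 250 R x ~1 mK/R -> predicted T_ff [K]
T_obs = 3.38              # observed 2.7 GHz peak T_b [K], 4'.3 beam
print(T_obs / T_ff_pred)  # -> ~14: an absorption factor > 10 is needed
\end{verbatim}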
The spectrum of LPH96, in terms of the flux density within a
$6\arcmin$ (FWHM) Gaussian beam centered on LPH96, is shown in
Fig.~\ref{fig:spectrum}; $1.98 \pm 0.10$~Jy at 2.7~GHz and $1.71 \pm
0.05$~Jy at 31~GHz. The Green Bank points were estimated by
calculating the flux density in the $6\arcmin$ CBI beam of the
$100~\mu$m map scaled by the factors as measured by
\cite{Finkbeiner02} (see \S\ref{sec:gb_fir}). The gray shaded area
shows the allowed range for the free-free model based on extrapolating
from 2.7~GHz using $-0.12< \alpha <-0.02$. The range is based on the
flattest derived spectral index ($\alpha=-0.02$) and the theoretical
value ($\alpha=-0.12$); the best-fitting value from the image
analysis, $\alpha=-0.06 \pm 0.03$, is in the middle of this range. The
5~GHz point is close to the free-free model, but at $8-10$~GHz, the
Green Bank data show a significant excess that cannot be reconciled
with the 2.7 and 31~GHz data points. A spinning dust model for the
Warm Neutral Medium \citep{Draine98b}, scaled in amplitude to fit the
Green Bank data, is depicted in Fig.~\ref{fig:spectrum}. From this,
one would expect to see considerable emission in the $26-36$~GHz CBI
band, well above the free-free emission, which is not seen. It is
interesting to note that the Galactic plane survey of
\cite{Langston02} at 8.35~GHz ($11\arcmin.167$ beam) detected LPH96
with a peak flux density of 3.69~Jy and integrated flux density of
6.74~Jy. But no detection was made at 14.35~GHz ($8\arcmin$ beam),
with a detection limit of 2.5~Jy. Given the spectral rise observed
from $5-10$~GHz, one would expect $\approx 5$~Jy at 14.35~GHz.
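The expected 14.35~GHz flux density follows from a simple power-law extrapolation; the sketch below (ours, using the $6\arcmin$-beam Green Bank values quoted in \S\ref{sec:gb_fir}) makes the arithmetic explicit:
\begin{verbatim}
import numpy as np

# Extrapolate the rising 5-10 GHz spectrum to 14.35 GHz.
S5, S975 = 1.97, 3.38                            # Jy at 5 and 9.75 GHz
alpha = np.log(S975 / S5) / np.log(9.75 / 5.0)   # -> about +0.8
S1435 = S975 * (14.35 / 9.75) ** alpha
print(f"alpha = {alpha:.2f}, S(14.35 GHz) ~ {S1435:.1f} Jy")
# -> ~4.6 Jy, i.e. roughly the ~5 Jy expectation quoted above
\end{verbatim}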
If the spectrum is indeed increasing from 5 to 10 GHz, then the lack
of anomalous emission at CBI frequencies suggests that either i)
current spinning dust models are not a good fit for this particular
cloud or ii) the emission is due to another
mechanism. \cite{McCullough02} have suggested that the rising spectrum
could occur if there was a dense optically thick ultracompact H{\sc
ii} region within the cloud, which would exhibit a rising spectrum
($\alpha \approx 2$) with a flux density in the range $1.6-4.4$~Jy at
15~GHz. High resolution data are required to definitively rule out
this model, but it is unlikely given the CBI flux
density.\footnote{D. Finkbeiner et al. (priv. comm) have recently
re-observed LPH96 using the 100m Green Bank Telescope (GBT) in the
frequency range $5-18$~GHz. They could not reproduce the rising
spectrum as seen with the 140ft telescope.}
From the image analysis, we can constrain the non-free-free
contribution at 31~GHz by subtracting the free-free model based on the
extrapolation of 2.7~GHz data. A spectral index $\alpha=-0.06\pm 0.03$
accounts for essentially all the emission seen at 31~GHz and is close
to the canonical value of $\alpha=-0.1$ for free-free emission at GHz
frequencies. Adopting the theoretical value, $\alpha=-0.12$ for
$T_{e}=9100$~K, the predicted flux density at 31~GHz in a $6\arcmin$
beam becomes $1.48 \pm 0.07$ Jy. This leaves $0.23 \pm 0.09$~Jy, or
$14\%$ of the total 31~GHz flux density, that could be due to an
additional anomalous component. At this level the dust emissivity
would be $\approx 6~\mu$K/(MJy/sr), within the range observed at high
latitudes (e.g. \cite{Banday03}). It is therefore possible that there
is a non-negligible anomalous dust component in this H{\sc ii} region
that is not detected because of the much brighter thermal
emission. Given the sensitivity to the assumed spectral index, we do
not claim a significant detection of anomalous emission at 31~GHz. We
place an upper limit 0.41~Jy ($2\sigma$) on the anomalous emission at
31~GHz in a $6\arcmin$ beam, which corresponds to $24\%$ of the total
flux density.
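The subtraction arithmetic behind these numbers is summarized in the following sketch (ours, using only values quoted above):
\begin{verbatim}
import numpy as np

# Residual (possibly anomalous) 31 GHz emission after removing the
# free-free prediction extrapolated with alpha = -0.12.
S31, err31 = 1.71, 0.05   # total 31 GHz flux density in 6' beam [Jy]
Sff, errff = 1.48, 0.07   # predicted free-free at 31 GHz [Jy]
excess = S31 - Sff
err = np.hypot(err31, errff)
print(excess, err)                    # -> 0.23 +/- ~0.09 Jy
print(100 * excess / S31)             # -> ~13-14% of the total
print(excess + 2 * err,               # -> ~0.4 Jy (2 sigma), i.e.
      100 * (excess + 2 * err) / S31) #    ~24% of the total
\end{verbatim}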
The lack of polarization seen in the CBI data is consistent with
free-free emission. If $14\%$ of the emission at 31~GHz were indeed
anomalous, then the polarization of this component is $\lesssim 10\%$
($2\sigma$).
($2\sigma$). This largely rules out emission from aligned grains of
strongly magnetic material that is expected to be highly polarized
\citep{Draine99}.
More data are required to clarify the origin(s) of the anomalous
emission, both at high and low Galactic latitudes, and to investigate
the conditions in which it occurs. For LPH96, data in the range
$5-20$~GHz are required to investigate further the anomalous emission
reported by \cite{Finkbeiner02}. A detailed understanding of
anomalous/spinning dust emission will be important for modeling and
removal of CMB foregrounds at frequencies $\lesssim 100$~GHz,
particularly if they exhibit significant polarization.
\begin{figure*}[!h]
\centering \includegraphics[width=0.4\textwidth, angle=0]{f1.eps}
\caption{CBI 31~GHz total intensity CLEANed map of LPH96. The uniform weighted synthesized beam FWHM is $6\arcmin.5 \times 5\arcmin.9$. The primary beam ($45\arcmin.2$ FWHM) has not been corrected for in this image. Contours are at $-0.5 (dashed),0.5,1,2,4,8,16,32,64\%$ of the peak flux density, 1.79 Jy/beam.}
\label{fig:cbi_map}
\end{figure*}
\begin{figure*}[!h]
\centering
(a)~ \includegraphics[width=0.2\textwidth, angle=0]{f2a.eps}
(b)~ \includegraphics[width=0.2\textwidth, angle=0]{f2b.eps}
(c)~ \includegraphics[width=0.2\textwidth, angle=0]{f2c.eps}
(d)~ \includegraphics[width=0.2\textwidth, angle=0]{f2d.eps}
\caption{Multi-frequency maps of the LPH96 region, covering the same angular scale as Fig.~\ref{fig:cbi_map}, at their original resolutions (see text). (a) Effelsberg 21cm continuum map (units of K). (b) Effelsberg 2.7~GHz continuum map (units of K) overlaid with a CBI simulation of this data. The simulated map has been CLEANed and primary beam corrected down to $10\%$ of the peak. Contours are at $-2(dashed),2,4,8,16,32,64\%$ of the peak intensity. (c) \cite{Schlegel98} $100~\mu$m temperature-corrected dust map (units of MJy/sr). (d) SHASSA continuum-subtracted \text{H}$\alpha$~map at $4\arcmin$ resolution (units of Rayleigh).}
\label{fig:multi_maps}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.50\textwidth, angle=0]{f3.eps}
\caption{Spectrum of LPH96. Data points are in terms of the peak flux density in a $6\arcmin$ FWHM Gaussian beam, after being observed with the CBI beam. The Effelsberg 2.7~GHz point ({\it diamond}) is assumed to be due to free-free emission only. The dashed line is the emission extrapolated from 2.7~GHz with a flux density spectral index $\alpha=-0.06$. The shaded area represents the possible range for the free-free extrapolation from 2.7~GHz with $-0.12< \alpha <-0.02$ (see text). The CBI flux density at 31 GHz ({\it square}) is seen to be dominated by free-free emission alone. The Green Bank 140 ft data, at 5, 8.25 and 9.75~GHz ({\it crosses}), are evaluated using correlations with the $100~\mu$m dust template (see text). A spinning dust model for the Warm Neutral Medium \citep{Draine98b}, scaled to fit the Green Bank data, is shown as the {\it dotted line}. {\it Solid line} is the sum of free-free and spinning dust components.}
\label{fig:spectrum}
\end{figure*}
\acknowledgements{We gratefully acknowledge support from the Kavli
Operating Institute and thank B. Rawn and S. Rawn Jr for their
continuing support. We are also grateful for the support of M. and
R. Linde, C. and S. Drinkward and the provost, president, and PMA
division chairman of the California Institute of Technology. The CBI
was supported by NSF grants 9802989, 0098734 and 0206416. We
acknowledge the use of the Legacy Archive for Microwave Background
Data Analysis (LAMBDA). Support for LAMBDA is provided by the NASA
Office of Space Science. We acknowledge the use of NASA's SkyView
facility (http://skyview.gsfc.nasa.gov) located at NASA Goddard
Space Flight Center. The Southern H-Alpha Sky Survey Atlas (SHASSA)
is supported by the NSF. CD thanks Patricia Reich for maintaining
the MPIfR image retrieval facility and making the Effelsberg data
available. We thank Doug Finkbeiner for informing us of new GBT
observations of LPH96. CD thanks Barbara and Stanley Rawn Jr for
funding a fellowship at Caltech. SC acknowledges support from
FONDECYT grant 1030805, and from the Chilean Center for Astrophysics
FONDAP 15010003. JLP acknowledges the grant MECESUP UCH0118 given by
the Chilean Ministry of Education.}
\bibliographystyle{apj}
\subsection{Improving Monte Carlo Integration}
\label{sec: CVs}
The problem of approximating expectations $\mathbb{E}_{X \sim P}[ f(X) ]$, where $f:\X \rightarrow \mathbb{R}$ is a test function of interest, can also be addressed using Stein's method. In Bayesian statistics, it is most common for expectations to be approximated using ergodic averages from MCMC, though of course the algorithms described in Sections \ref{sec: SVGD} and \ref{sec: sampling sec} can also be used. The convergence of estimators based on MCMC is characterized by the central limit theorem, whose asymptotic variance will depend on the variance of $f$ along the sample path of the Markov chain \citep[see Chapter 17 of][]{meyn2012markov}. One approach to reducing this asymptotic variance is to use so-called \emph{control variates}. This consists of designing a function $h:\X \rightarrow \mathbb{R}$ such that, if we re-write the expectation as
\begin{talign*}
\mathbb{E}_{X \sim P}[f(X)] = \mathbb{E}_{X \sim P}[h(X)] + \mathbb{E}_{X \sim P}[f(X) - h(X)],
\end{talign*}
then the first term on the right-hand side is known analytically (by some auxiliary argument) and the second integrand, $f - h$, should have smaller variance than $f$ along the sample path of the Markov chain.
In this way estimation of the original expectation is reduced to estimation of an alternative expectation which is more amenable to MCMC.
Indeed, in an ideal situation we would pick $h$ such that $f-h$ is constant along the sample path of the Markov chain, meaning that the ergodic average is exact after just one iteration of the chain has been performed
\citep{Mira2013}.
The principal limitation to the successful application of control variates is the identification of a set of candidates for $h$ that (a) is sufficiently rich to approximate $f$ and (b) for which the expectations $\mathbb{E}_{X \sim P}[h(X)]$ can be evaluated. Several authors have developed bespoke solutions that are specific to a particular MCMC algorithm, including \citet{Andradottir1993,stein2004use,henderson2004adaptive,dellaportas2012control,mijatovic2018poisson}. It was pointed out in \cite{Oates2017} that the image of a Stein operator adapted to $P$ can serve as such a set in general.
In concrete terms, one may identify a Stein operator $\operator{}$ and a Stein set $\mathcal{G}$ that are adapted to $P$ and then attempt to pick an element $g \in \mathcal{G}$ for which $f - h \approx \text{constant}$ along the Markov chain sample path, where $h = \operator{g}$. This problem is closely related to numerical solution of the Stein equation \eqref{eq:steinequ}. Several authors have addressed this issue in recent years.
In \cite{Assaraf1999,Mira2013,Oates2016thermo}, the authors selected $g$ from a set of all polynomials of a fixed maximum degree, minimising the squared error $J_n(g) = \sum_{i=1}^n (f(x_i) - \mathcal{T}g(x_i))^2$ along the Markov chain sample path $\{x_1,\ldots,x_n\}$, with no complexity penalty used.
In \cite{south2018regularised}, the authors used an $\ell_1$ or $\ell_2$ penalty on the polynomial coefficients and recommended that cross-validation could be used as a means to select an appropriate polynomial degree.
Kernel methods with a minimum norm penalty were proposed in \cite{Oates2017,Oates2016CF2,Barp2019}.
In \cite{south2020semi}, the authors showed how polynomials and reproducing kernels can be combined in a manner that leads to polynomial exactness of the control variate estimator in the Bernstein--von--Mises limit.
The use of neural networks for $g$ was empirically assessed in \cite{Zhu2018,Si2020}.
If one specializes to particular MCMC algorithms then it may be possible to consistently estimate the asymptotic variance under the Markov chain, which can in turn be used to construct a more appropriate functional $J_n$.
This approach is exemplified in \cite{belomestny2017variance,belomestny2019variance,belomestny2020variance}.
The diverse set of approaches for constructing control variates based on Stein operators supports the view that no single method will be universally optimal for all real-world computational problems and, to some extent, the estimation of a suitable control variate remains as much an ``art'' as the design of an efficient MCMC method.
As an example, we refer to \cite{liu2017action} for a detailed application of Stein control variates to policy optimization in reinforcement learning.
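To make this concrete, the following minimal sketch (ours, not code from the cited works) fits a polynomial Stein control variate in one dimension, in the spirit of the least-squares construction described above. It assumes the standard normal target, for which $\nabla \log p(x) = -x$, and the Langevin Stein operator $(\operator{}g)(x) = g'(x) + g(x)\,\nabla \log p(x)$; since $\operator{}g$ is then linear in the polynomial coefficients, minimizing $J_n$ is ordinary least squares.
\begin{verbatim}
import numpy as np

# Stein control variate for P = N(0,1): (Tg)(x) = g'(x) - x g(x), and
# for the monomial g(x) = x^j this is j x^(j-1) - x^(j+1).
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)   # stand-in for an MCMC sample path
f = x**2                          # test function with E_P[f] = 1

degree = 3
Phi = np.column_stack([j * x**(j - 1) - x**(j + 1)
                       for j in range(1, degree + 1)])
coef, *_ = np.linalg.lstsq(Phi, f - f.mean(), rcond=None)
h = Phi @ coef                    # h = Tg, so E_P[h] = 0 exactly

print("plain MC estimate:  ", f.mean())
print("controlled estimate:", (f - h).mean())
print("variance ratio:     ", f.var() / (f - h).var())
\end{verbatim}
Here $f-\operator{}g$ is nearly constant along the sample, so the controlled estimate is very close to the exact value $1$, illustrating the ideal situation described above.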
\subsection{Goodness-of-fit Testing}
We will now discuss how Stein operators and discrepancies can also be used to test for goodness-of-fit. We will focus on two lines of work in particular. The first, based on KSDs, can be applied to the problem of goodness-of-fit to a fixed distribution $P$ whose density is known up to a normalizing constant. The second, based on Stein operators, has been studied for the problem of testing for goodness-of-fit to a whole parametric family of distributions.
\subsubsection{Goodness-of-fit Tests from Stein Discrepancies}\label{kernelSteinEmbedding}
Suppose we would like to test for the null hypothesis $H_0: Q = P$ based on realizations $\{x_1, \ldots, x_n\}$ from $Q$ (which may or may not be independent). One approach to this problem, first proposed by \citet{Chwialkowski2016,Liu2016test}, is to use a KSD as test statistic. These tests are motivated by the general approach of using IPMs within a hypothesis testing framework. In particular, an influential line of work in machine learning has been to use IPMs with a kernel-based underlying function class, leading to the so-called MMD hypothesis tests \citep{gretton2006kernel,gretton2012kernel}.
This approach has previously been used to test for a range of hypotheses, including two-sample tests and independence tests. Their popularity can be explained through their generality: they only rely on the choice of a kernel and samples from
both $P$ and $Q$, and can hence be implemented for a wide range of problems.
In the goodness-of-fit setting, when $P$ has a density known only up to a normalizing constant, sampling from $P$ may introduce unnecessary variance into our test statistic. Such a test is also somewhat sub-optimal since it does not use any specific properties of $P$. It is therefore natural to consider the use of Stein operators in this setting.
This can be achieved by selecting an IPM whose underlying function class is of the form $\operator{g}$ for $g$ in some Stein set $\mathcal{G}$. When using a Langevin Stein operator and kernel Stein set, this leads to the Langevin KSD of Example \ref{example:Langevin_KSD}, which is the case most often considered in this literature. Recalling the expression for the population Langevin KSD given in equation (\ref{ksd}),
an unbiased estimate of the squared KSD takes the
convenient form of a U-statistic:
\begin{talign*}
\widehat{\mathrm{KSD}}_{k}^2(Q)=
\frac{2}{n(n-1)}
\sum_{i<j}
k_P (x_i,x_j).
\end{talign*}
This estimate can be used as a test statistic. It is degenerate under the null hypothesis that $Q=P,$ and non-degenerate
under the alternative. As a result, the asymptotic behaviour of the
statistic is obtained via standard results \citep{serfling2009approximation}. Unfortunately, the asymptotic distribution under the null is a function of the eigenvalues of $k_P$ with respect to $Q$, which are rarely computable in closed form.
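For intuition, the following minimal sketch (ours) evaluates this U-statistic for a univariate Gaussian target, with score $\nabla \log p(x) = -x$ and a Gaussian base kernel, using the standard closed form of the Stein-modified kernel $k_P$:
\begin{verbatim}
import numpy as np

# k_P(x,y) = s(x)s(y)k + s(x) d_y k + s(y) d_x k + d_x d_y k,
# for target N(0,1) (score s(x) = -x) and k(x,y) = exp(-(x-y)^2/(2 l^2)).
def stein_kernel(x, y, l=1.0):
    d = x[:, None] - y[None, :]
    k = np.exp(-d**2 / (2 * l**2))
    dkx = -d / l**2 * k                    # derivative in x
    dky = d / l**2 * k                     # derivative in y
    dkxy = (1 / l**2 - d**2 / l**4) * k    # mixed second derivative
    sx, sy = -x[:, None], -y[None, :]
    return sx * sy * k + sx * dky + sy * dkx + dkxy

rng = np.random.default_rng(0)
x = rng.standard_normal(500)               # sample from Q (here Q = P)
K = stein_kernel(x, x)
n = len(x)
ksd2_hat = (K.sum() - np.trace(K)) / (n * (n - 1))   # the U-statistic
print(ksd2_hat)   # near 0 when Q = P; positive in expectation otherwise
\end{verbatim}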
Nonetheless, a test threshold of asymptotic level $\alpha$ may be obtained using a wild-bootstrap procedure on a V-statistic approximation to the KSD. The wild bootstrap may also be adapted to the case where the sample from $Q$ is not i.i.d., but satisfies a $\tau$-mixing condition \citep{LeuNeu13}. This is especially helpful when the goodness-of-fit test is used for bias quantification of approximate MCMC procedures since these are not i.i.d.\ \cite[Section 4]{Chwialkowski2016}.
In order to guarantee consistency of the tests, it is of interest to establish when the KSD uniquely determines
whether $Q$ and $P$ correspond. We refer to \citet[Theorem 2.2]{Chwialkowski2016}:
if $k$ is $C_{0}$-universal \cite[Definition 4.1]{Carmeli2010},
and if $\mathbb{E}_{X\sim Q}[\Vert \nabla(\log (p(X)/q(X)))\Vert_2 ^{2}]<\infty$,
then $\mathrm{KSD}_{k}(Q)=0$ if and only if $P=Q$. Many kernels popularly used in practice, including the exponentiated quadratic (Gaussian) kernel $k(x,y) = \exp(-\|x-y\|^2_2/l^2)$ $(l>0)$, are $C_{0}$-universal. We recall the result of \citet{Gorham2017}, however, that stronger conditions on the kernel are required when it is desired to control {\em weak convergence} to a target using the KSD.
Apart from U-statistic based tests, alternative tests exist which
can be computed in linear time, using adaptive kernel
Stein features that indicate where the data distribution $Q$ differs
from the model $P$ \citep{Jitkrittum2017}, or importance sampling approaches \citep{HugginsMa2018}. In the former case, the features are learned on a held-out sample from $Q$, so
as to maximize the power of the resulting test. Stein goodness-of-fit
tests may also be defined for right-censored time-to-event data \citep{fernandez2020kernelized}.
Right censoring is commonly observed in survival analysis studies,
where patients might leave a medical trial before the
outcome is observed.
The Stein tests of \citet{fernandez2020kernelized} are used to validate models of survival times --- for instance, whether these follow a Cox proportional hazard model --- in a number of real-world medical studies (of Leukemia, Chronic Granulomatous Disease, Ovarian Cancer, Lung Cancer), where right censoring is present.
Stein tests also yield state-of-the-art performance in detecting departures from the null for more complex hazard functions in the presence of right censoring, for instance the periodic hazard functions describing seasonal diseases such as influenza.
For discrete distributions, KSD tests include the work of \citet{yang2018goodness} which derives a discrepancy for discrete data, that of \citet{yang2019stein} which focused on point processes, and the work of \cite{xu2021stein} which introduces a goodness-of-fit test for exponential random graph models.
\subsubsection{Composite Goodness-of-fit Tests from Stein Operators}
\label{sec:stein-oper-goodn}
Now we move on to the design of tests for the composite {null hypothesis} $H_0:\;Q\in\mathscr{P}_\Theta=\{P_\vartheta:\,\vartheta\in\Theta\}$, which we will test given i.i.d.\ data $\{x_1, \ldots, x_n\}$ from $Q$. Here $\Theta\subset\mathbb{R}^s$, $s\in\mathbb{N}$, is an open parameter space, and $P_\vartheta$ is the unique distribution corresponding to $\vartheta\in\Theta$ in the parametric family $\mathscr{P}_\Theta$. Many classical goodness-of-fit problems fall into this category, including tests for normality, exponentiality, Poissonity or for families such as the gamma law.
Here we shall consider parametric families of Stein operators. Let $\{\opsub{\vartheta}:\vartheta\in\Theta\}$ be a family of Stein operators characterizing the family $\mathscr{P}_\Theta$. By the Stein characterization we have $\mathbb{E}_{X \sim P_\vartheta}[(\opsub{\vartheta} g)(X)]=0$ for all $g\in\mathcal{G}(\opsub{\vartheta})$ and $\vartheta\in\Theta$; see \citet{ley2016} for more information on parametric Stein operators. A test for the composite hypothesis based on a suitable set of test functions $\mathcal{G}=\{g_t(x): t\in M\}$, $M\subset\mathbb{R}^d$, is then given by the weighted $L^2$ statistic
\begin{talign}\label{eq:Tn}
T_n
& = n \int_{M}\big\|\frac{1}{n}\sum_{i=1}^n(\opsub{\widehat{\vartheta}_n}g_t)(x_i) - \E_{X \sim P_{\widehat{\vartheta}_n}}[(\opsub{\widehat{\vartheta}_n}g_t)(X)]\big\|^2\omega(t)\,\mbox{d}t \\
& = n\int_{M}\big\|\frac1n\sum_{i=1}^n(\opsub{\widehat{\vartheta}_n}g_t)(x_i)\big\|^2\omega(t)\,\mbox{d}t,\nonumber
\end{talign}
where $\widehat{\vartheta}_n$ is a consistent estimator of $\vartheta$, $\|\cdot\|$ is a suitable norm and $\omega:M\rightarrow[0,\infty)$ is a positive weight function satisfying some weak integrability conditions. Heuristically, $T_n$ should be close to 0 if and only if the data stems from $\mathscr{P}_\Theta$, and we will hence reject $H_0$ for large values of $T_n$.
The expression in \eqref{eq:Tn} can be thought of as a weighted $L^2$-difference between the expectation of $\opsub{\widehat{\vartheta}_n}g_t$ under $P_{\widehat{\vartheta}_n}$ and $Q_n=n^{-1} \sum_{i=1}^n \delta_{x_i}$. This is in contrast with the IPMs, such as the KSD of the previous section, which measure worst-case types of differences (recall Equation \ref{eq:IPM} which considers the supremum instead of an average). As a result, although the tests in Sections \ref{kernelSteinEmbedding} and \ref{sec:stein-oper-goodn} are both based on Stein operators, they use these in rather different manners. For the tests in this section, the benefit of considering the structure of a $L^2$-Hilbert space lies in the fact that the central limit theorem for Hilbert-space valued random elements can be exploited to derive limit distributions under $H_0$, as well as fixed and contiguous alternatives.
We now illustrate the approach for the multivariate normal distribution. Testing for normality is the most classical goodness-of-fit problem and hence extensively studied in the literature; see \citet{Henze2002} and \citet{ebner2020tests} for surveys.
\begin{example}[Testing for multivariate normality]\label{ex:gofnorm}
Consider $H_0:\,Q\in{\cal N}_d=\{\mathrm{N}_d(\mu,\Sigma):\mu\in\mathbb{R}^d,\,\Sigma\in\mathbb{R}^{d\times d}\; \mbox{positive definite}\}$. In the following, let $y_{n,j} := S_n^{-1/2}(x_j - \overline{x}_n)$ for $j=1,\ldots,n$ denote the so-called scaled residuals, where $\overline{x}_n= n^{-1} \sum_{j=1}^n x_j$, and $S_n := n^{-1} \sum_{j=1}^n(x_j-\overline{x}_n)(x_j-\overline{x}_n)^\intercal$. Further assume $n\ge d+1$ to ensure that $S_n$ has full rank. Note that if $Q\in{\cal N}_d$, $y_{n,1},\ldots,y_{n,n}$ should be approximately independent with a distribution close to $\mathrm{N}_d(0,I)$. \citet{Henze2020} implicitly used the classical Stein operator $\operator{}$ from Example \ref{ex:steingauss} with the Stein set of moment generating functions $\{g_t(x)=\exp(t^\intercal x): t\in\mathbb{R}^d\}$. For the scaled residuals, this gives the empirical moment generating function $\psi_n(t)=n^{-1}\sum_{j=1}^n \exp(t^{\intercal}y_{n,j})$, in which case the test statistic in \eqref{eq:Tn} becomes
\begin{talign}\label{eq:TNmultnorm}
T_{n,a} = n\int_{\mathbb{R}^d} \|\nabla \psi_n(t) - t \psi_n(t)\|^2_{2} \ \omega_a(t) \, \mathrm{d}t,
\end{talign}
where $ \omega_a(t)=\exp(-a\| t \|_2^2)$, $a>0$, is chosen to allow closed form computation of the integral, and $a$ is a ``tuning'' parameter. We write $y_{n,j,k}^+=
y_{n,j}+y_{n,k}$ and get the numerically stable representation
\begin{talign*}
T_{n,a} = \frac1n\left(\frac{\pi}{a}\right)^{d/2}\sum_{j,k=1}^n\exp\Big(\frac{\|y_{n,j,k}^+\|_2^2}{4a}\Big)\Big(y_{n,j}^\intercal y_{n,k}-\frac{\|y_{n,j,k}^+\|_2^2}{2a}+\frac{\|y_{n,j,k}^+\|_2^2}{4a^2}+\frac{d}{2a}\Big).
\end{talign*}
Since $T_{n,a}$ is only a function of the scalar products $y_{n,j}^\intercal y_{n,k}$, for $j,k = 1,\ldots,n$, the test is affine invariant, a desirable property which ensures that the limit null distribution of $T_{n,a}$ does not depend on the underlying parameters of the normal law.
\end{example}
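For illustration, the closed form of $T_{n,a}$ is straightforward to evaluate; the following sketch (ours, a direct transcription of the formula above rather than reference code) computes it from an $n \times d$ data matrix:
\begin{verbatim}
import numpy as np

def T_na(x, a=2.0):
    n, d = x.shape
    xbar = x.mean(axis=0)
    S = np.cov(x, rowvar=False, bias=True)        # divisor n, as in S_n
    evals, evecs = np.linalg.eigh(S)
    S_inv_half = evecs @ np.diag(evals**-0.5) @ evecs.T
    y = (x - xbar) @ S_inv_half                   # scaled residuals
    G = y @ y.T                                   # inner products y_j.y_k
    vsq = np.add.outer(np.diag(G), np.diag(G)) + 2 * G   # ||y_j+y_k||^2
    inner = G - vsq / (2 * a) + vsq / (4 * a**2) + d / (2 * a)
    return (np.pi / a)**(d / 2) * np.sum(np.exp(vsq / (4 * a)) * inner) / n

rng = np.random.default_rng(1)
print(T_na(rng.standard_normal((200, 3))))     # moderate under H0
print(T_na(rng.exponential(size=(200, 3))))    # typically much larger
\end{verbatim}
Critical values would in practice be obtained from the limit null distribution or by Monte Carlo simulation under the normal law.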
An alternative test of univariate normality based on $\operator{}$ from Example \ref{ex:steingauss} is proposed in \citet{ebner2020combining}, but in this case test functions of the form $\{g_t(x)=\exp(\mathrm{i}t x): t\in\mathbb{R}\}$ (i.e. related to characteristic functions) are used. \citet{Doerr2020} also introduce a test of multivariate normality, based on $\oparg{g}{x}=-\Delta g(x)+(\|x\|_2^2-d)g(x)$ (where $\Delta$ denotes the Laplacian), and the class of test functions $\{g_t(x)=\exp(\mathrm{i}t^\top x): t\in\mathbb{R}^d\}$. There are considerable differences in power against specific alternatives between the tests, especially w.r.t.\ the choice of test functions. For a comparative Monte Carlo simulation study see \citet{ebner2020tests}.
In a similar vein, \citet{Betsch2019} and \citet{Betsch2020discrete} provide new characterizations of continuous and discrete parametric families of distributions through the density approach, which have led to novel tests for univariate normality \citep{betsch2020testing}, the gamma family \citep{Betsch2019a} and the inverse Gaussian law \citep{Allison2019}. Note that other test statistics of type (\ref{eq:Tn}) based on Stein operators are implicitly proposed in tests for parametric families, although originally motivated by characterizing (partial) differential equations for integral transforms; see, for instance, \citet{Baringhaus1991} for a test of exponentiality, \cite{BH:1992} for a test of Poissonity, and \citet{HME:2012} for a test of the gamma law.
Overall, characterizations of probability distributions based on Stein operators provide a powerful tool for the construction of goodness-of-fit tests. This section thus illustrates the interplay between Stein characterizations, Stein discrepancies and tests.
\section{Introduction}
\label{sec:intro}
Stein's method was
introduced by Charles Stein in the early 1970s \citep{Stein72} for distributional comparisons to the normal distribution. At the foundation of Stein's method lies a characterising equation for the normal distribution. This equation is also a cornerstone in Stein's unbiased estimator of risk \citep{stein1981estimation} and James-Stein shrinkage estimators \citep{stein1956inadmissibility, james1961estimation}; see \citet{fathi2020relaxing} for a joined-up view. In this paper we concentrate on Stein's method for distributional comparisons.
Originally developed for normal approximation, the method was extended first to Poisson approximation by \cite{chen1975poisson}, then by a growing community to a growing collection of approximation problems including beta, binomial, gamma, Kummer-U, multinomial, variance-gamma, Wishart, and many more. Stein's method has proved powerful in particular for deriving explicit bounds on distributional distances even when the underlying random elements are structures with dependence. Moreover, it thrives even when the target distribution is known only up to a normalising constant. Comprehensive introductions to the theory and its applications are available in the monographs \citet{stein1986approximate,barbour1992poisson,chen2010normal, nourdin2012normal, arras2019stein}. We also refer to the surveys of \citet{ross,chatterjee2014short,barbourchen14,ley2017stein}. The websites \href{https://sites.google.com/site/malliavinstein}{https://sites.google.com/site/malliavinstein} and \href{https://sites.google.com/site/steinsmethod}{https://sites.google.com/site/steinsmethod} provide regularly updated lists of references.
Over the past few decades, Stein's method has had substantial interactions with other mathematical fields, such as Malliavin calculus, information theory, functional analysis, dynamical systems and stochastic geometry. Some examples of established applications in statistics are as follows. \citet{stein2004use} employ the method for the analysis of sample quality in simulations, \citet{HolRei04} rely on Stein's method for empirical processes for developing a new bootstrap method for networks, \cite{Shao05} obtains a Berry-Esseen bound for Student's $t$-statistic via Stein's method, while \cite{shao2010stein} presents a survey on the applications of Stein's method to self-normalized limit theorems and discusses in particular false discovery rates in simultaneous tests. \cite{Lippert} and \cite{reinert2009alignment} utilize the method to obtain bounds on the rate of convergence for the distributional approximation of some popular statistics from alignment-free sequence comparison, and introduce two new sequence comparison statistics. This list is by no means exhaustive, but aims to give the reader a first taste of the versatile usage of Stein's method in statistics.
These early and ongoing successes of Stein's method in statistics have spawned
considerable research activity in recent years and also attracted the attention of researchers from computational statistics and machine learning. The aim of this paper is to cover various (clearly not all) developments that took place during the burgeoning years since around 2015; the choice of topics is biased by the research interests of the contributors. By this survey, we also wish to bring Stein's method and its different ingredients to the attention of the broad statistical community in order to further foster this fertile research domain. Therefore our paper contains an extensive introductory section on Stein's method (Section \ref{steinsection}).
Topic-wise the paper is subdivided into two parts, corresponding to two distinct
uses of Stein's method in statistics. Section~\ref{sec:A} covers new insights gained by Stein's method into well-known statistical procedures, by providing explicit bounds for distributional distances. These applications include quantifying the asymptotic approximation error at finite sample size of the famous maximum likelihood estimator, of likelihood ratio statistics and Pearson's chi-square statistic, and assessing the effect of the prior on the posterior in Bayesian analysis. The fact that target distributions only need to be known up to a normalising constant for Stein's method to apply has sparked considerable interest in computational statistics and machine learning. Here, ingredients from Stein's method such as so-called \emph{Stein discrepancies} have been used to develop new methodological procedures based on Stein operators. In Section \ref{sec:comp_stein_discrepancies} the practical issue of computing Stein discrepancies is discussed. New test statistics and procedures based on Stein operators will be the content of Section~\ref{sec:B}. This part covers, as examples, measuring sample quality, constructing and improving sample approximations, and goodness-of-fit testing in high dimensions. Section \ref{sec:conclu} provides some summarising conclusions.
\section{The Basic Ingredients of Stein's Method}\label{steinsection}
Stein's method is a collection of tools permitting to quantify the dissimilarity between probability distributions. The method has many components, not all of which are pertinent to the present survey. The purpose of this introductory section is to provide an overview of the basic ingredients which shall be of use in the rest of the paper: Stein operators, Stein discrepancies and Stein equations; we will also provide some approaches for choosing Stein operators.
First we fix some notation. Expectations with respect to a probability distribution $Q$ are denoted by ${\mathbb{E}} _{X \sim Q}$; sometimes the subscript is omitted when the context is clear. The distribution of a random quantity $X$ is denoted by $\mathcal{L}(X)$.
The function $\mathbb{I}_A(x)$ is the indicator function of $x \in A$, taking the value 1 if $x \in A$ and 0 otherwise.
For $\mathbb{R}^d$-valued functions $f, g$, the notation
$\langle f, g \rangle$ denotes the inner product; if $f$ and $g$ are matrix-valued, it denotes the Hilbert-Schmidt inner product. The notation $C^k(\mathbb{R}^d)$ denotes functions in $\mathbb{R}^d$ that are $k$ times continuously differentiable.
The norm $| \cdot |$ is the absolute value, $\| \cdot \|_2$ the Euclidean norm and $\| \cdot \|_\infty$ denotes the supremum norm. The operator $\nabla$ denotes the gradient operator; the gradient of a smooth function $v: \mathbb{R}^d \to \mathbb{R}$ is the vector-valued function $\grad v$ with entries
$(\grad v)_{i} = \partial_{i} v$, $i=1, \ldots, d$, by convention viewed as a column vector. For a $d$-vector-valued function $\mathbf{v}: \mathbb{R}^d \rightarrow \mathbb{R}^d$
with components $v_j$, $j = 1, \ldots, d$, the divergence is $\mathrm{div}(\mathbf{v}) = \nabla^\intercal \mathbf{v} = \sum_{i=1}^d \partial_i v_i(x)$.
For a vector or a matrix, the superscript $\intercal$ stands for the transpose; this also applies for vector- or matrix-valued operators. The space $L^p(Q)$ denotes the set of functions $f$ such that ${\mathbb{E}}_{X \sim Q}[f^p(X)]$ is finite. Finally, by convention,
$0/0=0$.
\subsection{Stein Operators, Stein Discrepancies and Stein Equations}
\label{sec:stein-oper-discr}
The starting point of Stein's method for a target probability distribution $P$ on some set $\mathcal{X}$ consists in identifying a linear operator $\operator{}$ acting on a set $\mathcal{G}(\operator{})$ of functions on $\mathcal{X}$
such that, for any other probability measure $Q$ on $\mathcal{X}$, it holds that
\begin{talign}
\label{eq:Steinchar}
Q = P \mbox{ if and only if } \mathbb{E}_{X \sim Q} [
\oparg{g}{X}] = 0 \mbox{ for all } g \in \mathcal{G}(\operator{}).
\end{talign}
Such an operator $\operator{}$ is called a \emph{Stein operator}, the collection $\mathcal{G}(\operator{})$ of functions for which $\mathbb{E}_{X \sim P} [\oparg{g}{X}] = 0 $ is called a \emph{Stein class}, and equivalence \eqref{eq:Steinchar} is called a \emph{Stein characterization}. In many cases the characterizing nature of the operator is superfluous, and we only need to require that a \emph{Stein identity} for $P$ is satisfied, namely that $\mathbb{E}_{X \sim P} [\oparg{g}{X}] = 0 $ for all $g \in \mathcal{G}(\mathcal T)$.
We will discuss the topic of choosing Stein operators in Section~\ref{sec:choossteinop}. At this stage let us suppose that we are given a Stein operator $\mathcal T$ with Stein class $\mathcal G(\mathcal T)$. Then, for any \emph{Stein set} $\mathcal G \subset \mathcal G(\mathcal T)$, one may define a dissimilarity measure as
\begin{talign}
\label{eq:SteinDisc}
\opsetstein{Q} = \sup_{g \in
\mathcal{G}} \left\|\mathbb{E}_{X \sim Q} [
\oparg{g}{X}]\right\|^*
\end{talign}
for some appropriate norm $\| \cdot\|^*$; by construction if $\opsetstein{Q} \neq 0$ then $Q \neq P$ and if $\mathcal{G}$ is sufficiently large then $\opsetstein{Q} = 0$ also implies $Q = P$.
\citet{GorhamMa15} call the quantity \eqref{eq:SteinDisc} a \emph{Stein discrepancy}. The term has a different meaning in \citet{ledoux2015stein}; in this paper, \emph{Stein discrepancy} always refers to \eqref{eq:SteinDisc}.
If the Stein
operator $\operator{}$ and the {Stein set}
$\mathcal{G} \subset \mathcal{G}(\operator{})$ are well-chosen, the Stein discrepancy $\opsetstein{Q}$ ought to capture some aspect of the dissimilarity between $P$ and $Q$. Part of the magic of Stein's method lies in the fact that there are numerous combinations of target distribution $P$ and approximating distribution $Q$ for which one can identify operators $\operator{}$ and sets $\mathcal{G}$ ensuring that the quantity $\mathcal{S} (Q, \operator{}, \mathcal{G}) $ is both tractable \emph{and} relevant.
Our first example of Stein discrepancy is a starting point for quantifying the dissimilarity between any probability distribution $Q$ on $\mathbb{R}^d$ and the normal
distribution.
\begin{example}[Stein operator and discrepancy for the multivariate
normal distribution]\label{ex:steingauss}
Let $\Sigma$ be a $d \times d$ positive definite matrix; denote by $\mathrm{N}_d(0,\Sigma)$ the centered multivariate normal with covariance $\Sigma$. Let $g : \mathbb{R}^d \to \mathbb{R}$ {be almost differentiable}, i.e. possess a gradient $\nabla g : \mathbb{R}^d \to \mathbb{R}^d$ such that, for all $z \in \mathbb{R}^d$, $g(x+z) - g(x) = \int_0^1 \left\langle z, \nabla g(x+ tz) \right\rangle \mathrm{d}t$ for almost all $x \in \mathbb{R}^d$. Suppose furthermore that $\nabla g \in L^1(\mathrm{N}_d(0,\Sigma))$.
Then
\begin{talign*}\mathbb{E}_{X \sim \mathrm{N}_d(0, \Sigma)} \left[\Sigma \nabla g(X) - X g(X)\right] =0,
\end{talign*}
see for example \citet{stein1981estimation} (for $\Sigma $ the identity matrix). We deduce that the first order differential operator
\begin{talign} \label{normalop}
\oparg{g}{x} = \Sigma \nabla g(x) - x g(x)
\end{talign}
is a Stein operator for $\mathrm{N}_d(0, \Sigma)$, acting on the Stein class $\mathcal{G}(\operator{})$ of all almost differentiable functions with (almost everywhere) gradient $\nabla g \in L^1(N_d(0, \Sigma))$. This leads to the Stein discrepancy
\begin{talign}\label{normaldis}
\opsetstein{Q} = \sup_{g \in
\mathcal{G}} \left\| \mathbb{E}_{X \sim Q} \left[
\Sigma \nabla g(X) -
X g(X) \right]\right\|_2,
\end{talign}
for any $\mathcal{G} \subset \mathcal{G}(\operator{})$, where $\|\cdot\|_2$ is the Euclidean norm on $\mathbb{R}^d$.
\end{example}
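The identity underlying this example is easy to check numerically; the following sketch (ours) verifies it by Monte Carlo for one choice of $\Sigma$ and a bounded, smooth test function:
\begin{verbatim}
import numpy as np

# Check E[ Sigma grad g(X) - X g(X) ] = 0 for X ~ N_2(0, Sigma),
# with g(x) = sin(w . x).
rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
X = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)

w = np.array([1.0, 2.0])
g = np.sin(X @ w)                      # g(x), one value per sample
grad_g = np.cos(X @ w)[:, None] * w    # grad g(x), one row per sample
lhs = grad_g @ Sigma - X * g[:, None]  # Sigma grad g(x) - x g(x)
print(lhs.mean(axis=0))                # ~ [0, 0] up to Monte Carlo error
\end{verbatim}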
Of course it remains to ensure that the dissimilarity measures herewith obtained actually capture relevant aspects of the dissimilarity between $P$ and $Q$. Classically, there are many ways to determine discrepancies between probability measures, see for example \cite{gibbs2002choosing,rachev2013methods}. In this survey, and in much of the literature on Stein's method, the focus is on distances known as \emph{integral probability metrics} (IPMs, for short) \citep{Muller97, zolotarev1984probability}, which are defined as
\begin{talign}\label{eq:IPM}
d_{\mathcal{H}}(P, Q) := \sup_{h \in \mathcal{H}}
\left| \mathbb{E}_{X \sim P}[h(X)]- \mathbb{E}_{X \sim Q} [h(X)] \right|
\end{talign}
for some class of real-valued measurable test functions
$\mathcal{H} \subset L^1(P) \cap L^1(Q)$. When $d_\mathcal{H}$ is a distance on the set of probability measures on $\mathcal X$ then $\mathcal H$ is called \emph{measure determining}.
\begin{rem} \label{idmlist} Different choices
of $\mathcal{H}$ give rise to different {IPMs}, including:
\begin{itemize}
\item the \emph{Kolmogorov distance}: $d_{\mathrm{K}}(P, Q)$, which is the IPM induced by the set of test functions $\mathcal{H}_{\mathrm{K}} = \left\{ \mathbb{I}_{(-\infty, x]}(\cdot) \, : \, x \in \mathbb{R}^d \right\}$ (indicators of bottom left quadrants);
\item the \emph{$L^1$-Wasserstein distance} (also known as the Kantorovich-Rubinstein or earth-mover's distance): $d_{\mathrm{W}}(P, Q)$, which is the IPM induced by the set of test functions $\mathcal{H}_{\mathrm{W}} = \{h:\mathbb{R}^d\to\mathbb{R}\,:\, \sup_{x\neq y\in\mathbb{R}^d} |h(x)-h(y)|/\|x-y\|_2 \le 1\}$ (functions with Lipschitz constant at most 1);
\item the \emph{bounded Wasserstein distance} (also known as the Dudley or bounded Lipschitz metric): $ d_{\mathrm{bW}}(P, Q)$, which is the IPM induced by the set of test functions $\mathcal{H}_\mathrm{bW}$ which collects the bounded functions in $\mathcal{H}_{\mathrm{W}}$.
\end{itemize}
\end{rem}
To see the connection between IPMs $d_{\mathcal{H}}$ and Stein discrepancies $\mathcal{S}$, an additional ingredient enters the picture: the \emph{Stein equation}. Given $P$ the target distribution with Stein operator $\operator{}$ and Stein class
$\mathcal{G}(\operator{})$, and given $\mathcal{H} \subset L^1(P)$ a measure-determining class of test functions, the \emph{Stein equation} for $h \in \mathcal{H}$ is the functional equation
\begin{talign}\label{eq:steinequ}
\oparg{g}{x} = h(x) - \mathbb{E}_{X \sim P}[h(X)]
\end{talign}
evaluated over $x \in \mathcal X$, with solution $g = g(h) := \mathcal{L}h\in \mathcal{G}$, if it exists. Assuming that this solution exists, it follows that $\mathbb{E}_{X \sim Q}[h(X)] - \mathbb{E}_{X \sim P}[h(X)] = \mathbb{E}_{X \sim Q} [ \oparg{(\mathcal{L}h)}{X}]$ so that
\begin{talign*}
d_{\mathcal{H}}(P, Q) = \sup_{h\in\mathcal{H}}
|\mathbb{E}_{X \sim P}[h(X)] - \mathbb{E}_{X \sim Q}[h(X)]| = \mathcal{S}(Q, \operator{},
\mathcal{L}\mathcal{H})
\end{talign*}
with $\mathcal{L}\mathcal{H}$ the Stein set collecting all solutions $\mathcal L h$ to the Stein equation
\eqref{eq:steinequ} with $h \in \mathcal H$. Existence of a solution of the Stein equation depends on the properties of the target measure $P$, of the Stein operator $\mathcal T$, and of the Stein class $\mathcal G(\mathcal T)$. In many cases, existence of these solutions is guaranteed and, thus, all of the IPMs listed in Remark \ref{idmlist} can be rewritten as Stein discrepancies whose underlying Stein set $\mathcal{L}\mathcal{H}$ depends on the measure $P$ characterized by $\operator{}$ through \eqref{eq:steinequ}.
Often, bounding $\mathbb{E}_{X \sim Q}[h(X)] - \mathbb{E}_{X \sim P}[h(X)]$ through bounding $ \mathbb{E}_{X \sim Q} [ \oparg{(\mathcal{L}h)}{X}]$ is advantageous as the latter only requires integrating under $Q$; the properties of $P$ have been encoded in the Stein operator and Stein class. Commonly used approaches for bounding Stein discrepancies are coupling techniques \citep{barbour1992poisson, reinert98, chen2010normal, ross}, the Malliavin-Stein method \citep{nourdin2012normal} and comparison of Stein operators \citep{Holmes2004,ley2017stein,mijoule2021stein}; here the references only serve as pointers and the list is certainly not complete.
In order to bound $ \mathbb{E}_{X \sim Q} [ \oparg{(\mathcal{L}h)}{X}]$, suitable bounds on the solutions $\mathcal Lh$ of the Stein equation, as well as certain lower order derivatives {or differences} of the solution, are usually required (although sometimes weak solutions of an appropriate equation suffice, see \citet{courtade2019existence, mijoule2021stein}). Bounds on the solution are often referred to as \emph{Stein factors}. Determining Stein factors has attracted attention in recent years. Of the many available references, we single out \cite{mackey2016multivariate, fang2019multivariate} where bounds are obtained for operators given in the setting of Example \ref{ex:steinopdiffusions} under assumptions of log-concavity; we also point out \cite{GorhamDuVoMa19}, where Stein factors are obtained under the weaker assumption of \emph{integrable Wasserstein decay}. An overview for continuous distributions is given in \cite{mijoule2021stein}.
\begin{example}[Stein equation for the multivariate normal distribution]\label{ex:normaleas}
Recall the operator \eqref{normalop} and discrepancy \eqref{normaldis} for the $\mathrm{N}_d(0, \Sigma)$ distribution given in Example \ref{ex:steingauss}. To obtain a univariate Stein equation \eqref{eq:steinequ}, a natural way is to restrict attention to functions $g_0 = \nabla g$, that is, to functions that are themselves gradients. The resulting Stein operator is $\oparg{g}{x} = \nabla^\intercal\Sigma\nabla g(x)- x^\intercal \nabla g(x)$. The corresponding Stein equation is
\begin{talign}
\label{multivariate_Stein_equation}
\oparg{g}{x} = \nabla^\intercal\Sigma\nabla g(x)- x^\intercal \nabla g(x) = h(x) -
\mathbb{E}_{Z \sim \mathrm{N}_d(0,I)} [h(\Sigma^{1/2}Z) ],
\end{talign}
see Section \ref{sec:generator_approach}. It is known that if $h$ is $n$ times
differentiable then so is {the solution} $g_h$ {of \eqref{multivariate_Stein_equation}}, and the supremum norm $\|\cdot\|_\infty$ of the derivatives of $g_h$ in any direction is uniformly bounded by the supremum norm of the test function $h$.
We refer to \cite{meckes2009stein} and \cite{gaunt2016rates} for more detail on Stein factors for the multivariate normal distribution. Specializing to the univariate standard normal distribution, \cite{daly08} shows that if $h^{(k-3)}$ is Lipschitz, then, with $h^{(0)}\equiv h$,
\begin{talign}\label{dalybd}\|g_h^{(k)}\|_\infty\leq 2\|h^{(k-2)}\|_\infty, \quad k\geq 3.
\end{talign}
\end{example}
We now illuminate the theory presented so far by showing how Stein operators, Stein sets, Stein equations, and Stein discrepancies can be brought together to give a short proof of a quantitative central limit theorem in dimension 1. The result is well-known in the theoretical Stein's method literature, and our proof is similar to one given in \cite{reinert98}. An improved bound, obtained using zero bias couplings, is given in Corollary 4.2 of \cite{chen2010normal}. In the bound of Theorem \ref{thm1}, no asymptotic assumptions are made; the bound is explicit and holds for any choice of $n \ge 2$.
\begin{theorem}[Stein's method and the central limit theorem]\label{thm1} Let $X_1,\ldots,X_n$ be independent random variables with zero mean, unit variance and $\mathbb{E}[|X_i^3|]<\infty$, $i=1, \ldots, n$. Put $W_n=n^{-1/2}\sum_{i=1}^nX_i$ and let $Q_n$ denote the distribution of $W_n$.
Then
\begin{talign}\label{steinwassbd}
d_{\mathrm{W}}(Q_n,\mathrm{N}(0,1))\leq\frac{1}{\sqrt{n}}\big(2+\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|X_i^3|]\big).
\end{talign}
\end{theorem}
\begin{proof}
Let $W_n^{(i)}=W_n-n^{-1/2}X_i$, so that $W_n^{(i)}$ and $X_i$ are independent. Notice that the multivariate normal Stein equation (\ref{multivariate_Stein_equation}) reduces to $g_h''(x)-xg_h'(x)=h(x)-\mathbb{E}[h(Z)]$ when $Z\sim \mathrm{N}(0,1)$. Assume that $h\in\mathcal{H}_{\mathrm{W}}$ {with a.s. derivative $h'$}. Then, by a Taylor expansion,
\begin{talign}\mathbb{E}[h(W_n)]-\mathbb{E}[h(Z)]&=\mathbb{E}[g_h''(W_n)]-\mathbb{E}[W_ng_h'(W_n)]\nonumber \\
&=\frac{1}{n}\sum_{i=1}^n\mathbb{E}[g_h''(W_n^{(i)})]-\frac{1}{\sqrt{n}}\sum_{i=1}^n\mathbb{E}[X_ig_h'(W_n^{(i)})]\nonumber\\
&\quad-\frac{1}{n}\sum_{i=1}^n\mathbb{E}[X_i^2g_h''(W_n^{(i)})]+R_1+R_2,\nonumber\\
\label{r1r2eq}&=R_1+R_2,
\end{talign}
where
\begin{talign*}|R_1|\leq\frac{1}{n^{3/2}}\sum_{i=1}^n\|g_h^{(3)}\|_\infty\mathbb{E}[|X_i|], \quad |R_2|\leq\frac{1}{2n^{3/2}}\sum_{i=1}^n\|g_h^{(3)}\|_\infty\mathbb{E}[|X_i^3|].
\end{talign*}
The equality (\ref{r1r2eq}) follows since $\mathbb{E}[X_ig_h'(W_n^{(i)})]=0$ and $\mathbb{E}[X_i^2g_h''(W_n^{(i)})]=\mathbb{E}[g_h''(W_n^{(i)})]$, by independence of $W_n^{(i)}$ and $X_i$, and the assumptions $\mathbb{E}[X_i]=0$, $\mathbb{E}[X_i^2]=1$. We can bound $|R_1|$ and $|R_2|$ by using the Stein factor bound (\ref{dalybd}) to bound $\|g_h^{(3)}\|_\infty$, together with an application of the Cauchy-Schwarz inequality to bound $\mathbb{E}[|X_i|]\leq(\mathbb{E}[X_i^2])^{1/2}=1$. Thus, for $h\in\mathcal{H}_{\mathrm{W}}$,
\begin{talign}\label{steinsumbd}|\mathbb{E}[h(W_n)]-\mathbb{E}[h(Z)]|\leq\frac{\|h'\|_\infty}{\sqrt{n}}\big(2+
\frac{1}{n}\sum_{i=1}^n \mathbb{E}[|X_i^3|]\big);
\end{talign}
taking supremums of both sides of (\ref{steinsumbd}) over all $h\in\mathcal{H}_{\mathrm{W}}$ gives the bound (\ref{steinwassbd}).
\end{proof}
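To see the bound in action (a numerical sketch of ours, for Rademacher variables, where $\mathbb{E}[|X_i^3|]=1$ and the bound equals $3/\sqrt{n}$), one can compare an empirical estimate of $d_{\mathrm{W}}$ with the right-hand side of \eqref{steinwassbd}; note that replacing both laws by finite samples only approximates the distance:
\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n, reps = 50, 100_000
X = rng.choice([-1.0, 1.0], size=(reps, n))   # Rademacher variables
W = X.sum(axis=1) / np.sqrt(n)                # replicates of W_n
Z = rng.standard_normal(reps)                 # reference normal sample
print("empirical d_W:", wasserstein_distance(W, Z))
print("Stein bound:  ", 3 / np.sqrt(n))
# The empirical distance sits well below the non-asymptotic bound.
\end{verbatim}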
In this section, we have kept $\H$, or equivalently $d_\H$, mainly general, so that the task of deriving a Stein equation and bounds on Stein factors can be presented in a form which applies to any of the IPMs in Remark \ref{idmlist}.
When using Stein's method to establish bounds w.r.t.\ IPMs, it is often desirable to use the Kolmogorov or Wasserstein metrics. The Kolmogorov distance is natural in the context of hypothesis testing, as error bounds in this metric can be used to construct conservative confidence intervals. The Wasserstein distance between probability distributions is also widely used in statistics \citep{pz19} and takes into account not only the amounts by which their probabilities differ, but also where these differences occur. Working in the Kolmogorov and Wasserstein metrics in Stein's method is, however, often technically demanding, because only a small number of derivatives of the solution of the Stein equation are uniformly bounded. Therefore, when such technical difficulties are encountered, or if fast convergence rates are sought (see Section \ref{perasonsec}), one often works in weaker probability metrics that impose greater restrictions on the test functions $h$ that induce the IPM; for example, requiring that derivatives of $h$ up to $p$-th order ($p\geq1$) are Lipschitz with Lipschitz constant at most 1; see, for example, \citet{goldstein1997}.
\subsection{Choosing Stein Operators}
\label{sec:choossteinop}
When tackling Stein's method for a general target via a Stein discrepancy $\mathcal{S}(Q,
\operator{}, \mathcal G)$, it is important to first choose $\mathcal T$ and $\mathcal G$ in a way which ensures relevance and tractability of the resulting metric or Stein discrepancy. For many target distributions,
such useful Stein operators and Stein sets are readily available from the literature. One of the advantages of Stein's method, however, is that for a given $P$ there is in principle full freedom of choice in the operator $\mathcal T$ and Stein set $\mathcal G$, and in particular no need to restrict to the operators from the literature nor to Stein sets obtained from Stein equations.
Here we shall mainly concentrate on two approaches for choosing a Stein operator, called the generator approach (which dates back to \citet{Barbour88, Barbour90} and \citet{Gotze91}) and the density approach (which dates back to \citet{stein2004use}). These are not the only available approaches, see for example \citet{reinert2005three}; we conclude the section with a brief pointer to other techniques.
\subsubsection{Stein Operators via the Generator Approach} \label{sec:generator_approach}
We first describe the \emph{generator approach}, which we present for {a given target $P$ on} $\X = \mathbb{R}^d$. Given a Markov process with sufficient regularity $(Z_t)_{t\ge 0}$ (namely, a Feller process \citep[Lemma 8.1.4]{Oksendal2013}) with invariant measure $P$, the \emph{infinitesimal generator} $\generator{}$ of the process given by
\begin{talign*}
\genarg{u}{x} = \lim_{t\to 0} \frac{1}{t} ({\mathbb{E}[u(Z_t) \mid Z_0 = x] - u(x)})
\end{talign*}
satisfies the property that $\mathbb{E}_{Z \sim P}[\genarg{u}{Z}] = 0$ for all $u:\mathbb{R}^d\to\mathbb{R}$ in the domain of $\generator{}$. This simple observation, first made by \cite{Barbour88,Barbour90} and \cite{Gotze91}, provides both a Stein operator and a Stein class for all targets $P$ that are invariant measures of sufficiently regular Markov processes. The following example illustrates the approach for multivariate normals.
\begin{example}[The generator approach for the multivariate normal distribution]\label{example:generator_normal}
For characterizing the $\mathrm{N}_d(0,\Sigma)$ distribution, the diffusion $Z_{t,x} = \mathrm{e}^{-t} x + \sqrt{1-\mathrm{e}^{-2t}}\Sigma^{1/2}Z$, where $Z\sim\mathrm{N}_d(0,I_d)$, has $\mathrm{N}_d(0,\Sigma)$ as invariant distribution. This observation leads to the operator $\oparg{u}{x} = \nabla^\intercal\Sigma\nabla u(x) - x^\intercal \nabla u(x)$, applied to twice differentiable functions $u : \mathbb{R}^d \to \mathbb{R}$, see \citet{Barbour90}.
\end{example}
\citet{GorhamDuVoMa19} showed that the generator approach can be used to find Stein operators for a much wider range of distributions of interest by using operators induced by \emph{It\^o\xspace diffusions}. An It\^o\xspace diffusion \cite[Def.~7.1.1]{Oksendal2013} with starting point $x\in\mathbb{R}^d$, Lipschitz \emph{drift coefficient} $b : \mathbb{R}^d \to \mathbb{R}^d$, and Lipschitz \emph{diffusion coefficient} $\sigma : \mathbb{R}^d \to \mathbb{R}^{d\times m}$ is a stochastic process $(Z_{t,x})_{t\geq0}$ solving the It\^o\xspace stochastic differential equation
\begin{talign}\label{eqn:diffusion}
\mathrm{d} Z_{t,x} = b(Z_{t,x})\, \mathrm{d}t + \sigma(Z_{t,x})\,\mathrm{d}W_t\text{ with } Z_{0,x}=x\in\mathbb{R}^d,
\end{talign}
where $(W_t)_{t\geq 0}$ is an $m$-dimensional Brownian motion. It is known (see, e.g.,\ \citet[Thm.~2]{GorhamDuVoMa19} and \citet[Thm.~19]{barp2020bracket}) that equation \eqref{eqn:diffusion} will have invariant measure
$P$ with density $p$ (assumed positive and differentiable) if and only if $b(x) = \frac{1}{2p(x)}\langle\nabla, p(x)[\sigma(x)\sigma(x)^{\intercal} + c(x)]\rangle$, where the \emph{stream coefficient}
$c:\mathbb{R}^d\to\mathbb{R}^{d\times d}$ is some differentiable skew-symmetric matrix-valued function.
\citet{GorhamDuVoMa19} proposed the first order \emph{diffusion Stein operator}
\begin{talign}\label{eqn:diffusion-stein-operator}
(\operator{}g)(x) = \frac{1}{p(x)}\langle \nabla, p(x)[\sigma(x)\sigma(x)^{\intercal} + c(x)]g(x)\rangle,
\end{talign}
based on the diffusion's second order infinitesimal generator $\generator{u} = \operator{}(\nabla u/2)$. Under regularity conditions, the definition in equation \eqref{eqn:diffusion-stein-operator} yields an infinite collection of Stein operators for a given target $P$, parametrized by the choice of $\sigma$ and $c$.
\begin{example}[The Langevin Stein operator on $\mathbb{R}^d$] \label{ex:steinopdiffusions}
As a concrete example, \citet{GorhamMa15,mackey2016multivariate} consider the case where $\sigma \equiv I_d$ and $c \equiv 0$, which corresponds to the overdamped Langevin diffusion. Assuming $\mathbb{E}_{X \sim P}[\|\nabla \log p(X) \|_2] < \infty$, this induces the Langevin Stein operator
\begin{talign}
(\operator{}g)(x) = \langle \nabla \log p(x), g(x)\rangle + \langle
\nabla, g(x)\rangle. \label{eq: Langevin Stein operator}
\end{talign}
For a fixed $\mathcal{H}$, this also leads to a Langevin Stein discrepancy when combined with equation \eqref{eq:SteinDisc}.
\end{example}
It\^{o} diffusions are well-studied not only in probability theory but also in the Markov chain Monte Carlo (MCMC) literature. As such, they provide a natural and intuitive entry point into Stein's method for computational statisticians. Indeed, they are commonly used for the design of Markov chains targeting a distribution of interest; see for example the Metropolis-adjusted Langevin algorithm and the Hamiltonian Monte Carlo algorithm \citep{Barp2018HMCReview}. Furthermore, the more general It\^{o} diffusion in \eqref{eqn:diffusion-stein-operator} leads to many common pre-conditioned MCMC algorithms.
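For readers coming from MCMC, the following sketch (ours) is the Euler--Maruyama discretization of the overdamped Langevin diffusion with $\sigma \equiv I$, namely $\mathrm{d}Z_t = \frac{1}{2}\nabla \log p(Z_t)\,\mathrm{d}t + \mathrm{d}W_t$ (the unadjusted Langevin algorithm), here for a univariate standard normal target:
\begin{verbatim}
import numpy as np

# Unadjusted Langevin algorithm for p = N(0,1): grad log p(z) = -z.
rng = np.random.default_rng(0)
h, n_steps = 0.05, 100_000        # step size and chain length
z = 0.0
path = np.empty(n_steps)
for t in range(n_steps):
    z = z + 0.5 * h * (-z) + np.sqrt(h) * rng.standard_normal()
    path[t] = z
print(path.mean(), path.var())    # ~ 0 and ~ 1, up to O(h) bias
\end{verbatim}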
Finally, although we focused on $\X = \mathbb{R}^d$, the generator approach extends naturally to other spaces, such as spaces of sample paths \citep{Barbour90} and spaces of measures \citep{reinert1994weak}. It has recently been extended to manifolds, with the Stein operator (\ref{eq: Langevin Stein operator}) generalizing naturally in the Riemannian setting, see \citet{barp2018riemannian,thompson2020approximation,le2020diffusion}, and to Polish spaces \citep{hodgkinson2020reproducing}. The approach has also received significant attention in the case of discrete $\X$, where Markov birth-and-death processes are commonly used; see for example the work of \citet{Brown2001,Holmes2004,Eichelsbacher2008}. Some examples of multivariate discrete Stein operators include \citet{barbour2018multivariatei, barbour2018multivariateii, reinert2019approximating}.
\subsubsection{Stein Operators via the Density Approach}
\label{sec:steinopdensap}
The \emph{density approach} was pioneered in \cite{stein2004use} for univariate distributions, and has since then been generalized in multiple directions, see for example \cite{yang2018goodness, mijoule2021stein}. Given a probability measure $P$ on a set $\mathcal X$ with density function (with respect to some dominating measure) $p : \mathcal X \to \mathbb{R}^+$, consider operators of the form $g \mapsto {\mathcal{D}(g(x)p(x))}/{p(x)}$, where $\mathcal D$ is a linear operator with domain $\mathrm{dom}(\mathcal D)$. Collecting into the class $\mathcal G$ all functions $g$ on $\mathcal X$ such that $x \mapsto p(x) g(x) \in \mathrm{dom}(\mathcal D)$ and $\int_{\mathcal{X}} \mathcal{D}(g(x)p(x)) \,\mathrm{d}x = 0$, the \emph{$\mathcal{D}$-density} (or, for short, \emph{density}) Stein operator of the density approach for $p$ is
\begin{talign*}
g \mapsto (\operator{}g)(x) = \frac{\mathcal{D}(g(x)p(x))}{p(x)}
\end{talign*}
with Stein class $\mathcal G(\mathcal T) = \mathcal G$. By construction, this operator satisfies $\mathbb{E}_{X \sim P} [ (\operator{}g)(X)]=0$ for all $g \in \mathcal{G}(\mathcal{T})$. The following example illustrates the approach for univariate distributions with interval support.
\begin{example}[Density operators for the exponential distribution] Fix $d=1$ and consider as target $P$ the exponential distribution with density function $p(x) = \lambda \mathrm{e}^{- \lambda x}\mathbb{I}_{[0, \infty)}(x)$.
A natural choice of $\mathcal D$ is $\mathcal D f(x) = f'(x)$, the usual almost everywhere derivative. If $(g p)'$ is integrable on $\mathbb{R}^+$, then $\int_0^\infty (g(x) p(x))' \,\mathrm{d}x = \lim_{x \to \infty} g(x) p(x) - \lambda g(0)$.
The corresponding density operator is therefore $(\mathcal{T} g)(x) = \frac{(g(x)p(x))'}{p(x)} = g'(x) - \lambda g(x) $
for $x \in \mathbb{R}^+$, acting on the Stein class of functions $g$ such that $(g p)'$ is integrable on $\mathbb{R}^+$ and $\lim_{x \to \infty} g(x) p(x) = \lambda g(0)$.
Clearly all functions $g(x) = x g_0(x)$ such that $\lim_{x \to \infty} x g_0(x) \mathrm{e}^{- \lambda x} =0$ belong to $\mathcal{G}(\mathcal T)$. Denoting by $\tilde{\mathcal G}$ the collection of functions of this form, we reap a second operator for the exponential given by $(\mathcal T_1 g_0)(x) = (x g_0(x) \mathrm{e}^{-\lambda x})'/\mathrm{e}^{-\lambda x} = x g_0'(x) + (1-\lambda x) g_0(x)$ acting on the (restricted) Stein class $\tilde{\mathcal G}$. The advantage of the latter operator over the former is that it does not require any implicit boundary assumptions on the test functions.
Since the exponential density is also a parametric scale family in its parameter $\lambda>0$, another natural derivative in this context is $\mathcal D f(x; \lambda) = \frac{\mathrm{d}}{\mathrm{d} \lambda} f(x; \lambda)$ for all functions $f(x; \lambda)$ of the form $f(x; \lambda) = \lambda f_0( \lambda x)$ for some $f_0$. This leads to $(\mathcal T_2 g)(x) = \frac{\mathrm{d}}{\mathrm{d} \lambda} ( \lambda
g(\lambda x) \mathrm{e}^{-\lambda x}) / (\lambda \mathrm{e}^{-\lambda x}) = x g'( \lambda x)+(1/\lambda -x)g(\lambda x)
$, again with no boundary assumptions on $g$ since $\mathbb E_{X \sim \mathrm{Exp}(\lambda)} [(\mathcal T_2 g)(X)] = \frac{\mathrm{d}}{\mathrm{d} \lambda} \left(\int_0^\infty g(u) \mathrm{e}^{- u}\, \mathrm{d}u\right) = 0$ for all $g \in L^1(\mathrm{Exp}(1))$.
\end{example}
Many choices of operator $\mathcal D$ lead to Stein operators. Moreover, using appropriate product rules, Stein operators can be tailored to the specifics of the problem at hand. This process is called \emph{standardizing the Stein operator}. For obtaining Stein equations this idea has been developed in \cite{kattumannil2009stein}, see also \cite{goldstein2013stein}. Here we illustrate it with an example in the context of differentiable densities on $\mathbb{R}^d$.
\begin{example}[Standardizations of the density operator for differentiable densities on $\mathbb{R}^d$]
Let $P$ have differentiable density $p$ on $\mathcal{X} = \mathbb{R}^d$ with support $\mathcal{S}(p) = \left\{x \in \mathbb R^d \, | \, p(x)>0 \right\}$. Fix $\mathcal D = \mathrm{div}$, the divergence operator. The canonical divergence-based operator for $p$ is then ${f} \mapsto (\mathcal{T} f)(x) = {\mathrm{div}({f}(x) p(x)) }/ {p(x)}$
acting on the Stein class $\mathcal{G}(\mathcal{T})$ of functions $ f : \mathbb{R}^d \to \mathbb{R}^d$ such that $x \mapsto f(x) p(x)$ is differentiable and $\int_{\mathcal{S}(p)} \mathrm{div}({f}(x) p(x)) \,\mathrm{d}x = 0$.
Using the well-known properties of the divergence operator, it is easy to see that
$\mathrm{div}({F}(x) g(x) p(x)) = \mathrm{div}({F}(x) p(x))g(x) + {F}(x) \grad g(x)p(x)$
holds for all differentiable $g : \mathbb{R}^d \to \mathbb{R}$ and $ F : \mathbb{R}^d \to \mathbb{R}^{d \times d}$. Starting from here, we obtain the family of Stein operators indexed by the functions ${F}$ and given by
\begin{talign*}
g \mapsto (\mathcal A g)(x) := \frac{\mathrm{div}({F}(x) p(x)) }{p(x)} g(x) + {F}(x) \grad g(x)
\end{talign*}
acting on the collection $\mathcal G(\mathcal A)$ of all functions $g: \mathbb{R}^d \to \mathbb{R}^d$ such that $x \mapsto {F}(x) g(x) \in \mathcal G(\mathcal T)$. For instance, if $p$ satisfies the same assumptions as in Section \ref{sec:generator_approach}, then taking $F(x) = \sigma(x)\sigma(x)^{\intercal} + c(x)$ leads to the same operator as \eqref{eqn:diffusion-stein-operator}. Many instantiations of this theory, yielding explicit and tractable operators, are available in the literature, see e.g.\ \cite{mijoule2021stein}.
\end{example}
\subsubsection{Other Approaches}
The density approach and the generator approach are by no means the only methods for obtaining Stein operators. In this section we provide a partial list of alternative existing methods. We note that, often, an operator obtained from one method may be re-derived from another one; this has already been observed above and will be illustrated again in Example \ref{ex:productnorm}.
\paragraph{Couplings.} Sometimes distributional characterizations are more straightforward to derive through coupling identities than through difference or differential characterizations. As an illustration, the size bias relationship can in general be described as requiring that, for all functions $f$ for which the expectations exist,
${\mathbb{E}} [X]\, {\mathbb{E}} [f(X^*) ]= {\mathbb{E}} [ X f(X)]$, creating a coupling between $X$ and $X^*$ if they are defined on the same probability space. Direct calculation shows that a discrete distribution for $X$ on the non-negative integers is Poisson if and only if its size bias distribution (with associated random variable $X^*$) is the original distribution shifted by 1, that is, $X^* = X + 1$ in distribution; see for example the coupling approach in \citet{barbour1992poisson}. Thus, $X \sim {\rm Poisson}(\lambda)$ if and only if, for all functions $f$ for which the expectations exist, $\lambda {\mathbb{E}} [f(X+1)] = {\mathbb{E}} [X f(X)]$.
In \citet{stein1986approximate}, Stein's method is motivated using an exchangeable pair coupling.
In \citet{chen2010normal, chen2020palm} these ideas are generalized through the notion of a {Stein coupling}. A triple $(W,W',G)$ of square-integrable random variables is called a \emph{Stein coupling} if ${\mathbb{E}} [ G f(W') - G f(W) ] = {\mathbb{E}}
[W f(W)] $ for all functions $f$ for which the expectations exist. Using a related idea, \citet{pekoz2017joint} develop a coupling with an urn scheme to study the joint degree distributions in preferential attachment graphs.
\paragraph{Stein Operators from Orthogonal Polynomials.} In \citet{diaconis1991closed}, orthogonal polynomials are used to obtain distributional characterizations in the context of Stein's method. This idea can be linked to couplings, see \citet{goldstein2005distributional}.
\paragraph{Stein Operators from Stein Operators.} When the target distribution of a random element $W$ is that of a function $g(Z)$ of a random variable $Z$ whose distribution is well studied within Stein's method, then a Stein operator for the distribution of $W$ can be obtained by taking as test functions the decomposition $h \circ g$ in the Stein equation for $Z$. This idea is exploited for example in \citet{gaunt2017chi}, representing the chi-square distribution as a function of a multivariate normal distribution; see Subsection \ref{perasonsec} for details.
\paragraph{Stein Operators from ODEs.} For continuous distributions, if the density can be identified as the solution of an ordinary differential equation, then a duality argument can be used to obtain a Stein operator, as shown in Example \ref{ex:productnorm}.
\begin{example} \label{ex:productnorm}
Consider the product of two independent $\mathrm{N}(0,1)$ random variables, which has density $p(x)=(1/\pi)K_0(|x|)$, $x\in\mathbb{R}$, where $K_\nu(u)=\int_0^\infty \mathrm{e}^{-u\cosh(t)}\cosh(\nu t)\,\mathrm{d}t$, $u>0$, $\nu\in\mathbb{R}$, is a modified Bessel function of the second kind \citep{olveretal}.
The product normal density $p$ satisfies the modified Bessel differential equation $xp''(x)+p'(x)-xp(x)=0$; integrating any test function against this equation and then applying an integration by parts argument readily confirms that $(\mathcal{T}_1g)(x)=xg''(x)+g'(x)-xg(x)$ is a Stein operator for the product normal distribution (see \cite{gauntpn17}).
We note that the operator $(\mathcal{T}_1g)$ can also be obtained from the generator method, with the associated diffusion process a Bessel process with killing \cite[p.\ 145 and p.\ 148]{Oksendal2013}.
Finally we note that this operator can also be obtained from the density approach, as a standardization of the density operator $(\mathcal{T}_2g)(x)=g'(x)-\mathrm{sgn}(x)(K_1(|x|)/K_0(|x|))g(x)$ with $\mathrm{sgn}(x)$ the sign of $x$.
From the perspective of computational statistics, the modified Bessel functions in the Stein operator $\mathcal{T}_2$ cause no problems, aside from perhaps marginally increasing evaluation cost. However, in theoretical settings it would be awkward to work with $\mathcal{T}_2$, because the solution of the corresponding Stein equation is not bounded, and standard techniques for bounding Stein discrepancies cannot be applied.
\end{example}
\paragraph{More Approaches.} Many other procedures have been proposed in the literature, with specific applications in mind. Examples include the \emph{perturbation approach} from \cite{barbour1999poisson}, the \emph{characteristic function} approach from \cite{arras2020stein}, and the \emph{algebraic approach} from \cite{azmoodeh2019algebraic}; again we stress that this list is far from exhaustive. As a matter of fact, not all Stein operators in the literature are differential or difference operators: for instance, for Dickman distributions, an integro-differential operator is used in \citet{bhattacharjee2019dickman}, and for non-Gaussian stable distributions, fractional operators are employed in \citet{arras2019stein, xu19}. The theory of Stein operators is beyond the scope of the present paper; aside from the above references we also refer the interested reader to e.g.\ \cite{ley2017stein} and \cite{gms19} for a more thorough introduction.
\subsubsection{Some General Remarks on Stein Operators}
\label{sn:genremar}
We observe that a Stein operator can often be found even when the density of the target distribution is not available in closed form, which will be particularly useful for applications in statistics. In this context there are two classes of important problems that we highlight:
\paragraph{Bayesian Computation} In computational Bayesian statistics, usually the posterior distribution is known only in an unnormalized form. This is not a hindrance for Stein's method. Take for example the Langevin Stein operator of Example \ref{ex:steinopdiffusions}: $(\operator{}g)(x) = \langle \nabla \log p(x), g(x)\rangle + \langle
\nabla, g(x)\rangle$. Any function of the form $(\operator{}g)$ can be evaluated pointwise provided that $\nabla \log p$ can be evaluated, which is often a reasonable requirement. In particular, this does not require knowledge of the normalizing constant of $p$, since if $p=\tilde{p}/C$ for $C >0$, then
$\nabla \log p = \nabla \log \tilde{p} -\nabla \log C = \nabla \log \tilde{p}$. Illustrations of this principle can be found in Section \ref{sec: stein prior} as well as for example in \citet{GorhamDuVoMa19} and \citet{mijoule2021stein}.
\paragraph{Intractable Likelihood}
A second example includes models in which the likelihood itself is unnormalized, in which case the model is often called a Gibbs distribution. For these, $\ell(\theta;x) \propto \tilde{\ell}(\theta,x)$, where $\tilde{\ell}(\theta,x)$ can be pointwise evaluated. Once again, working with $\nabla_x \log \ell(\theta;x)$ may be practical even when the normalizing constant is an intractable integral. Furthermore, when the likelihood can be written as the density of a natural exponential family model, \cite{Barp2019} noticed that $\nabla_x \log \ell(\theta;x)$ becomes linear in $\theta$, which is particularly useful in the development of new statistical methodology based on the Langevin Stein operator (see also \cite{Matsubara2021}).
\section{Explicit Bounds in Theoretical Statistics via Stein's Method}\label{sec:A}
The focus of this section is on providing explicit error bounds for distributional approximations of statistical quantities.
In frequentist statistics, asymptotic considerations such as consistency and asymptotic efficiency are regularly used to justify the use of a particular estimator or a particular hypothesis test.
An important question is therefore to what extent the asymptotic properties of an estimator or test reflect the properties of the estimator or test when applied to a finite dataset.
In Section \ref{estim} we explain how Stein's method has been recently used to understand the error associated to normal approximation of the distribution of the maximum likelihood estimator, leading to explicit error bounds.
Then, in Sections \ref{LRstat} and \ref{perasonsec} the analogous problems for likelihood ratio test statistics and Pearson's test statistic are considered.
In Bayesian statistics, prior information is codified into a prior distribution and combined with data through Bayes' theorem, leading to the posterior distribution. In many situations the prior distribution is elicited from an expert, or chosen for convenience. Understanding the sensitivity of \emph{a posteriori} conclusions to the choice of prior is hence of interest. In Section~\ref{sec: stein prior} we explain how Stein's method can be used to derive explicit bounds on prior sensitivity, applying these to the fundamental normal model as well as to logistic regression. This leads to a theoretical measure of prior sensitivity.
For ease of presentation and without risk of ambiguity, in Sections~\ref{estim}--\ref{perasonsec}, we write $\mathbb{E}$ instead of $\mathbb{E}_{X \sim P}$. We return to the notation $\mathbb{E}_{X \sim P}$ in Section~\ref{sec: stein prior} and use it for the rest of the paper. In this section, the notation is as follows.
Let ${X} = ({X_1}, \ldots, {X_n})$ be i.i.d.\ observations on a measurable space $(\mathcal{X}, \mathcal{B})$ from a distribution with probability density function $f({x}| \theta)$, where $\theta = (\theta_1, \ldots, \theta_d)^\intercal \in \Theta \subset {\mathbb{R}} ^d$. For data ${x}=(x_1, \ldots, x_n)$ sampled from this model,
the likelihood function is $L(\theta;{x}) =\prod_{i=1}^nf({x_i}| \theta) $
and its natural logarithm is denoted by $\ell(\theta;{x})$. When the context is clear, the argument $x$ is often omitted. It is assumed that $\ell (\theta; {x})$ possesses partial derivatives up to order 3 with respect to $\theta$.
In the frequentist setting, the true value of the parameter is denoted by $\theta_0$. The expected Fisher information for a single observation is then denoted by $i(\theta_0)$.
Maximum likelihood estimation gives a set of values $\hat{\theta}_n(\bolds{x})$ for the parameters of the model which maximize the likelihood function. For random data $\bolds{X}$, such a random vector $\hat{\theta}_n(\bolds{X})$, whenever it exists, is called a maximum likelihood estimator (MLE). For many models, an MLE exists and is unique; this is known as the `regular' case, which is assumed to hold in this paper.
\subsection{Approximating the Distribution of the MLE}\label{estim}
A fundamental result in asymptotic statistics (dating back to \cite{Fisher}) is the normal approximation of the MLE. For example, in the simple case of $X_1, X_2, \ldots, X_n$ being i.i.d.\ random variables from a single-parameter distribution, then for $Z \sim {\rm N}(0,1)$, and under classical regularity conditions,
\begin{talign}
\label{normal_approximation_univariate}
\sqrt{n\:i(\theta_0)}(\hat{\theta}_n(\bolds{X}) - \theta_0) \rightarrow_d Z, \quad \text{as $n\rightarrow\infty$,}
\end{talign}
where $\rightarrow_d$ denotes convergence in distribution. While such asymptotic results are available also in more general settings than the univariate one, explicit distributional bounds assessing the quality of the approximation (expressed in terms of some probability metric) are of interest as samples are finite. Such bounds have been obtained with the help of Stein's method; for non-explicit $O(n^{-1/2})$ Kolmogorov distance bounds derived without the use of Stein's method we refer the reader to \citet{pinelis2016optimal,pinelis2017optimal}.
Starting with the single-parameter case, under some natural regularity assumptions which we do not detail here, \cite{anastasiou_reinert2017} obtain general bounds w.r.t.\ the bounded Wasserstein distance as follows. Let $W_n:=\sqrt{n\:i(\theta_0)}(\hat{\theta}_n(\bolds{X}) - \theta_0)$. Then the interest is to find upper bounds on $|\mathbb{E}[h(W_n)] - \mathbb{E}[h(Z)]|$, where $h\in\mathcal{H}_{\mathrm{bW}}$ as in Remark \ref{idmlist}.
For ease of exposition here we assume that the observations ${\bolds{ x}} = (x_1, \ldots, x_n)$ are realizations of i.i.d.\ random variables ${\bolds{ X}} = (X_1, \ldots, X_n)$. The heuristic is to represent the standardized MLE in such a way that it contains a quantity which is a sum of independent random variables plus a term that can be controlled. As the definition of the MLE ensures that $\ell'(\hat{\theta}_n(\bolds{x});\bolds{x}) = 0$, a second order Taylor expansion of $\ell'(\hat{\theta}_n(\bolds{x});\bolds{x})$ about $\theta_0$ gives
\begin{talign}
\label{second_order_Taylor}
\ell''(\theta_0;\bolds{x})\big(\hat{\theta}_n(\bolds{x}) - \theta_0\big) = -\ell'(\theta_0;\bolds{x}) - R_1(\theta_0;\bolds{x}),
\end{talign}
where $R_1(\theta_0;\bolds{x}) = \frac{1}{2}(\hat{\theta}_n(\bolds{x})-\theta_0)^2\ell^{(3)}(\theta^*;\bolds{x})$, with $\theta^{*}$ lying between $\hat{\theta}_n(\bolds{x})$ and $\theta_0$. Rearranging \eqref{second_order_Taylor} gives
\begin{talign}
\nonumber -n\:i(\theta_0)\big(\hat{\theta}_n(\boldsymbol{x}) - \theta_0\big) = -\ell'(\theta_0;\boldsymbol{x})-R_1(\theta_0;\boldsymbol{x}) - \big(\hat{\theta}_n(\boldsymbol{x}) - \theta_0\big)[\ell''(\theta_0;\boldsymbol{x}) + n\:i(\theta_0)].
\end{talign}
The regularity assumptions include that $i(\theta_0)\neq 0$ and therefore
\begin{talign}
\label{main_heuristic_result}
W_n = \frac{1}{\sqrt{n\:i(\theta_0)}} \{\ell'(\theta_0;\bolds{x})+R_1(\theta_0;\bolds{x}) + R_2(\theta_0;\bolds{x}) \} ,
\end{talign}
where $R_2(\theta_0;\bolds{x}) = (\hat{\theta}_n(\bolds{x}) - \theta_0)\left(\ell''(\theta_0;\bolds{x}) + n\:i(\theta_0)\right)$. In \eqref{main_heuristic_result}, the quantity of interest is split into a remainder term plus $\ell'(\theta_0;\bolds{x})/\sqrt{n\:i(\theta_0)} = \sum_{i=1}^n \ell'(\theta_0; x_i)/\sqrt{n\:i(\theta_0)}$, which is a realization of a sum of independent random variables; we shall also denote $S_n:=\sum_{i=1}^n \ell'(\theta_0; X_i)/\sqrt{n\:i(\theta_0)}$. Therefore,
\begin{talign}
\label{finalboundnoncanonical1}|\mathbb{E}[h(W_n)] - \mathbb{E}[h(Z)]|& \leq
|\mathbb{E}[h(S_n)] - \mathbb{E}[h(Z)]|\\
\label{finalboundnoncanonical2} &\quad + \left|\mathbb{E}\bigg[h\bigg(\frac{\ell'(\theta_0;\bolds{X}) + R_1(\theta_0;\bolds{X}) + R_2(\theta_0;\bolds{X})}{\sqrt{n\:i(\theta_0)}} \bigg) - h(S_n)\bigg]\right|,
\end{talign}
and it remains to bound \eqref{finalboundnoncanonical1} and \eqref{finalboundnoncanonical2} in a meaningful way. First, as $S_n$ is a sum of i.i.d.\ random variables, an upper bound for \eqref{finalboundnoncanonical1} is obtained using the bound (\ref{steinsumbd}). Second, for the quantity in \eqref{finalboundnoncanonical2}, further Taylor expansions and the Cauchy-Schwarz and Markov inequalities are used to derive an upper bound. Skipping the technicalities, \cite{anastasiou_reinert2017} prove that, for $\epsilon=\epsilon(\theta_0)$ such that $( \theta_0 -\epsilon, \theta_0 + \epsilon ) \subset \Theta$ and $M(x) = \sup_{\theta: | \theta - \theta_0| \le \epsilon } | \ell^{(3)} (\theta; x) | $,
\begin{talign}
\label{boundTHEOREMgen}
\nonumber &d_{\mathrm{bW}}(\mathcal{L}(W_n), \mathcal{L}(Z)) \leq \frac{1}{\sqrt{n}}\Big(2 + \frac{1}{[i(\theta_0)]^{3/2}}\mathbb{E}\Big[\big|\frac{\mathrm{d}}{\mathrm{d}\theta}{\rm log}f(X_1|\theta_0)\big|^3\Big]\Big)\\
\nonumber &\; + \frac{1}{\sqrt{i(\theta_0)}}\sqrt{{\rm Var}\big(\frac{\mathrm{d}^2}{\mathrm{d}\theta^2} \log (f(X_1|\theta_0))\big)}\sqrt{\mathbb{E}\big[\big(\hat{\theta}_n(\bolds{X})- \theta_0\big)^2\big]} + \frac{2}{\epsilon^2}\mathbb{E}\big[\big(\hat{\theta}_n(\bolds{X}) - \theta_0\big)^2\big]\\
&\; + \frac{1}{2\sqrt{ni(\theta_0)}}\Big[\mathbb{E}\Big[\big(\sum_{i=1}^{n}M(X_i)\big)^2\,\Big|\, |\hat{\theta}_n(\bolds{X}) - \theta_0| < \epsilon\Big]\Big]^{1/2}\Big[\mathbb{E}\big[\big(\hat{\theta}_n(\bolds{X}) - \theta_0\big)^4\big]\Big]^{1/2}.
\end{talign}
As the rate of convergence of the MSE, $\mathbb{E}[(\hat{\theta}_n(\bolds{X}) - \theta_0)^2]$, is $O(n^{-1})$, the order of the bound \eqref{boundTHEOREMgen} is $O(n^{-1/2})$. To make use of a bound such as \eqref{boundTHEOREMgen} -- which is of a form that is typical for applications of Stein's method -- it remains to compute the various quantities. This is performed in \cite{anastasiou_reinert2017} for a variety of settings, as we illustrate now for one example.
\begin{example}
Consider the exponential distribution in its canonical form. The probability density function is $f(x|\theta) = \theta\mathrm{e}^{-\theta x}$ for $x > 0$ and the unique MLE for $\theta$ is $\hat{\theta}_n(\bolds{X}) = 1/\overline{X}$, the inverse of the sample average. Then, for $\epsilon = \theta_0/2 > 0$, \eqref{boundTHEOREMgen} gives
\begin{talign*}
d_{\mathrm{bW}}(\mathcal{L}(W_n), \mathcal{L}(Z))
\leq \frac{ 4.41456}{\sqrt{n}} + \frac{8(n+2)(1+\sqrt{n} )}{(n-1)(n-2)}.
\end{talign*}
This bound is explicit and of the order $n^{-1/2}$.
\end{example}
The technique leading to bounds such as \eqref{boundTHEOREMgen} has been adapted to many different situations. For instance, using the delta method combined with Stein's method, \cite{anastasiou_ley2017} give an explicit bound for MLEs which can be expressed as a smooth function of a sum of independent terms. Relaxing the independence assumption for the random variables, \cite{anastasiou2017bounds_m} assesses the normal approximation of the MLE, but now under the presence of a local dependence structure between the random variables. The order of the bounds is still the optimal $O(n^{-1/2})$.
Upper bounds under the multivariate setting (the parameter vector is $\theta = (\theta_1, \theta_2, \ldots, \theta_d)$) have also been obtained in \cite{anastasiou2018assessing, anastasiou_gaunt2019}, starting from the multivariate Stein equation \eqref{multivariate_Stein_equation}. More recently, \cite{ag20} obtained explicit order $O(n^{-1/2})$ bounds for the multivariate normal approximation of the vector MLE under general regularity conditions, this time in Wasserstein distance (and no longer bounded Wasserstein).
\subsection{Approximating the Distribution of Likelihood Ratio Statistics}\label{LRstat}
The MLE can be used as a basis for likelihood ratio tests, which under regularity assumptions follow approximately a chi-square distribution. The test problem is
\begin{talign*}
H_0: \theta_{0,j} = 0, \quad j=1, \ldots, r \mbox{ against }
H_1: \theta \in \Theta.
\end{talign*}
Assume that ${\rm dim}(\Theta) = d$; then $\Theta_0=\{\theta \in \Theta: \theta_{0,j} = 0 \mbox{ for } j=1, \ldots, r\}$ has dimension $d-r$. Let $\hat{\theta}^{{\rm res}}(\bolds{x}) = {\rm argmax}_{\theta \in \Theta_0} L(\theta;\bolds{x})$ and $\hat{\theta}_n(\bolds{x}) = {\rm argmax}_{\theta \in \Theta} L(\theta;\bolds{x})$
denote the MLEs for $\theta \in \Theta_0$ and for $\theta \in \Theta$, respectively. Wilks' Theorem \citep{wilks1938large} states that under regularity conditions, as the sample size $n$ tends to infinity,
\begin{talign*}
-2 \log (\Lambda) := 2 \big\{ \ell \big( \hat{\theta}_n(\bolds{X}); \bolds{X} \big) - \ell \big( \hat{\theta}^{{\rm res}} (\bolds{X}); \bolds{X} \big)\big\} \rightarrow_d \chi_{(r)}^2,
\end{talign*}
with the degrees of freedom $r$ being the difference in dimension between $\Theta$ and $\Theta_0$. Again, an explicit bound on the distance to the chi-square distribution is important for applications as the sample size is finite.
An explicit general bound of order $O(n^{-1/2})$ is obtained in \citet[Theorem 2.1]{anastasiou_reinert2018bounds} using Stein's method. The basic heuristic is to write the likelihood ratio statistic as a quadratic form plus a small remainder term. The distance between this quadratic form and the limiting chi-square distribution is then bounded by Stein's method for chi-square approximation. We now sketch the main steps. The log-likelihood ratio statistic is
\begin{talign*}
- 2 \log (\Lambda) = 2 \log \big( \frac{T_1}{T_2} \big)
\mbox{ with } T_1 = \frac{ L(\hat{\theta}_n(\boldsymbol{x});\boldsymbol{x})}{L(\theta_0;\boldsymbol{x})} \mbox{ and } T_2 = \frac{ L(\hat{\theta}^{{\rm res}}(\boldsymbol{x});\boldsymbol{x})}{ L(\theta_0;\boldsymbol{x})}
\end{talign*}
with $\theta_0$ the unknown true parameter. Thus, $T_1$ is the likelihood ratio for testing the simple null hypothesis that $\theta = \theta_0$ against the alternative that $\theta \in \Theta$, whereas $T_2$ is the likelihood ratio for testing the simple null hypothesis that $\theta = \theta_0$ against the alternative that $\theta \in \Theta_0$.
The score function for $\theta_0$ is
\begin{talign*}
S(\theta_0) = S(\theta_0,\boldsymbol{x}) = \nabla \log (L(\theta_0;\boldsymbol{x})) = \sqrt{n}\bigg( \begin{array}{c }\xi \\ \eta \end{array} \bigg) = \sqrt{n}
\bigg( \begin{array}{c }\xi (\theta_0, \boldsymbol{x}) \\ \eta (\theta_0, \boldsymbol{x}) \end{array} \bigg)
\end{talign*}
with column vectors $\xi = (\xi_1, \ldots, \xi_r)^{\intercal} \in {\mathbb{R}} ^r$ and $\eta= (\eta_1, \ldots, \eta_{d-r})^{\intercal} \in {\mathbb{R}} ^{d-r}$. The Fisher information matrix, which again is assumed to exist, for one random vector is denoted by $I(\theta_0)$. Under regularity conditions, the $d \times 1$ score function and the $d \times d$ Fisher information matrix are connected through the equality
\begin{talign*}
{\mathbb{E}} [S(\theta_0) S(\theta_0)^{\intercal} ] = n \; I(\theta_0) = n\left( \begin{array}{c c } A & B \\B^{\intercal} & C \end{array} \right),
\end{talign*}
where, for any $r \in \left\lbrace 1,2,\ldots, d\right\rbrace$, $A = A^\intercal \in \mathbb{R}^{r\times r}$, $B \in \mathbb{R}^{r\times(d-r)}$, and $C = C^\intercal \in \mathbb{R}^{(d-r)\times(d-r)}$.
A series of Taylor expansions, for which the details are not given here, yields (with $R$ denoting a remainder term)
\begin{talign*}
- 2 \log \Lambda = \begin{pmatrix}\xi \\
\eta
\end{pmatrix}^{\intercal}[I(\theta_0)]^{-1}\begin{pmatrix}\xi \\
\eta
\end{pmatrix} - \eta^{\intercal}C^{-1}\eta + {R}.
\end{talign*}
Now, $\begin{pmatrix}\xi \\ \eta
\end{pmatrix}^{\intercal}[I(\theta_0)]^{-1}\begin{pmatrix}\xi \\
\eta
\end{pmatrix} - \eta^{\intercal}C^{-1}\eta$ is a quadratic form and for such expressions, the distance to the chi-square distribution can be assessed using Stein's method for chi-square approximation and the relevant bound can be found in Theorem 2.1 of \cite{anastasiou_reinert2018bounds}. The remainder term $R$ is bounded using alternative techniques, based mainly on known probability inequalities.
\subsection{Approximating the Distribution of Pearson's Statistic}\label{perasonsec}
Pearson's chi-square test for goodness-of-fit of categorical data is one of the first statistical tests one learns as a student. Consider $n$ independent trials, with each trial leading to a unique classification over $m$ classes. Let $p_1,\ldots,p_m$ represent the non-zero classification probabilities, and let $U_1,\ldots,U_m$ represent the observed numbers arising in each class. Then, Pearson's chi-square statistic \citep{pearson}, given by
\begin{talign*} {W} = \sum_{j=1}^m \frac{(U_j - n p_j)^2}{n p_j},
\end{talign*}
is asymptotically $\chi_{(m-1)}^2$ distributed. \cite{gaunt2017chi} used Stein's method to obtain explicit error bounds on this distributional approximation. It is worth noting that, in some sense (beyond the test statistic taking a different form), the setting considered here is less general than that of Section~\ref{LRstat}, because the likelihood ratio test can also be applied to categorical data. However, by using the additional structure given by categorical data, it is possible, via careful use of couplings, to obtain a bound for the rate of convergence of Pearson's statistic that has an optimal dependence on all model parameters, a feature not seen in the bound of \cite{anastasiou_reinert2018bounds}. The main bound of \cite{gaunt2017chi} is as follows:
\begin{theorem}
Let $m\geq2$ and suppose $p_*:=\min_{1\leq j\leq m}p_j\geq n^{-1}$. Let $T_{m-1}\sim \chi_{(m-1)}^2$. Then, for all bounded $h:\mathbb{R}^+\rightarrow\mathbb{R}$ with derivatives up to fifth order bounded,
\begin{talign}\label{pearbound1}
|\mathbb{E} [h({W})] - \mathbb{E}[h(T_{m-1})] | &\leq \frac{4m}{np_*}\{19\|h\|_\infty+366\|h'\|_\infty+2016\|h''\|_\infty\nonumber\\
&\quad+5264\|h^{(3)}\|_\infty+106965\|h^{(4)}\|_\infty+302922\|h^{(5)}\|_\infty\}.
\end{talign}
\end{theorem}
The $O(n^{-1})$ rate is optimal, as is the dependence on $p_*$, because $W \rightarrow_d\chi_{(m-1)}^2$ if and only if $np_*\rightarrow\infty$. The feature of the bound decaying to zero as $np_*\rightarrow\infty$ represents an advance on earlier work \citep{gu03,mann2,yarnold}, and is a demonstration of the power of Stein's method at disentangling the complex dependence structures found in applications.
Here we sketch an overview of the proof of (\ref{pearbound1}) and describe how the $O(n^{-1})$ rate was obtained. Some of the steps may be of interest in other applications of Stein's method. Indeed, \cite{gr16} used some of these techniques to obtain an explicit $O(n^{-1})$ bound to quantify the chi-square approximation of Friedman's statistic.
For $j=1,\ldots,m$, let $S_j = (U_j - np_j)/\sqrt{np_j}$, so that $S_j$ denotes the standardized cell counts; then $W = \sum_{j=1}^m S_j^2$. Notice that the $S_j$ are dependent: indeed, $\sum_{j=1}^m \sqrt{p_j}S_j = 0$. Notice also that $U_j \sim \mathrm{Bin}(n, p_j)$ for each $j$. Let $I_j(i)$ be the indicator that trial $i$ results in classification
in cell $j$, and let $\tilde{I_j}(i) = I_j(i) - p_j$ be its standardized version. Then $S_j = (np_j)^{-1/2}\sum_{i=1}^n \tilde{I}_j(i)$.
Through the chi-square Stein equation $\opsub{m-1}f(w):=wf''(w)+\frac{1}{2}(m-1-w)f'(w)=h(w)-\mathbb{E}[h(T_{m-1})]$, the problem of bounding $|\mathbb{E} [h(W)] - \mathbb{E}[h(T_{m-1})] |$ reduces to bounding $|\mathbb{E}[\opsub{m-1}f(W)]|$. The first key step in the proof is to convert this chi-square approximation into one of multivariate normal approximation, which allows one to use local approach couplings to effectively treat the dependence structure of the indicators $I_j(i)$. To this end, elementary calculations show that $S = (S_1, \ldots, S_m)$ has mean vector $0$ and covariance matrix $\Sigma_{S}=(\sigma_{jk})$ with entries $\sigma_{jj}=1-p_j$ and $\sigma_{jk}=-\sqrt{p_jp_k}$, $j\not=k$.
The $\mathrm{N}_m(0,\Sigma_{S})$ Stein operator is $\opsub{\mathrm{N}_m(0,\Sigma_{S})} f(s) = \nabla^\intercal\Sigma_{S}\nabla f(s)-s^\intercal\nabla f(s)$, see Example~\ref{ex:normaleas}.
Direct calculations verify the following connection between the $\chi^2_{(m-1)}$ and $\mathrm{N}_m(0, \Sigma_{S})$ Stein operators. Let $f \in C^2(\mathbb{R})$ and define $g: \mathbb{R}^m \to \mathbb{R}$ by $g(s) = f(w)/4$ with $w = \sum_{i=1}^m s_i^2$. If $\sum_{j=1}^m \sqrt{p_j}s_j = 0$ then
\begin{talign}\label{norchicon} \opsub{\mathrm{N}_m(0,\Sigma_{S})} g(s) = \opsub{m-1} f(w).
\end{talign}
In virtue of (\ref{norchicon}), it suffices to bound $|\mathbb{E}[\opsub{\mathrm{N}_m(0,\Sigma_{S})}g(S)]|$, where $g(s) = f(w)/4$ for $f$ the solution of the $\chi_{(m-1)}^2$ Stein equation.
The quantity $|\mathbb{E}[\opsub{\mathrm{N}_m(0,\Sigma_{S})}g(S)]|$ is bounded via Taylor expansions and local approach couplings that exploit the particular dependence structure of the indicators $I_j(i)$. To obtain the $O(n^{-1})$ rate, the Taylor expansions are carried out `one term further' than in typical implementations of Stein's method for multivariate normal approximation. A number of the remainder terms can be immediately seen to be of the desired order $O(n^{-1})$. However, after various manipulations, an additional term is left over which at first glance appears to be of order $O(n^{-1/2})$:
\begin{talign*}\frac{1}{\sqrt{n}}\sum_{j=1}^m\frac{1}{\sqrt{p_j}}\left\{\frac{3}{2}|\mathbb{E}[S_jf''(W)]|+|\mathbb{E}[S_j^3f^{(3)}(W)]|\right\}.
\end{talign*}
A key part of the proof is to bound this term to order $O(n^{-1})$. Let us see why this is the case. Let $\psi_\ell(s)$ be the solution of the $\mathrm{N}_m(0,\Sigma_{S})$ Stein equation $\opsub{\mathrm{N}_m(0,\Sigma_{S})} \psi_\ell(s)=h_\ell(s)-\mathbb{E}[h_\ell(Z)],$
where $h_1(s)=s_jf''(\sum_{k=1}^ms_k^2)$, $h_2(s)=s_j^3f^{(3)}(\sum_{k=1}^ms_k^2)$ and $Z\sim \mathrm{N}_m(0,\Sigma_{S})$. Since $h_\ell(s)=-h_\ell(-s)$ for $\ell=1,2$, and $Z=_d-Z$, it follows that $\mathbb{E}[h_1(Z)]=\mathbb{E}[h_2(Z)]=0$. Therefore, for $\ell=1,2$,
\begin{talign*}|\mathbb{E}[h_\ell(S)]|=|\mathbb{E}[h_\ell(S)]-\mathbb{E}[h_\ell(Z)]|=|\mathbb{E}[\opsub{\mathrm{N}_m(0,\Sigma_{S})} \psi_\ell(S)]|,
\end{talign*}
and the term on the right-hand side can be bounded to order $O(n^{-1/2})$ by Stein's method for multivariate normal approximation.
These three developments illustrate how Stein's method can be used to quantify approximations in frequentist statistics. The next section illustrates an application of Stein's method in Bayesian statistics.
\subsection{A Measure of Prior Sensitivity} \label{sec: stein prior}
In Bayesian statistics, \cite{DF} proved that, under certain regularity conditions and for large sample sizes, the choice of a prior distribution is irrelevant for posterior inference. It is, however, of interest to estimate prior sensitivity for fixed (and often small) sample sizes; Stein's method can provide a partial answer by comparing density operators.
Let us start by fixing the notation. Suppose that the observations ${X_1,\ldots, X_n}$ are i.i.d.\ from a parametric model with a scalar parameter of interest, which we model as a random variable $\Theta$. Now, assume we have two distinct (possibly improper) prior densities $p_1(\theta)$ and $p_2(\theta)$ for the random quantity $\Theta$. The resulting posterior densities for $\Theta$ can be expressed as
\begin{talign}\label{ex:ley}
p_i (\theta;x) = \kappa_i (x) p_i(\theta) \ell(\theta;x), \ \ \ i=1,2,
\end{talign}
where $\kappa_1$ and $\kappa_2$ are
normalizing constants. Denote by $(\Theta_1, P_1)$ and $(\Theta_2, P_2)$ pairs of random variables and cumulative distribution functions which correspond to the densities $p_1(\theta; x)$ and $p_2(\theta; x)$, respectively, with means $\mu_1$ and $\mu_2$. We assume that the densities $p_1(\theta; x)$ and $p_2(\theta; x)$ are \textit{nested}, so that the support of one is included in the support of the other; writing $I_i = (a_i, b_i)$ for the support of $p_i(\theta; x)$, $i = 1, 2$, we suppose ${I}_2 \subseteq {I}_1$, which allows us to write $p_2 (\theta;x) = \frac{\kappa_2 (x)}{\kappa_1 (x)} \rho(\theta) p_1(\theta; x)$ where $\rho(\theta) = p_2(\theta)/p_1(\theta)$ is the ratio of prior densities. The key idea relies on the elementary identity
\begin{talign*}
\frac{\frac{\mathrm{d}}{\mathrm{d}\theta}(p_2(\theta;x)f(\theta))}{p_2(\theta;x)}=\frac{\frac{\mathrm{d}}{\mathrm{d}\theta}(p_1(\theta;x)f(\theta))}{p_1(\theta;x)}+\big(\frac{\mathrm{d}}{\mathrm{d}\theta}\log(\rho(\theta))\big)f(\theta),
\end{talign*}
which is an immediate consequence of \eqref{ex:ley} and the nestedness of the densities. This identity no longer involves the normalizing constants, and it relates the density operators of $p_1(\cdot; x)$ and $p_2(\cdot; x)$ in such a way that, with $f_h$ a solution to the Stein equation $h(x) - \mathbb{E}_{X_1\sim P_1} h(X_1) = \mathcal T_1 f_h (x)$, we get
\begin{talign*}
\mathbb{E}_{\Theta_2 \sim P_2} [h(\Theta_2)] - \mathbb{E}_{\Theta_1 \sim P_1} [h(\Theta_1)] & =\mathbb{E}_{\Theta_2 \sim P_2} \left[\mathcal{T}_1 f_h(\Theta_2)\right] =\mathbb{E}_{\Theta_2 \sim P_2} \left[(\mathcal{T}_1 - \mathcal{T}_2) f_h(\Theta_2)\right] \\
& = \mathbb{E}_{\Theta_2 \sim P_2} \left[\frac{\mathrm{d}}{\mathrm{d}\theta}\log(\rho(\theta))\big|_{\theta = \Theta_2} f_h(\Theta_2 )\right] \end{talign*}
(the second equality holds because, by definition, $\mathbb{E}_{\Theta_2 \sim P_2} \left[ \mathcal{T}_2 f_h(\Theta_2)\right] = 0$). Thus, bounding an IPM generated by some class $\mathcal H$ between $\Theta_2$ and $\Theta_1$ can be achieved by bounding $\mathbb{E}_{\Theta_2 \sim P_2} \big[\frac{\mathrm{d}}{\mathrm{d}\theta}\log(\rho(\theta))\big|_{\theta = \Theta_2} f_h (\Theta_2 )\big]$ over all $h \in \mathcal H$.
The next key concept needed here is the \textit{Stein kernel} of $p_1(\theta;x)$, which we denote by $\tau_1(\theta;x)$, and which is given by
\begin{talign}\label{tau1}
\tau_1(\theta;x) = \frac{1}{p_1(\theta; x)} \int_\theta^{b_1} (u - \mu_1) p_1(u; x) \,\mathrm{d}u.
\end{talign}
Then, as already noted by \cite{dobler2015stein}, $|f_h (\theta)|/\tau_1(\theta;x) \le 1$ for all $\theta \in I_1$ and all Lipschitz functions $h$ with Lipschitz constant 1.
These arguments lead to the following result (see \cite{LRS17,GL19}).
\begin{theorem}\label{maintheo}
Consider $\mathcal{H}_{\mathrm{W}}$ the class of Lipschitz-1 functions on $\mathbb{R}$ and suppose that both posterior distributions have finite means $\mu_1$ and $\mu_2$, and that $p_1$ has finite variance. Assume further that $\theta\mapsto\rho(\theta)$ is differentiable on ${I}_2$ and satisfies
(i) $\mathbb{E}[|\Theta_1-\mu_1|\rho(\Theta_1)]<\infty$; (ii) $(\rho(\theta)\int_{a_1}^{\theta}(h(y)-\mathbb{E}[h(\Theta_1)])p_1(y;x)\,\mathrm{d}y)'$ is integrable for all $h\in\mathcal{H}_{\mathrm{W}}$; (iii) $\lim_{\theta\rightarrow a_2,b_2}\rho(\theta)\int_{a_1}^{\theta}(h(y)-\mathbb{E}[h(\Theta_1)])p_1(y;x)\,\mathrm{d}y=0$ for all $h\in\mathcal{H}_{\mathrm{W}}$.
Then
\begin{talign}\label{bounds}
|\mu_1-\mu_2|=\frac{|\mathbb{E}_{\Theta_1 \sim P_1}[\tau_1(\Theta_1;x)\rho'(\Theta_1)]|}{\mathbb{E}_{\Theta_1 \sim P_1}[\rho(\Theta_1)]}\leq d_{\mathrm{W}}(P_1,P_2)\leq \frac{\mathbb{E}_{\Theta_1 \sim P_1}[\tau_1(\Theta_1;x)|\rho'(\Theta_1)|]}{\mathbb{E}_{\Theta_1 \sim P_1}[\rho(\Theta_1)]}.
\end{talign}
If, furthermore, the ratio $\rho$ is monotone increasing or decreasing, then the upper and lower bounds coincide so that
$d_{\mathrm{W}}(P_1,P_2)= \frac{\mathbb{E}_{\Theta_1 \sim P_1}[\tau_1(\Theta_1;x)|\rho'(\Theta_1)|]}{\mathbb{E}_{\Theta_1 \sim P_1}[\rho(\Theta_1)]}.
$
\end{theorem}
\begin{example}
Consider normal data with fixed variance $\sigma^2$. If the mean is the parameter of interest, \cite{LRS17} compare a normal ${\rm N}(\mu,\delta^2)$ prior for the location parameter with a uniform prior (we note that a normal prior is the conjugate prior in this situation). The Wasserstein distance between the resulting posteriors $P_1$ and $P_2$ corresponds to
\begin{talign*}
\frac{\sigma^2}{n\delta^2+ \sigma^2}\big| \overline{x}-\mu \big| \leq d_{\mathrm W} (P_1, P_2) \leq \frac{\sigma^2}{n\delta^2+ \sigma^2}\big| \overline{x}-\mu \big| + \frac{\sqrt{2}}{\sqrt{\pi}} \frac{\sigma^3}{n\delta \sqrt{n \delta^2 + \sigma^2}}
\end{talign*}
with $\overline{x} = n^{-1} \sum_{i=1}^n x_i$ the sample average. Both bounds are of the order of $O(n^{-1})$ and are easily interpreted: the better the initial guess of the prior, meaning here of the location $\mu$, the smaller the bounds and hence the smaller the influence of the prior.
\end{example}
Further examples can be found in \cite{LRS17,GL19}. The argument has been extended to the multivariate setting in \citet[Section 5]{mijoule2021stein}. We conclude with an example illustrating the multivariate version of the bounds.
\begin{example}
\label{ex:bayeslog}
Consider the log density of a Bayesian logistic regression posterior based on a
dataset of observations $\boldsymbol{x}_{\ell} = (v_{\ell}, y_{\ell})$,
$\ell = 1, \ldots, L$, with $v_\ell \in \mathbb{R}^d$ a vector of covariates and $y_\ell \in \{-1,1\}$ a binary label, and a $\mathrm{N}_d(\nu, \Sigma)$ prior on the parameter $\beta \in \mathbb{R}^d$ of the logistic regression:
\begin{talign*}
\log p_2(\beta) = \kappa(\boldsymbol{x}) - \frac{1}{2} \norm{
\Sigma^{-1/2}(\beta
- \nu) }^2 - \sum_{\ell = 1}^L \log \left( 1 + \mathrm{exp}\left(
-y_{\ell} \left\langle v_{\ell}, \beta \right\rangle \right) \right).
\end{talign*}
Here $ \kappa(\boldsymbol{x})$ is an irrelevant normalizing constant, the quadratic term comes from the multivariate normal prior on $\beta$, and the final sum is the logistic regression log-likelihood. It follows that
\begin{talign*}
\grad \log \rho(\beta) = -\sum_{\ell = 1}^L \frac{y_{\ell}\, \mathrm{exp}\left( -y_{\ell} \left\langle v_{\ell}, \beta \right\rangle \right)}{1+\mathrm{exp}\left( -y_{\ell} \left\langle v_{\ell}, \beta \right\rangle \right)}\, v_{\ell},
\end{talign*}
and
\begin{talign*}
\norm{ \mathbb{E}_{\beta \sim P_2} \left[ \sum_{\ell = 1}^L \frac{-y_{\ell}\, \mathrm{exp}\left( -y_{\ell} \left\langle v_{\ell}, \beta \right\rangle \right)}{1+\mathrm{exp}\left( -y_{\ell} \left\langle v_{\ell}, \beta \right\rangle \right)}\, \Sigma v_{\ell} \right] } & \leq d_{\mathrm{W}}(P_1, P_2) \leq \mathbb{E}_{\beta \sim P_2 } \left[ \norm{ \sum_{\ell = 1}^L \frac{-y_{\ell}\, \mathrm{exp}\left( -y_{\ell} \left\langle v_{\ell}, \beta \right\rangle \right)}{1+\mathrm{exp}\left( -y_{\ell} \left\langle v_{\ell}, \beta \right\rangle \right)}\, \Sigma v_{\ell} } \right].
\end{talign*}
\end{example}
\section{Computable Stein Discrepancies}\label{sec:comp_stein_discrepancies}
\input{computable}
\section{New Statistical Methods Based on Stein Operators}\label{sec:B}
In this section we will show how ingredients from Stein's method have been successfully used to uncover methodological tools and procedures. In particular, we will discuss a range of recent applications of Stein's method in computational statistics and machine learning. \cref{sec:measuring} shows how computable Stein discrepancies can be used to quantify the quality of approximate MCMC schemes. \cref{sec: sampling sec} introduces a variety of ways of using Stein's method to construct and improve a sample approximation, including Stein variational gradient descent (\cref{sec: SVGD}), Stein points (\cref{sec:steinpoints}), and Stein thinning (\cref{sec:steinthinning}). \cref{sec: CVs} describes Stein-based control variates for improved Monte Carlo integration, and finally \cref{kernelSteinEmbedding,sec:stein-oper-goodn} detail goodness-of-fit tests based on Stein discrepancies and Stein operators, respectively.
\input{measuring}
\input{svgd.tex}
\input{stein_points}
\input{control_variates}
\input{gof}
\section{Conclusion}\label{sec:conclu}
The goal of this paper is to highlight some recent developments in statistics that have been accomplished via tools inherited from Stein's method. Our tour covers topics in theoretical, methodological and applied statistics. However, many exciting developments are not covered, and a complete exhaustive description is an impossible task within the constrained space of a review paper such as this one.
On the level of new bounds in theoretical statistics, Stein's method has also been used for obtaining rates of convergence in de Finetti's celebrated theorem \citep{mijoule2016rate}, for proving non-asymptotic error bounds for convex and non-convex optimization \citep{ErdogduMaSh2018,ABE_Polyak-Ruppert}, for deriving error bounds for the multivariate normal approximation of vectors of quadratic forms with application to variance components estimation \citep{de2017}, for giving error bounds for the normal approximation of the probability measure determined by conic intrinsic volumes of a closed convex cone \citep{goldstein2017gaussian},
for deriving concentration inequalities in scalar and matrix settings \citep{Chatterjee2007,Chatterjee2010,Mackey2014,paulin2013deriving,paulin2016efron}, for establishing quantitative central limit theorems for U-statistics \citep{dobler2019quantitative}, and for obtaining Berry-Esseen bounds and moderate deviations for nonlinear and self-normalized statistics \citep{changshaozhou16,shaozhou16,shaozhangzhou16,shaozhang21}. In \citet{parkpark15,fathi2020relaxing}, it has been used to prove results for Charles Stein's other well-known achievements including the James-Stein shrinkage estimators and Stein's unbiased risk estimate.
On the level of new statistical procedures, Stein's method has been used for parameter inference in the context of intractable likelihood models by \citet{Barp2019,betsch2020minimum,Liu2018Fisher,grathwohl2020learning,Matsubara2021}, for designing sampling-based algorithms for non-convex optimization \citep{ErdogduMaSh2018}, for goodness-of-fit testing for directional statistics \citep{XuMatsuda2020}, and for estimating gradients of such models in \cite{Li2018gradients,Shi2018}. It has also been used for learning semi-parametric multi-index models in high dimensions \citep{yangetal2017}. In Bayesian statistics, Stein discrepancies have been used as variational objectives for posterior approximation \citep[e.g.,][]{ranganath2016operator,hu2018stein,fisher2020measure}.
Moreover, this paper shows considerable scope for more interplay between the research strand on how to set up Stein operators and that of devising computable Stein discrepancies and related algorithms. For example, for a given target distribution, it is mostly an open problem which Stein operator and class to choose so as to obtain a computable Stein discrepancy which is most useful for the problem at hand. This answer may also differ depending on whether we want to construct a hypothesis test, develop a sampling method, or measure sample quality. To date, the focus has mostly been on Langevin KSDs, but we envisage that some of the alternative approaches to Stein operators highlighted in Section \ref{sec:choossteinop} will play an increasing role in future applications of Stein's method to computational statistics.
The list of results given in this paper is but a sample of the ongoing activity in this area of research at the boundary between probability, functional analysis, data science and statistics; the more than 200 references in the current review may serve as testimony to the high level of activity in this field, and the substantive nature of the topics that we do not cover is testament to the considerable recent impact of Stein's method.
\hfill
\noindent \textbf{Acknowledgments}:
AA was supported by a start-up grant from the University of Cyprus. AB was supported by the Department of Engineering at the University of Cambridge, and the UK Defence Science and Technology Laboratory (Dstl) and Engineering and Physical Research Council (EPSRC) under the grant EP/R018413/2. FXB and CJO were supported by the Lloyds Register Foundation Programme on Data-Centric Engineering and The Alan Turing Institute under the EPSRC grant [EP/N510129/1]. RG was supported by a Dame Kathleen Ollerenshaw Research Fellowship. FG and CL were supported by a BOF Starting Grant of Ghent University. QL was supported in part by NSF CAREER No. 1846421. GR was supported in part by EP/T018445/1 and EP/R018472/1. YS was supported in part by CDR/OL J.0197.20 from FRS-FNRS.
\footnotesize
\bibliographystyle{apalike}
\subsection{Measuring Sample Quality}
\label{sec:measuring}
This section presents practical tools based on Stein's method for computing how well a given sample, represented as an empirical measure $Q_n = n^{-1}\sum_{i=1}^n \delta_{x_i}$, approximates a given target distribution $P$.
This line of work was motivated by the approximate Markov chain Monte Carlo (MCMC) revolution in which practitioners have turned to asymptotically biased MCMC procedures that sacrifice asymptotic correctness for improved sampling speed \citep[see, e.g.,][]{WellingTe11,Ahn2012,Korattikara2014}. The reasoning is sound -- the reduction in Monte Carlo variance from faster sampling can outweigh the bias introduced, but standard Monte Carlo diagnostics like effective sample size, asymptotic variance, trace and mean plots, and pooled and within-chain variance diagnostics presume eventual convergence to the target distribution and hence do not account for asymptotic bias. To address this deficiency, \citet{GorhamMa15,Gorham2017,HugginsMa2018,gorham2020stochastic} introduced the computable Stein discrepancies of \cref{sec:comp_stein_discrepancies} as measures of sample quality suitable for comparing asymptotically exact, asymptotically biased, and even deterministic sample sequences $\{x_1, \dots, x_n\}$.
\vspace{-3mm}
\paragraph{Graph Stein discrepancies}
For example, \citet{GorhamMa15} used the GSDs of \cref{ex:gsd} to select and tune approximate MCMC samplers, assess the empirical convergence rates of Monte Carlo and Quasi-Monte Carlo procedures, and quantify bias-variance tradeoffs in posterior inference.
An illustrative example is given in \cref{fig:gsd-ess-sgld}.
These applications were enabled by a series of analyses establishing that the GSD converges to $0$ if and only if the empirical measure $Q_n$ converges to $P$.
Specifically, \citet{mackey2016multivariate,GorhamDuVoMa19,ErdogduMaSh2018} bounded the GSD explicitly above and below by Wasserstein distances whenever the diffusion underlying the Stein operator couples quickly and has pseudo-Lipschitz drift.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{figures/julia_sgld-gmm_diagnostic_n=1000.pdf}
\includegraphics[width=0.67\textwidth]{figures/julia_diagnostic_contour_distname=sgld-gmm_n=1000_seed=7.pdf}
\caption{Selecting the step size $\epsilon$ for stochastic gradient Langevin dynamics \citep{WellingTe11}, a popular approximate MCMC algorithm designed for scalability. Standard MCMC diagnostics like effective sample size (ESS) do not account for asymptotic bias and select overly large $\epsilon$ with greatly overdispersed samples (right panel). Overly small $\epsilon$ leads to slow mixing (left panel).
The Stein discrepancy selects an intermediate value offering the best approximation (center panel). Figure reproduced from \citet[Fig.~3]{GorhamMa15}.}
\label{fig:gsd-ess-sgld}
\end{figure}
\vspace{-3mm}
\paragraph{Kernel Stein discrepancies}
The closed form of the KSDs of \cref{example:Langevin_KSD} represents a significant practical advantage for sample quality measurement, as no linear program solvers are necessary, and the computation of the discrepancy can be easily parallelized. However, \citet{Gorham2017} showed that not all KSDs are suitable for measuring sample quality. In particular, in dimension $d \geq 3$, KSDs based on popular kernels like the Gaussian and Mat\'ern\xspace kernels fail to detect when a sample is not converging to the target, even when the target is normal. To address this shortcoming, \citet{Gorham2017} developed a theory of weak convergence control for KSDs and designed a class of KSDs that provably control weak convergence for a large set of target distributions (see \citet{HugginsMa2018,Chen2018} for further developments). These convergence-determining KSDs have been shown to deliver substantial speed-ups over the original GSDs in higher dimensions \citep{Gorham2017}.
\vspace{-3mm}
\newcommand{$\textup{R}{\Phi}\textup{SDs}$\xspace}{$\textup{R}{\Phi}\textup{SDs}$\xspace}
\paragraph{Random feature Stein discrepancies} To identify a family of convergence-determining discrepancy measures that can be accurately and inexpensively approximated with random sampling, \citet{HugginsMa2018} introduce a new domain for the Stein operator using a feature function, giving rise to a {\it feature Stein set} and a corresponding {\it feature Stein discrepancy}. The feature Stein discrepancy is then approximated using importance sampling, which results in a {\it{random feature Stein discrepancy}} (R$\Phi$SD). \citet{HugginsMa2018} showed that $\textup{R}{\Phi}\textup{SDs}$\xspace upper bound standard discrepancy measures with high probability. This translates into high-probability convergence control whenever the approximating sample sequence is uniformly integrable.
\subsubsection{Sampling with Stein Points}\label{sec:steinpoints}
\emph{Stein points} \citep{Chen2018,Chen2019} is a method that progressively constructs a set of points $\{x_1,\ldots,x_n\} \subset \X$ to approximate $P$ by minimizing a Stein discrepancy. The case considered by these authors is that of the KSD which was minimized in a sequential greedy manner:
\begin{talign}
x_1 \in \mathrm{argmin}_{x \in \mathcal{X}} \text{KSD}_k(\{x\}), \qquad
x_n \in \mathrm{argmin}_{x \in \mathcal{X}} \text{KSD}_k(\{x_1,\dots,x_{n-1},x\}) \: \text{for } n > 1, \label{alg: stein points}
\end{talign}
where $\text{KSD}_k(\{x_1, \ldots, x_n\}) = \mathcal{S}(Q_n,\operator{},\mathcal{G}_k)$ and
$\mathcal{G}_k$ is the kernel Stein set; the notation emphasizes that the KSD is an explicit function of the set $\{x_1, \ldots, x_n\}$, and in the sequel such a set is selected so as to approximately minimize this KSD. A typical sequence obtained in this way is presented in the middle panel of \Cref{fig:sp-mcmc}.
Of course, finding the global minima in the equations above may be difficult and therefore the analysis that we discuss next allows for an imperfect optimization method to be used.
\citet[][Theorem 2]{Chen2018} showed that if $k_P$ in equation \eqref{eq:steinreproducingkernel} satisfies $\mathbb{P}_{X\sim P}( k_P(X,X) \geq t ) \leq b_1 \mathrm{e}^{-b_2t}$ for some constants $b_1, b_2 > 0$ and all $t \geq 0$, then there exist constants $c_1, c_2 > 0$, depending only on $k_P$ and $P$, such that, for any $n \in \mathbb{N}$, any point set $\{x_1, \ldots, x_n \} \subset \mathcal{X}$ satisfying, for all $j=1, \ldots, n,$
\begin{talign*}
\textsc{KSD}_k(\{x_1,\dots,x_j\})^2
& \leq
\frac{\delta}{n^2} + \min_{x\in \mathcal{X}: k_P(x,x)\leq \frac{2\log(j)}{c_2}} \textsc{KSD}_k(\{x_1,\dots,x_{j-1},x\})^2
\end{talign*}
satisfies
\begin{talign*}
\textsc{KSD}_k(\{x_1, \ldots,x_n\})
\leq \mathrm{e}^{\pi/2} \sqrt{\frac{2\log(n)}{c_2n} +\frac{c_1}{n} + \frac{\delta}{n}}.
\end{talign*}
This result makes explicit how Stein's method, and specifically KSD, can be used to transform the sampling problem of approximating $P$ into an optimization problem that admits a provably convergent numerical method.
\subsubsection{Stein Thinning}\label{sec:steinthinning}
Alternatively, \cite{riabiz2020optimal} considered the use of KSD in a post-processing approach to select states from a large pre-determined candidate set, with application to de-biasing MCMC output. Their approach can be summarized as:
\begin{talign}
x_1 & \in \mathrm{argmin}_{x \in \{X_1,\dots,X_N\}} \text{KSD}_k(\{x\}), \nonumber \\
x_n & \in \mathrm{argmin}_{x \in \{X_1,\dots,X_N\}} \text{KSD}_k(\{x_1,\dots,x_{n-1},x\})\: \text{for } n > 1,
\label{alg: stein points 2}
\end{talign}
where $(X_i)_{i =1, \ldots, N}$ is a $Q$-invariant Markov chain; $Q$ and $P$ need not be equal.
A typical sequence obtained in this way is presented in the right panel of \Cref{fig:sp-mcmc}.
These authors extended earlier convergence results to prove almost sure weak convergence of $Q_n = n^{-1} \sum_{i=1}^n \delta_{x_i}$ to $P$ in the limit as $N \geq n \rightarrow \infty$.
Indeed, provided that the Markov chain is $V$-uniformly ergodic with $V(x) \geq \frac{\mathrm{d}P}{\mathrm{d}Q}(x) \sqrt{k_P(x,x)}$ and that certain moments of the chain are finite, \citet[][Theorem 3]{riabiz2020optimal} showed that $\text{KSD}_k(\{x_1, \ldots, x_n\}) \rightarrow 0$ almost surely as $n \rightarrow \infty$.
It follows that Stein discrepancies may be used to post-process MCMC output, which may have the benefits of improving approximation quality, mitigating sampler bias, and providing a compressed representation of $P$. The closed form of KSD renders such post-processing straightforward. Extensions of Stein thinning, to allow for non-myopic optimization and for mini-batching, were recently studied in \cite{teymur2020optimal}. In related work, \cite{liu2016black,hodgkinson2020reproducing} proposed to use Stein discrepancies to re-weight Markov chain output, as opposed to selecting a smaller subset.
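In code, recursion \eqref{alg: stein points 2} differs from the Stein points sketch above only in its search space: the argmin runs over the pre-computed Markov chain output rather than a grid over the whole domain. A hedged sketch, reusing \texttt{stein\_points} from above:
\begin{lstlisting}[language=Python]
def stein_thinning(mcmc_output, score, n_points):
    # Stein thinning = the same greedy KSD minimization, with the
    # candidates restricted to the states visited by the Markov chain.
    return stein_points(mcmc_output, score, n_points)
\end{lstlisting}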
\subsection{Constructing and Improving Sample Approximation}
\label{sec: sampling sec}
We consider the problem of constructing and improving a sample-based approximation of the form $Q_n = n^{-1} \sum_{i=1}^n \delta_{x_i}$ for an intractable distribution $P$ of interest. Popular stochastic Monte Carlo methods such as MCMC provide a standard approach to achieve this. In this section, we show that Stein's method allows us to develop a suite of \emph{optimization-based} alternatives to Monte Carlo methods. We demonstrate this with three examples: \cref{sec: SVGD} introduces Stein variational gradient descent, a gradient-based algorithm that iteratively updates the location of the particles $\{x_1, \ldots, x_n\}$ to improve the approximation quality w.r.t.\ $P$. \cref{sec:steinpoints} introduces Stein Points, a greedy algorithm that constructs the approximation by sequentially adding particles to minimize KSD.
\cref{sec:steinthinning} introduces Stein Thinning, which compresses an existing approximation using KSD.
\subsubsection{Sampling with Stein Variational Gradient}
\label{sec: SVGD}
Let $P$ be a distribution with a continuously differentiable density function $p$ supported on $\X$. We want to find a set of points $\{x_1, \ldots, x_n\}\subset \X$, which we refer to as \emph{particles}, such that its empirical measure $Q$ gives a close approximation to $P$. Stein variational gradient descent (SVGD) \citep{Liu2016a} achieves this by iteratively updating the particles to minimize the KL divergence between $Q$ and $P$, which is made possible by exploiting an intrinsic connection between KL divergence and Stein's method.
For the purpose of derivation, we assume for now that $Q$ is a continuous distribution with a finite KL divergence $\KL(Q~||~P) <\infty$. We want to recursively ``transport'' the probability mass of $Q$ with a deterministic map to move it closer to $P$ in order to decrease $\KL(Q~||~P)$ as fast as possible. Specifically, we consider mappings of the form
\begin{talign}
\T(x) = x + \epsilon \ff(x),
\end{talign}
where $\epsilon$ is a small positive scalar that serves as a step size, and
$\ff\colon \X \to \X$
is a one-to-one mapping that serves as the velocity field.
Denote by $\Ts \muo$
the distribution of $\T(X)$ when $X\sim \muo$; this is also called the {\it pushforward measure}.
The key challenge is to optimally choose $\ff$ for each given $Q$, so that the KL divergence between $\Ts Q$ and $P$ is decreased as much as possible. Assuming $\epsilon$ is infinitesimal, the optimal choice of $\ff$ can be framed into a functional optimization problem:
\begin{talign} \label{equ:ff00}
\max_{\ff \in \F} \left\{-\frac{\dno}{\dno\epsilon} \KL( \Ts \muo ~|| ~ P) ~ \big |_{\epsilon = 0}
\right\},
\end{talign}
where the negative derivative
$-\frac{\dno}{\dno\epsilon} \KL( \Ts \muo ~|| ~ P) ~ \big |_{\epsilon = 0}$ measures the decreasing rate of KL divergence under the transport map $\T$ as we increase the step size $\epsilon$ starting from zero, and
$\F$ is a function space that specifies the candidate set of $\ff$. The key observation is that the objective in \eqref{equ:ff00} is in fact equivalent to the expectation $\E_{Q}[\oparg{g}{X}]$ of the Langevin Stein operator.
\begin{theorem}\label{thm:svgd_kl}
Assume $P$ and $Q$ have positive densities on $\X={\mathbb{R}} ^d$, and the density $p$ of $P$ is in $C^1({\mathbb{R}} ^d)$. Let $\T(x) = x + \epsilon \ff(x)$, where $\epsilon \in \mathbb{R}$ and $\ff\colon {\mathbb{R}} ^d \to {\mathbb{R}} ^d$ is a $C^1$ map with $\sup_{x\in {\mathbb{R}} ^d}\norm{\nabla\ff(x)}_2 <\infty$, where $\norm{\cdot}_2$ denotes the spectral norm. We have
\begin{talign*}
-\frac{\dno }{\dno \epsilon} \KL(\Ts Q ~||~ P)~\big|_{\epsilon=0} = \E_{X \sim Q}[\oparg{g}{X}] \end{talign*}
where $
\oparg{g}{x} =
\langle \nabla \log p(x), \ff(x)\rangle + \langle \nabla, \ff(x)\rangle$.
\end{theorem}
Theorem~\ref{thm:svgd_kl} draws an intriguing connection between Stein's method, the KL divergence and optimal transport.
It shows that \eqref{equ:ff00} is equivalent to the optimization in Langevin KSD:
\begin{talign}\label{equ:tpphi}
\mathrm{KSD}_k(Q)
= \max_{g \in \F} \left \{\E_{X \sim Q}[\operator{g}(X)] \right \}
=
\max_{\ff \in \F} \left \{ -\frac{\dno }{\dno \epsilon} \KL(\Ts Q ~||~ P)\big|_{\epsilon=0} \right \}.
\end{talign}
Therefore, the Langevin KSD can be interpreted as the maximum decreasing rate of KL divergence between $Q$ and $P$
under the best transport map in $\F$. Taking $\mathcal{G}$ to be the unit ball of the RKHS with kernel $k$, we can solve equation \eqref{equ:tpphi} in closed form (see Example~\ref{example:Langevin_KSD}):
\begin{talign}\label{equ:ffs}
\ffs_{\muo,P}(\cdot) \propto \E_{X \sim \muo}\left[
\nabla \log p(X) k(X,\cdot) + \nabla_x k(X,\cdot)\right].
\end{talign}
This yields the best update direction for ``transporting'' particles from $Q$ to $P$ under KL divergence. In practice, we take $Q$ to be the empirical measure of the particles, that is, $Q_n = n^{-1} \sum_{i=1}^n \delta_{x_i}$, while iteratively updating $\{x_1,\ldots,x_n\}$ using the optimal transport map found above, $\T^*_{Q,P}(x) = x + \epsilon \ffs_{Q,P}(x)$. This yields the following simple update rule on the particles, which is illustrated in the left panel of \Cref{fig:sp-mcmc}:
\begin{talign}
\label{equ:update11}
& x_i ~ \gets ~ x_i ~ + ~ \epsilon \frac{1}{n}\sum_{j=1}^n \left ( \nablax \log p(x_j) k( x_j, x_i) + \nablax_{x_j} k(x_j, x_i) \right), ~~~ \forall i =1,\ldots, n.
\end{talign}
The two terms in \eqref{equ:update11} play intuitive roles. The term with the gradient $\nabla \log p$ pushes the particles towards the high probability regions of $P$, while the term with $\nabla_x k$ can be viewed as a repulsive force that enforces diversity between the particles if $k$ is a stationary kernel of the form $k(x,x') = \phi(x-x')$: in this case, performing $x_i' \gets x_i + \epsilon \nabla_{x_j} k(x_j, x_i)$ would decrease $k(x_i,x_j)$, which measures the similarity between $x_i$ and $x_j$, when $\epsilon$ is sufficiently small. If there is no repulsive force,
or when there is only a single particle (and the kernel satisfies $\nabla_x k(x,x') = 0$ for $x = x'$), the solution would collapse to a local optimum of $\log p$, reducing to the maximum a posteriori (MAP) point. Therefore, by varying the number of particles, SVGD provides an interpolation between the MAP point and a full particle-based approximation.
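The update \eqref{equ:update11} amounts to a few lines of array code. The sketch below performs one SVGD step with an RBF kernel $k(x,x') = \exp(-\|x-x'\|^2/h)$; the step size, bandwidth, and function names are illustrative choices rather than part of the algorithm's specification.
\begin{lstlisting}[language=Python]
import numpy as np

def svgd_step(X, score, eps=0.1, h=1.0):
    # One SVGD update of the particle matrix X (n x d).
    n = X.shape[0]
    diffs = X[:, None, :] - X[None, :, :]   # diffs[j, i] = x_j - x_i
    K = np.exp(-(diffs**2).sum(-1) / h)     # K[j, i] = k(x_j, x_i)
    G = np.array([score(x) for x in X])     # G[j] = nabla log p(x_j)
    # driving term: (1/n) sum_j k(x_j, x_i) nabla log p(x_j)
    drive = K.T @ G / n
    # repulsive term: (1/n) sum_j nabla_{x_j} k(x_j, x_i)
    #               = (1/n) sum_j (2/h) k(x_j, x_i) (x_i - x_j)
    repulse = (2.0 / h) * (K[:, :, None] * (-diffs)).sum(axis=0) / n
    return X + eps * (drive + repulse)
\end{lstlisting}
For instance, iterating \texttt{X = svgd\_step(X, lambda x: -x)} drives a random initial particle set towards a standard normal target.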
SVGD defines a \emph{deterministic interacting particle} system in which $\{x_1, \ldots, x_n\}$ interact and co-evolve to reach a desirable equilibrium. There are (at least) two perspectives for understanding SVGD. One is through asymptotic theory \citep{liu2017stein}, which considers the limit of a large number of particles ($n\to\infty$) and continuous time ($\epsilon\to0$), and interprets SVGD as a \emph{gradient flow} of KL divergence induced by a kernel-Wasserstein geometric structure on the infinite dimensional space of distributions; a set of theoretical studies along this line can be found in \citet{lu2018scaling, liu2019understanding,duncan2019geometry, chewi2020svgd, gorham2020stochastic,nusken2021stein,korba2020non}. The second approach considers the non-asymptotic regime of a finite number $n$ of particles, which shows that
SVGD acts like a \emph{numerical quadrature} method in which the particles are arranged to exactly estimate the true expectation of a set of special basis functions determined by the Stein operator and kernel function \citep{liu2018stein}. This interpretation is more suitable for interpreting and optimizing the results of SVGD when using a small number of particles, which is a common case in large scale problems.
SVGD has been extended and improved in various ways. For example, amortized SVGD \citep{feng2017learning} learns neural samplers in place of the particle approximation; gradient-free SVGD \citep{han2018stein} provides an extension that requires no gradient information of the target distribution $P$; a number of other extensions and improvements can be found in, e.g., \citet{wang2017stein, zhuo2018message,wang2019stein,liu2017riemannian,detommaso2018stein,chen2018unified,chen2019projected,li2020stochastic,gorham2020stochastic,gong2019quantile,wang2019nonlinear,han2017stein}. SVGD has found applications in a variety of problems, including deep learning \citep[e.g.,][]{ pu2017vae, wang2016learning}, reinforcement learning \citep[e.g.,][]{haarnoja2017reinforcement,liu2017policy, liu2020off}, meta learning \citep[e.g.,][]{kim2018bayesian,feng2017learning}, and uncertainty quantification in science and engineering \citep[e.g.,][]{zhu2018bayesian, zhang2019seismic, zhang2020variational, zhang2019bayesian}.
\section{System Model}
\label{sec:model}
{In this section, we will present the notions related to STMs and the execution model used in the proposed approach.}
\cmnt{
Blockchain is a distributed and highly secure technology which stores the records into the block. It consists of multiple peers (or nodes), and each peer maintains decentralize distributed ledger that makes it publicly readable but tamper-proof. Peer executes some functions in the form of transactions. A transaction is a set of instructions executing in the memory. Bitcoin is a blockchain system which only maintains the balances while transferring the money from one account to another account in the distributed manner. Whereas, the popular blockchain system such as Ethereum maintains the state information as well. Here, transactions execute the atomic code known as a function of \scontract{}. Smart contract consists of one or more atomic-units or functions. In this paper, the atomic-unit contains multiple steps that have been executed by an efficient framework which is optimistic STMs.
\noindent
\textbf{Smart Contracts:} The transactions sent by clients to miners are part of a larger code called as \emph{\scontract{s}} that provide several complex services such as managing the system state, ensures rules, or credentials checking of the parties involved, etc. \cite{Dickerson+:ACSC:PODC:2017}.
For better understanding of smart contract, we describe a simple auction contract from Solidity documentation \cite{Solidity}.\\
\textbf{Simple Auction Contract:} The functionality of simple auction contract is shown in \algoref{sa}. Where \Lineref{sa1} declares the contract, followed by public state variables as \emph{highestBidder, highestBid,} and \emph{pendingReturn} which records the state of the contract. A single owner of the contract initiates the auction by executing constructor \texttt{SimpleAuction()} method (omitted due to lack of space) in which function initialize bidding time as auctionEnd (\Lineref{sa3}).
There can be any number of participants to bid. The bidders may get their money back whenever the highest bid is raised. For this, a public state variable declared at \Lineref{sa7} (\emph{pendingReturns}) uses solidity built-in complex data type mapping to maps bidder addresses with unsigned integers (withdraw amount respective to bidder). Mapping can be seen as a hash table with key-value pair. This mapping uniquely identifies account addresses of the clients in the Ethereum blockchain. A bidder withdraws the amount of their earlier bid by calling \texttt{withdraw()} method \cite{Solidity}.
At \Lineref{sa8}, a contract function \texttt{bid()} is declared, which is called by bidders to bid in the auction. Next, \emph{auctionEnd} variable is checked to identify whether the auction already called off or not. Further, bidders \emph{msg.value} check to identify the highest bid value at \Lineref{sa11}. Smart contract methods can be aborted at any time via throw when the auction is called off, or bid value is smaller than current \emph{highestBid}. When execution reaches to \Lineref{sa14}, the \texttt{bid()} method recovers the current highest bidder data from mapping through the \emph{highestBidder} address and updates the current bidder pending return amount. Finally, at \Lineref{sa16} and \Lineref{sa17}, it updates the new highest bidder and highest bid amount
\begin{algorithm}
\scriptsize
\caption{SimpleAuction(): It allows every bidder to send their bids throughout the bidding period.} \label{alg:sa}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{0}\makeatother
\Procedure{\emph{SimpleAuction()}}{} \label{lin:sa1}
\State address public beneficiary;\label{lin:sa2}
\State uint public auctionEnd;\label{lin:sa3}
\State /*current state of the auction*/\label{lin:sa4}
\State address public highestBidder;\label{lin:sa5}
\State uint public highestBid;\label{lin:sa6}
\State mapping(address $=>$ uint) pendingReturns; \label{lin:sa7}
\Function {}{}bid() public payable \label{lin:sa8}
\If{(now $\geq$ auctionEnd)}
\State throw;\label{lin:sa10}
\EndIf
\If{(msg.value $<$ highestBid)} \label{lin:sa11}
\State throw;\label{lin:sa12}
\EndIf
\If{(highestBid != 0)}\label{lin:sa13}
\State pendingReturns[highestBidder] += highestBid;\label{lin:sa14}
\EndIf \label{lin:sa15}
\State highestBidder = msg.sender;\label{lin:sa16}
\State highestBid = msg.value;\label{lin:sa17}
\EndFunction
\State // more operation definitions\label{lin:sa18}
\EndProcedure
\end{algorithmic}
\end{multicols}
\end{algorithm}
}
Following~\cite{tm-book,KuznetsovPeri:Non-interference:TCS:2017}, we assume a system of $n$ processes/threads, $p_1,\ldots,p_n$ that access a collection of \emph{transactional objects} or \tobj{s} via atomic \emph{transactions}. Each transaction has a unique identifier. Within a transaction, processes can perform \emph{transactional operations or \mth{s}}:
\begin{itemize}
\item \texttt{\begt{()}}-- begins a transaction.
\item \texttt{\twrite}$(x,v)$ (or $w(x, v)$)-- updates a \tobj{} $x$ with value $v$ in its local memory.
\item \texttt{\tread}$(x, v)$ (or $r(x, v)$)-- tries to read $x$ and returns value as $v$.
\item \texttt{\tryc}$()$-- tries to commit the transaction and returns $commit$ (or $\mathcal{C}$) if succeeds.
\item \texttt{\textit{tryA}}$()$-- aborts the transaction and returns $\mathcal{A}$.
\end{itemize}
Operations \texttt{\tread{()}} and \texttt{\tryc}$()$ may return $\mathcal{A}{}$.
Transaction $T_i$ starts with the first operation and completes when any of its operations return $\mathcal{A}$ or $\mathcal{C}$.
For a transaction $T_k$, we denote all the \tobj{s} accessed by its read \op{s} and write operations as $rset\xspace_k$ and $wset\xspace_k$, respectively. We denote all the \op{s} of a transaction $T_k$ as $\evts{T_k}$ or $evts_k$.
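To fix intuition, the sketch below implements the bookkeeping behind this interface in Python, with the timestamp check on reads loosely following the BTO flavor used later in the paper. The class and method names are ours, and the single-threaded validation logic is an illustrative model rather than a faithful STM implementation.
\begin{lstlisting}[language=Python]
class Abort(Exception):
    # raised when an operation must return A (abort)
    pass

class Transaction:
    _clock = 0                                # global timestamp source
    def __init__(self, store):
        Transaction._clock += 1
        self.ts = Transaction._clock          # unique id / timestamp
        self.store = store                    # shared map: obj -> (val, wts)
        self.rset, self.wset = {}, {}
    def t_read(self, x):
        if x in self.wset:                    # read your own buffered write
            return self.wset[x]
        val, wts = self.store.get(x, (0, 0))  # (0, 0): initialized by T_0
        if wts > self.ts:                     # a younger txn already wrote x
            raise Abort()
        self.rset[x] = wts
        return val
    def t_write(self, x, v):
        self.wset[x] = v                      # buffered in local memory
    def try_commit(self):
        for x, seen in self.rset.items():     # validate reads at commit
            if self.store.get(x, (0, 0))[1] != seen:
                raise Abort()
        for x, v in self.wset.items():        # install writes with txn's ts
            self.store[x] = (v, self.ts)
        return 'commit'
\end{lstlisting}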
\vspace{.2cm}
\noindent
\textbf{History:}
A \emph{history} is a sequence of \emph{events}, i.e., a sequence of
invocations and responses of transactional operations. The collection of events is denoted as $\evts{H}$. For simplicity, we consider \emph{sequential} histories, i.e., the invocation of each transactional operation is immediately followed by a matching response. Therefore, we treat each transactional operation as one atomic event and let $<_H$ denote the total order on the transactional operations incurred by $H$.
We identify a history $H$ as tuple $\langle \evts{H},<_H \rangle$.
Further, we consider \emph{well-formed} histories, i.e., no transaction of a process begins before the previous transaction invocation has completed (either $commits$ or $aborts$). We also assume that every history has an initial \emph{committed} transaction $T_0$ that initializes all the t-objects with value $0$. The set of transactions that appear in $H$ is denoted by $\txns{H}$. The set of \emph{committed} (resp., \emph{aborted}) transactions in $H$ is denoted by $\comm{H}$ (resp., $\aborted{H}$). The set of \emph{incomplete} or \emph{live} transactions in $H$ is denoted by $\incomp{H} = \live{H} = (\txns{H}-\comm{H}-\aborted{H})$.
We construct a \emph{complete history} of $H$, denoted as $\overline{H}$, by inserting $\textit{tryA}_k(\mathcal{A})$ immediately after the last event of every transaction $T_k\in \live{H}$. However, for the $\tryc_i$ of a transaction $T_i$, if $T_i$ has successfully released the lock on its first \tobj{}, then the updates made by $T_i$ are consistent, so $T_i$ immediately returns commit.
\cmnt{
\noindent
\textbf{Sub-history:} A \textit{sub-history} ($SH$) of a history ($H$)
denoted as the tuple $\langle \evts{SH},$ $<_{SH}\rangle$ and is defined as:
(1) $<_{SH} \subseteq <_{H}$; (2) $\evts{SH} \subseteq \evts{H}$; (3) If an
event of a transaction $T_k\in\txns{H}$ is in $SH$ then all the events of $T_k$
in $H$ should also be in $SH$.
For a history $H$, let $R$ be a subset of $\txns{H}$. Then $\shist{R}{H}$ denotes the \ssch{} of $H$ that is formed from the \op{s} in $R$.
}
\vspace{.2cm}
\noindent
\textbf{\textit{Transaction Real-Time and Conflict order:}} For two transactions $T_k,T_m \in \txns{H}$, we say that $T_k$ \emph{precedes} $T_m$ in the \emph{real-time order} of $H$, denoted as $T_k\prec_H^{RT} T_m$, if $T_k$ is complete in $H$ and the last event of $T_k$ precedes the first event of $T_m$ in $H$. If neither $T_k \prec_H^{RT} T_m$ nor $T_m \prec_H^{RT} T_k$, then $T_k$ and $T_m$ \emph{overlap} in $H$. We say that a history is \emph{serial} (or \emph{\tseq}) if all the transactions are ordered by real-time order.
We say that $T_k, T_m$ are in conflict, denoted as $T_k\prec_H^{Conf} T_m$, if
(1) $\tryc_k()<_H \tryc_m()$ and $wset(T_k) \cap wset(T_m) \neq\emptyset$;
(2) $\tryc_k()<_H r_m(x,v)$, $x \in wset(T_k)$ and $v \neq \mathcal{A}$;
(3) $r_k(x,v)<_H \tryc_m()$, $x\in wset(T_m)$ and $v \neq \mathcal{A}$.
Thus, it can be seen that the conflict order is defined only on \op{s} that have successfully executed. We denote the corresponding \op{s} as conflicting.
\vspace{.2cm}
\noindent
\textbf{Valid and Legal histories:} A successful read $r_k(x, v)$ (i.e., $v \neq \mathcal{A}$) in a history $H$ is said to be \emph{\valid} if there exists a transaction $T_j$ that wrote $v$ to $x$ and \emph{committed} before $r_k(x,v)$.
History $H$ is \valid{} if all its successful read \op{s} are \valid.
We define $r_k(x, v)$'s \textit{\lastw{}} as the latest commit event $\mathcal{C}_i$ preceding $r_k(x, v)$ in $H$ such that $x\in wset_i$ ($T_i$ can also be $T_0$). A successful read \op{} $r_k(x, v)$ (i.e., $v \neq \mathcal{A}$), is said to be \emph{\legal{}} if the transaction containing $r_k$'s \lastw{} also writes $v$ onto $x$.
The history $H$ is \legal{} if all its successful read \op{s} are \legal. From the definitions we get that if $H$ is \legal{} then it is also \valid.
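Legality of a sequential history can be checked mechanically. The sketch below assumes each event is one atomic tuple \texttt{(txn, op, obj, val)} with \texttt{op} in \texttt{\{'r','w','c'\}}, consistent with the sequential-history convention above and with $T_0$ initializing every \tobj{} to $0$; the encoding is ours, chosen for illustration only.
\begin{lstlisting}[language=Python]
def is_legal(history):
    # history: list of (txn, op, obj, val) events in total order.
    committed = {}  # obj -> value installed by the latest committed writer
    pending = {}    # txn -> {obj: val} of its not-yet-committed writes
    for txn, op, obj, val in history:
        if op == 'w':
            pending.setdefault(txn, {})[obj] = val
        elif op == 'c':                        # commit installs txn's writes
            committed.update(pending.pop(txn, {}))
        elif op == 'r':                        # lastWrite must have written val
            if committed.get(obj, 0) != val:   # default 0 written by T_0
                return False
    return True
\end{lstlisting}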
\vspace{.2cm}
\noindent
\textbf{Notions of Equivalence:} Two histories $H$ and $H'$ are \emph{equivalent} if they have the same set of events. We say two histories $H, H'$ are \emph{multi-version view equivalent} \cite[Chap. 5]{WeiVoss:TIS:2002:Morg} or \emph{\mvve} if
(1) $H, H'$ are valid histories and
(2) $H$ is equivalent to $H'$.
\noindent
Two histories $H, H'$ are \emph{view equivalent} \cite[Chap. 3]{WeiVoss:TIS:2002:Morg} or \emph{\vie} if
(1) $H, H'$ are legal histories and
(2) $H$ is equivalent to $H'$. By restricting to \legal{} histories, view equivalence does not use multi-versions.
\noindent
Two histories $H, H'$ are \emph{conflict equivalent} \cite[Chap. 3]{WeiVoss:TIS:2002:Morg} or \emph{\ce} if
(1) $H, H'$ are legal histories and
(2) the conflicts in $H$ and $H'$ are the same, i.e., $conf(H) = conf(H')$.
Conflict equivalence like view equivalence does not use multi-versions and restricts itself to \legal{} histories.
\vspace{.2cm}
\noindent
\textbf{VSR, MVSR, and CSR:} A history $H$ is said to be VSR (or View Serializable) \cite[Chap. 3]{WeiVoss:TIS:2002:Morg} if there exists a serial history $S$ such that $S$ is view equivalent to $H$. This notion considers only a single version corresponding to each \tobj{}.
MVSR (or Multi-Version View Serializability) maintains multiple versions corresponding to each \tobj. A history $H$ is said to be MVSR \cite[Chap. 5]{WeiVoss:TIS:2002:Morg} if there exists a serial history $S$ such that $S$ is multi-version view equivalent to $H$. It can be proved that verifying membership of VSR as well as MVSR in databases is NP-Complete \cite{Papad:1979:JACM}. To circumvent this issue, researchers in databases have identified an efficient sub-class of VSR, called CSR, based
on the notion of conflicts. Membership of CSR can be verified in polynomial time using the conflict graph characterization.
A history $H$ is said to be CSR (or Conflict Serializable) \cite[Chap. 3]{WeiVoss:TIS:2002:Morg} if there exists a serial history $S$ such that $S$ is conflict equivalent to $H$.
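The polynomial-time membership test for CSR is a cycle check on the conflict graph. The sketch below builds the graph under the classical read/write conflict rules and runs Kahn's algorithm; the event encoding \texttt{(txn, op, obj)} over committed transactions is ours and is meant only to illustrate the characterization.
\begin{lstlisting}[language=Python]
def is_conflict_serializable(history):
    # history: list of (txn, op, obj) read/write events of committed
    # transactions, in total order; op is 'r' or 'w'.
    txns, edges = set(), set()
    for i, (ti, opi, xi) in enumerate(history):
        txns.add(ti)
        for tj, opj, xj in history[i + 1:]:
            if ti != tj and xi == xj and 'w' in (opi, opj):
                edges.add((ti, tj))        # ti's op conflicts before tj's
    # Kahn's algorithm: H is CSR iff the conflict graph is acyclic.
    indeg = {t: 0 for t in txns}
    for _, t in edges:
        indeg[t] += 1
    ready = [t for t in txns if indeg[t] == 0]
    seen = 0
    while ready:
        u = ready.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    ready.append(b)
    return seen == len(txns)
\end{lstlisting}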
\vspace{.2cm}
\noindent
\textbf{Serializability and Opacity:}
Serializability \cite{Papad:1979:JACM} is a commonly used criterion in databases. But it is not suitable for STMs as it does not consider the correctness of \emph{aborted} transactions as shown by Guerraoui and Kapalka \cite{GuerKap:Opacity:PPoPP:2008}. Opacity, on the other hand, considers the correctness of \emph{aborted} transactions as well.
\noindent
A history $H$ is said to be \textit{opaque} \cite{GuerKap:Opacity:PPoPP:2008,tm-book} if it is \valid{} and there exists a t-sequential legal history $S$ such that
(1) $S$ is equivalent to complete history $\overline{H}$ and
(2) $S$ respects $\prec_{H}^{RT}$, i.e., $\prec_{H}^{RT} \subset \prec_{S}^{RT}$.
By requiring $S$ being equivalent to $\overline{H}$, opacity treats all the incomplete transactions as aborted.
Similar to view-serializability, verifying the membership of \opty is NP-Complete \cite{Papad:1979:JACM}. To address this issue, researchers have proposed another popular correctness criterion, \emph{co-opacity}, whose membership can be verified in polynomial time.
\vspace{.2cm}
\noindent
\textbf{Co-opacity:} A history $H$ is said to be \textit{co-opaque} \cite{KuznetsovPeri:Non-interference:TCS:2017} if it is \valid{} and there exists a t-sequential legal history $S$ such that
(1) $S$ is equivalent to complete history $\overline{H}$ and
(2) $S$ respects $\prec_{H}^{RT}$, i.e., $\prec_{H}^{RT} \subset \prec_{S}^{RT}$.
(3) $S$ preserves conflicts (i.e., $\prec^{Conf}_{H}\subseteq\prec^{Conf}_{S}$).
\cmnt{
Along same lines, a \valid{} history $H$ is said to be \textit{strictly serializable} if $\shist{\comm{H}}{H}$ is opaque. Unlike opacity, strict serializability does not include aborted or incomplete transactions in the global serialization order. An opaque history $H$ is also strictly serializable: a serialization of $\shist{\comm{H}}{H}$ is simply the subsequence of a serialization of $H$ that only contains transactions in $\comm{H}$.
}
\cmnt{
\noindent
\textbf{VSR, MVSR, and CSR:} VSR (or View Serializability) is a correctness-criterion similar to opacity but does not consider correctness of aborted transactions. When the protocol (such as MVTO) maintains multiple versions corresponding to each \tobj{} then a commonly used correctness criterion in databases is MVSR (or Multi-Version View Serializability). It can be proved that verifying the membership of VSR as well as MVSR in databases is NP-Complete \cite{Papad:1979:JACM}. To circumvent this issue, researchers in databases have identified an efficient sub-class of VSR, called CSR (or conflict-serializability), based
on the notion of conflicts. The membership of CSR can be verified in polynomial time using conflict graph characterization.
}
\vspace{.2cm}
\noindent
\textbf{Linearizability:} A history $H$ is linearizable \cite{HerlihyandWing:1990:LCC:ACM} if
(1) The invocation and response events can be reordered to get a valid sequential history.
(2) The generated sequential history satisfies the object’s sequential specification.
(3) If a response event precedes an invocation event in the original history, then this should be preserved in the sequential reordering.
\vspace{.2cm}
\noindent
\textbf{Lock Freedom:} An algorithm is said to be lock-free \cite{HerlihyShavit:Progress:Opodis:2011} if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress. It allows individual threads to starve but guarantees system-wide throughput.
\ignore{
\begin{enumerate}
\item BTO
\begin{itemize}
\item Conflicts.
\end{itemize}
\item MVTO
\begin{itemize}
\item Conflicts.
\end{itemize}
\end{enumerate}
}
\section{Conclusion}
\label{sec:con}
To exploit multi-core processors, we have proposed the concurrent execution of \scontract{s} by miners and validators, which improves the throughput. Initially, the miner executes the smart contracts concurrently using the optimistic BTO STM protocol. To reduce the number of aborts and further improve efficiency, the concurrent miner uses the MVTO protocol, which maintains multiple versions corresponding to each data object. The concurrent miner proposes a block that consists of a set of transactions, the concurrent bin, the BG, the previous block hash, and the final state of each shared data object. Later, the validators re-execute the same \SContract{} transactions concurrently and deterministically in two phases using the concurrent bin followed by the BG given by the miner, which captures the conflicting relations among the transactions, to verify the final state.
{Overall, the proposed Opt-BTO and Opt-MVTO BG are $2\times$ (or $200.47\%$) and $2.30\times$ (or $229.80\%$) more efficient than the Def-BTO and Def-MVTO BG, respectively, with average speedups of $4.49\times$ and $5.21\times$ for the Opt-BTO and Opt-MVTO concurrent miner over the serial miner, respectively. The Opt-BTO and Opt-MVTO decentralized concurrent validators outperform the serial validator by an average of $7.68\times$ and $8.60\times$, respectively.}
\vspace{1mm}
\noindent
\textbf{Acknowledgements.} This project was partially supported by a research grant from Thynkblynk Technologies Pvt. Ltd, and MEITY project number 4(20)/2019-ITEA.
\section{Correctness}
\label{sec:correctness}
{The correctness of concurrent BG, miner, and validator is described in this section. We first list the linearization points (LPs) of the \bg{} library methods as follows:}
\begin{enumerate}
\item \addv{(\vgn)}: (\vp.\vn.CAS(\vc, \vgn)) in \Lineref{addv5} is the LP of the \addv{()} method if the \vnode{} does not exist in the BG. If the \vnode{} already exists in the BG, then (\vc.$ts_i$ $\neq$ \vgn.$ts_i$) in \Lineref{addv3} is the LP.
\item \adde{\emph{(fromNode, toNode)}}: (\ep.\en.CAS(\ec, \egn)) in \Lineref{adde7} is the LP of the \adde{()} method if the \enode{} does not exist in the BG. If the \enode{} already exists in the BG, then (\ec.$ts_i$ $\neq$ toNode.$ts_i$) in \Lineref{adde5} is the LP.
\item \searchl{(cacheVer, $AU_{id}$)}: (cacheVer.\inc.CAS(0, -1)) in \Lineref{sl1} is the LP of the \searchl{()} method.
\item \searchg{(BG, $AU_{id}$)}: (\vnode.\inc.CAS(0, -1)) in \Lineref{sg1} is the LP of the \searchg{()} method.
\item \texttt{decInCount(remNode)}: \Lineref{ren1} is the LP of the \texttt{decInCount()} method.
\end{enumerate}
\begin{theorem}
Any history $H_m$ generated by the concurrent miner using the BTO protocol satisfies co-opacity.
\end{theorem}
\begin{proof}
Concurrent miner executes \sctrn{s} concurrently using the BTO protocol and generates a concurrent history $H_m$. The underlying BTO protocol ensures the correctness of the concurrent execution of $H_m$: any history generated by BTO \cite[Chap 4]{WeiVoss:TIS:2002:Morg} satisfies co-opacity \cite{Peri+:OSTM:Netys:2018}. Hence, the history $H_m$ generated by the concurrent miner using BTO satisfies co-opacity.
\end{proof}
\begin{theorem}
Any history $H_m$ generated by the concurrent miner using the MVTO protocol satisfies opacity.
\end{theorem}
\begin{proof}
Concurrent miner executes \sctrn{s} concurrently using the MVTO protocol and generates a concurrent history $H_m$. The underlying MVTO protocol ensures the correctness of the concurrent execution of $H_m$: any history generated by MVTO \cite{Kumar+:MVTO:ICDCN:2014} satisfies opacity \cite{GuerKap:Opacity:PPoPP:2008}. Hence, the history $H_m$ generated by the concurrent miner using MVTO satisfies opacity.
\end{proof}
\begin{theorem}
All the dependencies between the conflicting nodes are captured in BG.
\end{theorem}
\begin{proof}
Dependencies between the conflicting nodes are captured in the BG using the LPs of the lock-free graph library methods defined above. The concurrent miner constructs the lock-free BG using the BTO and MVTO protocols in \subsecref{bg}. The BG consists of vertices and edges, where each committed \sctrn{} acts as a vertex and the edges (or dependencies) represent the conflicts of the respective STM protocol (BTO or MVTO). The STM protocols BTO \cite[Chap 4]{WeiVoss:TIS:2002:Morg} and MVTO \cite{Kumar+:MVTO:ICDCN:2014} used in this paper for the concurrent execution are correct, i.e., they capture all the dependencies between the conflicting nodes. Hence, all the dependencies between the conflicting nodes are captured in the BG.
\end{proof}
\begin{theorem}
\label{thm:hmve}
A history $H_m$ generated by the concurrent miner using BTO protocol and a history $H_v$ generated by a concurrent validator are view equivalent.
\end{theorem}
\begin{proof}
A concurrent miner executes the \sctrn{s} of $H_m$ concurrently using the BTO protocol, captures the dependencies of $H_m$ in the BG, and proposes a block $B$. Then it broadcasts the block $B$ along with the BG to the concurrent validators to verify the block $B$. The concurrent validator applies a topological sort on the BG and obtains an equivalent serial schedule $H_v$. Since the BG constructed from $H_m$ considers all the conflicts and $H_v$ is obtained from a topological sort on the BG, $H_v$ is equivalent to $H_m$. Similarly, $H_v$ also follows the \emph{read from} relation of $H_m$; hence, $H_v$ is legal. Since $H_v$ and $H_m$ are equivalent to each other and $H_v$ is legal, $H_m$ and $H_v$ are view equivalent.
\end{proof}
\begin{theorem}
A history $H_m$ generated by the concurrent miner using MVTO protocol and a history $H_v$ generated by a concurrent validator are multi-version view equivalent.
\end{theorem}
\begin{proof}
Similar to the proof of \thmref{hmve}, the concurrent miner executes the \sctrn{s} of $H_m$ concurrently using the MVTO protocol, captures the dependencies in the BG, proposes a block $B$, and broadcasts it to the concurrent validators to verify it. MVTO maintains multiple versions corresponding to each shared object. Later, the concurrent validator obtains $H_v$ by applying a topological sort on the BG provided by the concurrent miner. Since $H_v$ is obtained from a topological sort on the BG, $H_v$ is equivalent to $H_m$. Similarly, the BG maintains the \emph{read from} relations of $H_m$: from the MVTO protocol, if $T_j$ reads a value of a shared object $k$, say $r_j(k)$, from $T_i$ in $H_m$, then $T_i$ committed before $r_j(k)$ in $H_v$. Therefore, $H_v$ is valid. Since $H_v$ and $H_m$ are equivalent to each other and $H_v$ is valid, $H_m$ and $H_v$ are multi-version view equivalent.
\end{proof}
\cmnt{
\begin{lemma}
History $H_m$ generated by BTO protocol and $H_v$ are view equivalent.
\end{lemma}
Concurrent execution of \SContract{s} may lead to inconsistent state, if it is not done carefully. In the concurrent execution of \miner, multiple threads are running concurrently and they can run in any order. But, if we achieve any equivalent serial execution of the concurrent execution then we can ensure that execution done by \conminer{} is consistent.
So, we use an efficient framework, \emph{Software Transactional Memory system (STMs)} for the concurrent execution of \SContract{s} in optimistic manner by \miner. STMs are popular programming paradigm which take care of synchronization issues among the transactions and ensure atomicity. Being a programmer who is using the STM library, need not have to worry about consistency issues because STM library ensures the consistency of concurrent execution which is equivalent to some serial execution. We have started with one of the fashionable protocol of STMs as \emph{Basic Timestamp Ordering (STM\_BTO)} which executes non-conflicting transactions concurrently.
\begin{lemma}
Any concurrent execution of transactions generated by STM\_BTO protocol produces conflict serializable schedule. \cite{WeiVoss:TIS:2002:Morg}
\end{lemma}
If two transactions $T_i$ and $T_j$ are accessing any common shared \tobj{} say $x$ and $ts_i$ is less than $ts_j$ but $T_j$ committed before $T_i$ then STM\_BTO protocol aborts the $T_i$ and retry it again. STM\_BTO protocol produces conflict serializable schedule which follows increasing order of transactions timestamp. It ensures deadlock-freedom by accessing the shared \tobj{s} in increasing order.
Now, how can we ensure that the execution order done by \conminer{} are same as execution done by \convalidator{s}? To ensure not to reject the correct block by \convalidator{s}, we have used the concept of \cgraph{}. While executing the \SContract{s} by \conminer{} using optimistic STMs, it also maintains all the relevant conflicts in \cgraph{} concurrently. \Cgraph{} captures the dependency among the conflicting transactions and says what all transactions can run concurrently. Finally, \conminer{} proposes a block which consist of set of transactions, \cgraph, hash of previous block and final state of each shared \tobj{s} and send it to the \convalidator{s} to validate it. Later, the \convalidator{s} re-execute the same \SContract{} concurrently and deterministically with the help of \cgraph{} given by \conminer{} and verify the final state. If the final state matches then proposed block appended into the blockchain and miner gets incentive otherwise discard the proposed block.
To improve the concurrency further, we use an another prominent STM protocol as \emph{Multi-Version Timestamp Ordering (STM\_MVTO)}. It also follow the increasing order of timestamp to generate the multi-version conflict serializable schedule and ensure the deadlock-freedom same as STM\_BTO.
\begin{lemma}
Any concurrent execution of transactions generated by STM\_MVTO protocol produces multi-version conflict serializable schedule. \cite{Kumar+:MVTO:ICDCN:2014}
\end{lemma}
\cmnt{
\section{Requirements of the Concurrent Miner and Validator}
\label{sec:reqminerval}
The section describes the requirements of concurrent Miner and validator.
\begin{theorem}
Any history $H_m$ generated Concurrent miner should satisfy opacity.
\end{theorem}
Here, miner executes the smart contract concurrently with the help of optimistic STM protocols (BTO and MVTO). Internally, BTO and MVTO \cite{Kumar+:MVTO:ICDCN:2014} protocol ensures opacity. So, history $H_m$ generated Concurrent miner satisfies opacity.
Consider the history $H_m$ generated by BTO protocol and constructs block graph, $BG$ in which each committed transaction $T_i$ consider as vertices and edges between them as follows:
\begin{itemize}
\item r-w: After $T_i$ reads $x$ from $T_k$, $T_j$ writes on $x$ data-object then r-w edge will be from $T_i$ to $T_j$.
\item w-r: If $T_i$ reads $x$ from the value written by $T_j$ then w-r edge will be from $T_i$ to $T_j$.
\item w-w:
\end{itemize}
Concrrent miner provides $BG$ to concurrent validators to ensure the correct output by validators. After that concurrent validator apply topological sort on $BG$ and generates a history $H_v$.
\begin{lemma}
History $H_m$ generated by BTO protocol and $H_v$ are view equivalent.
\end{lemma}
\begin{theorem}
History $H_m$ generated by MVTO protocol and $H_v$ are multi-version view equivalent.
\end{theorem}
}
\subsection{The Linearization Points of Lock-free Graph Library Methods}
Here, we list the linearization points (LPs) of each method. Note that each method can return either true or false. So,
we define the LP for five methods:
\begin{enumerate}
\item addVertex()
\end{enumerate}
}
\section{Introduction}
\label{sec:intro}
It is commonly believed that \bc{} is a revolutionary technology for doing business over the Internet. \BC is a decentralized, distributed database or ledger of records {that stores information in cryptographically linked blocks.} Cryptocurrencies such as Bitcoin \cite{Nakamoto:Bitcoin:2009} and Ethereum \cite{ethereum:url} were the first to popularize the \bc technology. \BC{s} are now considered for automating and securely storing user records such as healthcare, financial services, real estate, etc. \cmnt{\BC{s} ensure that the records are tamper-proof but publicly readable. With their amazing usefulness to revolutionize everyday life, \BC{s} are now considered for automating and securely storing user records such as healthcare, financial services, real estate, and supply chain management.}A \BC network consists of multiple peers (or nodes), where peers do not necessarily trust each other. Each node maintains a copy of the distributed ledger. \emph{Clients}, users of the \bc, send requests or \emph{transactions} to the nodes of the \bc, called \emph{miners}. The miners collect multiple transactions from the clients and form a \emph{block}. Miners then propose these blocks to be added to the \bc.
\cmnt{They follow a global consensus protocol to agree on which blocks are chosen to be added and in what order. While adding a block to the \bc, the miner incorporates the hash of the previous block into the current block. This makes it difficult to tamper with the distributed ledger. The resulting structure is in the form of a linked list or a chain of blocks and hence the name \bc.}
The transactions sent by clients to miners are part of a larger code called \emph{\scontract{s}} that provides several complex services such as managing the system state, ensuring rules, or credential checking of the parties involved \cite{Dickerson+:ACSC:PODC:2017}. \Scontract{s} are like a `class' in programming languages that encapsulates data and methods which operate on the data. The data represents the state of the \scontract{} (as well as the \bc) and the \mth{s} (or functions) are the transactions that can possibly change the contract state. \cmnt{A transaction invoked by a client is typically such a \mth or a collection of \mth{s} of the \scontract{s}.}Ethereum uses Solidity \cite{Solidity}, while Hyperledger supports languages such as Java, Golang, Node.js, etc.
\cmnt{
\sk{{\textbf{Listing 1: } Send function}}
\begin{lstlisting}[escapechar=|]
send(s_id, r_id, amount)
{
if(amount > bal[s_id]) |\label{line:condition}|
throw;
bal[s_id] -= amount;
bal[r_id] += amount;
}
\end{lstlisting}
}
\noindent
\textbf{Motivation for Concurrent Execution of Smart Contracts: }
Dickerson et al. \cite{Dickerson+:ACSC:PODC:2017} observed that \scontract{} transactions are executed in two different contexts in the Ethereum \bc{}. First, they are executed by miners while forming a block: a miner selects a sequence of client requests (transactions), executes the smart contract code of these transactions in sequence, transforming the state of the associated contract in this process. The miner then stores the sequence of transactions, the resulting final state of the contracts, and the previous block hash in the block. After creating the block, the miner proposes it to be added to the blockchain through the consensus protocol. The other peers in the system, referred to as \emph{validators} in this context, validate the block proposed by the miner. They re-execute the \scontract{} transactions in the block \emph{serially} to verify the block's final states. If the final states match, then the block is accepted as valid, and the miner who appended this block is rewarded; otherwise, the block is discarded. Thus the transactions are executed by every peer in the system. It has been observed that the validation code runs several times more often than the miner code \cite{Dickerson+:ACSC:PODC:2017}.
This design of \scontract{} execution is not efficient as it does not allow any concurrency. In today's world of multi-core systems, the serial execution does not utilize all the cores, resulting in lower throughput. This limitation is not specific only to Ethereum \bc{} but also applies to other popular \bc{s} as well. Higher throughput means more transaction execution per unit time, which clearly will be desired by both miners and validators.
\ignore{
\figref{sece} illustrates the motivation behind the execution of smart contracts by concurrent miner over serial miner. Consider \figref{sece} (a) which consists of two transactions $T_1$, and $T_2$ executed by the serial miner. Here, $T_1$, and $T_2$ are writing on data-objects $x$, and $y$ respectively. Due to the serial execution by miner, all the transactions are executing serially although they are working on different data-objects which tends to limit the throughput of miner. Whereas \figref{sece} (b) represents the concurrent execution by miner with same scenario as \figref{sece} (a) where $T_1$ and $T_2$ are running concurrently because they are working on different data-objects. Hence, concurrent execution by miner improves the throughput as compare to serial miner.
\begin{figure}
\centerline{\scalebox{0.65}{\input{figs/sece.pdf_t}}}
\caption{Efficient execution of smart contracts}
\label{fig:sece}
\end{figure}
}
However, the concurrent execution of smart contract transactions is not straightforward, because various transactions could consist of conflicting accesses to the shared data objects. Two contract transactions are said to be in \emph{conflict} if both of them access a shared data object, and at least one performs a write operation. Arbitrary execution of these smart contract transactions by the miners might result in data races, leading to an inconsistent final state of the \bc. Unfortunately, it is impossible to statically identify conflicting contract transactions since contracts are developed in Turing-complete languages. The common solution for correct execution of concurrent transactions is to ensure that the execution is \emph{\sble} \cite{Papad:1979:JACM}. A usual \cc in databases, \sbty ensures that the concurrent execution is equivalent to some serial execution of the same transactions. Thus miners must ensure that their execution is \sble \cite{Dickerson+:ACSC:PODC:2017} or satisfies one of its variants, as described later.
The concurrent execution of the \scontract{} transactions of a block by the validators, although highly desirable, can further complicate the situation. Suppose a miner ensures that the concurrent execution of the transactions in a block is \sble. Later a validator re-executes the same transactions concurrently. However, during the concurrent execution, the validator may execute two conflicting transactions in an order different from the miner. Thus the serialization order of the miner is different from that of the validator. This can result in the validator obtaining a final state different from what was obtained by the miner. Consequently, the validator may incorrectly reject the block although it is valid, as depicted in \figref{conmv}.
\begin{figure}
\centerline{\scalebox{0.4}{\input{figs/conMV.pdf_t}}}
\vspace{-.2cm} \caption{\small (a) consists of two concurrent conflicting transactions $T_1$ and $T_2$ working on same shared data-objects $x$ which are part of a block. (b) represents the miner's concurrent execution with an equivalent serial schedule as $T_1$, $T_2$ and final state (or FS) as 20 from the initial state (or IS) 0. Whereas (c) shows the concurrent execution by a validator with an equivalent serial schedule as $T_2$, $T_1$, and the final state as 10 from IS 0, which is different from the final state proposed by the miner. Such a situation leads to the rejection of the valid block by the validator, which is undesirable.
}
\label{fig:conmv}
\end{figure}
Dickerson et al. \cite{Dickerson+:ACSC:PODC:2017} identified these issues and proposed a solution for concurrent execution by both miners and validators. The miner concurrently executes block transactions using abstract locks and inverse logs to generate a serializable execution. Then, to enable correct concurrent execution by the validators, the miners provide a \emph{happen-before~}graph in the block. The happen-before~ graph is a directed acyclic graph over all the transactions of the block. If there is a path from a transaction $T_i$ to $T_j$, then the validator has to execute $T_i$ before $T_j$. Transactions with no path between them can execute concurrently. The validator, using the happen-before~ graph in the block, executes all the transactions concurrently using the fork-join approach. This methodology ensures that the final state of the \bc generated by the miners and the validators is the same for a valid block, and hence the block is not rejected by the validators. The presence of aids such as the happen-before~ graph in the block makes it more attractive for validators to consider such blocks, since it helps them execute quickly through parallelization, unlike blocks that carry no such aid. This incentivizes miners to include such aids in the block for concurrent execution by the validators.
\ignore {
\figref{cminer}, illustrates the functionality of concurrent miner, which consists of six steps. It has two or more serial miners and one concurrent miner competing to propose a block in the blockchain. Whoever will propose a block first that miner has a chance to get the strong incentive. So the challenge here is to execute the task of the miner concurrently. All the miners are getting the set of transactions from distributed shared memory. As we discussed above, the serial miner executes the transactions one after another and propose the block. Whereas concurrent miner executes the non-conflicting transactions concurrently with Transactional Memory (TM) and finally proposes a block. Complete details about the \figref{cminer} presents in the \subsecref{cminer}.
\begin{figure}
\centering
\captionsetup{justification=centering}
\centerline{\scalebox{0.45}{\input{figs/cminer.pdf_t}}}
\caption{Execution of Concurrent Miner}
\label{fig:cminer}
\end{figure}
}
\vspace{1mm}
\noindent
\textbf{Proposed Solution Approach - Optimistic Concurrent Execution and Lock-Free Graph: } Dickerson et al. \cite{Dickerson+:ACSC:PODC:2017} developed a solution to the problem of concurrent miner and validators using locks and inverse logs. It is well known that locks are \emph{pessimistic} in nature. So, in this paper, we propose a \emph{novel} and \emph{efficient} framework for the concurrent miner using \emph{optimistic} Software Transactional Memory systems (STMs). STMs are suitable for concurrent execution of transactions without worrying about consistency issues.
The requirement of the miner is to concurrently execute the \scontract{} transactions correctly and output a graph capturing dependencies among the transactions of the block, such as the happen-before~ graph. We denote this graph as the \emph{\bg} (or BG). The miner uses an optimistic STM system to execute the \scontract{} transactions concurrently in the proposed solution. Since STMs also work with transactions, we differentiate between \scontract{} transactions and STM transactions. An STM transaction invoked by an STM system is a piece of code that it tries to execute atomically even in the presence of other concurrent STM transactions. If the STM system is not able to execute it atomically, then the STM transaction is aborted.
{The expectation of a \scontract{} transaction is that it will be executed serially. Thus, when it is executed in a concurrent setting, it is expected to execute atomically (or serialized).}
To differentiate a \scontract{} transaction from an STM transaction, we denote a \scontract{} transaction as an \emph{\au} (\emph{AU}) and an STM transaction simply as a \emph{transaction} in the rest of the document. Thus the miner uses the STM system to invoke a transaction for each AU. If the transaction gets aborted, the STM repeatedly invokes new transactions for the same AU until a transaction invocation eventually commits.
A popular correctness guarantee provided by STM systems is \emph{\opty} \cite{GuerKap:Opacity:PPoPP:2008}, which is stronger than \sbty. Opacity, like \sbty, requires that the concurrent execution, including the aborted transactions, be equivalent to some serial execution. This ensures that even an aborted transaction reads consistent values until the point of abort. As a result, a miner using an STM does not encounter undesirable side-effects such as crash failures, infinite loops, divide by zero, etc. STMs provide this guarantee by executing optimistically and support atomic (\opq) reads and writes on \emph{transactional objects} (or \tobj{s}).
For simplicity, we have chosen two timestamp-based STMs in our design: (1) \emph{Basic Timestamp Ordering} or \emph{BTO} STM \cite[Chap 4]{WeiVoss:TIS:2002:Morg}, which maintains only one version for each \tobj; (2) \emph{Multi-Version Timestamp Ordering} or \emph{\mvto} STM \cite{Kumar+:MVTO:ICDCN:2014}, which maintains multiple versions corresponding to each \tobj{}, further reducing the number of aborts and improving the throughput.
The advantage of using timestamp-based STMs is that the equivalent serial history is ordered based on the transactions' timestamps. Thus, using the timestamps, the miner can generate the BG of the AUs. We call this the \emph{STM approach}. Dickerson et al. \cite{Dickerson+:ACSC:PODC:2017} developed the BG in a serial manner. Saraph and Herlihy~\cite{VikramHerlihy:EmpSdy-Con:Tokenomics:2019} proposed a simple \emph{bin-based two-phase speculative} approach to execute AUs concurrently in the Ethereum blockchain without storing the BG in the block. We observed that the bin-based approach reduces the size of the block but fails to exploit the available concurrency. We name this approach the \emph{Speculative Bin} (Spec Bin) approach. So, in our proposed approach, we combine the spec bin-based approach \cite{VikramHerlihy:EmpSdy-Con:Tokenomics:2019} with the STM approach \cite{Anjana+:CESC:PDP:2019} for the optimal storage of the BG in a block while exploiting concurrency. The concurrent miner generates an efficient BG in a concurrent and lock-free \cite{HerlihyShavit:Progress:Opodis:2011} manner.
The concurrent miner applies the STM approach to generate two bins while executing AUs concurrently: a concurrent bin and a sequential bin. AUs which can be executed concurrently (without any conflicts) are stored in the concurrent bin, while the AUs having conflicts are stored in the sequential bin in the form of a BG to record the conflicts. This combined technique reduces the size of the BG compared to \cite{Anjana+:CESC:PDP:2019} by storing the graph of only the sequential-bin \sctrn{s} instead of all \sctrn{s}.
We propose a concurrent validator that creates multiple threads. Each of these threads parses the concurrent bin followed by the efficient BG provided by the concurrent miner and re-executes the AUs for validation. The BG consists of only dependent AUs. Each validator thread claims a node that does not have any dependency, i.e., a node without any incoming edges, by marking it. After that, it executes the corresponding AUs deterministically. Since the threads execute only those nodes with no incoming edges, the concurrently executing AUs will not have any conflicts. Hence the validator threads need not worry about synchronization issues. We denote this approach adopted by the validator as the \emph{decentralized approach}, as multiple threads work on the BG concurrently in the absence of a master thread.
The approach adopted by Dickerson et al. \cite{Dickerson+:ACSC:PODC:2017} works on a \emph{fork-join} model in which a master thread allocates different tasks to slave threads. The master thread identifies AUs that do not have any incoming dependencies in the BG and allocates them to different slave threads. In this paper, we compare the performance of both these approaches with the serial validator.
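To make the decentralized approach concrete, the sketch below simulates validator threads claiming in-degree-zero nodes of the BG. The paper's implementation performs the claim with lock-free atomic decrements; this illustrative version serializes the claim step behind a single lock and busy-waits when no node is currently free, so it should be read as a model of the control flow, not of the lock-free data structure.
\begin{lstlisting}[language=Python]
import threading

def decentralized_validate(bg, execute, n_threads=4):
    # bg: dict mapping every AU id to the list of its successor AU ids.
    indeg = {u: 0 for u in bg}
    for u in bg:
        for v in bg[u]:
            indeg[v] += 1
    lock = threading.Lock()
    remaining = [len(bg)]                     # AUs not yet claimed

    def worker():
        while True:
            u = None
            with lock:                        # claim step (atomic in paper)
                if remaining[0] == 0:
                    return                    # every AU has been claimed
                for v, d in indeg.items():
                    if d == 0:
                        indeg[v] = -1         # mark node as claimed
                        remaining[0] -= 1
                        u = v
                        break
            if u is None:
                continue                      # predecessors still executing
            execute(u)                        # re-execute the claimed AU
            with lock:
                for w in bg[u]:               # release the successors
                    indeg[w] -= 1

    ts = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
\end{lstlisting}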
\noindent
\textbf{The significant contributions of the paper are as follows:}
\begin{itemize}[noitemsep]
\item We introduce a novel way to execute the AUs by the concurrent miner using \emph{optimistic} STMs (\secref{pm}).
We implement the concurrent miner using BTO and MVTO STMs, but the design is generic to any STM protocol.
\item We propose a \emph{lock-free} concurrent graph library to generate the \emph{efficient} BG, which contains only the dependent \au{s} and reduces the size of the block compared to \cite{Anjana+:CESC:PDP:2019} (see \secref{pm}).
\item We propose a concurrent validator that re-executes the AUs deterministically and efficiently with the help of the \emph{concurrent bin} followed by the \emph{efficient} BG given by the concurrent miner (see \secref{pm}).
\item To make our proposed approach storage optimal and efficient, we optimize the BG size (see \secref{pm}).
\item We rigorously prove that the concurrent miner and validator satisfy \emph{opacity} as the correctness criterion (see \secref{correctness}).
\item We achieve $4.49\times$ and $5.21\times$ average speedups for the optimized concurrent miner using the BTO and MVTO STM protocols, respectively. The optimized concurrent BTO and MVTO decentralized validators outperform the serial validator by average factors of $7.68\times$ and $8.60\times$, respectively (\secref{opt-result}).
\end{itemize}
\secref{relatedwork} presents the related work on concurrent execution of smart contract transactions, while \secref{model} introduces the notions related to STMs and the execution model used in the paper. The conclusion with several future directions is presented in \secref{con}.
\section{Requirements of Concurrent Miner, Validator and Block Graph}
\label{sec:req-miner_val}
This section describes the requirements of the concurrent miner, validator, and block graph to ensure correct concurrent execution of the \scontract transactions.
\subsection{Requirements of the Concurrent Miner}
The miner process invokes several threads to concurrently execute the \scontract transactions or \au{s}. With the proposed optimistic execution approach, each miner thread invokes an \au{} as a transaction.
The miner should ensure the correct concurrent execution of the smart contract transactions. Consistency issues may arise when concurrency is involved: an inconsistent read may lead the system to a division by zero, an infinite loop, a crash failure, etc. All smart contract transactions take place within a virtual machine \cite{Dickerson+:ACSC:PODC:2017}.
When the miner executes the smart contract transactions concurrently on the virtual machine, infinite loops and inconsistent reads may occur. So, to ensure correct concurrent execution, the miner should satisfy opacity \cite{GuerKap:Opacity:PPoPP:2008} as the correctness criterion.
To achieve better efficiency, one sometimes needs to adopt a non-virtual-machine environment, which necessitates safeguarding the transactions; in this setting as well, the miner needs to satisfy opacity to ensure the correct concurrent execution of smart contract transactions.
\begin{requirement}
Any history $H_m$ generated by the concurrent miner should satisfy opacity.
\end{requirement}
The concurrent miner maintains a BG and provides it to the concurrent validators, which ensures the dependency order among the conflicting transactions. As discussed in \figref{conmv} of \secref{intro}, if the concurrent miner does not maintain the BG, a valid block may get rejected by the concurrent validator.
\subsection{Requirements of the Concurrent Validator}
The correct concurrent execution by the validator should be equivalent to some serial execution. The serial order can be obtained by applying a topological sort on the BG provided by the concurrent miner. The BG gives a partial order among the transactions while keeping the dependency order the same as in the concurrent miner. So, with the help of the BG, the validator concurrently executes those transactions that have no dependencies among them. The validator need not worry about any concurrency control issues because the BG ensures that conflicting transactions never execute concurrently.
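For illustration, a serial-equivalent order can be derived by a standard Kahn-style topological traversal of the BG, as in the minimal single-threaded sketch below; the adjacency representation and the \texttt{executeAU} callback are illustrative assumptions rather than the library interface described in \secref{pm}.
\begin{verbatim}
#include <functional>
#include <queue>
#include <vector>

// adj[i] lists the AUs that depend on AU i; inDeg[i] is i's indegree.
// Executes every AU exactly once, respecting all BG dependencies.
void replayInTopologicalOrder(const std::vector<std::vector<int>>& adj,
                              std::vector<int> inDeg,
                              const std::function<void(int)>& executeAU) {
    std::queue<int> ready;                    // source nodes: indegree 0
    for (int i = 0; i < (int)inDeg.size(); ++i)
        if (inDeg[i] == 0) ready.push(i);
    while (!ready.empty()) {
        int au = ready.front(); ready.pop();
        executeAU(au);                        // re-execute this AU
        for (int succ : adj[au])              // release its successors
            if (--inDeg[succ] == 0) ready.push(succ);
    }
}
\end{verbatim}
The concurrent validator of \secref{pm} parallelizes exactly this pattern: any AU whose indegree drops to zero can immediately be claimed by any free thread.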
\subsection{Requirements of the Block Graph}
As explained above, the miner generates a BG to capture the dependencies between the smart contract transactions, which the validator later uses to concurrently re-execute the transactions. The validator executes those transactions concurrently that do not have any path (i.e., dependency) between them. Thus the execution by the validator is given by a topological sort on the BG.
Now it is imperative that the execution history generated by the validator, $H_v$, is `equivalent' to the history generated by the miner, $H_m$. The precise equivalence depends on the STM protocol followed by the miners and validators. If the miner uses a multi-version STM such as MVTO, then the equivalence between $H_v$ and $H_m$ is \mvve. In this case, the graph generated by the miner would be the multi-version serialization graph \cite[Chap. 5]{WeiVoss:TIS:2002:Morg}.
On the other hand, if the miner uses a single-version STM such as BTO, then the equivalence between $H_v$ and $H_m$ is view-equivalence (\vie), which can be approximated by conflict-equivalence (\ce). Hence, in this case, the graph generated by the miner would be the conflict graph \cite[Chap. 3]{WeiVoss:TIS:2002:Morg}.
\subsection{Optimizations}
\label{subsec:opt}
{
To make the proposed approach storage optimal and efficient, this subsection explains the key change performed on top of the solution proposed by Anjana et al.~\cite{Anjana+:CESC:PDP:2019}.
In Anjana et al.~\cite{Anjana+:CESC:PDP:2019}, there is a corresponding vertex node in the \bg{} (BG) for every \sctrn{} in the block. We observed that not all the \sctrn{s} in the block have dependencies, and adding a vertex node for such \sctrn{s} takes additional space in the block. This is the first optimization our approach provides: only the dependent \sctrn{s} have a vertex in the BG, while the independent \sctrn{s} are stored in the concurrent bin, which does not need any additional space. During the execution, a concurrent miner thread does not add a vertex to the BG if it identifies that the currently executed \sctrn{} does not depend on the \sctrn{s} already executed. However, if any other miner thread detects a dependency while executing the remaining \sctrn{s}, that thread adds the dependent \sctrn{} vertices to the BG.
For example, say we have $n$ \sctrn{s} in a block and a vertex node takes $\approx m$ KB to store in the BG; then Anjana et al.~\cite{Anjana+:CESC:PDP:2019} need a total of $n \times m$ KB of vertex node space. Suppose that out of the $n$ \sctrn{s}, only $\frac{n}{2}$ have dependencies; then only $\frac{n}{2} \times m$ KB of vertex space is needed in the BG. In the proposed approach, the space optimization can be 100\% in the best case, when all the \sctrn{s} are independent, and 0\% in the worst case, when all the \sctrn{s} are dependent. In practice, however, only a few \sctrn{s} in a block have dependencies. The space-optimized BG helps to improve network bandwidth utilization and reduces network congestion.
Further, our approach combines the benefits of both the \specbin{}-based approach \cite{VikramHerlihy:EmpSdy-Con:Tokenomics:2019} and the STM-based approach~\cite{Anjana+:CESC:PDP:2019} to yield the maximum speedup that validators can achieve when executing \sctrn{s}. So, another optimization is on the validator side: due to the concurrent bin in the block, the time taken to traverse the BG decreases and hence the speedup increases. The concurrent validator's execution is modified and divided into two phases. First, it concurrently executes the \sctrn{s} of the concurrent bin using multiple threads, since the \sctrn{s} in the concurrent bin are independent. In the second phase, the dependent \sctrn{s} stored in the BG are executed concurrently using the BG, preserving the transaction execution order as executed by the miner.
}
\section{Proposed Mechanism}
\label{sec:pm}
This section presents the methods of the lock-free concurrent block graph library, followed by the concurrent execution of AUs by the miner and validator.
\subsection{Lock-free Concurrent Block Graph}
\label{subsec:bg}
\noindent
\textbf{Data Structure of Lock-free Concurrent Block Graph:} We use an \emph{adjacency list} to maintain the block graph BG(V, E), as shown in \figref{confg} (a), where V is the set of vertices (or \vnode{s}), stored in the vertex list (or \vl{}) in increasing order of timestamp between two sentinel nodes \vh{} (-$\infty$) and \vt{} (+$\infty$). Each vertex node (or \vnode) contains $\langle ts = i, AU_{id} = id, \inc{} = 0, \vn{} = nil, \en{} = nil\rangle$, where $i$ is the unique timestamp (or $ts$) of transaction $T_i$ and $AU_{id}$ is the $id$ of the \au{} executed by transaction $T_i$. To maintain the indegree count of each \vnode{}, we initialize \inc{} to 0. \vn{} and \en{} are initialized to $nil$.
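For concreteness, a minimal C++ sketch of these node layouts is given below; the type and field names mirror the description above but are illustrative, not the actual implementation.
\begin{verbatim}
#include <atomic>

struct eNode;  // forward declaration: edge node

// Vertex node: one per dependent AU, kept in vList sorted by ts.
struct vNode {
    int ts;                       // unique timestamp i of transaction T_i
    int auId;                     // id of the AU executed by T_i
    std::atomic<int> inCnt{0};    // indegree count; set to -1 once claimed
    std::atomic<vNode*> vNext{nullptr};  // next vertex in vList
    std::atomic<eNode*> eNext{nullptr};  // head of this vertex's eList
};

// Edge node: one per conflicting transaction T_j with ts(T_i) < ts(T_j).
struct eNode {
    int ts;                       // timestamp j of the conflicting T_j
    vNode* vref;                  // back-reference to T_j's vertex in vList
    std::atomic<eNode*> eNext{nullptr};  // next edge node in eList
};
\end{verbatim}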
\begin{figure}
\centering
\scalebox{.42}{\input{figs/graphs.pdf_t}}
\centering
\caption{Pictorial representation of Block Graph}
\label{fig:confg}
\end{figure}
E is the set of edges, which records all conflicts of a \vnode{} in its edge list (or \el), as shown in \figref{confg} (a). \el{} stores \enode{s} (conflicting transaction nodes, say $T_j$) in increasing order of timestamp between two sentinel nodes \eh{} (-$\infty$) and \et{} (+$\infty$). An edge node (or \enode{}) contains $\langle$\emph{ts = j, vref}, \en{} = $nil$$\rangle$, where $j$ is the unique timestamp (or $ts$) of a \emph{committed} transaction $T_j$ that conflicts with $T_i$ and $ts(T_i)$ is less than $ts(T_j)$. We add conflict edges from the lower-timestamp to the higher-timestamp transaction to maintain acyclicity in the BG, i.e., the conflict edge goes from $T_i$ to $T_j$. \figref{confg} (b) illustrates this using three transactions with timestamps 0, 5, and 10, where acyclicity is maintained by adding edges from lower to higher timestamps. To make the search efficient, the \emph{vertex node reference (or vref)} keeps a reference to the corresponding vertex in the \vl{}, and \en{} is initialized to $nil$.
The block graph (BG) generated by the concurrent miner helps the validator execute concurrently and deterministically through the lock-free graph library methods. The lock-free graph library consists of five methods: \texttt{addVert(), addEdge(), searchLocal(), searchGlobal()} and \texttt{decInCount()}.
\noindent
\textbf{Lock-free Graph Library Methods Accessed by Concurrent Miner:} The concurrent miner uses the \texttt{addVert()} and \texttt{addEdge()} methods of the lock-free graph library to build the BG. When the concurrent miner wants to add a node to the BG, it first calls the \texttt{addVert()} method. The \texttt{addVert()} method identifies the correct location of that node (or \vgn{}) in the \vl{} at \Lineref{addv2}. If \vgn{} is not part of \vl{}, it creates the node and adds it to \vl{} at \Lineref{addv5} in a lock-free manner using an atomic compare-and-swap (CAS) operation. Otherwise, \vgn{} is already present in \vl{} at \Lineref{addv10}.\vspace{.2cm}
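A minimal sketch of the CAS-based insertion underlying \texttt{addVert()} is shown below, reusing the illustrative \texttt{vNode} type from above; it assumes the sentinel timestamps are $\pm\infty$ (e.g., \texttt{INT\_MIN}/\texttt{INT\_MAX}) and omits the memory-reclamation details of a production lock-free list.
\begin{verbatim}
// Insert a vertex with timestamp ts into the sorted vList, lock-free.
// head is the sentinel vHead; the sentinel vTail has ts = INT_MAX.
void addVert(vNode* head, int ts, int auId) {
    while (true) {
        vNode* pred = head;
        vNode* curr = pred->vNext.load();
        while (curr->ts < ts) {              // locate <pred, curr> window
            pred = curr;
            curr = curr->vNext.load();
        }
        if (curr->ts == ts) return;          // vertex already present
        vNode* node = new vNode{};
        node->ts = ts;
        node->auId = auId;
        node->vNext.store(curr);
        if (pred->vNext.compare_exchange_strong(curr, node))
            return;                          // vertex added
        delete node;                         // CAS failed: retry from head
    }
}
\end{verbatim}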
\begin{algorithm}[H]
\scriptsize
\label{alg:cg}
\caption{BG(\emph{vNode}, STM): It generates a BG for all the atomic-unit nodes.}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{0}\makeatother
\Procedure{BG(\emph{vNode}, STM)}{} \label{lin:cg1}
\State /*Get the \cl{} of transaction $T_i$ from STM*/\label{lin:cg2}
\State clist $\gets$ STM.\gconfl(\emph{vNode}.$ts_i$);\label{lin:cg3}
\State /*$T_i$ conflicts with $T_j$, and $T_j$ exists in the conflict list \blank{.3cm} of $T_i$*/\label{lin:cg4}
\ForAll{($ts_j$ $\in$ clist)}\label{lin:cg5}
\State \addv(\emph{$ts_j$}); \label{lin:cg6}
\State \addv(\emph{vNode.$ts_i$});\label{lin:cg7}
\If{($ts_j$ $<$ \emph{vNode}.$ts_i$)}\label{lin:cg8}
\State \adde($ts_j$, \emph{vNode}.$ts_i$);\label{lin:cg9}
\Else\label{lin:cg10}
\State \adde(\emph{vNode}.$ts_i$, $ts_j$);\label{lin:cg11}
\EndIf \label{lin:cg12}
\EndFor\label{lin:cg13}
\EndProcedure\label{lin:cg14}
\end{algorithmic}
\end{multicols}
\end{algorithm}
\begin{algorithm}[H]
\scriptsize
\label{alg:addv}
\caption{\emph{\addv}{($ts_i$)}: It adds the vertex in the BG for $T_i$.
}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{14}\makeatother
\Procedure{\emph{\addv}{($ts_i$)}}{} \label{lin:addv1}
\State Identify $\langle$\vp, \vc{}$\rangle$ of \vgn{} of $ts_i$ in \vl{};\label{lin:addv2}
\If{(\vc.$ts_i$ $\neq$ \vgn.$ts_i$)}\label{lin:addv3}
\State Create new Graph Node (\vgn) of $ts_i$ in \vl{};\label{lin:addv4}
\If{(\vp.\vn.CAS(\vc, \vgn))}\label{lin:addv5}
\State return$\langle$\emph{Vertex added}$\rangle$;
\label{lin:addv6}
\EndIf\label{lin:addv7}
\State goto \Lineref{addv2}; /*Start with the \vp{} to identify \blank{.7cm} the new $\langle$\vp, \vc{}$\rangle$*/ \label{lin:addv8}
\Else\label{lin:addv9}
\State return$\langle$\emph{Vertex already present}$\rangle$;
\label{lin:addv10}
\EndIf\label{lin:addv11}
\EndProcedure\label{lin:addv12}
\end{algorithmic}
\end{multicols}
\end{algorithm}
\begin{algorithm}[H]
\scriptsize
\label{alg:adde}
\caption{\emph{\adde{(fromNode, toNode)}}: It adds an edge from \emph{fromNode} to \emph{toNode}.}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{26}\makeatother
\Procedure{\emph{\adde{(fromNode, toNode)}}}{}\label{lin:adde1}
\State Identify the $\langle$\ep, \ec{}$\rangle$ of \emph{toNode} in \el{} of \blank{.3cm} the \emph{fromNode} vertex in $BG$;\label{lin:adde4}
\If{(\ec.$ts_i$ $\neq$ toNode.$ts_i$)}\label{lin:adde5}
\State Create new Graph Node (or \egn) in \el{};\label{lin:adde6}
\If{(\ep.\en.CAS(\ec, \egn))}\label{lin:adde7}
\State Increment the \inc{} atomically of \blank{1.1cm} \egn.\emph{vref} in \vl{};\label{lin:adde8}
\State return$\langle$\emph{Edge added}$\rangle$;
\label{lin:adde9}
\EndIf\label{lin:adde10}
\State goto \Lineref{adde4}; /*Start with the \ep{} to identify \blank{.7cm} the new $\langle$\ep, \ec{}$\rangle$*/\label{lin:adde11}
\Else\label{lin:adde12}
\State return$\langle$\emph{Edge already present}$\rangle$;
\label{lin:adde13}
\EndIf\label{lin:adde14}
\EndProcedure\label{lin:adde15}
\end{algorithmic}
\end{multicols}
\end{algorithm}
\begin{algorithm}[H]
\scriptsize
\caption{\emph{\searchl{(cacheVer, $AU_{id}$)}}: Thread searches source node in \cachel{}.}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{39}\makeatother
\Procedure{\emph{\searchl{($cacheVer$)}}}{}
\If{(cacheVer.\inc.CAS(0, -1))} \label{lin:sl1}
\State \nc{} $\gets$ \nc{}.$get\&Inc()$; \label{lin:sl2}
\State $AU_{id}$ $\gets$ cacheVer.$AU_{id}$;
\State return$\langle$cacheVer$\rangle$;\label{lin:sl5}
\Else\label{lin:sl6}
\State return$\langle nil \rangle$;\label{lin:sl7}
\EndIf\label{lin:sl8}
\EndProcedure
\end{algorithmic}
\end{multicols}
\end{algorithm}
\begin{algorithm}[H]
\scriptsize
\caption{\emph{\searchg{(BG, $AU_{id}$)}}: Thread searches the source node in BG.}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{48}\makeatother
\Procedure{\emph{\searchg{(BG, $AU_{id}$)}}}{}
\State \vnode{} $\gets$ BG.\vh;
\While{(\vnode.\vn{} $\neq$ BG.\vt)}
\If{(\vnode.\inc.CAS(0, -1))}\label{lin:sg1}
\State \nc{} $\gets$ \nc{}.$get\&Inc()$; \label{lin:sg2}
\State $AU_{id}$ $\gets$ \vnode.$AU_{id}$;
\State return$\langle \vnode \rangle$;\label{lin:sg5}
\EndIf\label{lin:sg8}
\State \vnode $\gets$ \vnode.\vn;
\EndWhile
\State return$\langle nil \rangle$;\label{lin:sg7}
\EndProcedure
\end{algorithmic}
\end{multicols}
\end{algorithm}
\begin{algorithm}[H]
\scriptsize
\caption{\emph{decInCount(remNode)}: Decrement the \inc{} of each conflicting node.}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{60}\makeatother
\Procedure{\emph{decInCount(remNode)}}{}
\While{(remNode.\en $\neq$ remNode.\et)}
\State Decrement the \emph{inCnt} atomically of \blank{.7cm} remNode.\emph{vref} in the \vl{}; \label{lin:ren1}
\If{(remNode.\emph{vref}.\inc{} == 0)}\label{lin:ren2}
\State Add remNode.\emph{vref} node into \cachel{} of \blank{1.1cm} thread local log, \tl{};\label{lin:ren3}
\EndIf\label{lin:ren4}
\State remNode $\gets$ remNode.\en.\emph{vref};
\State return$\langle$remNode$\rangle$;
\EndWhile
\State return$\langle nil \rangle$;
\EndProcedure
\end{algorithmic}
\end{multicols}
\end{algorithm}
\begin{algorithm}[H]
\scriptsize
\label{alg:exec}
\caption{\exec{\emph{(curAU)}}: Execute the current atomic-units.}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{71}\makeatother
\Procedure{\exec{($curAU$)}}{}
\While{(curAU.steps.hasNext())} /*Assume that \blank{.35cm} curAU is a list of steps*/
\State curStep = curAU.steps.next();
\Switch{(curStep)}
\EndSwitch
\Case{read($x$):}
\State Read data-object $x$ from a shared memory;
\EndCase
\Case{write($x, v$):}
\State Write $x$ in shared memory with value $v$;
\EndCase
\Case{default:}
\State /*Neither read nor write in shared memory*/;
\State execute curStep;
\EndCase
\EndWhile
\State return $\langle void \rangle$
\EndProcedure
\end{algorithmic}
\end{multicols}
\end{algorithm}
After successfully adding a \vnode{} to the BG, the concurrent miner calls the \texttt{addEdge()} method to add the conflicting node (or \egn{}) corresponding to the \vnode{} in the \el{}. First, the \texttt{addEdge()} method identifies the correct location of \egn{} in the \el{} of the corresponding \vnode{} at \Lineref{adde4}. If \egn{} is not part of \el{}, it creates the node and adds it to the \el{} of \vnode{} at \Lineref{adde7} in a lock-free manner using an atomic CAS operation. After the successful addition of the \enode{} to the \el{} of \vnode{}, it increments the \inc{} of the \enode.$vref$ node (to maintain the indegree count), which is present in the \vl{}, at \Lineref{adde8}.
\noindent
\textbf{Lock-free Graph Library Methods Accessed by Concurrent Validator:} The concurrent validator uses the \texttt{searchLocal(), searchGlobal()} and \texttt{decInCount()} methods of the lock-free graph library. First, a concurrent validator thread calls the \texttt{searchLocal()} method to identify a source node (a node with indegree (or \inc) 0) in its local \cachel{} (or thread-local memory). If a source node exists in the local \cachel{} with \inc{} 0, then to claim that node it atomically sets the \inc{} field to -1 at \Lineref{sl1}.
If no source node exists in the local \cachel{}, the concurrent validator thread calls the \texttt{searchGlobal()} method to identify a source node in the BG at \Lineref{sg1}. If a source node exists in the BG, it sets \inc{} to -1 atomically to claim that node and calls the \texttt{decInCount()} method to atomically decrement the \inc{} of all conflicting nodes present in the \el{} of the corresponding source node at \Lineref{ren1}. While decrementing the \inc{s}, it checks whether any conflicting node has become a source node; if so, it adds that node to its local \cachel{} to optimize the search time for identifying the next source node at \Lineref{ren3}.
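The claim-and-release steps described above can be sketched as follows over the same illustrative node types; this is a simplified rendering of the library's behavior, not its exact code (the edge list is shown null-terminated rather than with sentinels).
\begin{verbatim}
#include <vector>

// Try to claim a source node: flip inCnt from 0 to -1 exactly once.
bool tryClaim(vNode* v) {
    int expected = 0;
    return v->inCnt.compare_exchange_strong(expected, -1);
}

// After executing v's AU, release its successors: atomically decrement
// each conflicting vertex's indegree; a vertex reaching 0 is a new source.
void decInCount(vNode* v, std::vector<vNode*>& cacheList) {
    for (eNode* e = v->eNext.load(); e != nullptr; e = e->eNext.load()) {
        if (e->vref->inCnt.fetch_sub(1) == 1)   // was 1, is now 0
            cacheList.push_back(e->vref);       // cache the new source node
    }
}
\end{verbatim}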
\subsection{Concurrent Miner}
\label{subsec:cminer}
Smart contracts in a \bc{} are executed in two different contexts: first, when the \Miner{} proposes a new block, and second, when multiple \Validator{s} re-execute them to verify and validate the block proposed by the \Miner{}. In this subsection, we describe how the miner executes the \SContract{s} concurrently.
\setlength{\intextsep}{0pt}
\begin{algorithm}[!t]
\scriptsize
\caption{\emph{\cminer{(\aul, STM)}}: Concurrently $m$ threads are executing atomic-units from \aul{} (or list of atomic-units) with the help of STM.}
\label{alg:cminer}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{85}\makeatother
\Procedure{\emph{\cminer}{(\aul, STM)}}{}\label{lin:cminer1}
\State {/*Add all AUs in the Concurrent Bin (\emph{concBin[]})*/
\State \emph{concBin[]} $\gets$ \aul;} \label{lin:cminer111}
\State /*curAU is the current AU taken from \aul*/
\State curAU $\gets$ $curInd$.$get\&Inc(\aul)$; \label{lin:cminer2}
\State /*Execute until all AUs successfully completed*/\label{lin:cminer3}
\While{(curAU $<$ size\_of(\aul))}\label{lin:cminer4}
\State $T_i$ $\gets$ STM.\begtrans{()}
\label{lin:cminer5}
\While{(curAU.steps.hasNext())}
\label{lin:cminer6}
\State curStep = curAU.steps.next();
\label{lin:cminer7}
\Switch{(curStep)}\label{lin:cminer8}\EndSwitch
\Case{read($x$):}\label{lin:cminer9}
\State $v$ $\gets$ STM.\readi{($x$)};
\label{lin:cminer10}
\If{($v$ == $abort$)}\label{lin:cminer11}
\State goto \Lineref{cminer5};\label{lin:cminer12}
\EndIf\label{lin:cminer13}
\EndCase
\Case{write($x, v$):} \label{lin:cminer14}
\State STM.$write_i$($x, v$); \label{lin:cminer16}
\EndCase
\Case{default:}\label{lin:cminer17}
\State /*Neither read nor write in memory*/\label{lin:cminer18}
\State execute curStep;\label{lin:cminer19}
\EndCase
\EndWhile\label{lin:cminer20}
\State /*Try to commit the current transaction $T_i$ and \blank{.7cm} update the \cl{[i]}*/
\label{lin:cminer21}
\State $v$ $\gets$ \tryc{$_i$()}; \label{lin:cminer22}
\If{($v == abort$)}\label{lin:cminer23}
\State goto \Lineref{cminer5};\label{lin:cminer24}
\EndIf \label{lin:cminer25}
\If{($\cl[i] == nil$)}\label{lin:cminer251}
\State {curAU doesn't have dependencies with other \blank{1cm} AUs. So, no need to create a node in BG.}
\Else
\State {Create nodes with the respective dependencies \blank{1cm} from curAU to all AUs $\in$ $\cl[i]$ in the BG \blank{1cm} and remove curAU and those AUs from \emph{concBin[]}} \label{lin:cminer2511}
\State Create \vnode{} with $\langle$\emph{$i$, $AU_{id}$, 0, nil, nil}$\rangle$ as \blank{1cm} a vertex of Block Graph; \label{lin:cminer26}
\State BG(\emph{vNode}, STM); \label{lin:cminer27}
\EndIf
\State curAU $\gets$ $curInd$.$get\&Inc(\aul)$; \label{lin:cminer28}
\EndWhile \label{lin:cminer29}
\EndProcedure\label{lin:cminer30}
\end{algorithmic}
\end{multicols}
\end{algorithm}
A \emph{concurrent miner} gets the set of transactions from the blockchain network. Each transaction is associated with a method (\au{}) of a smart contract. To run the \SContract{s} concurrently, we face the challenge of identifying the conflicting transactions at run-time, because \SContract{} languages are Turing-complete. Two transactions conflict if they access a shared data-object and at least one of them performs a write operation. In the \conminer{}, conflicts are identified at run-time using the efficient framework provided by optimistic software transactional memory systems (STMs). STMs access the shared data-objects, called \tobj{s}. Each shared \tobj{} is initialized to an initial state (or IS). The \au{s} may modify the IS to some other valid state; eventually, it reaches the final state (or FS) at the end of block creation. As shown in \algoref{cminer}, the concurrent miner first copies all the AUs into the concurrent bin at \Lineref{cminer111}. Each transaction $T_i$ gets a unique timestamp $i$ from \texttt{STM.begin()} at \Lineref{cminer5}. Then transaction $T_i$ executes the \au{} of the \SContract{}. An \emph{atomic-unit} consists of multiple steps, such as $reads$ and $writes$ on shared \tobj{s} such as $x$. Internally, these $read$ and $write$ steps are handled by \texttt{STM.read()} and \texttt{STM.write()}, respectively. At \Lineref{cminer9}, if the current \au{} step (or curStep) is $read(x)$, then it calls \texttt{STM.read(x)}. Internally, \texttt{STM.read()} identifies the shared \tobj{} $x$ in the transactional memory (or TM) and validates it. If the validation is successful, it gets the value $v$ at \Lineref{cminer10} and executes the next step of the \au{}; otherwise, the \au{} is re-executed after being $aborted$ at \Lineref{cminer11}.
If curStep is $write(x)$ at \Lineref{cminer14}, then it calls \texttt{STM.write(x)}. Internally, \texttt{STM.write()} stores the information of the shared \tobj{} $x$ into the local log (or \txlog) in the write-set (or $wset_i$) of transaction $T_i$. We use an optimistic approach in which the transaction's effects are reflected onto the TM only after a successful \texttt{STM.tryC()}. If the validation succeeds for the entire $wset_i$ of transaction $T_i$ in \texttt{STM.tryC()}, i.e., all the changes made by $T_i$ are consistent, then it updates the TM; otherwise, the \au{} is re-executed after being $aborted$ at \Lineref{cminer23}. After the successful validation in \texttt{STM.tryC()}, it also records the conflicting transactions of $T_i$ in the conflict list in the TM.
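The per-AU control flow just described amounts to the classic optimistic retry loop sketched below; the \texttt{STM}/\texttt{AU} interface here is a hypothetical stand-in for the BTO/MVTO protocol calls, shown only to make the pattern explicit.
\begin{verbatim}
// Hypothetical STM interface; abort is signalled by a status value.
enum class Status { OK, ABORT };

// Execute one AU optimistically: on any abort, restart the transaction.
template <typename STM, typename AU>
void executeOptimistically(STM& stm, AU& au) {
    while (true) {
        auto tx = stm.begin();            // fresh timestamp on each attempt
        if (au.run(tx) == Status::ABORT)  // reads/writes go through tx
            continue;                     // inconsistent read: retry
        if (tx.tryCommit() == Status::OK) // validate write-set, publish
            return;                       // effects now visible in the TM
    }                                     // tryCommit aborted: retry
}
\end{verbatim}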
If the conflict list is \emph{nil} (\Lineref{cminer251}), there is no need to create a node in the BG. Otherwise, the miner creates the node with its respective dependencies in the BG and removes those AUs from the concurrent bin (\Lineref{cminer2511}). To maintain the BG, it calls the \texttt{addVert()} and \texttt{addEdge()} methods of the lock-free graph library. The details of the \texttt{addVert()} and \texttt{addEdge()} methods are explained in \subsecref{bg}.
Once the transactions have successfully executed the \au{s} and the BG construction is complete, the \conminer{} computes the hash of the previous block. Eventually, the \conminer{} proposes a block consisting of the set of transactions, the BG, the final state of each shared \tobj{}, and the previous block hash, and sends it to all other network peers for validation.
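For concreteness, the contents of the proposed block can be pictured as the following record; the field names and types are ours, chosen for illustration rather than as a wire format.
\begin{verbatim}
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct ProposedBlock {
    std::vector<std::string> transactions;    // the AUs of the block
    std::vector<int> concBin;                 // independent AU ids (no BG vertex)
    std::vector<std::pair<int,int>> bgEdges;  // BG edges (fromTs, toTs),
                                              // over dependent AUs only
    std::unordered_map<std::string, long>
        finalState;                           // final value per shared object
    std::string prevBlockHash;                // hash of the previous block
};
\end{verbatim}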
\subsection{Concurrent Validator}
\label{subsec:cvalidator}
The concurrent validator validates the block proposed by the concurrent miner.
It executes the block transactions concurrently and deterministically in two phases, using the concurrent bin and the BG given by the \conminer{}. In the first phase, the validator threads concurrently execute the independent AUs of the concurrent bin (\Lineref{val11} to \Lineref{val112}). In the second phase, it uses the BG to execute the dependent AUs via the \texttt{executeCode()} method at \Lineref{sl4} and \Lineref{sl41}, using the \texttt{searchLocal()}, \texttt{searchGlobal()} and \texttt{decInCount()} methods of the lock-free graph library at \Lineref{val5}, \Lineref{val16} and (\Lineref{val8}, \Lineref{val20}), respectively. The BG captures the dependencies among the conflicting transactions, which restricts them to execute serially.
The functionality of lock-free graph library methods is explained earlier in \subsecref{bg}.
\begin{algorithm}[!t]
\scriptsize
\caption{\emph{\cvalidator}{(\aul, \emph{BG)}}: Concurrently $V$ threads are executing AUs with the help of concurrent bin followed by the BG given by the miner.}
\setlength{\multicolsep}{0pt}
\begin{multicols}{2}
\begin{algorithmic}[1]
\makeatletter\setcounter{ALG@line}{122}\makeatother
\Procedure{\emph{\cvalidator}{(\aul, BG)}}{}
\State /*Execute until all AUs successfully completed*/ \label{lin:val1}
{
\State /*\textbf{Phase-1}: Concurrent Bin AUs execution.*/
\While{(concCount $<$ size\_of(\emph{concBin[]}))}\label{lin:val11}
\State count $\gets$ concCount.$get\&Inc(\aul)$;
\State $AU_{id}$ $\gets$ \emph{concBin[count]};
\State \exec{($AU_{id}$)};
\EndWhile \label{lin:val112}
\State /*\textbf{Phase-2}: Block Graph AUs execution.*/
}
\While{(\nc{} $<$ size\_of(\aul))} \label{lin:val2}
\While{(\cachel{}.hasNext())}
\label{lin:val3}
\State cacheVer $\gets$ \cachel{}.next(); \label{lin:val4}
\State cacheVertex $\gets$ \searchl{(cacheVer, \blank{1cm} AU$_{id}$)};\label{lin:val5}
\State \exec{($AU_{id}$)};\label{lin:sl4}
\While{(cacheVertex)}\label{lin:val7}
\State cacheVertex $\gets$ decInCount(cacheVertex);\label{lin:val8}
\EndWhile\label{lin:val10}
\State Remove the current node (or cacheVertex) \blank{1cm} from local \cachel; \label{lin:val12}
\EndWhile\label{lin:val13}
\State verNode $\gets$ \searchg{(BG, $AU_{id}$)};
\label{lin:val16}
\State \exec{($AU_{id}$)};\label{lin:sl41}
\While{(verNode)}\label{lin:val19}
\State verNode $\gets$ decInCount(verNode);\label{lin:val20}
\EndWhile\label{lin:val22}
\EndWhile\label{lin:val27}
\EndProcedure
\end{algorithmic}
\end{multicols}
\end{algorithm}
After the successful execution of all the \au{s}, the \convalidator{} compares its computed final state with the final state given by the \conminer{}. If the final states match for all the shared data-objects, then the block proposed by the \conminer{} is valid. Finally, {based on consensus among the network peers, the block is appended to the blockchain, and the respective \conminer{} is rewarded.}
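The final-state check itself can be as simple as an element-wise comparison of the two state maps (illustrative only, using the hypothetical final-state representation from the earlier sketch):
\begin{verbatim}
#include <string>
#include <unordered_map>

// Accept the block iff the validator's computed final state agrees with
// the miner's proposed final state on every shared data-object.
bool statesMatch(const std::unordered_map<std::string, long>& minerFS,
                 const std::unordered_map<std::string, long>& valFS) {
    return minerFS == valFS;  // equal iff same keys with same values
}
\end{verbatim}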
\section{Related Work}
\label{sec:relatedwork}
This section presents the related work on concurrent execution in \bc{s} in line with the proposed approach.
{The concept of \emph{blockchain} was introduced by Satoshi Nakamoto in 2009 as Bitcoin~\cite{Nakamoto:Bitcoin:2009}, which performs electronic transactions without third-party interference. Nick Szabo~\cite{Nick:PublicN:journals:1997} introduced \SContract{s} in 1997; they were adopted by the Ethereum \bc{} in 2015 to expand \bc{} functionality beyond financial transactions (cryptocurrencies).
} A smart contract is an interface that reduces the computational transaction cost and provides secure relationships on distributed networks. There exist several papers \cite{Luu+:DIC:CCS:2015, Delmolino+:SSTCSC:FC:2016, Luu+:MSC:CCS:2016} in the literature that work on the safety and security concerns of smart contracts, which are out of the scope of this paper. We mainly focus on the concurrent execution of AUs. {A concise summary of closely related works is given in \tabref{relatedwork}.}
\begin{table*}[!tb]
\caption{Related Work Summary}\vspace{-.25cm}
\centering
\label{tab:relatedwork}
\resizebox{1\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{ }
& \textbf{Miner Approach}
& \textbf{Locks}
& \textbf{Require Block Graph}
& \textbf{Validator Approach}
& \textbf{\begin{tabular}[c]{@{}c@{}}Blockchain Type\end{tabular}}
\\ \hline\hline
Dickerson et al.~\cite{Dickerson+:ACSC:PODC:2017}
& Pessimistic ScalaSTM
& Yes
& Yes
& Fork-join
& Permissionless
\\
Zhang and Zhang~\cite{ZangandZang:ECSC:WBD:2018}
& -
& -
& Read, Write Set
& MVTO Approach
& Permissionless
\\
Anjana et al.~\cite{Anjana+:CESC:PDP:2019}
& Optimistic RWSTM
& No
& Yes
& Decentralized
& Permissionless
\\
Amiri et al.~\cite{amiri2019parblockchain}
& Static Analysis
& -
& Yes
& -
& Permissioned
\\
Saraph and Herlihy~\cite{VikramHerlihy:EmpSdy-Con:Tokenomics:2019}
& Bin-based Approach
& Yes
& No
& Bin-based
& Permissionless
\\
Anjana et al.~\cite{anjana:ObjSC:Netys:2020}
& Optimistic ObjectSTM
& No
& Yes
& Decentralized
& Permissionless
\\
\textbf{Proposed Approach}
& \textbf{Bin+Optimistic RWSTM}
& \textbf{No}
& \textbf{No (if no dependencies) / Yes}
& \textbf{Decentralized}
& \textbf{Permissionless}
\\ \hline
\end{tabular}%
}
\end{table*}
Dickerson et al.~\cite{Dickerson+:ACSC:PODC:2017} introduced concurrent execution of AUs in the \bc{}. They observed that miners and validators could execute \sctrn{s} simultaneously to exploit the concurrency offered by ubiquitous multi-core processors. {The approach of this work is described in \secref{intro}.}
Zhang and Zhang~\cite{ZangandZang:ECSC:WBD:2018} proposed a concurrent miner using a pessimistic concurrency control protocol that delays a read until the corresponding write commits, ensuring a conflict-serializable schedule. {Their concurrent validator uses the MVTO protocol to execute transactions concurrently using the write sets provided by the concurrent miner in the block.}
Anjana et al.~\cite{Anjana+:CESC:PDP:2019} proposed optimistic \emph{Read-Write STMs} (RWSTMs) using BTO- and MVTO-based protocols. The timestamp-based protocols are used to identify the conflicts between \sctrn{s}. The miner executes the \sctrn{s} using an RWSTM and constructs the \blg{} dynamically at runtime using the timestamps. Later, a concurrent \emph{Decentralized Validator} (Dec-Validator) executes the \sctrn{s} in the block in a decentralized manner. The Decentralized Validator is more efficient than the Fork-Join Validator since there is no master validator thread allocating \sctrn{s} to slave validator threads. Instead, all the validator threads independently identify a source vertex (a vertex with indegree 0) in the \blg{} and claim the source node to execute the corresponding \sctrn{}.
Amiri et al.~\cite{amiri2019parblockchain} proposed \emph{ParBlockchain}, an approach for concurrent execution of transactions in the block for permissioned blockchains. They developed the \emph{OXII paradigm}\footnote{A paradigm in which transactions are first ordered for concurrent execution and then executed by both miners and validators~\cite{amiri2019parblockchain}.} to support distributed applications. The OXII paradigm orders the block transactions based on agreement between the orderer nodes, using static analysis or speculative execution to obtain the read-set and write-set of each transaction; it then generates the \blg{} and constructs the block. The executors of the respective applications (similar to the executors in Fabric channels) execute the transactions concurrently and then validate them by re-executing the transactions. So, the nodes of ParBlockchain execute the transactions in two phases using the OXII paradigm: in the first phase, known as the \emph{ordering phase}, a block with a \blg{} based on the transaction conflicts is generated; the second phase, known as the \emph{execution phase}, executes the block transactions concurrently using the \blg{} appended to the block.
Saraph and Herlihy~\cite{VikramHerlihy:EmpSdy-Con:Tokenomics:2019} proposed a simple \emph{bin-based two-phase speculative} approach to execute \sctrn{s} concurrently in the Ethereum blockchain. They empirically validated the potential benefit of their approach by evaluating it on historical transactions from Ethereum. In the first phase, the miner uses locks and executes the \sctrn{s} in a block concurrently, rolling back those \sctrn{s} that lead to conflicts. All the aborted \sctrn{s} are then placed in a sequential bin and executed sequentially in the second phase. The miner includes the concurrent and sequential bin hints in the block so that the validator can execute the same schedule as the miner. The validator executes the concurrent bin \sctrn{s} concurrently and the sequential bin \sctrn{s} sequentially. Giving hints about the bins takes less space than a \blg{}; however, it does not harness the maximum concurrency available within the block.
Later, Anjana et al.~\cite{anjana:ObjSC:Netys:2020} proposed an approach that uses optimistic single-version and multi-version \emph{Object-based STMs (OSTMs)} for the concurrent execution of \sctrn{s} by the miner. The OSTMs operate at a higher (object) level rather than the page (read-write) level and construct the BG.
However, the \blg{} remains significantly large in the existing approaches and needs higher bandwidth to broadcast such a large block for validation.
In contrast, we propose an efficient framework for concurrent execution of the AUs using optimistic STMs.
We combine the benefits of both the speculative bin-based and STM-based approaches to optimize the storage aspects (an efficient, storage-optimal \blg{}), which further improves the performance. Due to its optimistic nature, the updates made by a transaction become visible in shared memory only on commit; hence, rollback is not required. Our approach ensures opacity~\cite{GuerKap:Opacity:PPoPP:2008} as the correctness criterion. The proposed approach gives better speedup over the state-of-the-art and over serial execution of \sctrn{s}.
\section{Experimental Evaluation}
\label{sec:opt-result}
We aim to increase the efficiency of the miners and validators by employing concurrent execution of \sctrn{s} while optimizing the size of the BG appended by the miner in the block. To assess the efficiency of the proposed approach, we performed simulations on a series of benchmark experiments with Ethereum~\cite{ethereum:url} smart contracts from the Solidity documentation~\cite{Solidity}. Since multi-threading is not supported by the Ethereum Virtual Machine (EVM)~\cite{ethereum:url, Dickerson+:ACSC:PODC:2017}, we converted the Ethereum smart contracts into C++. We evaluated the proposed approach against the state-of-the-art approaches \cite{Anjana+:CESC:PDP:2019, Dickerson+:ACSC:PODC:2017, VikramHerlihy:EmpSdy-Con:Tokenomics:2019} over baseline serial execution on three different workloads by varying the number of \sctrn{s}, the number of threads, and the number of shared objects. {The benchmark experiments are conservative and consist of \sctrn{s} from one or a few smart contracts in a block,}
which leads to a higher degree of conflicts than occurs in practice, where a block consists of \sctrn{s} from different contracts ($\approx$ 1.5 million deployed smart contracts \cite{EthereumByNumbers}). Due to fewer conflicts in the actual blockchain, the proposed approach is expected to provide greater concurrency. We structure our experimental evaluation to answer the following questions:
\begin{enumerate}
\item How much speedup is achieved with varying \sctrn{s} by concurrent miners and validators when fixing the number of threads and shared objects? As conflicts increase with increasing \sctrn{s}, we expect a decrease in speedup.
\item How does speedup change when increasing the number of threads with a fixed number of \sctrn{s} and shared objects? We expect the speedup to increase with increasing threads, bounded by the number of logical threads available in the system.
\item How does speedup shift over different numbers of shared objects with fixed \sctrn{s} and threads? We expect an increase in speedup because conflicts decline as the number of shared objects increases. So, we anticipate that concurrent miners and validators outperform serial miners and validators when conflicts are fewer.
\end{enumerate}
\subsection{Contract Selection and Benchmarking}
{This section provides a comprehensive overview of the benchmark contracts coin, ballot, and simple auction from the Solidity documentation~\cite{Solidity}, selected as real-world examples for evaluating the proposed approach.}
\cmnt{The selected contracts are converted to C++ for concurrent execution. We chose different contracts to demonstrate the wide variety of use-cases, such as a financial application using a coin contract, a collaborative use-case using a ballot, and a bidding application using an auction contract.}The \sctrn{s} in a block for the coin, ballot, and auction benchmarks operate on the same contract, i.e., they consist of transaction calls to one or more methods of the same contract. In practice, a block consists of \sctrn{s} from different contracts; hence we designed another benchmark contract called the \emph{mix contract}, consisting of contract transactions from coin, ballot, and auction in equal proportion in a block.
The benchmark contracts and respective methods are as follows:
\noindent
\textbf{Coin Contract:} The coin contract is the simplest form of sub-currency. \cmnt{It implements three distinct functions identical to the standard ERC-20 token contract standard.}The users involved in the contract have accounts, and \emph{accounts} are the shared objects. It implements methods such as \texttt{mint()}, \texttt{transfer()/send()}, and \texttt{getbalance()}, which represent the \sctrn{s} in a block. The contract deployer uses the \texttt{mint()} method to give initial coins/balance to each account with the same fixed amount. We initialized the coin contract's initial state with a fixed number of accounts on all benchmarks and workloads. Using \texttt{transfer()}, users can transfer coins from one account to another. The \texttt{getbalance()} method is used to check the coins in a user account. For the experiments, a block consists of 75\% \texttt{getbalance()} and 25\% \texttt{transfer()} calls. A conflict between \sctrn{s} occurs if they access a common object (account) and at least one of them performs a \texttt{transfer()} operation.
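For illustration, a minimal C++ sketch of how the converted coin contract might look is given below (the class and member names are ours, not the authors' actual code):
\begin{verbatim}
// Minimal sketch (illustrative, not the authors' code) of the coin
// contract converted from Solidity to C++. The accounts map holds the
// shared objects; transfer() updates two accounts, getbalance() only reads.
#include <unordered_map>
#include <stdexcept>

class Coin {
    std::unordered_map<int, long> accounts; // account id -> balance
public:
    // mint(): the deployer credits each account with the same fixed amount
    void mint(int account, long amount) { accounts[account] += amount; }

    // transfer(): an update operation, so two transactions touching a
    // common account conflict here
    void transfer(int from, int to, long amount) {
        if (accounts[from] < amount)
            throw std::runtime_error("insufficient balance");
        accounts[from] -= amount;
        accounts[to]   += amount;
    }

    // getbalance(): read-only; conflicts only with a concurrent transfer()
    long getbalance(int account) const { return accounts.at(account); }
};
\end{verbatim}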
\noindent
\textbf{Ballot Contract:} The ballot contract is an electronic voting contract in which \emph{voters} and \emph{proposals} are the shared objects. The \texttt{vote()}, \texttt{delegate()}, and \texttt{winningproposal()} are the methods of the ballot contract. The voters use the \texttt{vote()} method to cast their vote for a specific proposal. Alternatively, a voter can delegate their vote to another voter using the \texttt{delegate()} method. A voter can cast or delegate their vote only once. At the end of the ballot, \texttt{winningproposal()} is used to compute the winner. We initialized the ballot contract's initial state with a fixed number of proposals and voters for benchmarking on the different workloads. The proposal-to-voter ratio is fixed at 5\% to 95\% of the total shared objects. For the experiments, a block consists of 90\% \texttt{vote()} and 10\% \texttt{delegate()} method calls, followed by a \texttt{winningproposal()} call. The \sctrn{s} conflict if they operate on the same object. So, if two voters \texttt{vote()} for the same proposal simultaneously, they will conflict.
\noindent
\textbf{Simple Auction Contract:}
It is an online auction contract in which bidders bid for a commodity online. In the end, the amount from the maximum bidder is granted to the owner of the commodity. The \emph{bidders}, \emph{maximum bid}, and \emph{maximum bidder} are the shared objects. In our experiments, the initial contract state is a fixed number of bidders with a fixed initial account balance and a fixed period for the auction to end. In the beginning, the maximum bidder and bid are set to null (the base price and the owner can be set accordingly). A bidder uses the contract method \texttt{bid()} to bid for the commodity with their bid amount; the maximum bid amount and the maximum bidder change when a bid is higher than the current maximum. A bidder uses the \texttt{withdraw()} method to move the balance of their previous bid back into their account. The bidder uses the \texttt{bidEnd()} method to know if the auction is over. Finally, when the auction has ended, the maximum bidder's (winner's) amount is transferred to the commodity owner, and commodity ownership is transferred to the maximum bidder. For benchmarking in our experiments, a block consists of 8\% \texttt{bid()}, 90\% \texttt{withdraw()}, and 2\% \texttt{bidEnd()} method calls. The maximum bidder and maximum bid are the conflict points whenever a new bid with the current highest amount occurs.
\noindent
\textbf{Mix Contract:} In this contract, we combine the \sctrn{s} in equal proportion from the above three contracts (coin, ballot, and auction). Therefore, our experiment block consists of an equal number of corresponding contract transactions with the same initial state as initialized in the above contracts.
\subsection{Experimental Setup and Workloads}
We ran our experiments on a large-scale 2-socket Intel(R) Xeon(R) CPU E5-2690 V4 @ 2.60 GHz with a total of 56 hyper-threads (14 cores per socket and two threads per core) with 32 GB of RAM running Ubuntu 18.04.
In our experiments, we have noticed that speedup varies from contract to contract on different workloads. The speedup on the various contracts is not meant for comparison between contracts. Instead, it demonstrates the proposed approach's efficiency on several use-cases in the blockchain. We have considered the following three workloads for performance evaluation:
\begin{enumerate}
\item In workload 1 (W1), the number of \sctrn{s} in a block varies from 50 to 400, with a fixed 50 threads and 2K shared objects. The number of \sctrn{s} per block in the Ethereum blockchain is 100 on average, can actually exceed 200~\cite{Dickerson+:ACSC:PODC:2017}, and has a theoretical maximum of $\approx400$~\cite{EthereumGasLimit2020} after a recent increase in the gas limit. Over time, the number of \sctrn{s} per block is increasing. In practice, a block can have fewer \sctrn{s} than the theoretical cap, depending on the gas limit of the block and the gas price of the transactions. We will see that the percentage of data conflicts in a block increases with increasing \sctrn{s}. A conflict within a block arises when different \sctrn{s} access a common shared object and at least one of them performs an update. We have found that the data conflicts vary from contract to contract and have a varied effect on speedup.
\item In workload 2 (W2), we varied the number of threads from 10 to 60 while fixing the \sctrn{s} at 300 and the shared objects at 2K. Our experimental system consists of a maximum of 56 hardware threads, so we experimented with a maximum of 60 threads. We observed that the speedup of the proposed approach increases with an increasing number of threads, limited by the number of logical threads.
\item The number of threads and \sctrn{s} in workload 3 (W3) are 50 and 300, respectively, while the shared objects range from 1K to 6K. This workload is used with each contract to measure the impact of the number of participants involved. Data conflicts are expected to decrease with an increasing number of shared objects; however, the search time may increase. The speedup depends on the execution of the contract, but it increases with an increasing number of shared objects.
\end{enumerate}
\subsection{Analysis}
In our experiments, blocks of \sctrn{s} were generated for each benchmark contract on three workloads: W1 (varying \sctrn{s}), W2 (varying threads), and W3 (varying shared objects). Then, concurrent miners and validators execute the blocks concurrently. The corresponding blocks' serial execution is taken as the baseline to compute the speedup of the proposed concurrent miners and validators. The running time is collected for 15 iterations with 10 blocks per iteration, and 10 validators validate each block. The first block of each iteration is left as a warm-up run, and a total of 150 blocks are created for each reading. So, the block execution time within an iteration is averaged over 9 blocks. Further, the total time taken by all iterations is averaged over the number of iterations for each reading; \eqnref{time} is used to compute the time of a reading.
\begin{equation}
\alpha_t = \frac{\displaystyle\sum_{i=1}^{n}\displaystyle\sum_{b=1}^{m-1} \beta_t }{n*(m-1)}
\label{eq:time}
\end{equation}
where $\alpha_t$ is the average time for a reading, $n$ is the number of iterations, $m$ is the number of blocks, and $\beta_t$ is the block execution time.
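For concreteness, the averaging of \eqnref{time} can be coded as in the following sketch (function and variable names are ours):
\begin{verbatim}
// Sketch of the averaging in Eq. (eq:time): the first (warm-up) block of
// each iteration is discarded, so m-1 block times enter per iteration.
#include <cstddef>
#include <vector>

double averageReadingTime(const std::vector<std::vector<double>>& blockTimes) {
    // blockTimes[i][b] = execution time of block b in iteration i
    const std::size_t n = blockTimes.size();         // iterations
    const std::size_t m = blockTimes.front().size(); // blocks per iteration
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t b = 1; b < m; ++b)          // skip warm-up block b = 0
            sum += blockTimes[i][b];
    return sum / (n * (m - 1));                      // alpha_t
}
\end{verbatim}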
In all plots, figure (a), (b), and (c) correspond to workload W1, W2, and W3, respectively. \figref{miner-speedup-coin} to \figref{miner-speedup-mix} show the speedup achieved by proposed and state-of-the-art concurrent miners over serial miners for all benchmarks and workloads. \figref{decval-speedup-coin} to \figref{decval-speedup-mix} show the speedup achieved by proposed and state-of-the-art concurrent decentralized validators over serial validators for all benchmarks and workloads. \figref{fj-speedup-coin} to \figref{fj-speedup-mix} show speedup achieved by proposed and state-of-the-art concurrent fork-join validators over serial validators. \figref{bgCoin} to \figref{bgMix} show the average number of edges (dependencies) and vertices (\sctrn{s}) in the block graph for respective contracts on all workloads. While \figref{incSizeCoin} to \figref{incSizeMix} show the percentage of additional space required to store the block graph in Ethereum block. A similar observation has been found \cite{DBLP:journals/corr/abs-1809-01326} for the fork-join validator, the average number of dependencies, and space requirement on other contracts.
We observed that the speedup for all benchmark contracts follows roughly the same pattern. In the read-intensive benchmarks (coin and mix), the speedup is likely to increase on all the workloads, while in the write-intensive benchmarks (ballot and auction), the speedup drops under high contention. We also observed that there might not be much speedup for concurrent miners with fewer \sctrn{s} (less than 100) in the block, conceivably due to multi-threading overhead. However, the speedup for concurrent validators generally increases across all the benchmarks and workloads. Fork-join concurrent validators on W2 are an exception, in which the speedup drops with an increase in the number of threads since fork-join follows a master-slave approach where the master thread becomes a performance bottleneck. We also observed that the concurrent validators achieve a higher speedup than the concurrent miners. This is because the concurrent miner executes the \sctrn{s} non-deterministically, finds the conflicting \sctrn{s}, and creates a concurrent bin and an efficient BG that allow the validators to execute the \sctrn{s} deterministically.
Our experimental results also show the BG statistics and the additional space required to store the BG in a block of the Ethereum blockchain, which quantifies the space overhead. We compare our proposed approach with the existing speculative bin (Spec Bin) based approach~\cite{VikramHerlihy:EmpSdy-Con:Tokenomics:2019}, the fork-join approach (FJ-Validator)~\cite{Dickerson+:ACSC:PODC:2017}, and the approach proposed in~\cite{Anjana+:CESC:PDP:2019} (we call it the default/Def approach). The proposed approach combines the benefits of both the bin-based and STM approaches to get the maximum benefit for concurrent miners and validators. The proposed approach\footnote{In the figures, legend items in bold denote the proposed approach.} produces an optimal BG, reduces the space overhead, and outperforms the state-of-the-art approaches.
\figref{miner-speedup-coin}(a) to \figref{miner-speedup-mix}(a) show the speedup for the concurrent miner on W1.
As shown in \figref{miner-speedup-coin}(a) and \figref{miner-speedup-mix}(a), for read-intensive contracts such as the coin and mix contracts, the speedup increases with an increase in \sctrn{s}. In write-intensive contracts such as the ballot and auction contracts, the speedup does not increase with an increase in \sctrn{s}; instead, it may drop as \sctrn{s} increase, as shown in \figref{miner-speedup-ballot}(a) and \figref{miner-speedup-auction}(a), respectively. This is because contention increases with an increase in \sctrn{s}.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-miner-coin.pdf}\vspace{-.4cm}
\caption{Concurrent miner speedup over serial miner for coin contract.}
\label{fig:miner-speedup-coin}
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-miner-ballot.pdf}\vspace{-.4cm}
\caption{ Concurrent miner speedup over serial miner for ballot contract.}
\label{fig:miner-speedup-ballot}
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-miner-auction.pdf}\vspace{-.4cm}
\caption{Concurrent miner speedup over serial miner for auction contract.}
\label{fig:miner-speedup-auction}
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-miner-mix.pdf}\vspace{-.4cm}
\caption{Concurrent miner speedup over serial miner for mix contract.}
\label{fig:miner-speedup-mix}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-dec-val-coin.pdf}\vspace{-.4cm}
\caption{Concurrent decentralized validator speedup over serial validator for coin contract.}
\label{fig:decval-speedup-coin}
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-dec-val-ballot.pdf}\vspace{-.4cm}
\caption{Concurrent decentralized validator speedup over serial validator for ballot contract.}
\label{fig:decval-speedup-ballot}
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-dec-val-auction.pdf}\vspace{-.4cm}
\caption{Concurrent decentralized validator speedup over serial validator for auction contract.}
\label{fig:decval-speedup-auction}
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-dec-val-mix.pdf}\vspace{-.4cm}
\caption{Concurrent decentralized validator speedup over serial validator for mix contract.}
\label{fig:decval-speedup-mix}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-fj-val-coin.pdf}\vspace{-.4cm}
\caption{Concurrent fork join validator speedup over serial validator for coin contract.}
\label{fig:fj-speedup-coin}
\vspace{.2cm}
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-fj-val-ballot.pdf}\vspace{-.4cm}
\caption{Concurrent fork join validator speedup over serial validator for ballot contract.}
\label{fig:fj-speedup-ballot}
\vspace{.2cm}
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-fj-val-auction.pdf}\vspace{-.4cm}
\caption{Concurrent fork join validator speedup over serial validator for auction contract.}
\label{fig:fj-speedup-auction}
\vspace{.2cm}
\includegraphics[width=1\textwidth]{figs/w1-w2-w3-fj-val-mix.pdf}\vspace{-.4cm}
\caption{Concurrent fork join validator speedup over serial validator for mix contract.}
\label{fig:fj-speedup-mix}
\end{figure}
\figref{decval-speedup-coin}(a) through \figref{fj-speedup-mix}(a) show the speedup for concurrent validators over serial validators on W1. The speedup for concurrent validators (decentralized and fork-join) increases with an increase in \sctrn{s}. \figref{decval-speedup-coin}(a) to \figref{decval-speedup-mix}(a) demonstrate the speedup achieved by the decentralized validator. It can be observed that for read-intensive benchmarks, the optimized MVTO decentralized validator (Opt-MVTO Dec-Validator) outperforms the other validators. In contrast, in write-intensive benchmarks, the default MVTO decentralized validator (Def-MVTO Dec-Validator) achieves better speedup than the other validators, due to the multi-threading overhead of the concurrent bin when it contains very few \sctrn{s}. We observed that with increasing \sctrn{s} in the blocks, the conflicts also increase. As a result, the number of transactions in the concurrent bin decreases. The speculative bin decentralized validator (Spec Bin Dec-Validator) speedup is considerably lower than that of the concurrent STM Dec-Validators because the STM miner precisely determines the dependencies between the \sctrn{s} of the block and harnesses more concurrency than the bin-based miner. However, if the block consists of \sctrn{s} with very few dependencies, the Spec Bin Dec-Validator is expected to outperform the other validators, as shown in \figref{decval-speedup-coin}(a).
\figref{fj-speedup-coin}(a) to \figref{fj-speedup-mix}(a) show the speedup for fork-join validators on W1 for all the benchmarks. We can observe that the proposed optimized MVTO fork-join validator (Opt-MVTO FJ-Validator) outperforms the other validators due to lower overheads at the fork-join master validator thread in allocating independent \sctrn{s} to the slave validator threads. We noticed that the decentralized concurrent validators' speedup is considerably higher than the fork-join concurrent validators' because there is no bottleneck for allocating the \sctrn{s} in the decentralized approach; all its threads work independently. It can also be observed that with fewer \sctrn{s}, in several benchmarks the speedup of fork-join validators drops to the point where it is below that of the serial validators because the overhead of thread creation dominates the speedup achieved, as shown in \figref{fj-speedup-ballot}(a), \figref{fj-speedup-auction}(a) and \figref{fj-speedup-mix}(a).
In W1, concurrent miners achieve a minimum of $\approx 2\times$ and a maximum of up to 10$\times$ speedup over serial miners across the contracts. The concurrent STM decentralized validators achieve a speedup of minimum $\approx 4\times$ and maximum up to $\approx 14\times$, while the Spec Bin Dec-Validator ranges from $\approx 3\times$ to $\approx 9\times$ over the serial validator across the contracts. The fork-join concurrent validators achieve a maximum speedup of $\approx 5\times$ over the serial validator.
\figref{miner-speedup-coin}(b) to \figref{fj-speedup-mix}(b) show the speedup on W2. The speedup increases with an increase in the number of threads. However, it is limited by the maximum number of logical threads in the experimental system. Thus, a slight drop in the speedup can be seen from 50 threads to 60 threads because the experimental system has a maximum of 56 logical threads. The rest of the concurrent miner observations are similar to those for workload W1, based on the read-intensive and write-intensive benchmarks.
As shown in \figref{decval-speedup-coin}(b) to \figref{decval-speedup-mix}(b), the concurrent decentralized validators' speedup increases with an increase in threads. In contrast, as shown in \figref{fj-speedup-coin}(b) to \figref{fj-speedup-mix}(b), the concurrent fork-join validators' speedup drops with an increase in threads. The reason for this drop is that the master validator thread in the fork-join approach becomes a bottleneck. The decentralized validator observations show that for the read-intensive benchmarks, the Opt-MVTO Dec-Validator outperforms the other validators, while in the write-intensive benchmarks, the Def-MVTO Dec-Validator outperforms the other validators, as shown in \figref{decval-speedup-ballot}(b). However, in the fork-join validator approach, the proposed Opt-MVTO FJ-Validator outperforms all other validators due to the optimization benefit of including the bin-based approach.
In W2, concurrent miners achieve a minimum of $\approx 1.5\times$ and a maximum of up to $\approx 8\times$ speedup over serial miners across the contracts. The concurrent STM decentralized validators achieve a speedup of minimum $\approx 4\times$ and maximum up to $\approx 10\times$, while the Spec Bin Dec-Validator ranges from $\approx 3\times$ to $\approx 7\times$ over the serial validator across the contracts. The fork-join concurrent validators achieve a maximum speedup of $\approx 4.5\times$ over the serial validator.
The plots in \figref{miner-speedup-coin}(c) to \figref{fj-speedup-mix}(c) show the concurrent miner and validator speedup on W3. As shared objects increase, the concurrent miner speedup increases because conflicts decrease due to less contention. Additionally, when contention is very low, more \sctrn{s} are added to the concurrent bin. However, this also depends on the contract: if the contract is write-intensive, fewer \sctrn{s} are added to the concurrent bin, while more \sctrn{s} are added for read-intensive contracts.
As shown in \figref{miner-speedup-coin}(c) and \figref{miner-speedup-mix}(c), the speculative bin miners surpass the STM miners on the read-intensive contracts. In \figref{miner-speedup-ballot}(c) and \figref{miner-speedup-auction}(c), the Def-MVTO Miner outperforms the other miners as shared objects increase. In contrast, the Def-BTO Miner performs better than the other miners when \sctrn{s} are fewer because the search time to determine the respective versions in write-intensive contracts is much higher in the MVTO miner than in the BTO miner. Nevertheless, all concurrent miners perform better than the serial miner. In W3, concurrent miners start at around 1.3$\times$ and achieve a maximum of up to 14$\times$ speedup over serial miners across all the contracts.
The speedup of the validators (decentralized and fork-join) increases with shared objects. In \figref{decval-speedup-coin}(c), \figref{decval-speedup-auction}(c), and \figref{decval-speedup-mix}(c), the proposed Opt-STM Dec-Validator performs better than the other validators. However, for write-intensive contracts, the number of \sctrn{s} in the concurrent bin is smaller. Therefore, the speedup of the Def-STM Dec-Validators is greater than that of the Opt-STM Dec-Validators, as shown in \figref{decval-speedup-ballot}(c). The Spec Bin Dec-Validator speedup is considerably lower than that of the concurrent STM Dec-Validators because the STM miner determines the dependencies between the \sctrn{s} more precisely than the bin-based miner.
Among the fork-join validators, the proposed Opt-STM FJ-Validators outperform all other FJ-Validators, as shown in \figref{fj-speedup-coin}(c) to \figref{fj-speedup-mix}(c), because of less contention at the master validator thread in allocating independent \sctrn{s} to the slave validator threads. We noticed that the decentralized concurrent validators' speedup is relatively high over the fork-join concurrent validators for reasons similar to those explained above. In W3, concurrent STM decentralized validators start at around 4$\times$ and achieve a maximum of up to 14$\times$ speedup, while the Spec Bin Dec-Validator ranges from 1$\times$ to 14$\times$ speedup over the serial validator across the contracts. The fork-join concurrent validators achieve a maximum speedup of 7$\times$ over the serial validator. The concurrent validators benefited from the work of the concurrent miners and outperformed the serial validators.
\begin{figure}[!t]
\centering
{\includegraphics[width=1\textwidth]{figs/w1-w2-w3-bg-coin.pdf}}\vspace{-.4cm}
\caption{Average number of edges (dependencies) and vertices (\sctrn{s}) in block graph for coin contract.}
\label{fig:bgCoin}
\vspace{.2cm}
{\includegraphics[width=1\textwidth]{figs/w1-w2-w3-bg-ballot.pdf}}\vspace{-.4cm}
\caption{Average number of edges (dependencies) and vertices (\sctrn{s}) in block graph for ballot contract.}
\label{fig:bgBallot}
\vspace{.2cm}
{\includegraphics[width=1\textwidth]{figs/w1-w2-w3-bg-auction.pdf}}\vspace{-.4cm}
\caption{Average number of edges (dependencies) and vertices (\sctrn{s}) in block graph for auction contract.}
\label{fig:bgAuction}
\vspace{.2cm}
{\includegraphics[width=1\textwidth]{figs/w1-w2-w3-bg-mix.pdf}}\vspace{-.4cm}
\caption{Average number of edges (dependencies) and vertices (\sctrn{s}) in block graph for mix contract.}
\label{fig:bgMix}
\end{figure}
\figref{bgCoin} to \figref{bgMix} show the average number of edges (dependencies as histograms) and vertices (\sctrn{s} as line charts) in the BG for the respective contracts on all the workloads\footnote{We used histograms and a line chart to differentiate vertices and edges, to avoid confusion when comparing them.}. The average number of edges (dependencies) in the BG for both the Default and Optimized approaches for a given STM protocol remains the same; hence only two histograms are plotted for simplicity. As shown in \figref{bgCoin}(a) to \figref{bgMix}(a), with increasing \sctrn{s} in W1, the BG edges and vertices also increase. This shows that the contention increases with increasing \sctrn{s} in the blocks. As shown in \figref{bgCoin}(b) to \figref{bgMix}(b), in W2 the number of vertices and edges does not change much. However, in W3 the number of vertices and edges decreases, as shown in \figref{bgCoin}(c) to \figref{bgMix}(c).
In our proposed approach, the BG contains vertices only for the conflicting \sctrn{s}, and the non-conflicting \sctrn{s} are stored in the concurrent bin. In the approach of Anjana et al.~\cite{Anjana+:CESC:PDP:2019}, all the \sctrn{s} have corresponding vertex nodes in the BG, as shown in \figref{bgCoin} to \figref{bgMix}. So, in W1 the BG will have 100 vertices if the block consists of 100 \sctrn{s} and 200 if it consists of 200 \sctrn{s}; in W2 and W3 it will have 300 vertices.
Having vertices only for the conflicting \sctrn{s} in the BG saves considerable space because each vertex node takes 28 bytes of storage.
The average block size in the Bitcoin and Ethereum blockchains is $\approx 1200$ KB~\cite{BitcoinAvgBlockSize} and $\approx 20.98$ KB~\cite{EthereumAvgBlockSize}, respectively, measured over the interval from Jan 1$^{st}$, 2019 to Dec 31$^{st}$, 2020. Further, the block size keeps increasing, and so does the number of transactions in the block. The average number of transactions in an Ethereum block is $\approx 100$~\cite{EthereumAvgBlockSize}. Therefore, in the Ethereum blockchain, the average transaction size is $\approx 0.2$ KB ($\approx 200$ bytes). We computed the block size based on these simple calculations when the \sctrn{s} in the block vary for W1. \eqnref{blocksize} is used to compute the block size ($B$) for the experiments.
\begin{equation}
B = 200 * N_{\sctrn{s}}
\label{eq:blocksize}
\end{equation}
where $B$ is the block size in bytes, $N_{\sctrn{s}}$ is the number of \sctrn{s} in the block, and $200$ is the average size of an \sctrn{} in bytes.
To store the block graph $BG(V, E)$ in the block, we used an \emph{adjacency list}. In the BG, a vertex node takes $V_s = 28$ bytes of storage, consisting of 3 integer variables and 2 pointers, while an edge node needs a total of $E_s = 20$ bytes. \eqnref{BGSize} is used to compute the size of the BG ($\beta$ bytes), while \eqnref{perBG} is used to compute the additional space ($\beta_{p}$, in percent) needed to store the BG in the block.
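The precise field layout is not spelled out in the text; one plausible C++ sketch consistent with the stated byte counts (assuming 4-byte integers, 8-byte pointers, and a packed layout) is:
\begin{verbatim}
// Hypothetical node layouts matching the stated sizes; only the totals
// (28 and 20 bytes) are given in the text, the fields are our assumption.
struct EdgeNode;

struct VertexNode {           // 3 ints + 2 pointers = 12 + 16 = 28 bytes
    int  transactionId;       // AU/transaction index in the block
    int  inDegree;            // number of incoming dependencies
    int  status;              // e.g., waiting / under execution / executed
    VertexNode* nextVertex;   // next vertex in the adjacency list
    EdgeNode*   edgeHead;     // head of this vertex's edge list
};

struct EdgeNode {             // 1 int + 2 pointers = 4 + 16 = 20 bytes
    int  toVertex;            // index of the dependent transaction
    EdgeNode*   nextEdge;     // next edge of the same vertex
    VertexNode* vertexRef;    // back-reference into the vertex list
};
\end{verbatim}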
\begin{figure}[H]
\centering
{\includegraphics[width=1\textwidth]{figs/w1-w2-w3-incSize-coin.pdf}}\vspace{-.4cm}
\caption{Percentage of additional space to store block graph in Ethereum block for coin contract.}
\label{fig:incSizeCoin}
\vspace{.22cm}
{\includegraphics[width=1\textwidth]{figs/w1-w2-w3-incSize-ballot.pdf}}\vspace{-.4cm}
\caption{Percentage of additional space to store block graph in Ethereum block for ballot contract.}
\label{fig:incSizeBallot}
\vspace{.22cm}
{\includegraphics[width=1\textwidth]{figs/w1-w2-w3-incSize-auction.pdf}}\vspace{-.4cm}
\caption{Percentage of additional space to store block graph in Ethereum block for auction contract.}
\label{fig:incSizeAuction}
\vspace{.22cm}
{\includegraphics[width=1\textwidth]{figs/w1-w2-w3-incSize-mix.pdf}}\vspace{-.4cm}
\caption{Percentage of additional space to store block graph in Ethereum block for mix contract.}
\label{fig:incSizeMix}
\end{figure}
\begin{equation}
\beta = (V_{s} * N_{\sctrn{s}}) + (E_{s} * M_{e})
\label{eq:BGSize}
\end{equation}
where $\beta$ is the size of the BG in bytes, $V_s$ is the size of a vertex node of the $BG$ in bytes, $E_{s}$ is the size (in bytes) of an edge node of the $BG$, and $M_{e}$ is the number of edges in the $BG$.
\begin{equation}
\beta_{p} = ({\beta*100})/{B}
\label{eq:perBG}
\end{equation}
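Putting \eqnref{blocksize}, \eqnref{BGSize}, and \eqnref{perBG} together, the overhead can be computed as in the sketch below (names are ours; \texttt{nVertices} equals the number of \sctrn{s} in the default approach, but only the conflicting ones in the proposed approach):
\begin{verbatim}
// Sketch of the storage accounting: block size B, graph size beta, and
// the overhead percentage beta_p.
double blockGraphOverheadPercent(long nTxns, long nVertices, long nEdges) {
    const double txnSize    = 200.0;  // average SCT size in bytes
    const double vertexSize = 28.0;   // bytes per vertex node (V_s)
    const double edgeSize   = 20.0;   // bytes per edge node (E_s)
    double B    = txnSize * nTxns;                            // block size
    double beta = vertexSize * nVertices + edgeSize * nEdges; // BG size
    return beta * 100.0 / B;                                  // beta_p
}
\end{verbatim}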
The plots in \figref{incSizeCoin} to \figref{incSizeMix} demonstrate the average percentage of additional storage space required to store the BG in the Ethereum block on all benchmarks and workloads. We can observe that the space requirement increases with an increase in the number of dependencies and vertices in the BG. However, the space requirement of our proposed approach is smaller than that of the existing default approach. As shown in \figref{bgBallot}, the dependencies and vertices are highest in the ballot contract compared to the other contracts, so the space requirement is also high, as shown in \figref{incSizeBallot}. This is because the ballot is a write-intensive benchmark. It can be seen that the space requirements of the Opt-BTO and Opt-MVTO BGs are smaller than those of the Def-BTO and Def-MVTO BGs, respectively.
The proposed approach significantly reduces the BG size for the mix contract across all the workloads, as shown in \figref{incSizeMix}, which clearly shows the storage efficiency of the proposed approach.
The storage advantage comes from using a bin-based approach combined with the STM approach, where the concurrent bin information added to the block requires less space than having a corresponding vertex in the BG for each \sctrn{} of the block. So, we combine the advantages of both approaches (STM and Bin) to get the maximum speedup with a storage-optimal BG. {The average space required for the BG, in \% w.r.t. the block size, is $34.55\%$, $31.69\%$, $17.24\%$, and $13.79\%$ for the Def-BTO, Def-MVTO, Opt-BTO, and Opt-MVTO approaches, respectively. The proposed Opt-BTO and Opt-MVTO BGs are $2\times$ (or $200.47\%$) and $2.30\times$ (or $229.80\%$) more efficient than the Def-BTO and Def-MVTO BGs, respectively, with an average speedup of $4.49\times$ and $5.21\times$ for the Opt-BTO and Opt-MVTO concurrent miners over serial, respectively. The Opt-BTO and Opt-MVTO decentralized concurrent validators outperform the serial validator by an average of $7.68\times$ and $8.60\times$, respectively.}
As $^{6,8}$He beams with high energy resolution and intensity became available
at different radioactive ion beam facilities around the world, these unstable
helium isotopes are among the most studied light neutron-rich nuclei. Ever
since the pioneering measurement of the interaction cross section in the
late eighties \cite{Tan96,Tan88} which lead to the discovery of the extended
density distribution of the valence halo neutrons in $^{6,8}$He, many recent
experimental efforts are still focused on a more precise determination
of the nuclear radii and radial shape of these nuclei by different methods
\cite{Tan13}. The elastic proton scattering in inverse kinematics at
energies around 700 MeV has been proven to be an accurate method to obtain
information on the nuclear density distributions of the halo nuclei under
study \cite{Al97,Eg01,Dob06,Ili12}. The first experiment on the (inverse
kinematics) elastic proton scattering on $^{6,8}$He at energies around 700 MeV
has been performed at GSI Darmstadt using the hydrogen-filled ionization chamber
IKAR which simultaneously served as a gas target and a detector for the recoil
protons \cite{Neu02}, and the measured elastic scattering data were analyzed within
the Glauber model \cite{Al78} to deduce the matter radii and radial shape
of the nuclear density distributions of these nuclei \cite{Al02}. These same
data were also studied in a Glauber few-body calculation of the elastic $^{6,8}$He+$p$\ scattering
\cite{Al98}, where the few-body degrees of freedom were treated explicitly.
We note that the first measurement \cite{Neu02} has covered only the region of low
momentum transfer because the IKAR active target was limited to the detection
of recoil ions close to $\theta_{\rm lab}\approx 90^\circ$. Recently, a new
experimental setup has been designed to study proton induced reactions on the exotic
nuclei in inverse kinematics using a liquid hydrogen target adapted to obtain
low-background data \cite{Kis11}. The new setup was successfully used to measure
the elastic $^{6,8}$He+$p$\ scattering at energies around 700 MeV/nucleon, and the measured
cross section has been extended to the region of higher momentum transfer as
compared to the previous experiments.
\begin{figure}[t]
\centering
\includegraphics[angle=0,scale=0.42]{Fig1.eps}
\caption{(Color online) Elastic $^{6,8}$He+$p$\ scattering data at the energies around
700 MeV/nucleon measured by Neumaier {\it et al.} \cite{Neu02} and by Kiselev
{\it et al.} \cite{Kis11} at the low and high momentum transfers, respectively.}
\label{f1}
\end{figure}
We note that the considered $^{6,8}$He+$p$\ data \cite{Neu02,Kis11} were originally
deduced in terms of the scattering cross section versus the 4-momentum
transfer squared ($d\sigma/dt$). For a comparison between the results of different
models, it is more convenient to use the elastic $^{6,8}$He+$p$\ scattering cross
section versus the scattering angle ($d\sigma/d\Omega$) in the
center-of-momentum (c.m.) frame. The two cross sections are related
to each other by relativistic kinematics \cite{Rela}
\begin{eqnarray}
\cos\theta_{\rm c.m.}&=&1+\frac{t}{2k^2} \nonumber \\
\Rightarrow {\frac{d\sigma(\theta_{\rm c.m.})}{d\Omega}}&=&
\frac{k^2}{\pi}\frac{d\sigma(t)}{dt}. \label{eq1}
\end{eqnarray}
Here $-t$ and $k$ are the 4-momentum transfer squared and c.m. momentum,
respectively. In terms of the scattering angles, the new data points measured
at high momentum transfer \cite{Kis11} have reached the angular region around
the first diffractive minimum (see Fig.~\ref{f1}), and should be, therefore,
more sensitive to the inner radial part of the ground-state densities of the
$^{6,8}$He nuclei. The (first) diffractive minimum was observed at
$\theta_{\rm c.m.}\approx 20^\circ\div 25^\circ$ in the $^6$He+$p$ and
$^8$He+$p$ data taken at high momentum transfer.
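As a practical note, the conversion in Eq.~(\ref{eq1}) is straightforward to code; a minimal sketch in C++ (assuming consistent units for $t$, $k$, and the cross sections) reads:
\begin{verbatim}
// Sketch of Eq. (1): convert dsigma/dt versus the 4-momentum transfer
// squared into dsigma/dOmega versus the c.m. scattering angle.
#include <cmath>

struct CmPoint { double thetaDeg; double dSigmaDOmega; };

CmPoint convert(double t, double dSigmaDt, double k) {
    const double pi = std::acos(-1.0);
    double cosTheta = 1.0 + t / (2.0 * k * k);   // t < 0 (spacelike)
    CmPoint p;
    p.thetaDeg     = std::acos(cosTheta) * 180.0 / pi;
    p.dSigmaDOmega = (k * k / pi) * dSigmaDt;
    return p;
}
\end{verbatim}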
In the present work, the elastic $^{6,8}$He+$p$ scattering data under study
have been analyzed within the Glauber multiple-scattering model (GMSM) using the
same phenomenological parametrizations of the matter densities of $^{6,8}$He
as those used in the earlier GMSM analyses of the GSI data \cite{Al02}. Because
the two measurements of Refs.~\cite{Neu02, Kis11} were done practically at
the same energy, it is possible to combine these two data sets in the present
GMSM analysis.
\section{Glauber multiple-scattering model}
The basic formalism of the GMSM has been given in detail in Ref.~\cite{Al78}.
The GMSM was successfully used in Refs.~\cite{Al02,Al97,Eg01} to analyze
the elastic $^{6,8}$He+$p$ scattering data measured at low momentum transfer,
and to deduce the nuclear matter density distributions for these nuclei. However,
the GMSM calculations in Refs.~\cite{Al02,Al97} were performed without taking
into account the spin-orbital (s/o) interaction, because the s/o effects were
known to be negligible at the most forward angles (low momentum transfer)
\cite{Al78}. Given the new data measured at high momentum transfer which cover
the first diffractive minimum, the s/o effects should be significant and
can no longer be neglected (see, e.g., Fig.~23 of Ref.~\cite{Al78}). In the
present work, the formalism of the GMSM that takes into account the s/o
interaction has been used in the analysis of the new $^{6,8}$He+$p$ data.
The proton-nucleus\ elastic scattering cross section is determined from the elastic
scattering amplitude $F_{\rm el}$ as
\begin{equation}
{\frac{d\sigma}{d\Omega}}_{\rm c.m.}=|F_{\rm el}(\bm q)|^2.
\end{equation}
In general, the scattering amplitude can be written as \cite{Al02,Glau70}
\begin{equation}
F_{\rm el}(\bm q) = \frac{ik}{2\pi}\int e^{i\bm{qb}}\left\{1-
\prod\limits_{j = 1}^A \left[1-\gamma_{pN}(\bm b - \bm s_j) \right]\right\}
\rho_A(\bm r_1, \bm r_2, ..., \bm r_A) \prod\limits_{j = 1}^A d^3r_jd^2b.
\label{Amp}
\end{equation}
For the light He nuclei, the effect of the center-of-mass motion
should be significant. To effectively remove the spurious c.m. motion,
the proton-nucleus\ scattering amplitude (\ref{Amp}) is multiplied by a correction
factor $H_{\rm c.m.}(\bm q)$
\begin{equation}
H_{\rm c.m.}(\bm q)=\exp\left[\frac{{\bm q}^2R_{\rm m}^2}{6(A - 1)}\right],
\end{equation}
where $R_{\rm m}$ is the root-mean-square matter radius. Such a procedure
is exact for the nucleon distributions of Gaussian form, and also expected
to be accurate for the cases of non-Gaussian distributions \cite{Al78}.
Like the previous GMSM studies \cite{Al02}, we have used in the present
analysis several density models that divide explicitly the nuclear
many-body density $\rho_A(\bm r_1, \bm r_2, ..., \bm r_A)$ into the core
$\rho_c(r)$ and halo $\rho_h(r)$ parts, so that
\begin{equation}
\rho_A(\bm r_1, \bm r_2, ...,\bm r_A)=\prod\limits_{i=1}^4{\rho_{\rm core}(r_i)}
\prod\limits_{j=5}^A{\rho_{\rm halo}(r_j)}. \label{den_ch}
\end{equation}
The representation (\ref{den_ch}) of the many-body density neglects the
correlations between the nucleon locations, with a constraint that the
positions of the core and halo nucleons are treated explicitly. In other
cases, the nuclear many-body density has been simply assumed as
a product of the one-body matter densities $\rho_{\rm m}(r)$
\begin{equation}
\rho_A(\bm r_1,\bm r_2,...,\bm r_A)=\prod\limits_{j=1}^A {\rho_{\rm m}(r_j)}.
\label{den_m}
\end{equation}
In the notations of Eq.~(\ref{Amp}), $\bm b$ is the impact parameter, $\bm q$ is
the momentum transfer, and $A$ is the nuclear mass number. The proton-nucleon
($pN$) profile function $\gamma_{pN}$ is determined from the amplitude $f_{pN}$
of the free $pN$ scattering as
\begin{equation}
\gamma_{pN}(\bm b)=\frac{1}{2\pi ik}\int\exp(-i\bm{qb})f_{pN}(\bm q)d^2q.
\end{equation}
In contrast to Refs.~\cite{Neu02,Al02}, the present GMSM calculation
adopts the following parametrization of $f_{pN}$ that takes into account
also the s/o interaction
\begin{equation}
f_{pN}(\bm q)=f^{\rm c}_{pN}(\bm q)+ \bm\sigma(\hat{\bm b}\times\hat{\bm k})
f^{\rm s}_{pN}(\bm q),\ {\rm with}\ \hat{\bm b}=\bm b/b,\ \hat{\bm k}=\bm k/k.
\end{equation}
Here, $f^{\rm c}_{pN}(\bm q)$ and $f^{\rm s}_{pN}(\bm q)$ are the central and
s/o parts of the $pN$ scattering amplitude, $\bm\sigma$ is the Pauli spin
operator. We have parametrized the $f^{\rm s}_{pN}$ amplitude in the same
way as in Refs.~\cite{Aug76,Ray79}, taking into account explicitly the isotopic
difference between the total neutron and proton cross sections. Thus, one has
\begin{eqnarray}
f^{\rm c}_{pN}(\bm q) &= &\frac{k\sigma_{pN}}{4\pi}(\varepsilon_{pN}+i)
\exp\left(-\frac{{\bm q}^2\beta_{pN}}{2}\right),\ N=p,n \nonumber \\
f^{\rm s}_{pN}(\bm q) &=& \frac{k\sigma_{pN}}{4\pi}\sqrt{\frac{{\bm q}^2}{4M^2}}
(i\alpha _{\rm s}-1)D_{\rm s}\exp\left(-\frac{{\bm q}^2\beta_{\rm s}}{2}\right).
\label{NAmp}
\end{eqnarray}
Here $\sigma_{pN}$ is the total $pN$ cross section, parameters $\varepsilon_{pN}$
and $\alpha_{\rm s}$ give the ratios of the real and imaginary strengths,
$\beta_{pN}$ and $\beta_{\rm s}$ are the slope parameters, $D_{\rm s}$ is the
relative strength of the s/o amplitude, and $M$ is the nucleon mass.
In the present work we have assumed the same parameters for the central amplitude
$f^{\rm c}_{pN}$ as those used earlier in Ref.~\cite{Al02}, except the slope
parameters $\beta_{pN}$ that were fine-tuned to obtain the best description of
the elastic $p+^4$He\ data at $E_p\approx 700$ MeV \cite{Neu02,Gre89} in the GMSM
calculation that takes into account the s/o interaction explicitly. The reason
is that the $\beta_{pN}$ values used in Ref.~\cite{Al02} were adjusted to the
best GMSM description of the same $p+^4$He\ data without taking into account the
s/o term. Thus, $\beta_{pN}$ and parameters of the s/o term have been readjusted
in the present work to the best description of the elastic $p+^4$He\ data at
700 MeV, as shown in Fig.~\ref{He4}. All the parameters used in the present
GMSM calculation are given in Table~\ref{t1}, with the newly obtained
$\beta_{pN},\ D_{\rm s},\ \beta_{\rm s}$, and $\alpha_{\rm s}$
values being quite close to those suggested earlier in Ref.~\cite{Ray79}.
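For reference, the central amplitude in Eq.~(\ref{NAmp}) can be evaluated with the Table~\ref{t1} parameters as in the following sketch (units must be used consistently, e.g., $\sigma_{pN}$ in fm$^2$, $q$ and $k$ in fm$^{-1}$, and $\beta_{pN}$ in fm$^2$):
\begin{verbatim}
// Sketch of the central pN amplitude f_c(q) of Eq. (NAmp).
#include <cmath>
#include <complex>

std::complex<double> fc(double q, double k, double sigma,
                        double eps, double beta) {
    const double pi = std::acos(-1.0);
    return (k * sigma / (4.0 * pi))
         * std::complex<double>(eps, 1.0)        // (eps + i)
         * std::exp(-0.5 * q * q * beta);
}
\end{verbatim}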
\begin{figure}[b]
\centering
\includegraphics[angle=0,scale=0.45]{He4.eps}
\caption{(Color online) Elastic $p+^4$He\ scattering data measured at proton energies around
700 MeV (circles \cite{Neu02} and squares \cite{Gre89}) in comparison with the
elastic scattering cross section given by the GMSM calculation (solid line), taking
into account the s/o term and using a Gaussian density for $^4$He that gives
$R_{\rm m}=1.47$ fm.} \label{He4}
\end{figure}
The GMSM results shown in Fig.~\ref{He4} agree also with the fully quantal
optical model results given by the complex $p+^4$He\ optical potential
obtained from the folding model calculation \cite{Kho02} using the same Gaussian
density for $^4$He and finite-range t-matrix interaction by Franey and Love
\cite{Fra85}. This validates the parameters chosen for the present
GMSM calculation.
\begin{table}[t]
\centering
\caption{Parameters of the central and s/o scattering amplitudes (\ref{NAmp})
used in the present GMSM analysis of the elastic $^{6,8}$He+$p$\ scattering.} \label{t1}
\vspace{0.5 cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
System & $E_p$ & $\sigma_{pp}$ & $\varepsilon_{pp}$ & $\sigma_{pn}$ &
$\varepsilon_{pn}$ & $\beta_{pp}$ & $\beta_{pn}$ & $D_s$ & $\alpha_s$ &
$\beta_s$ \\
& (MeV) & (mb) & & (mb) & & (fm$^2$) & (fm$^2$) & & & (fm$^2$) \\ \hline
$^8$He+$p$\ & 674 & 41.9 & 0.129 & 37.4 & -0.283 & 0.20 & 0.24 & 0.284 & 13.50 & 0.522 \\ \hline
$^6$He+$p$\ & 717 & 44.6 & 0.069 & 37.7 &-0.307 & 0.20 & 0.24 & 0.284 & 13.50 & 0.522 \\ \hline
\end{tabular}
\end{table}
Using the profile function $\gamma_{pN}$ determined by the $pN$ scattering
amplitude (\ref{NAmp}) and treating the Coulomb term in the standard way
\cite{Al78,Glau70}, the proton-nucleus\ elastic scattering amplitude can be written as
\cite{Fal78,Al78}
\begin{equation}
F^2_{\rm{el}}(q)=|F_{\rm {Coul}}(q)+F_{\rm c}(q)|^2+|F_{\rm s}(q)|^2, \label{CS}
\end{equation}
where $F_{\rm c}$ and $F_{\rm s}$ are the central and s/o proton-nucleus\ amplitudes
\cite{Al78,Fal78}
\begin{eqnarray}
F_{\rm c}(q) &= & ikH_{\rm {CM}}(q)\int[1-G_{\rm c}(b)]\exp[i\chi_{\rm {Coul}}(b)]
J_0(qb)bdb, \nonumber \\
F_{\rm s}(q) &= &-kH_{\rm {CM}}(q)\int G_{\rm s}(b)\exp[i\chi_{\rm {Coul}}(b)]
J_1(qb)bdb. \label{e2}
\end{eqnarray}
The $G$ functions contain explicitly the central and s/o contributions as
\begin{eqnarray}
G_{\rm c}(b)=\frac{1}{2}\left\{\prod\limits_{j = 1}^A[1-\Gamma^{\rm c}_j(b)+
\Gamma^{\rm s}_j(b)]+\prod\limits_{j = 1}^A[1-\Gamma^{\rm c}_j(b)-
\Gamma^{\rm s}_j(b)]\right\}, \nonumber \\
G_{\rm s}(b)=\frac{1}{2}\left\{\prod\limits_{j = 1}^A[1-\Gamma^{\rm c}_j(b)+
\Gamma^{\rm s}_j(b)]-\prod\limits_{j = 1}^A[1-\Gamma^{\rm c}_j(b)-
\Gamma^{\rm s}_j(b)]\right\}. \label{gam}
\end{eqnarray}
The nucleon profile functions $\Gamma^{\rm c}$ and $\Gamma^{\rm s}$ are
determined as
\begin{eqnarray}
\Gamma^{\rm c}_j(b) &=& -\frac{i}{k}\int f^{\rm c}_{pN}(q)S_j(q)J_0(qb)qdq,
\nonumber \\
\Gamma^{\rm s}_j (b) &=& -\frac{1}{k}\int f^{\rm s}_{pN}(q)S_j(q)J_1(qb)qdq.
\label{e3} \end{eqnarray}
Here $J_{0,1}(x)$ are the zero-th and first-order Bessel functions.
$F_{\rm {Coul}}(q)$ and $\chi_{\rm {Coul}}(b)$ are the Coulomb amplitude
and phase, respectively \cite{Glau70}. In contrast to the earlier GMSM
calculations \cite{Al02,Dob06,Ili12}, the Sommerfeld parameter (used to determine
the Coulomb term) obtained with the relativistic kinematics has been used in the
present work. The form factor $S_j(q)$ is determined by the Fourier transform
of the single-particle density as
\begin{equation}
S_j(q)=\frac{1}{H_{\rm {CM}}(q)}\int\exp(i\bm{qr})\rho_j(r)d^3r.
\end{equation}
When one writes explicitly the products in Eq.~(\ref{gam}) in terms of the
nucleon profile functions, the proton-nucleus\ scattering amplitude becomes a multiple
scattering series \cite{Al78}. When the s/o term is neglected, the amplitude
(\ref{CS}) is simplified to that used in the earlier GMSM calculation
that did not take into account the s/o interaction \cite{Al02,Dob06,Ili12}.
\section{Nuclear densities}
\subsection{Parametrization of the nuclear matter distribution}
In addition to the proton-nucleon\ scattering amplitudes, the nuclear matter density
distribution is a vital input of the Glauber model calculation. Like in the
previous studies \cite{Al02,Glau70}, the nucleon point-density has been
parametrized in the following phenomenological forms.
\subsubsection{The symmetrized Fermi (SF) density}
The SF density distribution is parametrized \cite{Al02} as
\begin{equation}
\rho_{\rm m}(r)=\frac{3}{4\pi R_0^3}\left[4+\left(\frac{\pi a}{R_0}\right)^2
\right]^{-1}\sinh\left(\frac{R_0}{a}\right)\left[\cosh\left(\frac{R_0}{a}\right)
+\cosh\left(\frac{r}{a}\right)\right]^{-1}, \label{SF}
\end{equation}
where $R_0$ is the half-density radius (at which the density falls to half
its value at the origin) and $a$ is the diffuseness parameter. The
corresponding root-mean-square (rms) matter radius $R_{\rm m}$ is given by
\begin{equation}
R_{\rm m}=\langle r^2\rangle^{1/2}=\left(\frac{3}{5}\right)^{1/2}R_0
\left[1+\frac{7}{3}\left(\frac{\pi a}{R_0}\right)^2\right]^{1/2}.
\end{equation}
\subsubsection{The Gaussian-Halo (GH) density}
The GH density distribution is determined \cite{Al02} as a function of the rms
radius $R_{\rm m}$ as
\begin{equation}
\rho_{\rm m}(r)=\left(\frac{3}{2\pi R^2_{\rm m}}\right)^{3/2}[1+
\alpha\varphi(r)]\exp\left(-\frac{3r^2}{2R^2_{\rm m}}\right)
\end{equation}
with
\begin{equation}
\varphi(r)=\frac{3}{4}\left[5-10\left(\frac{r}{R_{\rm m}}\right)^2+
3\left(\frac{r}{R_{\rm m}} \right)^4 \right]. \label{GH}
\end{equation}
\subsubsection{The Woods-Saxon (WS) density}
The WS density has been used by Glauber in his pioneering work \cite{Glau70}
\begin{equation}
\rho_{\rm m}(r)=\frac{C}{1+\exp\left(\displaystyle\frac{r-R_0}{a}\right)},
\label{WS}
\end{equation}
where $R_0$ and $a$ are the same parameters as those used in Eq.~(\ref{SF}),
and $C$ is normalized such that (\ref{WS}) is the nucleon point-density.
\subsubsection{The Gaussian-Gaussian (GG) density}
In the GG parametrization the locations of the core and halo nucleons are
treated explicitly, with both the core and halo densities assumed to be
in the Gaussian form \cite{Al02}
\begin{equation}
\rho_{\rm core(halo)}(r)=\left({\frac{3}{2\pi R_{\rm c(h)}^2}}\right)^{3/2}
\exp\left(-\frac{3r^2}{2R_{\rm c(h)}^2}\right). \label{GG}
\end{equation}
\subsubsection{The Gaussian-Oscillator (GO) density}
The GO density has the same Gaussian core as in the GG case, but the halo
distribution is parametrized using the $p$-shell harmonic oscillator
wave function \cite{Al02}
\begin{eqnarray}
\rho_{\rm core}(r)& = &\left(\frac{3}{2\pi R_{\rm c}^2}\right)^{3/2}
\exp\left(-\frac{3r^2}{2R_{\rm c}^2}\right) \nonumber \\
\rho_{\rm halo}(r)& = &\frac{5}{3}\left(\frac{5}{2\pi R_{\rm h}^2}\right)^{3/2}
\left(\frac{r}{R_{\rm h}}\right)^2\exp\left(-\frac{5r^2}{2R_{\rm h}^2}\right).
\label{GO} \end{eqnarray}
Because the GG and GO models allow one to treat the core and halo nucleons
explicitly, the nuclear volumes of $^6$He and $^8$He can be assumed
to be composed of an $\alpha$-like core plus 2 and 4 halo neutrons,
respectively. The nuclear many-body density based on the GG and GO
parametrizations can be expressed as (\ref{den_ch}). We can further write
\begin{equation}
\rho_{\rm m}(r)=[N_{\rm core}\rho_{\rm core}(r)+N_{\rm halo}
\rho_{\rm halo}(r)]/A, \label{e7}
\end{equation}
where $\rho_{\rm {core(halo)}}$ are normalized to unity like $\rho_{\rm m}$, and
$N_{\rm core}$ and $N_{\rm halo}$ are the nucleon numbers in the core and halo
volumes, respectively. From Eq.~(\ref{e7}) one obtains the rms matter radius
of the nucleus as
\begin{eqnarray}
R_{\rm m}=\left[\int r^2\rho_{\rm m}(r)d^3r\right]^{1/2}. \label{e8}
\end{eqnarray}
The core and halo radii ($R_{\rm c}$ and $R_{\rm h}$) are determined by the
same Eq.~(\ref{e8}) using $\rho_{\rm core}$ and $\rho_{\rm halo}$, respectively.
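As a numerical cross-check of Eqs.~(\ref{GG}), (\ref{e7}), and (\ref{e8}), the following sketch (a simple midpoint quadrature; the $^6$He best-fit GG radii of Table~\ref{tHe6} are assumed as input) reproduces the quoted $R_{\rm m}\approx 2.48$ fm:
\begin{verbatim}
// Sketch of Eqs. (GG), (e7), (e8): composite GG matter density of 6He
// and its rms radius by simple radial quadrature.
#include <cmath>
#include <cstdio>

double gauss(double r, double R) {   // Eq. (GG), normalized to unity
    const double pi = std::acos(-1.0);
    double a = 3.0 / (2.0 * pi * R * R);
    return std::pow(a, 1.5) * std::exp(-3.0 * r * r / (2.0 * R * R));
}

int main() {
    const double pi = std::acos(-1.0);
    double Rc = 1.96, Rh = 3.30;     // 6He best-fit GG radii (fm)
    int Ncore = 4, Nhalo = 2, A = 6;
    double sum = 0.0, dr = 1.0e-3;
    for (double r = 0.5 * dr; r < 20.0; r += dr) {
        double rho = (Ncore * gauss(r, Rc)
                    + Nhalo * gauss(r, Rh)) / A;      // Eq. (e7)
        sum += 4.0 * pi * std::pow(r, 4) * rho * dr;  // integrand of Eq. (e8)
    }
    std::printf("R_m = %.3f fm\n", std::sqrt(sum));   // 2.488 fm
    return 0;
}
\end{verbatim}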
\subsection{$\chi^2$-fit procedure for the density parameters}
Each phenomenological density distribution determined above has two free
parameters (like $R_0$ and $a$ in the SF and WS parametrizations). The aim of the
present analysis is to find the optimal values of these parameters based on
the best GMSM description of the experimental data. In the $\chi^2$-fit procedure,
the density parameters are varied independently from each other, and the
statistical errors as well as the uncertainty in the absolute normalization of the
measured scattering cross sections are taken properly into account \cite{Al02}.
The elastic scattering cross sections at the low and high momentum transfers
were measured at practically the same energy, and this allows us to combine
both data sets in the present analysis. Thus, the $\chi^2$ function
is determined as
\begin{eqnarray}
\chi^2&=&\sum\limits_{j=1}^{N_L}\left[\frac{A_{\rm L}\sigma_{\rm exp}(\theta_j)
-\sigma_{\rm cal}(\theta_j)}{\Delta\sigma_{\rm exp}(\theta_j)}\right]^2 +
\sum\limits_{k = 1}^{N_H}\left[\frac{A_{\rm H}\sigma_{\rm exp}(\theta_k)-
\sigma_{\rm cal}(\theta_k)}{\Delta \sigma_{\rm exp}(\theta_k)}\right]^2 \nonumber \\
& & +\left(\frac{A_{\rm L}-1}{\Delta A^{\rm L}_{\rm exp}}\right)^2 +
\left(\frac{A_{\rm H}-1}{\Delta A^{\rm H}_{\rm exp}}\right)^2, \label{chi2}
\end{eqnarray}
where $\sigma_{\rm exp}(\theta_j)\equiv
[d\sigma/d\Omega_{\rm c.m.}(\theta_j)]_{\rm exp}$
and $\Delta \sigma_{\rm exp}(\theta_j)$ are the experimental differential cross
sections measured at $\theta_j$ and their statistical errors, and
$\sigma_{\rm cal}(\theta_j)\equiv[d\sigma/d\Omega_{\rm c.m.}(\theta_j)]_{\rm cal}$
are the calculated cross sections. $N_L$ and $N_H$ are the number of data
points measured at low \cite{Neu02} and high momentum transfers \cite{Kis11},
respectively. $A_{\rm L}$ and $A_{\rm H}$ are the absolute normalization of the
data points at low and high momentum transfers, and they are treated as free
parameters in the $\chi^2$-fit, with the estimated uncertainties of the absolute
calibration $\Delta A^{\rm L}_{\rm exp}\approx 3\%$ \cite{Neu02} and
$\Delta A^{\rm H}_{\rm exp}\approx 2.4\%$ \cite{Kis11}.
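For illustration, Eq.~(\ref{chi2}) translates directly into code; the Python sketch
below (our own, with synthetic placeholder data, since the measured cross sections are
not reproduced here) treats $A_{\rm L}$ and $A_{\rm H}$ as free parameters penalized by
the calibration uncertainties:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def chi2(norms, sig_L, dsig_L, cal_L, sig_H, dsig_H, cal_H,
         dA_L=0.03, dA_H=0.024):
    # sig_*  : measured cross sections (low/high momentum-transfer sets)
    # dsig_* : statistical errors; cal_* : GMSM-calculated cross sections
    A_L, A_H = norms
    return (np.sum(((A_L*sig_L - cal_L)/dsig_L)**2)
            + np.sum(((A_H*sig_H - cal_H)/dsig_H)**2)
            + ((A_L - 1)/dA_L)**2 + ((A_H - 1)/dA_H)**2)

# synthetic placeholder data, for illustration only
rng = np.random.default_rng(1)
cal_L = np.linspace(10.0, 1.0, 20); dsig_L = 0.05*cal_L
sig_L = cal_L + dsig_L*rng.standard_normal(20)
cal_H, dsig_H, sig_H = cal_L, dsig_L, sig_L

best = minimize(chi2, x0=[1.0, 1.0],
                args=(sig_L, dsig_L, cal_L, sig_H, dsig_H, cal_H))
print(best.x)   # fitted A_L, A_H for this trial density
\end{verbatim}
In the full analysis, this minimization is repeated for every trial set of density
parameters in order to locate the global minimum.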
\section{Results of the GMSM analysis and discussion}
\subsection{The matter radii and matter distributions of $^{6,8}$He}
The $\chi^2$ analysis has been done carefully for each density parametrization
to obtain the best GMSM description of the elastic $^{6}$He and $^{8}$He
scattering data measured at the energies of 717 and 674 MeV/u,
respectively. All the best-fit parameters are presented in Tables \ref{tHe6}
and \ref{tHe8}.
\begin{table}[!b]
\centering
\caption{The best-fit parameters of the nuclear densities (\ref{SF})-(\ref{GO})
obtained from the present GMSM analysis of the combined set of the elastic
$^{6}$He+$p$ scattering data measured at low \cite{Neu02} and high momentum
transfer \cite{Kis11}. The relative $\chi^2_{\rm r}$ is per data point, and the
errors are statistical. The neutron radius $R_{\rm n}$ is determined with the
assumption that the proton and core radii are the same, i.e., $R_{\rm p}=R_{\rm c}$.
The COSMA density (\ref{COSMA}) is parametrized by the same functional as that of the GO
density model, with the corresponding parameters given in round brackets.}
\label{tHe6}\vspace{0.5 cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
density & $A_{\rm L}$ & $A_{\rm H}$ & \multicolumn{2}{c|}{density parameters} &
$R_{\rm m}$ & $\chi^2_{\rm r}$ & $R_{\rm n}$ & $R_{\rm n}-R_{\rm p}$\\ \cline{4-5}
& & & (fm) & (fm) & (fm) & & (fm) & (fm) \\ \hline
GG &1.04(3)&1.09(4)&$R_{\rm c}$=1.96(4) &$R_{\rm h}$=3.30(12) &2.48(6)&1.41 & 2.71(7) & 0.75(8)\\ \hline
GO &1.05(2)&1.04(2)&$R_{\rm c}$=1.90(3) &$R_{\rm h}$=3.26(13) &2.44(5)& 0.88 & 2.67(8) & 0.77(9) \\ \hline
GH &1.04(3)&1.09(3)&$R_{\rm m}$=2.45(4)&$\alpha$=0.12(2) &2.45(4)&1.39 & & \\ \hline
SF &1.05(4)&1.09(3)&$R_0$=1.00(8) &$a$=0.61(2) &2.40(5)&1.55 & &\\ \hline
WS &1.04(2)&1.07(3)&$R$=0.99(5) &$a$=0.63(2) &2.45(6)&1.00 & &\\ \hline
COSMA &1.00&1.00&$a$=1.55 ($R_{\rm c}$=1.90)&$b$=2.12 ($R_{\rm h}$=3.35) &2.48&1.49 & 2.72 &0.82\\ \hline
\end{tabular}
\end{table}
\begin{table}[!b]
\centering
\caption{The same as table \ref{tHe6} but for the $^{8}$He+$p$ system} \label{tHe8}
\vspace{0.5 cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
density & $A_{\rm L}$ & $A_{\rm H}$ & \multicolumn{2}{c|}{density parameters} &
$R_{\rm m}$ & $\chi^2_{\rm r}$ & $R_{\rm n}$ & $R_{\rm n}-R_{\rm p}$\\ \cline{4-5}
& & & (fm) & (fm) & (fm) & & (fm) & (fm) \\ \hline
GG &1.00(2)&0.99(6)&$R_{\rm c}$=1.81(6)&$R_{\rm h}$=3.12(13)&2.55(8)&1.35 &2.75(10) & 0.94(12)\\ \hline
GO &1.03(2)&0.95(7)&$R_{\rm c}$=1.69(6)&$R_{\rm h}$=2.99(14)&2.43(9)&1.50 &2.63(11) & 0.94(12)\\ \hline
GH &1.01(2)&0.98(6)&$R_{\rm m}$=2.50(5)&$\alpha$=0.13(4) &2.50(5)&1.35 & & \\ \hline
SF &1.01(2)&0.96(5)&$R_0$=0.66(4) &a=0.66(2) &2.51(7)&1.16 & & \\ \hline
WS &1.01(2)&0.97(5)&$R$=0.80(8) &a=0.66(2) &2.51(5)&1.15 & & \\ \hline
COSMA &1.00&1.00&$a$=1.38 ($R_{\rm c}$=1.69) &$b$=1.99 ($R_{\rm h}$=3.15) &2.53&2.15 & 2.75 &1.06\\ \hline
\end{tabular}
\end{table}
The elastic $^{6,8}$He+$p$\ scattering cross sections given by the GMSM calculation
using the best-fit parameters (see Tables~\ref{tHe6} and \ref{tHe8}) of the
nuclear matter densities are compared with the data in Figs.~\ref{He6} and \ref{He8}.
Focusing on the new data points measured at high momentum transfer, one can see
that the first diffraction maximum in the elastic scattering cross section is now
fully covered by the data and it turned out that the combined data set allowed
for an improved determination of the parameters of the density distribution.
The data and the calculated cross sections divided by the Rutherford
cross section are presented in Figs.~\ref{He6} and \ref{He8}, and one can see
that the elastic $^{6,8}$He+$p$\ scattering at the considered energies is strongly dominated
by the nuclear scattering, and that allows the fine-tuning of the density inputs
for the GMSM calculation by the $\chi^2$-fit procedure (\ref{chi2}).
\begin{figure}[!t]
\includegraphics[angle=0,scale=0.37]{He6_Coulomb.eps}
\caption{(Color online) Elastic $^6$He+$p$\ scattering cross sections (divided by Rutherford cross section)
obtained with the GMSM calculation (solid curve) using the best-fit parameters of the
GG (a) and GO (b) models of the nuclear density, in comparison
with the data measured by Neumaier {\it et al.} \cite{Neu02} and by Kiselev {\it et al.}
\cite{Kis11} at low and high momentum transfers, respectively.
The dash-dotted curves were obtained with the best-fit parameters of the GG
and GO density models taken from Ref.~\cite{Al02}, and the dashed curves were
obtained with the $^6$He density given by the cluster-orbital shell-model
approximation (COSMA) \cite{Kor97}.}
\label{He6}
\end{figure}
\begin{figure}[!t]
\hspace{-1cm}
\includegraphics[angle=0,scale=0.38]{He8_Coulomb.eps}
\caption{(Color online) The same as Fig.~\ref{He6} but for the $^8$He+$p$\ scattering.} \label{He8}
\end{figure}
From a comparison of the best-fit matter radii $R_{\rm m}$ obtained in the present
work for $^{6,8}$He with the results of the earlier GMSM analysis \cite{Al02}
based on the low-momentum data only \cite{Neu02}, we found that the newly obtained
$R_{\rm m}$ values are slightly larger than those reported in Ref.~\cite{Al02}.
In terms of the $\chi^2$-fit, the accuracy of the present GMSM analysis is
about the same as that of Ref.~\cite{Al02}. The nuclear radii obtained in
the present work are also in sound agreement with the empirical
matter radii of $^{6,8}$He discussed recently by Tanihata \emph{et al.} in
Ref.~\cite{Tan13}. The GG and GO density models treat the core and halo parts
explicitly, and we could determine from our GMSM analysis the neutron skin
of 0.76(10) and 0.94(13) fm for $^{6}$He and $^{8}$He, respectively.
Such neutron skins are much thicker than, e.g., the neutron skin
of around $0.2$--$0.3$~fm established for the heavy $^{208}$Pb nucleus with
a large neutron excess, and are clearly associated with the halo structure
of the $^{6}$He and $^{8}$He isotopes.
It is noteworthy that both the GG and GO density models give the best-fit core
radius for $^{6}$He slightly larger than that for $^{8}$He, and that makes the
difference in the observed neutron skin because the neutron radii are about the
same for the two nuclei. Such an effect was also found in the earlier GMSM
analysis of the elastic $^{6,8}$He+$p$\ scattering data taken at low-momentum transfer
\cite{Al02}, and it might be due to different polarizing contributions of the
valence neutrons to the motion of the $\alpha$-core. Quite complementary
to this discussion are the high-precision laser spectroscopy data
that yield a charge radius of 2.068(11) fm for $^{6}$He, which is
significantly larger than the charge radius of 1.93(3) fm obtained
for $^{8}$He \cite{Mue07}. After the standard correction for the finite size
of the proton \cite{Tan13}, we can obtain the proton radii of 1.925(12) and 1.81(3)
fm for $^{6}$He and $^{8}$He, respectively, from the laser spectroscopy data.
Such proton radii are in a good agreement with the core radii of $^{6}$He
and $^{8}$He given by the present GMSM analysis (see $R_{\rm c}$ values
in Tables~\ref{tHe6} and \ref{tHe8}).
\begin{figure}[!t]
\centering
\includegraphics[angle=0,scale=0.36]{He6_dens.eps}
\caption{(Color online) The average nuclear matter density distribution of $^6$He (upper panel) deduced from the
GMSM fit to the data using the present SF, GH, WS, GG and GO parametrizations,
with the uncertainty band determined by the statistical errors of the best-fit
parameters of the density models. The same density is plotted in logarithmic scale
in the lower panel to illustrate the uncertainty at large radii.} \label{He6_dens}
\end{figure}
To compare with the available results for the microscopic nuclear densities
predicted by the cluster-orbital shell-model approximation (COSMA) \cite{Kor97},
we have also used the COSMA densities as input for the present GMSM calculation,
and the results are presented in Tables~\ref{tHe6} and \ref{tHe8} and Figs.~\ref{He6}
and \ref{He8}. One can see that the COSMA densities give a good description
of the $^6$He+$p$\ data, but fail to account for the data points taken at angles
beyond the diffractive minimum for the $^8$He+$p$\ system. Because the newly measured
data points at large angles allowed us to improve the density parameters of the
density models, these data are also helpful in fine-tuning the existing
parameters of the COSMA densities \cite{Kor97}. Namely, from the explicit
expression of the COSMA density
\begin{equation}
\rho_{\rm m}(r)= N_{\rm core}\frac{\exp(-r^2/a^2)}{\pi^{3/2}a^3}+N_{\rm halo}
\frac{2\exp(-r^2/b^2)}{3\pi^{3/2}b^5}r^2, \label{COSMA}
\end{equation}
we find immediately from the best-fit parameters of the GO model in Tables~\ref{tHe6}
and \ref{tHe8} that the improved $a$ and $b$ parameters of COSMA are 1.55(2) and
2.06(8) fm, respectively, for $^6$He, and 1.38(5) and 1.89(9) fm for $^8$He.
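This conversion follows from equating, term by term, the core and halo parts of the GO
density (\ref{GO}) with those of Eq.~(\ref{COSMA}), which gives $a=R_{\rm c}\sqrt{2/3}$
and $b=R_{\rm h}\sqrt{2/5}$; a short Python check (our illustration) reproduces the
quoted values:
\begin{verbatim}
import numpy as np

def cosma_from_go(Rc, Rh):
    # term-by-term matching of the GO density to the COSMA form
    return Rc*np.sqrt(2/3), Rh*np.sqrt(2/5)

print(cosma_from_go(1.90, 3.26))   # 6He: a ~ 1.55 fm, b ~ 2.06 fm
print(cosma_from_go(1.69, 2.99))   # 8He: a ~ 1.38 fm, b ~ 1.89 fm
\end{verbatim}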
\begin{figure}[hbt]
\centering
\includegraphics[angle=0,scale=0.36]{He8_dens.eps}
\caption{(Color online) The same as Fig.~\ref{He6_dens} but for the matter distribution
of $^8$He.} \label{He8_dens}
\end{figure}
With the new parameters of the considered density models given in Tables
\ref{tHe6} and \ref{tHe8}, it is of interest to construct the average
radial shape of the nuclear matter density distributions for the
$^6$He and $^8$He isotopes. The radial profiles of the nuclear matter densities
of the $^6$He and $^8$He isotopes based on the best-fit parameters of 5 density
models are plotted in Figs.~\ref{He6_dens} and \ref{He8_dens}, respectively.
The errors in Tables \ref{tHe6} and \ref{tHe8} are statistical errors
coming from the fitting and the data-point normalization. The total errors should,
in addition, include the contributions from the uncertainties in the $pN$ scattering
amplitudes, the $t$-scale calibration, and the model uncertainty \cite{Al02}. Thus, the final
averaged nuclear matter radii $R_{\rm m}$ of the $^6$He and $^8$He
isotopes obtained from the consistent GMSM analysis using the
five phenomenological parametrizations of nuclear matter density are
\begin{center}
$R_{\rm m}=2.44 \pm 0.07$ fm for $^6$He, \\
$R_{\rm m}=2.50 \pm 0.08$ fm for $^8$He.
\end{center}
In connection with the realistic core (or proton) radii of the $^{6}$He
and $^{8}$He nuclei discussed above, we note further the results of the Glauber
few-body calculation \cite{Al98} of the elastic $^{6,8}$He+$p$\ scattering, which gives
a very nice description of the data measured at low momentum transfer using a
microscopic few-body model with the nuclear matter radii of the $^6$He and $^8$He
nuclei of $R_{\rm m}\approx 2.50$ and
2.60 fm, respectively. These values are somewhat larger than those obtained in
Ref.~\cite{Al02} (based on the data measured at low momentum transfer) and in the
present work (based on the complete data set extended to high momentum transfer).
A likely reason for such a disagreement is the assumption of a rigid $\alpha$-core
with a fixed radius $R_{\rm c}\approx 1.49$ fm in the few-body calculation, an assumption
not supported by the high-precision laser spectroscopy data \cite{Mue07}
that give the proton radii of 1.925(12) and 1.81(3) fm for $^{6}$He and $^{8}$He,
respectively. Such an effect is expected to be due to the different polarizing contributions
of the valence neutrons to the motion of the $\alpha$-core in these two nuclei
\cite{Tan13}. It is, therefore, of high interest to have the few-body calculation \cite{Al98}
redone using the quoted experimental values for the $\alpha$-core radius.
\subsection{Sensitivity of the data to the core and halo parts of the
matter distribution, and to the spin-orbit term}
\begin{figure}[t]
\centering
\includegraphics[angle=0,scale=0.36]{He8_Sens_GG.eps}
\caption{(Color online) The sensitivity of the elastic $^8$He+$p$\ data to the core (a)
and halo (b) parts of the GG density of $^{8}$He being used in
the GMSM calculation. See text for more details.} \label{Sens}
\end{figure}
Taking into account the new data taken at high momentum transfer, it is natural
to expect that these data points are more sensitive to the inner part of the density
distribution compared to the sensitivity of the data taken at low momentum
transfer only. We have made, therefore, some comparisons of the GMSM results
obtained for the $^8$He+$p$\ case with the halo or core radius of the GG or GO density
model fixed, and the other radius (core or halo) being changed up and down by about
0.1 fm from the best-fit values given in Table~\ref{tHe8}. From the GMSM results shown
in the upper panel of Fig.~\ref{Sens} one can see that the data measured at high
momentum transfer are indeed sensitive to the core part of the density distribution
of $^{8}$He. A similar variation of the halo radius resulted in a much smaller
change in the calculated elastic scattering cross section that is hardly
visible in the logarithmic scale (lower panel of Fig.~\ref{Sens}). Similar
results were also found for the $^6$He+$p$\ case, and these results confirm that
the elastic scattering data measured at high momentum transfer are very valuable
for a precise determination of the core matter density of a halo nucleus. Note
that the GG and GO density parametrizations are defined under the assumption that
both the $^6$He and $^8$He nuclei have an $\alpha$-like core. The present GMSM
analysis using the GG and GO density parametrizations has reached a good fit
of the data (see $\chi^2$ values in Tables~\ref{tHe6} and \ref{tHe8}) and
we obtained the following average core and halo radii of the two He isotopes
\begin{center}
$R_{\rm c}=1.93 \pm 0.06$ fm, $R_{\rm h}=3.28 \pm 0.13$ fm for $^6$He, \\
$R_{\rm c}=1.75 \pm 0.08$ fm, $R_{\rm h}=3.06 \pm 0.14$ fm for $^8$He.
\end{center}
As discussed above, the $\alpha$-core radius of $^{6}$He is slightly larger
than that of $^{8}$He, and the $R_{\rm c}$ values are quite close to the proton
radii of $^{6}$He and $^{8}$He deduced from the laser spectroscopy data.
This is a clear indication of the different polarizing contributions of the
valence neutrons to the motion of the $\alpha$-core in the $^{6,8}$He nuclei.
\begin{figure}[bht]
\centering
\includegraphics[angle=0,scale=0.355]{He68_SOComp_GO.eps}
\caption{(Color online) Results of the GMSM calculation of the elastic $^{6,8}$He+$p$\ scattering
using the GO density models, with or without the inclusion of the
spin-orbit term.} \label{SO_com}
\end{figure}
We note further that the inclusion of the spin-orbit amplitude into the GMSM
calculation is necessary for the analysis of the elastic data measured
at large scattering angles or high momentum transfer. The GMSM results
plotted in Fig.~\ref{SO_com} show clearly the important contribution
of the s/o term around the first diffractive minimum as discussed earlier
by Alkhazov \cite{Al78}. The full GMSM calculation with both the central
and s/o amplitudes included also resulted in slightly larger matter
radii for the $^{6,8}$He nuclei, which are closer to the empirical
values \cite{Tan13}.
\subsection{Nuclear geometry for the 2-neutron halo in the $^{6}$He nucleus}
In this section, we apply our GMSM results to the 2-neutron halo geometry
like that used by Tanihata {\it et al.} in Ref.~\cite{Tan13} for $^{6}$He.
In this model, the core is assumed to be a free core nucleus that
moves around the nuclear center of mass, like the 2-neutron halo does.
As a result, the size of the effective core is bigger than that of the free
$\alpha$-particle, and the extended matter distribution is mainly
determined by the location of the 2-neutron halo. The geometrical model
of the Borromean $^{6}$He nucleus is shown in Fig.~\ref{geometry},
where the nuclear radii under discussion are defined.
\begin{figure}[!t]
\centering
\includegraphics[angle=0,scale=0.8]{geometry.eps}
\caption{The nuclear geometry for a 2-neutron halo nucleus. See text
for more details.} \label{geometry}
\end{figure}
\begin{table}[bht]
\caption{The radii (in fm) of the geometrical model \cite{Tan13} for the 2-neutron
halo nucleus $^{6}$He in comparison with the results of the present work.}
\vspace{0.5 cm} \label{radii}
\begin{tabular}{|c | c | c | c| } \hline
$^6$He & Definition from Ref.~\cite{Tan13} & Present work & Ref.~\cite{Tan13} \\ \hline
$R_{\rm m}$ &$R_{\rm m}$ & 2.44(7) & 2.43(3) \\ \hline
$R_{\rm p }$ &$R_{\rm c}$ & 1.93(6) & 1.912(18) \\ \hline
$R_{\rm h}$ &$R_{\rm h}$ & 3.28(13) & 3.37(11) \\ \hline
$R_{\rm n}$ & $R_{\rm n}$ & 2.69(9) & 2.65(4) \\ \hline
$R_{\rm n}-R_{\rm p}$ & $R_{\rm n}-R_{\rm p}$ & 0.76(10) & 0.808(47) \\ \hline
$\rho_{\rm c}$ &$(R_{\rm c}^2-r_{\rm {sm}}^2)^{1/2}$ & 1.26(7) & \\ \hline
$R_{\rm 2n}$ &$A_{\rm c}/A_{\rm h}\rho_{\rm c}$ & 2.52(13) & 2.52(5) \\ \hline
$R_{\rm c-2n}$ &$\rho_{\rm c}+R_{\rm 2n}$ & 3.79(14)& 3.84(6) \\ \hline
$R_{\rm di-n}$ &$(R_{\rm h}^2-R_{\rm 2n}^2)^{1/2}$ & 2.09(25) & \\ \hline
$R_{\rm n-n}$ &$2R_{\rm di-n}$ & 4.19(49)& 3.93(25) \\ \hline
$\bm R_{\rm n1}.\bm R_{\rm n2}$ & $(A_{\rm c}^2\rho_{\rm c}^2-R_{\rm n-n}^2)/4$&1.99(119)& 2.70(97) \\ \hline
\end{tabular}
\end{table}
Because the core is an $\alpha$-cluster, the matter, proton and neutron radii
of the core nucleus can be assumed to be equal,
$r_{\rm {sm}}=r_{\rm {sp}}=r_{\rm {sn}}=1.46$ fm \cite{Tan13}.
Using these values and $R_{\rm m}$, $R_{\rm c}$, $R_{\rm h}$, $R_{\rm n}$ radii
given by the present GMSM analysis with the GG and GO density models,
the radii of the geometry shown in Fig.~\ref{geometry} can be determined
\cite{Tan13} as follows (a numerical sketch is given after the list).
\begin{itemize}
\item The distance $\rho_{\rm c}$ between the nuclear center of mass and
the core center is
\begin{equation}
\rho_{\rm c}=\sqrt{R_{\rm c}^2-r_{\rm {sm}}^2}. \label{geob}
\end{equation}
\end{itemize}
\begin{itemize}
\item The distance $R_{\rm 2n}$ between the nuclear center of mass and the midpoint
of the line connecting the two halo neutrons is determined from the balancing condition
\begin{equation}
A_{\rm c}\rho_{\rm c}=A_{\rm h}R_{\rm 2n}, \ \mbox{where}\ A_{\rm c}=4,\ A_{\rm h}=2.
\end{equation}
\end{itemize}
\begin{itemize}
\item The distance $R_{\rm c-2n}$ from the core center to the two halo neutrons is
\begin{equation}
R_{\rm c-2n}=\rho_{\rm c}+R_{\rm 2n}.
\end{equation}
\end{itemize}
\begin{itemize}
\item The distance $R_{\rm n-n}$ between the two halo neutrons is given by
\begin{equation}
R_{\rm n-n}=2R_{\rm di-n},\ \mbox{where}\ R_{\rm h}^2=R_{\rm 2n}^2+R_{\rm di-n}^2.
\end{equation}
\end{itemize}
\begin{itemize}
\item The radial correlation of the two halo neutrons is determined as
\begin{equation}
\bm R_{\rm n1}.\bm R_{\rm n2}=(A_{\rm c}^2\rho_{\rm c}^2-R_{\rm n-n}^2)/4. \label{geoe}
\end{equation}
\end{itemize}
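For convenience, the chain of relations (\ref{geob})--(\ref{geoe}) can be evaluated in a
few lines; the Python sketch below (our illustration) reproduces the ``Present work''
column of Table~\ref{radii} from the averaged GG/GO radii of $^6$He:
\begin{verbatim}
import numpy as np

def halo_geometry(Rc, Rh, r_sm=1.46, Ac=4, Ah=2):
    rho_c = np.sqrt(Rc**2 - r_sm**2)       # core offset from the c.m.
    R_2n  = Ac/Ah*rho_c                    # balancing condition
    R_c2n = rho_c + R_2n                   # core to 2n midpoint
    R_din = np.sqrt(Rh**2 - R_2n**2)
    R_nn  = 2*R_din                        # n-n distance
    corr  = (Ac**2*rho_c**2 - R_nn**2)/4   # Rn1.Rn2
    return rho_c, R_2n, R_c2n, R_din, R_nn, corr

# averaged GG/GO radii of 6He: Rc = 1.93 fm, Rh = 3.28 fm
for name, v in zip(("rho_c", "R_2n", "R_c-2n", "R_di-n", "R_n-n",
                    "Rn1.Rn2"), halo_geometry(1.93, 3.28)):
    print(f"{name:8s} = {v:.2f} fm")
\end{verbatim}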
The results obtained for the considered geometrical model of $^{6}$He are
summarized in Table~\ref{radii}, and one can see a good agreement of our results
with those determined in Ref.~\cite{Tan13}.
Despite its simplicity, the considered geometrical model gives a good illustration
of the core movement inside a 2-neutron halo nucleus, which can be estimated by
the difference between the core matter radius and that of the free core nucleus.
For the $\alpha$-core this is just the difference between the proton radius
of the free $\alpha$-particle and that of the considered halo nucleus because
protons are distributed in the $\alpha$-core only. In a similar manner, one
might suggest the geometry for $^8$He, but in this case 4 halo
neutrons are distributed uniformly around the $\alpha$-core, and the polarizing
contributions of the valence neutrons to the motion of the $\alpha$-core should
be weaker than that found in the case of $^6$He. A direct consequence is a smaller core
radius of $^8$He compared to that of $^6$He, as found in the present GMSM analysis. It is noted that the $^8$He geometry can also be considered with 2 halo neutrons \cite{Chu05}; in this case, the same procedure as discussed above is applied to determine the nuclear radii, but the compact core is now $^6$He.
\section{Summary}
The detailed GMSM analysis of the latest experimental data of the elastic $^{6,8}$He+$p$\
scattering at 717 and 674 MeV/u has been performed. Based on the new data points
measured up to the first diffractive minimum, the nuclear radii as well as the
radial shape of the matter distribution of these helium halo nuclei have been
determined, and the results are in sound agreement with the
recent systematics of these quantities given in Ref.~\cite{Tan13}.
The sensitivity of the new data points taken at large momentum transfer to the
core radius of the $^{6,8}$He nuclei as well as to the spin-orbit
term in the GMSM calculation was demonstrated. The combined data set
taken at both low and high momentum transfer were used to fine-tune the parameters
of the nuclear densities of $^{6,8}$He based on the cluster-orbital shell-model
approximation \cite{Kor97}.
The core and halo radii obtained from the present GMSM analysis were used in a
geometrical model suggested for the Borromean nucleus $^{6}$He \cite{Tan13} to
determine various size parameters of this nucleus, and the results agree with
those obtained in Ref.~\cite{Tan13}. The enhancement of the $\alpha$-core radius
of $^{6}$He compared to that of $^{8}$He found in the present GMSM analysis can be
qualitatively understood in that simple geometrical picture.
\section*{Acknowledgments}
The present research has been supported, in part, by the National Foundation
for Science and Technology Development (NAFOSTED project No.103.04-2014.76)
and by the Ministry of Science and Technology of Vietnam (project No.105/2013/HD$-$NDT).
We are grateful to Prof. G.D. Alkhazov and authors of Ref. \cite{Al02} for providing us
with the earlier version of the GMSM code, based on which we have developed
the present version that includes the spin-orbit amplitude.
\section{Introduction}~\label{Section_I}
The growth of Internet-of-Things (IoT) technologies is anticipated to play a significant role in the future of societies by facilitating the connectivity between sensor devices and Internet cloud services. A report from Ericsson estimates over 5 billion cellular IoT device connections by 2025~\cite{mobility_report}. For wireless IoT devices, this increase would further congest spectrum resources and increase interference. This is already evident in the license-free industrial, scientific, and medical (ISM) band~\cite{8403749}, since this band is available to the public and does not require paid licensing.
Radio frame reception is typically based on a matched filter receiver architecture, where the signal is compared, or \emph{matched}, to a known template. This method is proven to be optimal under additive white Gaussian noise (AWGN) conditions~\cite{simon2005digital}. In centrally controlled networks, the base station, e.g., the gNB in 5G, orchestrates the access to the spectral resources by employing a multiple-access technique such as time-division multiple access (TDMA), frequency-division multiple access (FDMA), code-division multiple access (CDMA), or orthogonal frequency-division multiple access (OFDMA). These access techniques are required to alleviate co-channel interference. Nevertheless, such techniques require significant overhead, adding complexity to the protocol stack. Thus, typical license-free IoT networks employ random access techniques to simplify the protocol stack, because a complex access technique would not be efficient in the first place given the interference originating from other co-existing systems in the same band.
Efficient signal detection is crucial for low-power wide-area network (LPWAN) technologies, which are receiving growing attention due to the increasing use of IoT applications. LPWANs can achieve long-distance communication while maintaining low power consumption, albeit at the cost of a reduced bit rate. In a license-free spectrum, LPWAN network performance is typically interference-limited in urban environments, while it becomes noise-limited in rural and remote locations. A prominent LPWAN technology is LoRaWAN, adopted by the LoRa Alliance~\cite{lorawan}. LoRaWAN typically uses an unslotted ALOHA-based protocol, allowing multiple IoT devices to transmit without coordination. As such, the transmitted signals are prone to packet collisions, which significantly reduce the performance~\cite{RN590,9239466,9395074,RN308,8581011}. LoRaWAN utilizes the LoRa modulation technique, which is based on chirp spread spectrum (CSS) modulation, spreading the signal energy over a wider bandwidth to combat narrowband interference~\cite{RN308}. Furthermore, the time-spreading of the transmission can be controlled using a parameter called the \emph{spreading factor} (SF), which increases the energy at the receiver without the need to transmit at a higher RF power. Aside from potential narrowband interference, LoRa-to-LoRa induced interference significantly degrades the performance, especially when using the same SF.
The rapid evolution of neural networks (NN), particularly deep learning (DL) methods, has shown great potential for signal detection~\cite{RN525,RN318,RN558,RN565}. The main appeal of DL signal detection is its strength against non-linearities~\cite{RN30}. Notably, the Convolutional Neural Network (CNN) has been utilized to classify stochastic signals, such as in image classification~\cite{GU2018354}, a task that is not practically possible using deterministic methods such as the matched filter.
This paper presents a new framework, \emph{HybNet}, that switches between matched filter detection and a proposed deep learning detector, where the switching is performed based on the interference level. Our framework harnesses the benefits of both the matched filter's ideal performance under AWGN noise-limited scenarios and the improved performance of the proposed DL detector under LoRa-to-LoRa interference. Additionally, we explore three different input data modalities for the DL-based detection: (i) the time domain (I/Q samples), (ii) the time-frequency domain (spectrogram), and (iii) the frequency domain (spectrum). Results show that the proposed DL detectors outperform traditional coherent and noncoherent detection in non-Gaussian interference scenarios. In order to evaluate the proposed framework at different interference/noise mixtures, we vary the interference-to-noise ratio (INR) and obtain the corresponding bit-error rate (BER). Accordingly, the overall BER performance results indicate that the proposed DL-based detectors can significantly improve the detection of LoRa symbols under high interference scenarios. The contribution of this work is summarized as follows,
\begin{itemize}
\item It develops a novel hybrid framework comprising a CNN interference detector that switches between two different pathways: (i) a coherent detector and (ii) a CNN signal detector.
\item It presents a method for creating a dataset of LoRa symbols in three modalities: the time domain, time-frequency domain, and frequency domain representations with AWGN impairment and LoRa interference for training CNN networks for detection.
\item It provides a comparison framework for evaluating symbol detection under different interference scenarios and input data modalities.
\end{itemize}
The rest of this paper is organized as follows. A literature review is presented in Section~\ref{Section_II}. An overview of the systems model which utilizes LoRa modulation along with LoRa emulation and dataset creation is covered in Section~\ref{Section_III}. Section~\ref{Section_IV} discusses the DL detection architectures utilized in this paper. Section~\ref{Section_V} discusses the results obtained from the experiments. Finally, Section~\ref{Section_VI} concludes the paper.
\section{Background and Related work}~\label{Section_II}
Interference is a significant issue that hinders efficient packet reception, particularly in the license-free spectrum bands. In such bands, different users and systems access the spectrum without resource coordination. Thus, extensive literature is available on the Cognitive Radio (CR) concept. CR senses spectrum occupancy and devises a spectrum access plan to mitigate co-channel interference~\cite{RN232}. However, the efficiency of CR is limited by the predictability of the spectrum; as such, random-access networks severely reduce this predictability. Also, CR requires additional electric power for spectrum sensing, which is not ideal for battery-operated devices~\cite{RN564}. In all cases, CR has not been adopted in practical IoT communication systems due to these limitations, among others. Another technique to increase the capacity of a system with interference from another signal is to utilize successive interference cancellation (SIC), initially proposed in~\cite{RN597}. In SIC, signals coming from different users are successively decoded; each decoded signal is subtracted from the received signal, and the cycle is repeated. SIC is utilized in non-orthogonal multiple access (NOMA) for the next generation of wireless communications. However, the channel gains of all users should be known by the base station (BS) for correct decoding of the superimposed signals, which entails additional overhead~\cite{RN598}.
In the LoRa modulation method, transmissions with different SFs demonstrate a certain level of orthogonality, as the cross-energy between two non-synchronized LoRa packets is reduced~\cite{RN66}. However, collisions of inter-SF signals would in practice still cause packet loss, as shown in~\cite{RN66,RN308,RN332,unknown}. To add to the problem, the collision of LoRa signals with the same SF is even more severe, thus requiring a higher signal-to-interference-plus-noise ratio (SINR) for successful detection~\cite{RN308}. To address same-SF interference, the authors in~\cite{RN343} propose allocating different SFs. However, while the proposed method improves the overall throughput performance, LoRa interference still poses a challenge. Recent research has also shown how coherent detection enhances the performance in same-SF interference scenarios, where theoretical approximations and Monte Carlo simulations are shown in~\cite{unknown}. However, that work concentrates on same-SF interference, as inter-SF interference has a broader spectrum band and lower power spectral density after dechirping~\cite{unknown}. Additional references investigating the BER performance of traditional LoRa receivers under interference can be found in~\cite{8903531} and~\cite{8581011}. Other research work has also explored theoretical LoRa BER performance for conventional receivers under AWGN and fading channel conditions in~\cite{lora_closed_form}.
Machine learning (ML) methods, with a focus on DL, have been recently applied in the research field of wireless communications, including works on automatic modulation recognition (AMR)~\cite{RN232,RN9}, occupancy detection in the license-free band~\cite{9205874,RN1}, optimization of interference management algorithms~\cite{RN555}, among many more applications, which are further reviewed in~\cite{RN30} for the physical layer of communications, and in~\cite{RN541} for higher layers. These works show that DL is a promising tool in the research area of wireless communications. Our previous work in~\cite{RN525} explored the use of DL-based detection techniques with CNN models for the LoRa modulation scheme under AWGN, time offset, and frequency offset conditions. Another research work that utilizes DL sequence detector networks can be found in~\cite{RN318}, constructing a sliding bidirectional recurrent neural network (SBRNN). The SBRNN can take in information from previous symbols, unlike symbol-by-symbol detectors, to combat inter-symbol interference (ISI) for non-LoRa signals. Another advantage of DL-based detectors is that they do not need channel state information (CSI) which is further demonstrated in~\cite{RN558} proposing a detection method based on the generative adversarial network (GAN). The work further develops the DL-based symbol detector presented by~\cite{RN565}, presenting a model-based approach that uses DL to learn the log-likelihoods and performs Viterbi detection. In addition to solid detection performance, the research work also proposes a method to train in real-time and adapt to time-varying channels.
Another research area on utilizing DL in communications systems is to emulate end-to-end physical layer using an autoencoder (AE), called the channel AE~\cite{o2016learning}. The idea is to replace the transmitter with an encoder and the receiver as the decoder, which is realized with DL~\cite{RN520}. The channel AE assumes the complete communication procedure, including detection. Authors in~\cite{o2016learning} present a channel AE with a pre-known theoretical channel model, further optimized to combat common channel impairments, such as AWGN, unknown time and rate of arrival, delay spread, and carrier frequency and phase offset. However, when the communication channel is more complex, the performance is worse than in~\cite{RN566}, which does not assume the channel model in advance.
For non-Gaussian interference, research work showing DL-based channel estimation and signal detection is proposed in~\cite{RN324} for OFDM systems, showing robustness to non-linear channel impairments and interference, and a BER performance comparable to traditional detection methods. In~\cite{RN526}, the authors investigate DL-based detection using a CNN and a fully connected NN on symbols with QPSK and 8PSK modulation schemes that are superimposed by co-channel interference. Authors in~\cite{RN513} show a DL-based approach for detecting MIMO signals with correlated interference using a maximum likelihood detector and a DL network to predict and remove local correlation among the noise in different symbols. Nonetheless, there seems to be no other research work investigating the issue of LoRa same-technology interference using DL-based detection.
\section{IoT Signal Model}~\label{Section_III}
As a prominent application of an IoT signal, we utilize the popular LoRa modulation scheme. This section explains the general structure of a LoRa symbol and outlines the two main traditional detection methods. We then cover the dataset creation methodology as used to train the deep learning networks.
\subsection{LoRa Modulation}
LoRa CSS modulation is based on linear cyclic chirping within a bandwidth $B$. Each chirp encodes one symbol with a duration $T_\mathrm{s}$. The bandwidth in LoRaWAN can take one of the values $B \in \{125,250,500\}$~kHz, and the chirp rate of the symbol is controlled by the SF, where the SF in LoRa has a range of $\text{SF} \in \{7,8,9,10,11,12\}$, as per the LoRaWAN standard. The SF also determines the number of possible values that a symbol can encode, which is given by $M = 2^\mathrm{SF}$. We follow the same notation as in~\cite{9395074} and~\cite{RN525} to describe LoRa signal modulation, where a single symbol is represented as,
\begin{equation}
s_k(t) = \exp\left(j2\pi \int_{0}^{t} \left[(\beta x + \zeta_k)_{\text{mod}_B} - \frac{B}{2}\right] \mathrm{d}x \right) ,
\end{equation}
where $\zeta_k$ is the frequency representing the symbol value as follows,
\begin{equation}
\zeta_k = m_k\Delta_\mathrm{f},
\end{equation}
where $m_k$ is the data symbol value ${m_k\in\{\text{0,1,...,}M-1\}}$, $k$ is the symbol's index, and $\Delta_\mathrm{f}$ represents the frequency step between the shifts. The frequency step in LoRa is selected to be equal to the symbol rate itself $B/M$. $\beta$ represents the chirp rate given by,
\begin{equation}
\beta = \frac{f_{\text{high}} - f_{\text{low}}}{T_\mathrm{s}} = \frac{B}{T_\mathrm{s}},
\end{equation}
where $f_{\text{low}}=f_{\text{c}}-\frac{B}{2}$ and $f_{\text{high}}=f_{\text{c}}+\frac{B}{2}$ are the lower and upper frequency bounds of the chirp, respectively, around the carrier $f_\mathrm{c}$. Thus, a sequence of symbols can be written as,
\begin{equation}
x(t)=\sum_{k=1}^{K} s_k(t-k T_\mathrm{s}) ,
\end{equation}
where $K$ is the total number of symbols contained within a message.
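For concreteness, the following minimal Python sketch (our illustration; the actual
dataset in this work is generated with the MATLAB emulator described below) produces a
discrete-time baseband LoRa symbol sampled at $f_\mathrm{s}=B$ (one sample per chip,
i.e., $M$ samples per symbol) by accumulating the instantaneous frequency defined above:
\begin{verbatim}
import numpy as np

SF = 7
M  = 2**SF          # samples per symbol when fs = B (one sample per chip)

def lora_symbol(m, M=M):
    # instantaneous frequency (beta*t + zeta_k) mod B, shifted to
    # [-B/2, B/2), expressed in cycles/sample at fs = B
    n = np.arange(M)
    f_inst = ((n + m) % M)/M - 0.5
    phase  = 2*np.pi*np.concatenate(([0.0], np.cumsum(f_inst[:-1])))
    return np.exp(1j*phase)

def lora_frame(symbols):
    # concatenation of K symbols, as in the sum above
    return np.concatenate([lora_symbol(m) for m in symbols])
\end{verbatim}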
\subsection{Conventional LoRa detection}
Typical detection of LoRa symbols can be divided into two steps: (i) the symbol is dechirped with the same chirping rate in order to convert the received symbol into a single tone; accordingly, the sequence of symbols manifests as a multiple frequency-shift keying (MFSK) signal; (ii) in the second step, the symbol is detected.
To dechirp the signal, each symbol in the received LoRa waveform is multiplied by an inverted chirp with zero frequency shift. The dechirping symbol train is represented as follows,
\begin{equation}
s^{\star}(t) = \sum_{k=1}^{K}\exp\left(j\pi B(t - k T_\mathrm{s})-j\pi\beta (t - k T_\mathrm{s})^2\right) .
\end{equation}
In an ideal channel, each received LoRa symbol can then be dechirped as follows,
\begin{equation}
z(t) = \sum_{k=1}^{K}\exp\left(j2\pi \zeta_k (t-k T_\mathrm{s})\right),
\end{equation}
which is a sequence of single tones, each tone with frequency offset $\zeta_k$ corresponding to the value of the symbol. This is a typical MFSK signal. Accordingly, we can use conventional MFSK detection methods to detect the symbols. Two main approaches are utilized; (i) noncoherent detection when phase information is not available, and (ii) coherent detection when an actual in-phase and quadrature path are used in the receiver. The traditional detection methods are detailed as follows,
\subsubsection{Noncoherent detection}
For non-coherent detection, after dechirping, the square-law (envelope) detector can be used~\cite{simon2005digital} and then the absolute maximum is taken as follows,
\begin{equation}
\hat{m}_{\text{ncoh}} = \argmax_k\left|\int_{0}^{T_\mathrm{s}} z(t)S_{\mathrm{k}}^{*}(t)\, dt\right|,
\end{equation}
where the dechirped signal is denoted as $z(t)$. $S_\mathrm{k}(t)$ represents the possible dechirped realizations of $z(t)$, where $k$ is an integer representing all symbol possibilities ${k = \{0,1, ... , M-1\}}$. Equivalently, the fast Fourier transform (FFT) can be employed, where the maximum magnitude denotes the symbol estimate $\hat{m}_{\text{ncoh}}$ as follows,
\begin{equation}
\hat{m}_{\text{ncoh}} = \left\lfloor\frac{1}{\delta_\mathrm{f}} \argmax_f|Y(f)|\right\rceil,
\end{equation}
where $Y(f) = \text{FFT}\{z(t)\}$ denotes the FFT of the dechirped FSK signal and $\lfloor.\rceil$ represents the rounding function. This method achieves the same performance as correlating with all possible values, at much lower complexity. However, noncoherent detection comes at the cost of a higher BER compared to coherent detection in AWGN environments.
\subsubsection{Coherent detection}~\label{Coh_subsection}
Optimal detection of a LoRa symbol in AWGN environments can be achieved using coherent detection. Coherent detection is executed by correlating the dechirped signal with all possible frequency shifts; the frequency shift with the largest real-valued output then denotes the symbol value. Matched filtering (MF) of the dechirped MFSK LoRa symbol is expressed as follows,
\begin{equation}
\hat{m}_{\text{coh}} = \argmax_k\operatorname{Re}\left[\int_{0}^{T_\mathrm{s}} z(t)w_{\mathrm{k}}^{*}(t)\, dt\right] ,
\end{equation}
where $w_{\mathrm{k}}(t) = \exp\left(j2\pi k\Delta_{\mathrm{f}} t\right)$ represents a set of harmonics. Like with noncoherent detection, coherent detection can be efficiently implemented as the FFT~\cite{RN516}, as follows,
\begin{equation}
\hat{m}_{\text{coh}} = \left\lfloor\frac{1}{\delta_\mathrm{f}} \argmax_f\operatorname{Re}\left[Y(f)\right]\right\rceil.
\end{equation}
While coherent detection is optimal in AWGN environments, since only the real part is used for detection, phase impairments cause a loss in information~\cite{RN516}.
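Both detectors thus reduce to a single $M$-point FFT per symbol. Reusing the
\texttt{lora\_symbol} generator from the sketch above, a minimal implementation
(our illustration) reads:
\begin{verbatim}
import numpy as np

def detect(r):
    # dechirp with the conjugate base chirp, then take one FFT
    Y = np.fft.fft(r*np.conj(lora_symbol(0)))
    m_ncoh = int(np.argmax(np.abs(Y)))    # square-law (envelope) detector
    m_coh  = int(np.argmax(np.real(Y)))   # coherent detector
    return m_ncoh, m_coh

assert detect(lora_symbol(42)) == (42, 42)   # sanity check, noiseless
\end{verbatim}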
\subsection{Dataset Creation}
To emulate the target LoRa symbols, we use an open-source MATLAB emulator~\cite{9395074,Emulator} which was previously developed by our team. The generated LoRa signal is an I/Q waveform with $\text{SF} = 7$ and $B = 125$~kHz. A randomly generated LoRa message consisting of the symbol sequence vector ${M = \{m_\mathrm{1},m_\mathrm{2},...,m_\mathrm{n}\}}$ is utilized to create the waveforms.
For the training and detection process, a sequence is picked arbitrarily to represent the \emph{target} signal and is assigned a controlled power $p_\mathrm{s}$, while another sequence is picked to represent the \emph{interference}. In addition to the interference, the received target signal is further impaired with a Gaussian noise process having a controlled power. We denote the controlled interference power as $p_\mathrm{I}$ and the noise power as $\sigma^2$. The INR is denoted as $\alpha$, and the SINR is denoted as $\gamma$,
\begin{align}
\alpha = \frac{p_\mathrm{I}}{\sigma^2}, && \gamma = \frac{p_\mathrm{s}}{p_\mathrm{I}+\sigma^2} .
\end{align}
Since the performance is only related to the INR and SINR, we normalize the interference and noise power with respect to the target LoRa signal power, i.e., $p_\mathrm{s}=1$. Accordingly, the stored emulated received waveform is comprised of three parts: (i) the target LoRa signal, (ii) the interference, and (iii) the AWGN noise, as follows,
\begin{equation}
r(t) = \underbrace{ \vphantom{{\sqrt{\frac{\alpha}{\gamma + \alpha\gamma}}}} x(t)}_{\text{Signal}} + \underbrace{\sqrt{\frac{\alpha}{\gamma + \alpha\gamma}} x_\mathrm{I}(t-\tau)}_{\text{Interference}} + \underbrace{\sqrt{\frac{1}{\gamma + \alpha\gamma}} n(t)}_{\text{AWGN}},
\end{equation}
where $x(t)$ is the target LoRa baseband signal, $x_\mathrm{I}(t)$ is the interfering baseband LoRa signal with a time shift of $\tau$, and $n(t) \sim \mathcal{CN}\left(0,1\right)$ is the complex zero-mean AWGN.
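A minimal sketch of this mixing step (our illustration; it assumes the target
\texttt{x} and the time-shifted interferer \texttt{x\_int} are already normalized
to unit power) is:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def received(x, x_int, inr_db, sinr_db):
    alpha = 10**(inr_db/10)                  # INR
    gamma = 10**(sinr_db/10)                 # SINR
    p_int, p_n = alpha/(gamma*(1 + alpha)), 1/(gamma*(1 + alpha))
    noise = np.sqrt(p_n/2)*(rng.standard_normal(x.size)
                            + 1j*rng.standard_normal(x.size))
    return x + np.sqrt(p_int)*x_int + noise  # CN(0, p_n) AWGN added
\end{verbatim}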
The complex baseband signal is cropped into LoRa symbols, where each target symbol is synchronized in time. The cropped symbols are sampled at $f_\mathrm{s} = 125$~kHz. Each training symbol is labeled and can be represented as follows,
\begin{equation}
T = \{(H_\mathrm{1},m_\mathrm{1}),(H_\mathrm{2},m_\mathrm{2}),...,(H_\mathrm{k},m_\mathrm{k})\} ,
\end{equation}
where $H_\mathrm{k}$ is the type of representation of the symbol, either I/Q, STFT, or FFT, depending on the network that is to be trained (one of the three datasets, shown in Fig.~\ref{Fig_Dataset}), and $m$ is the symbol label. Note that this work concentrates on detecting target LoRa symbols with $\text{SF} = 7$. Training to detect symbols with an SF greater than 7 requires that, for every increment of the SF, the size of the training dataset be cumulatively doubled. The training dataset must increase because the number of classes grows with the number of possible symbol values that a symbol can encode, which is equal to $2^\mathrm{SF}$. If the training dataset size is not increased to accommodate the SF change, the BER performance would also decrease. Consequently, the training time would be much longer, and the network complexity would also need to increase due to the increased size of the input data. Additionally, we investigate only a single interfering LoRa signal, where the DL-based detectors are trained with interfering LoRa signals with an SF of 7 to 12. For our DL-based models to detect LoRa symbols with more than one LoRa interferer, the training datasets would need to have LoRa frames with multiple interfering LoRa signals. An illustration of the dataset creation is depicted in Fig.~\ref{Fig_Dataset}. Note that the process of storing each signal to be trained for the three different data modality networks is covered in subsection~\ref{Sec_Modelities}.
\begin{figure*}[tbh!]
\vspace{0.7cm}
{\centering
\includegraphics[width=\linewidth]{dataset_gen.png}
\caption{Dataset generation process for training each CNN. }
\label{Fig_Dataset}}
\footnotesize
\vspace{\baselineskip}
\end{figure*}
\begin{table}
\caption{Notations and Symbols}
\centering
\begin{tabular}{l c c}
\hline\hline
Parameter & Symbol &Value \\
\hline
Spreading factor & SF & 7 \\
Bandwidth & $B$ & 125~kHz \\
Sampling rate & $f_\mathrm{s}$ & 125~kHz \\
Samples per symbol & $N$ & 128 \\
Frequency offset & $\zeta$ & - \\
Chirp slope & $\beta$ & - \\
Symbol time & $T_\mathrm{s}$ & 1.024~ms \\
Noise power spectral density & $N_{\mathrm{o}}$ & -\\
Symbol & $m$ & -\\
Discrete frequency step & $\delta_{\text{f}}$ & 976.56~sym/s \\
Average noise power & $\sigma^2$ & - \\
LoRa symbol & $s_{k}(t)$ & - \\
Dechirping LoRa signal & $s^\star(t)$ & - \\
Dechirped LoRa signal & $z(t)$ & - \\
FSK signal FFT & $R(f)$ & -\\
Non-coherent FSK symbol & $\hat{m}_{\text{n-coh}}$ & -\\
Coherent FSK symbol & $\hat{m}_{\text{coh}}$ & -\\
INR & $\alpha$ & -\\
SINR & $\gamma$ & -\\
Target signal power & $p_\mathrm{s}$ & 1 \\
Interference signal power & $p_\mathrm{I}$ & - \\
Number of training frames &-& 110,000 \\
Number of validation frames &-& 30,000 \\
Labeled Lora dataset & $T$ & - \\
STFT window type & - & Hamming \\
Points per FFT window for the STFT & $W$ & 64 \\
STFT window overlap & $L$ & 63 \\
Symbol STFT & $X_\mathrm{STFT}$ & - \\
I/Q modality network input & $R_\mathrm{k}$ & - \\
Time-frequency modality network input & $X_\mathrm{k}$ & - \\
Spectrum modality network input & $Y_\mathrm{k}$ & - \\
\hline
\end{tabular}
\label{Table_LoRa}
\end{table}
\section{Deep Learning Detection}~\label{Section_IV}
The elements that make up the CNNs used in this paper are as follows: (i) convolutional layers interlaced with (ii) max-pooling layers and (iii) batch normalization layers. The convolutional layer performs the convolution operation on the input with a kernel. The window of the kernel slides across the input data with a unity stride in the proposed networks. The output is then passed through an activation function, where the Rectified Linear Unit (ReLU) is utilized. Note that the ReLU function is linear for positive inputs and gives zero for negative inputs. In our work, each convolutional layer is followed by a batch normalization layer, which normalizes the mean and variance of the convolutional layer output and thereby speeds up training. A max-pooling layer follows the final batch normalization layer in the network, down-sampling the data by taking the maximum value in each pooling window. The output is then passed into a fully connected layer that uses a Softmax activation function, which outputs an $l$-length vector of scores summing to 1, where $l$ is the number of classes. Finally, a classification layer assigns the classes according to the probabilities.
\subsection{Data Modalities}\label{Sec_Modelities}
In this section, we explain the three different data modalities that could be used for detecting LoRa signals:
\subsubsection{I/Q Modality}
The first model is constructed with the time-domain in-phase and quadrature (I/Q) samples. The model takes in the complex input signal represented as the real part,
\begin{equation}
R_{k}^\mathrm{I} = \operatorname{Re}[r_\mathrm{k}(1),r_\mathrm{k}(2),...,r_\mathrm{k}(N)] ,
\end{equation}
and the imaginary part,
\begin{equation}
R_{k}^\mathrm{Q} = \operatorname{Im}[r_\mathrm{k}(1),r_\mathrm{k}(2),...,r_\mathrm{k}(N)] ,
\end{equation}
where for each k-th symbol there are $N$ temporal samples. The samples are then arranged into two 1D vectors to be used in the DL network as follows,
\begin{equation}
R_{k} = \left[ \begin{array}{cc}
R_\mathrm{k}^\mathrm{I} \\
R_\mathrm{k}^\mathrm{Q} \end{array} \right] .
\end{equation}
\subsubsection{Time-Frequency Modality}
In the second modality, we convert the time domain samples into a spectrogram using short-time Fourier transform (STFT). STFT works by taking segments, \emph{strides}, of the time domain signal and converting each one using FFT. After that, the FFT vectors are combined in a 2D matrix representing the spectral change across time. In such representation, a linear chirp, for example, will appear as a straight line. The STFT matrix is expressed as follows,
\begin{equation}
X_\mathrm{STFT}(\omega,p)=\sum_{n=0}^{N-1} r_\mathrm{k}(n)g(n-pL)e^{-j\omega n} ,
\end{equation}
where $X_\mathrm{STFT}(\omega,p) \in \mathbb{C}^{W \times \frac{N - L}{W - L}}$. ${r_\mathrm{k}(.)}$ denotes the captured sample of a cropped discrete-time LoRa baseband symbol. A Hamming windowing function is denoted by $g(.)$ with a length of ${W}$ and $L$ is the overlap length between each Discrete Fourier Transform (DFT). The parameters chosen for the STFT are outlined in Table~\ref{Table_LoRa}.
The samples are then arranged into two 2D vectors to be used in the DL network as follows,
\begin{equation}
X_\mathrm{k} = \left[ \begin{array}{cc}
\operatorname{Re}[X_\mathrm{STFT}] \\
\operatorname{Im}[X_\mathrm{STFT}] \end{array} \right] .
\end{equation}
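With SciPy, this two-channel spectrogram input can be produced as follows (a sketch
under our assumptions; the framing parameters match $W=64$ and $L=63$, yielding a
$64\times 65$ grid for $N=128$ samples):
\begin{verbatim}
import numpy as np
from scipy.signal import stft

def stft_input(r):
    # Hamming window, W = 64, overlap L = 63, two-sided FFT
    _, _, Z = stft(r, window='hamming', nperseg=64, noverlap=63,
                   return_onesided=False, boundary=None, padded=False)
    return np.stack([Z.real, Z.imag])   # shape (2, 64, 65) for N = 128
\end{verbatim}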
\subsubsection{Spectrum Modality}
The third modality is based on the frequency domain, where only the real components of an FFT of a dechirped LoRa symbol are used. The input signal can be represented as follows,
\begin{equation}
Y_\mathrm{k} = \operatorname{Re}\{\text{FFT}{[y_\mathrm{k}(1),y_\mathrm{k}(2),...,y_\mathrm{k}(N)]}\} ,
\end{equation}
where $y_\mathrm{k}(.)$ is the dechirped LoRa symbol.
\subsection{Hybrid Architecture: HybNet}
The HybNet architecture switches between two different detection branches: (i) the FFT-CNN (as referred to in Fig.~\ref{Fig_Networks}), and (ii) coherent detection. This architecture is designed to incorporate the best performance in AWGN and in co-channel interference conditions. Thus, a supervisory interference detector CNN is utilized to decide whether to pass the signal to the first branch or the second. Note that this supervisory network is trained with a dataset similar to that used to train the FFT-CNN branch. However, the LoRa symbols are labeled with one of two possible classes: (i) \emph{Noise only} or (ii) \emph{Interference only}. If the interference detector network classifies the received baseband LoRa symbol as ``Noise only'', indicating that the received symbol either contains no LoRa interference or that the gain of the interfering LoRa signal is less than that of the target, the received symbol is passed to the coherent detector (outlined in subsection~\ref{Coh_subsection}). If, instead, the interference detector network classifies the received baseband LoRa symbol as ``LoRa interference'', whereby the gain of the LoRa interference is greater than that of the target symbol, the received symbol is passed to the FFT-CNN branch. An illustration of the utilized switching architecture is shown in Fig.~\ref{Fig_Models}.
\begin{figure}[!t]
{\centering
\includegraphics[width=\linewidth]{Fig_Switching_CNN.png}
\caption{Illustration of the proposed HybNet architecture switching between a deep-learning branch and a matched filter-branch based on the interference to noise ratio. }
\label{Fig_Models}}
\footnotesize
\end{figure}
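In pseudocode form, the switching logic is straightforward; in the Python sketch below
(our illustration), \texttt{interference\_cnn} and \texttt{fft\_cnn} are hypothetical
stand-ins for the trained networks, each returning a class index, and
\texttt{lora\_symbol} is the generator from the sketch in Section~\ref{Section_III}:
\begin{verbatim}
import numpy as np

def hybnet_detect(r, interference_cnn, fft_cnn):
    # spectrum-modality input: real part of the dechirped symbol's FFT
    Y_re = np.real(np.fft.fft(r*np.conj(lora_symbol(0))))
    if interference_cnn(Y_re) == 1:       # class 1: "LoRa interference"
        return fft_cnn(Y_re)              # DL branch (FFT-CNN)
    return int(np.argmax(Y_re))           # coherent matched-filter branch
\end{verbatim}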
\subsection{CNN Setup Description}
Three different CNN networks were designed to cater to the three data modalities explained in Section~\ref{Sec_Modelities}, because both their input dimensions and contents are different. For each of the three networks, we utilize a Bayesian optimizer to choose the network's hyperparameters. Bayesian optimization is a more efficient method for selecting hyperparameters compared to search methods such as brute-force, grid search, and random search~\cite{bayesian}. The hyperparameters chosen for optimization are: (i) the number of convolutional layers, (ii) the convolutional filter size, (iii) the initial learning rate, and (iv) the dropout rate. The resulting network dimensions are summarized in Table~\ref{Table_CNN}. The CNN architectures are also illustrated in Fig.~\ref{Fig_Networks}, showing the input format to each network. The initial learning rate of the different architectures is optimized to 0.015, 0.001, and 0.0056 for the IQ-CNN, STFT-CNN, and FFT-CNN, respectively. Additionally, considering the simple classification task, the network architectures are shallow.
The interference detector network was manually optimized for the hard switching architecture since high accuracy is achieved with a straightforward network. A simple two-layer CNN was used, which is described in Table~\ref{Table_ID}. All the networks were trained with stochastic gradient descent with momentum (SGDM) over 60 epochs. The training parameters for all the networks discussed in this paper are outlined in Table~\ref{Table_trainingOpt}.
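For reproducibility, the FFT-CNN column of Table~\ref{Table_CNN} translates into a short
Keras model; the sketch below is our reading of the table (the `same' convolution padding
and the flatten stage before the dense layer are our assumptions, as they are not
specified), with the optimized learning rate and the SGDM settings of
Table~\ref{Table_trainingOpt}:
\begin{verbatim}
import tensorflow as tf

def build_fft_cnn(num_classes=128):
    x_in = tf.keras.Input(shape=(128, 1, 1))
    x = x_in
    for _ in range(4):                    # four conv + batch-norm blocks
        x = tf.keras.layers.Conv2D(8, (19, 1), padding='same',
                                   activation='relu')(x)
        x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.MaxPooling2D((2, 1))(x)
    x = tf.keras.layers.Dropout(0.24)(x)
    x = tf.keras.layers.Flatten()(x)
    out = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
    model = tf.keras.Model(x_in, out)
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0056,
                                                    momentum=0.9),
                  loss='categorical_crossentropy', metrics=['accuracy'])
    return model
\end{verbatim}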
\begin{table*}\centering
\caption{ CNN Network Summary }
\centering
\begin{tabular}{@{}lccccc@{}}
\hline\hline
{Layer} & \multicolumn{3}{c}{Shape} &
\phantom{abc} & {Parameter} \\
\cmidrule{2-4}
& $\mathrm{IQ-CNN}$ & $\mathrm{S-CNN}$ & $\mathrm{FFT-CNN}$ \\
& $\mathrm{Net(a)}$ & $\mathrm{Net(b)}$ & $\mathrm{Net(c)}$ \\
\hline
Input Shape & $128 \times 1 \times 2$ & $64 \times 65 \times 2$ & $128 \times 1 \times 1$ &&-\\
Convolutional Layer 1 & $8$, $5 \times 1$ & $9$, $7 \times 7$ & $8$, $19 \times 1$ && ReLU\\
Batch Normalization Layer 1 &- &- &- &&-\\
Convolutional Layer 2 & $8$, $5 \times 1$ & $9$, $7 \times 7$ & $8$, $19 \times 1$ && ReLU\\
Batch Normalization Layer 2 &- &- &- &&-\\
Convolutional Layer 3 & $8$, $5 \times 1$ & $9$, $7 \times 7$ & $8$, $19 \times 1$ && ReLU\\
Batch Normalization Layer 3 &- &- &- &&-\\
Convolutional Layer 4 & $8$, $5 \times 1$ & - & $8$, $19 \times 1$ && ReLU\\
Batch Normalization Layer 4 &- &- &- &&-\\
Max pooling Layer & $2 \times 1$ & $2 \times 1$ & $2 \times 1$ && -\\
Dropout Layer & $0.36$ & $0.37$ & $0.24$ && - \\
Fully Connected & $128$ & $128$ & $128$ && Softmax \\
\hline
\end{tabular}
\label{Table_CNN}
\end{table*}
\begin{figure*}[tbh!]
\vspace{0.7cm}
{\centering
\includegraphics[width=\linewidth]{all_models_detailed_2.png}
\caption{Illustration of all three DL-based detector models used in this paper. }
\label{Fig_Networks}}
\footnotesize
\vspace{\baselineskip}
\end{figure*}
\begin{table}
\caption{Interference Detector Network}
\centering
\begin{tabular}{l c c}
\hline\hline
Layer &Shape &Parameters\\
\hline
Input Shape & $128 \times 1 \times 1$ &-\\
Convolutional Layer 1 & $4$, $19 \times 1$ & ReLU\\
Batch Normalization Layer 1 &- &-\\
Convolutional Layer 2 & $4$, $19 \times 1$ & ReLU\\
Batch Normalization Layer 2 &- &-\\
Max pooling Layer & $2 \times 1$ & -\\
Dropout Layer & $0.30$ & -\\
Fully Connected & $2$ & Softmax \\
\hline
\end{tabular}
\label{Table_ID}
\end{table}
\begin{table}
\caption{Training Options}
\centering
\begin{tabular}{l c}
\hline\hline
Parameter & Value \\
\hline
Optimizer & {SGDM} \\
Momentum & {0.9} \\
Epochs & {60} \\
Learning rate drop schedule & {40} \\
Learning rate drop factor & {0.1} \\
Mini-batch size & {256} \\
L2 regularization & {0.0001} \\
\hline
\end{tabular}
\label{Table_trainingOpt}
\end{table}
\section{Simulation Results and Discussion}~\label{Section_V}
This section presents the performance benchmarking of the different architectures based on the BER indicator. A Monte-Carlo simulation of both \textit{same-SF} interference (SF7 on SF7) and \textit{inter-SF} interference (SF8 on SF7) is performed. The performance is investigated for a variable level of INR. A lower INR value indicates a noise-dominant scenario, while a higher value indicates a more interference-dominant scenario.
We have not presented BER plots for varying SINR because the BER is known to improve with increasing SINR, a consistent trend for any detection algorithm. As such, without loss of generality, we pick a transitional SINR where the INR significantly impacts the BER performance and fix the SINR at $\gamma=-15$~dB.
\subsection{Detection Performance}
The BER performance of the DL-based techniques, along with the traditional coherent and noncoherent methods, is depicted in Fig.~\ref{Fig_ber_8_-15}. DL-based methods outperform conventional detectors in interference-limited scenarios while maintaining a better performance than noncoherent detection in noise-limited scenarios. Furthermore, we test the detectors against a same-SF LoRa interferer (SF7) in Fig.~\ref{Fig_ber_7_-15}. The figure shows that the trend holds especially for the FFT-CNN, which produces good performance under both noise-limited and interference-limited scenarios.
In Fig.~\ref{Fig_ber_7_-15}, we show the BER results for detection of target LoRa symbols with SF7 and an interference LoRa signal also with SF7. From the plot, it can be observed that FFT-CNN outperforms other detectors in an interference-limited scenario. However, as expected, the coherent detector has the best performance in a noise-limited scenario.
As such, the DL-based detection techniques outperform the conventional detection techniques for LoRa-on-LoRa interference scenarios when the power of the interfering signal is higher than the power of the target signal. This is because the DL network is trained on LoRa interference scenarios, so the DL-based techniques learn to discern the interference from the target LoRa signal.
\begin{figure}[!t]
{\centering
\includegraphics[width=\linewidth]{ber_7_-15.png}
\caption{Detection performance for a target LoRa symbol with SF7, an interference LoRa signal with SF7, and a fixed $\text{SINR} = -15~\text{dB}$.}
\label{Fig_ber_7_-15}}
\footnotesize
\end{figure}
\begin{figure}[!t]
{\centering
\includegraphics[width=\linewidth]{ber_8_-15.png}
\caption{Detection performance for a target LoRa symbol with SF7, an interference LoRa signal with SF8, and a fixed $\text{SINR} = -15~\text{dB}$.}
\label{Fig_ber_8_-15}}
\footnotesize
\end{figure}
The performance of HybNet is depicted in Figs.~\ref{Fig_ber_7_-15_Hyb} and~\ref{Fig_ber_8_-15_Hyb} for SF7 and SF8 interference, respectively. It can be clearly seen that the proposed HybNet architecture effectively switches between the coherent detector and the FFT-CNN branches, and thus follows the optimal performance in both noise-limited and interference-limited scenarios. The efficient switching indicates that the interference detector network can accurately detect LoRa interference.
\begin{figure}[!t]
{\centering
\includegraphics[width=\linewidth]{ber_switching_7_-15.png}
\caption{Detection performance of the switching architecture for a target LoRa symbol with $\text{SF} = 7$, an interference LoRa signal with $\text{SF} = 7$, and a fixed $\text{SINR} = -15~\text{dB}$.}
\label{Fig_ber_7_-15_Hyb}}
\footnotesize
\end{figure}
\begin{figure}[!t]
{\centering
\includegraphics[width=\linewidth]{ber_switching_8_-15.png}
\caption{Detection performance of the switching architecture for a target LoRa symbol with $\text{SF} = 7$, an interference LoRa signal with $\text{SF} = 8$, and a fixed $\text{SINR} = -15~\text{dB}$.}
\label{Fig_ber_8_-15_Hyb}}
\footnotesize
\end{figure}
\subsection{Complexity Analysis}~\label{Section_complexity}
We further analyze the training time and detection time of the three main architectures discussed in this paper, the IQ-CNN, STFT-CNN, and FFT-CNN, in addition to the performance of HybNet. We utilize MATLAB for preprocessing and for DL. The system used for the experimentation has an Intel Xeon CPU with 16 logical cores at 3.2\,GHz and an Nvidia Quadro 4000 GPU. Fig.~\ref{Fig_barComplexity} shows the time performance of the networks, including the accuracy, as the number of convolutional layers increases. The results show that at a convolutional depth of one layer, none of the architectures can learn from the input data. At the optimized depths chosen by Bayesian optimization, shown in Fig.~\ref{Fig_Networks}, the networks reach the optimal classification accuracy, thereby validating the optimized architectures. The figure also shows the training time for each architecture, where the STFT-CNN takes the longest to train since its input is a 2D STFT with two channels, compared to the 1D inputs used by the IQ-CNN and FFT-CNN. Finally, the figure shows the detection time required to identify a packet consisting of 20 symbols. These results show that the FFT-CNN offers the most desirable combination of high classification accuracy, short training time, and the lowest detection time. The detection time can be reduced further by using fewer convolutional layers; however, this comes at the cost of reduced overall classification accuracy.
\begin{figure}[!t]
{\centering
\includegraphics[width=\linewidth]{barPlot_journalFigCompexity_2.png}
\caption{Comparison of network performance in terms of network depth for classification accuracy (top plot), training time (middle plot), and time complexity per 20-symbol packet (bottom plot). For the top and bottom plots, error bars show the 95$\%$ confidence interval.}
\label{Fig_barComplexity}}
\footnotesize
\end{figure}
For HybNet, the training time is the sum of the training times of the FFT-CNN and the interference detector network; the latter takes $\approx 5$ minutes on the used platform, so the overall training time for HybNet is $\approx 30$ minutes. The average detection time per 20-symbol packet for HybNet is about twice that of the FFT-CNN, i.e., $\approx 6.5$ milliseconds.
We further explored the detection time of the different networks as the number of frames to be classified changes. The theoretical time complexity for all the networks can be expressed as $O\left(L \sum_{l=1}^{M} K_{l-1} F_l W_l K_l\right)$~\cite{he2015convolutional}, where $l$ is the index of the convolutional layers, $M$ is the number of layers, $L$ is the number of input symbols, $K_{l-1}$ denotes the number of input channels, $F_{l}$ is the product of the dimensions of the convolutional filter, $W_{l}$ is the number of filters per convolutional layer, and $K_{l}$ is the product of the dimensions of the output. The \textit{max-pooling}, \textit{dropout}, and \textit{fully connected layers} have an insignificant time complexity compared to the convolutional layers, so their complexity is not included in the calculation. From the theoretical expression, the time complexity increases linearly with the number of input symbols for all three networks. The expression also shows that the time complexity is much higher for the STFT-CNN, since both the convolutional filters $F_{l}$ and the output dimensions $K_{l}$ are 2-dimensional. This is experimentally demonstrated in Fig.~\ref{Fig_timeComplexity}.
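To make the expression concrete, the following sketch evaluates $L \sum_{l} K_{l-1} F_l W_l K_l$ for two hypothetical layer configurations; the per-layer numbers are placeholders rather than the exact dimensions of our networks, but they illustrate the linear growth in $L$ and the penalty incurred by 2D filters.
\begin{verbatim}
# Sketch: theoretical conv-layer cost O(L * sum_l K_{l-1} F_l W_l K_l).
# Layer tuples are (K_prev, F, W, K_out); F and K_out are products of
# the filter / output-map dimensions. Numbers are illustrative only.
def conv_cost(L, layers):
    return L * sum(kp * f * w * ko for kp, f, w, ko in layers)

fft_cnn  = [(1, 19, 4, 128), (4, 19, 4, 64)]        # 1-D filters
stft_cnn = [(2, 19 * 19, 4, 128 * 16),
            (4, 19 * 19, 4, 64 * 8)]                # 2-D filters, 2 channels
for L in (20, 40, 80):                              # packet lengths
    print(L, conv_cost(L, fft_cnn), conv_cost(L, stft_cnn))
\end{verbatim}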
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{timeComplexity.png}
\caption{Packet detection time as a function of the number of input symbols, comparing the three deep networks investigated in this paper.}
\label{Fig_timeComplexity}
\footnotesize
\end{figure}
\section{Conclusion}~\label{Section_VI}
This paper investigated deep learning approaches that rely on convolutional neural networks for the detection of LoRa symbols in the presence of AWGN and \emph{colored} interference. We presented a new framework, \emph{HybNet}, a switching detection architecture that combines the merits of (i) optimal detection in Gaussian noise based on the matched filter with (ii) the improved performance of deep learning detectors under non-Gaussian interference. We tested different input data modalities for deep learning, namely (i) I/Q-based, (ii) time-frequency-based, and (iii) spectrum-based. The spectrum-based deep detector showed the best detection performance in heavy interference conditions and the lowest time complexity compared to the I/Q-based and time-frequency-based networks. This performance suggests that a hybrid deep learning and matched filter receiver would outperform conventional detection methods in a broader range of applications, especially in random access IoT networks where the interference is caused by overlapping co-channel transmissions.
\bibliographystyle{ieeetr}
Many types of generative models have been proposed for relational data in several fields, including machine learning and statistics.
For i.i.d. data, a parametrized model defines a distribution over samples of a fixed size $n$, for every $n$. The analogue for generative relational models is a distribution $Q^{(n)}$ over complex multi-relational graphs (``worlds'' in logical terminology) of a fixed size $n$, for every $n$. Research in statistical theory and discrete mathematics on the one hand, and AI and machine learning on the other hand has focussed on somewhat different aspects of relational models: the former is mostly concerned with internal model properties such as exchangeability, projectivity and behavior in the limit, whereas the latter is focussed on learning and inference tasks for one size $n$ at a time.
It is well known that in many popular statistical relational learning (SRL) frameworks the dependence of $Q^{(n)}$ on $n$ exhibits sometimes counter-intuitive and hard to control behavior. Most types of SRL models are not projective
in the sense that the distribution $Q^{(n)}$ for $n$ nodes is the marginal distribution derived from the $Q^{(n+1)}$ distribution~\cite{Shalizi2013,Jaeger2018}. For exponential random graph and Markov logic network (MLN) models it has also been observed that the $Q^{(n)}$ tend to become degenerate as $n$ increases in the sense that the probability becomes concentrated on a few ``extreme'' structures~\cite{rinaldo2009,chatterjee2013estimating,poole2014population}. Some authors have proposed to better control the behavior of MLNs by adjusting the model parameters as a function of $n$~\cite{jain2010adaptive}; however, no strong theoretical guarantees have yet been derived for such approaches.
In this paper we focus on projectivity as a very powerful condition to control the behavior of $Q^{(n)}$. In projective models, inferences about a fixed set of individuals are not sensitive to population size. This implies that inference trivially becomes \emph{domain-lifted}~\cite {broeck2011completeness}, convergence of query probabilities becomes trivial, and certain statistical guarantees for learning from sub-sampled relational structures can be obtained~\cite{Jaeger2018}. These benefits come at a certain cost in terms of expressivity: projective models are necessarily ``dense'' in the sense that, e.g., the expected number of edges in a projective random graph model is quadratic in $n$. In spite of these limitations, there exist projective model types such as the stochastic block model and the infinite relational model~\cite{xu2006learning,kemp2006learning} that have been proven very useful in practice. It thus seems very relevant to fully exploit the capabilities of projective models by developing maximally expressive projective representation, learning and inference frameworks. In this paper we take an important step in this direction by deriving a complete characterization of projective models as a certain class of directed latent variable models.
While the characterization we obtain is completely general, we approach our problem from the perspective that knowledge about the distributions $Q^{(n)}$ is given in the form of
statistical frequencies of substructures of a small size $k$. For example, $k$ could be the maximal number of variables in an MLN formula, in which case the substructure frequencies are a sufficient statistic for learning the MLN parameters. In a somewhat different setting, $k$ can be the number of variables used in a Halpern/Bacchus-style statistical probability formula forming a statistical knowledge base~\cite{Halpern90,Bacchus90}. In all cases the question arises of how to generalize this knowledge to infer probabilities for specific instances (``beliefs''), either by statistical model estimation (as in most current SRL frameworks), or by inferring plausible beliefs based on invariance or maximum entropy principles, as in the random worlds approach of Bacchus et al.~\shortcite{BaGroHalKol92}, and more recently in~\cite{kern2010novel} and~\cite{kuzelka2018relational}. A fundamental question that then arises is whether the given substructure frequencies can actually be the marginal distribution of $Q^{(n)}$ for large $n$. Results about the random worlds method need to be conditioned on the assumption that the statistical knowledge is ``eventually consistent''~\cite[Chapter 11]{halpern2017reasoning}. Similar assumptions are made in~\cite{kuzelka2018relational}. As a by-product of our characterization of projective models we obtain that the same characterization also describes the distributions that can be induced as marginals of arbitrary $Q^{(n)}$.
\section{Related Work} We discuss work on generative graph models related to exchangeability and projectivity, the two key properties in our study.
\paragraph{Exchangeability.} Exchangeability requires that a generative model should assign the same probability to graphs that differ only in node labellings. This is true for the large class of template-based relational models, because typical model discovery methods do not introduce templates that reference individual nodes~\cite{Kimmig2014}. For example, they may only construct first-order logic formulas with no constant symbols. This includes most structure learning algorithms for Markov Logic Networks (e.g., \cite{Schulte2012}).\footnote{An exception is the Boostr system \cite{Khot2013}, which constructs first-order MLN formulas with constants.} Similarly, the sufficient statistics of exponential random graph models (e.g., the number of triangles in a graph) are typically defined without special reference to any particular node. Niepert and Van den Broeck~\shortcite{niepert2014tractability} have exploited the weaker notion of \emph{partial exchangeability} to obtain tractable inference for certain SRL models.
\paragraph{Projectivity.} The importance of projectivity for graph modelling has been discussed previously~\cite{Shalizi2013,Jaeger2018}. Chatterjee and Diaconis~\shortcite{chatterjee2013estimating} discuss how estimation and inference in exponential random graph models depends on the sample size. Shalizi and Rinaldo~\shortcite{Shalizi2013} give necessary and sufficient projectivity conditions for an exponential random graph model; they show that these are satisfied only in rare conditions. Jaeger and Schulte~\shortcite{Jaeger2018} discuss a number of common SRL models, including MLNs and Relational Bayesian Networks, and show that they are projective only under restrictive conditions. Projective models used in practice factor a graph into independent components given a set of latent variables. Popular examples include the stochastic block model and generalizations~\cite{hoff2002latent}, the infinite relational model~\cite{Orbanz2014}, and recent graph neural network models such as the graph variational auto-encoder~\cite{Kipf2016}. Our work shows that a latent conditional independence representation is not only sufficient for projectivity, but also necessary. We prove this result for a very large class of structured data, essentially general finite multi-dimensional arrays (tensors) with no restrictions on their dimensionality.
Our results heavily depend on the theory of infinite exchangeable multi-dimensional arrays~\cite{hoover1979relations,aldous1981representations,kallenberg2006probabilistic,Orbanz2014}.
The question of realizability of a given frequency distribution as a relational marginal has also been raised by Kuzelka et al.\shortcite{kuzelka2018relational}, who then focus on approximate realizability, rather than characterizations of exact realizability.
\section{Background}
\subsection{Basic Definitions}
We use the following basic notation. The set of integers $\{1,\ldots,n\}$ is denoted $[n]$.
For any $d\geq 1$, we write
$[n]_{\neq}^d$ for the set of $d$-tuples containing $d$ distinct elements from $[n]$.
The subset of $[n]_{\neq}^d$ containing tuples in which the elements appear in their
natural order is denoted
$\langle n \rangle^d$ (so that $\langle n \rangle^d$
corresponds to a standardized representation for the set of all $d$-element subsets of $[n]$).
Extending this notation to the infinite case, we can also write $[\Nset]_{\neq}^d$ and
$\langle \Nset \rangle^d$.
\paragraph{Relations and Possible Worlds.}
A relational \emph{signature} $S$\ contains relations
of varying arities. We
refer to the maximal arity of relations contained in $S$ as the \emph{arity of} $S$, denoted
$\emph{arity}(S)$.
A \emph{possible world} $\omega$\ (for $S$) specifies
1) a finite domain $D=\{d_1,\ldots,d_n\}$, 2)
for each $m$-ary relation from $S$ an $m$-dimensional binary adjacency matrix.
We refer to $n$ as the \emph{size} of
$\omega$, and also call $\omega$ an $n$-world. For most purposes, we can assume that $D=[n]$, or
at least $D\subset \Nset$. However, even if we make this assumption for convenience of presentation,
we do not generally assume that the
integer label of a randomly observed domain element can also be observed.
We denote by $\Omega^{(n)}$ the set of
all possible worlds for a given signature $S$ with domain $[n]$. The relevant signature
is usually implicit from the context, and not made explicit in the notation.
Finally, $\Omega:=\cup_n \Omega^{(n)}$.
\paragraph{Relational Substructures.}
We also require notation to refer to different types of substructures of a possible $n$-world $\omega$. For
a subset $I\subset [n]$ of size $|I|=m<n$ we denote with $\omega\downarrow I$ the $m$-world induced by
$I$, i.e., the possible world with domain $I$, and the relations of $\omega$ restricted to arguments from $I$.
For a tuple $\boldi\in [n]^m_{\neq}$ we denote with $\omega\downarrow \boldi$ the world
over the domain $[m]$ obtained by relabeling the domain elements in the sub-world induced by the
set $\boldi$ as $i_h\mapsto h$ (cf. Figure~\ref{fig:worlddefs}, top row).
A little less conventional is the following concept, that will become important for our main theorem: for
$m=1,\ldots, \emph{arity}(S)$ we define $D_m(\omega)$ as the \emph{arity-$m$ data of $\omega$}. Informally speaking,
$D_m(\omega)$ collects all the information from all adjacency arrays of $\omega$ that refers to
exactly $m$ distinct elements. For example (cf. Figure~\ref{fig:worlddefs}), $D_1(\omega)$ contains the
data (adjacency arrays) of all unary relations of $S$, but also the information contained on the diagonal
of a two-dimensional adjacency array for a binary (edge) relation, i.e., the information about self-loops
of that relation. A possible world can then also be described by the tuple $(D_m)_{m=1,\ldots,\emph{arity}(S)}$.
Furthermore, $D_m(\omega)$ can be decomposed into the factors $D_m(\omega\downarrow \boldi)$, where
$\boldi$ ranges over $\langle n\rangle^m$. We denote with ${\cal T}_m$ the space of possible
values of $D_m(\omega\downarrow \boldi)$ ($|\boldi|=m$). A possible world $\omega\in\Omega^{(n)}$ then also is given
by an assignment of a value in ${\cal T}_m$ for all $\boldi\in\langle n\rangle^m$ ($m=1,\ldots, \emph{arity}(S)$).
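As a concrete illustration of the $\omega\downarrow \boldi$ operation, the following minimal sketch (with a set-based encoding of worlds that is ours, not standard) computes the relabeled sub-world for a signature with one unary and one binary relation:
\begin{verbatim}
# Sketch: omega|i for S = {one unary relation c, one binary relation e}.
# A world is a dict {"c": set of nodes, "e": set of ordered pairs};
# element i_h of the tuple is relabeled to h (1-based), as in the text.
def restrict(world, tup):
    pos = {v: h + 1 for h, v in enumerate(tup)}
    return {"c": {pos[v] for v in world["c"] if v in pos},
            "e": {(pos[a], pos[b]) for (a, b) in world["e"]
                  if a in pos and b in pos}}

omega = {"c": {1}, "e": {(1, 2), (2, 3), (3, 3)}}
print(restrict(omega, (3, 1)))
# {'c': {2}, 'e': {(1, 1)}}: node 3 becomes 1, node 1 becomes 2
\end{verbatim}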
\begin{figure}
\centering
\includegraphics{./worlds.pdf}
\caption{Top left: world $\omega$ with one unary relation (black/white) and one
binary (edge) relation; top middle/right: sub-worlds induced by $I=\{1,3\}$ and
$\boldi=(3,1)$;
second row: unary and binary data parts; bottom: spaces ${\cal T}_1,{\cal T}_2$ for
the given signature.}
\label{fig:worlddefs}
\end{figure}
\section{Worldlet Frequency Distributions}
\label{sec:worldletfreqs}
Many graph analysis methods examine frequent characteristic subgraphs to provide information about a larger graph. We can think of a subgraph as a template that can be instantiated multiple times in a large graph. For example, in a social network we can count the number of friendship triangles among women. Depending on the framework, such templates go by different names (e.g., graphlets, motifs, frequency subgraphs) and are represented using different syntax (e.g., SQL queries, first-order logic, semantic relationships). We observe that subgraph templates can be represented in a general syntax-independent way as the collection of fully specified graphs $\Omega^{(k)}$ of a fixed size $k$, where we think of $k$ as a small number (typically in the range $k=2,\ldots,5$). When seen as a subgraph pattern, we refer to a world $\omega \in \Omega^{(k)}$ as a \defterm{worldlet}.
We assume that for every worldlet, the frequency of its occurrence in a larger world is available, through learning or expert elicitation (cf. \cite{Bacchus90}). As a notational convention, we use $k$ and $n$ to denote domain sizes of (small) worldlets and large ``real'' worlds, respectively. This convention only is intended to support intuitions,
and does not have any strict mathematical implications.
\paragraph{Statistical Frequency Distributions.}
The intuitive idea of observing random worlds by sampling subsets of larger domains
can be formalized in slightly different ways, e.g. by assuming sampling with or without replacement, or
by interpreting the observation as a unique world, or only an isomorphism
class~\cite{diaconis2007graph,kuzelka2018relational}. In many aspects alternative sampling models
become essentially equivalent as $n\rightarrow\infty$~\cite{diaconis2007graph}.
We here adopt a sampling model in which an ordered sample is drawn without replacement.
Thus, a sample from a world $\omega\in\Omega^{(n)}$ is given by one of the $n!/(n-k)!$ tuples
$\boldi\in[n]^k_{\neq}$, and the observed worldlet then is $\omega\downarrow\boldi$.
Note that this sampling method does not rely on observing the original labels of elements drawn from
$\omega$ to obtain the labeling of elements in the sampled worldlet, and therefore also makes sense when
the elements of $\omega$ can not be assumed to have (observable) integer labels.
The frequency distribution obtained through this sampling method is denoted ${P}^{(k)}(\cdot|\omega)$.
\begin{example}
Let $S=\{e\}$ consist of a single binary relation. Let $\omega\in\Omega^{(n)}$ be a ``star'' with center 1, i.e.,
$e$ consists of the edges $\{1\rightarrow l: l=2,\ldots,n\}$. The probability that a random draw of 2 elements
contains the node 1 then is $2/n$, with equal probability that 1 is the first or second drawn element.
The three worldlets
$1\bullet\!\! {\white\rightarrow} \!\! \bullet 2$,
$1\bullet\!\! \rightarrow\!\! \bullet 2$ and $1\bullet\!\! \leftarrow \!\! \bullet 2$
then have probabilities $1-2/n, 1/n, 1/n$ (in this order)
under ${P}^{(k)}(\cdot|\omega)$.
\end{example}
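The probabilities in this example can be checked by exhaustive enumeration; the following sketch (worlds encoded as edge sets, as above) computes ${P}^{(2)}(\cdot|\omega)$ for the star with $n=6$:
\begin{verbatim}
# Sketch: P^(k)(.|omega) by enumerating all ordered k-tuples without
# replacement; checks the star-graph example for n = 6.
from itertools import permutations
from collections import Counter

def worldlet_freqs(n, edges, k):
    counts, tuples = Counter(), list(permutations(range(1, n + 1), k))
    for tup in tuples:
        pos = {v: h + 1 for h, v in enumerate(tup)}
        sub = frozenset((pos[a], pos[b]) for (a, b) in edges
                        if a in pos and b in pos)
        counts[sub] += 1
    return {w: c / len(tuples) for w, c in counts.items()}

star = {(1, l) for l in range(2, 7)}      # center 1, n = 6
print(worldlet_freqs(6, star, 2))
# empty: 1 - 2/6, {(1,2)}: 1/6, {(2,1)}: 1/6
\end{verbatim}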
Every world $\omega$ defines a frequency distribution ${P}^{(k)}(\cdot|\omega)$.
If first a random $\omega$ is selected, we obtain a two-step sampling procedure that was first described
in a more general context by Fenstad~\shortcite{Fenstad67}.
\paragraph{Fenstad Sampling.}
Given a possible world distribution $Q^{(n)}$, we define the \defterm{expected statistical frequency}
distribution $P^{(k)}\circ Q^{(n)}$ for $k$-worlds $\omega'$ as follows:
\begin{equation}
\label{eq:twostepsampling}
(P^{(k)}\circ Q^{(n)}) (\omega') := \sum_{\omega\in\Omega^{(n)}} Q^{(n)}(\omega)P^{(k)}(\omega'\mid\omega).
\end{equation}
We denote with $\Delta^{(k)}_{n}$ the set of distributions on $\Omega^{(k)}$ that have a representation
of the form (\ref{eq:twostepsampling}) for some $Q^{(n)}$. If $k<l<n$, then
$P^{(k)}\circ(P^{(l)}\circ Q^{(n)})=P^{(k)}\circ Q^{(n)}$, and thus $\Delta^{(k)}_{n}\subseteq \Delta^{(k)}_{l}$.
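The two-step sampling in (\ref{eq:twostepsampling}) is equally easy to approximate by Monte Carlo; in the sketch below, the point mass on the 4-world with undirected edges $1\bullet\!\!-\!\!\bullet 2$ and $3\bullet\!\!-\!\!\bullet 4$ reproduces the distribution marked '+' in Figure~\ref{fig:scatterfig}:
\begin{verbatim}
# Sketch: Monte-Carlo estimate of P^(k) o Q^(n) (two-step sampling).
# Q_support: list of (probability, n, edge-set) triples.
import random
from collections import Counter

def fenstad(Q_support, k, trials=200000, seed=0):
    rng, counts = random.Random(seed), Counter()
    weights = [p for p, _, _ in Q_support]
    for _ in range(trials):
        _, n, edges = rng.choices(Q_support, weights=weights)[0]
        tup = rng.sample(range(1, n + 1), k)   # ordered, no replacement
        pos = {v: h + 1 for h, v in enumerate(tup)}
        counts[frozenset((pos[a], pos[b]) for (a, b) in edges
                         if a in pos and b in pos)] += 1
    return {w: c / trials for w, c in counts.items()}

# Point mass on the 4-world with undirected edges 1-2 and 3-4:
Q = [(1.0, 4, {(1, 2), (2, 1), (3, 4), (4, 3)})]
print(fenstad(Q, 3))
# every 3-subset induces exactly one edge; the three one-edge
# labelings each get probability ~ 1/3
\end{verbatim}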
\begin{figure}
\centering
\includegraphics[scale=0.2]{./samplingdistributions.png}
\put(-185,212){$n=3$}
\put(-65,212){$n=4$}
\put(-185,93){$n=5$}
\put(-65,93){$n=6$}
\caption{Illustration of $\Delta^{(k)}_{n}$ for $k=3$ and $n=3,4,5,6$. Cf. examples~\ref{ex:Deltakn} and
\ref{ex:running1}}
\label{fig:scatterfig}
\end{figure}
\begin{example}
\label{ex:Deltakn}
In this example and some of the following, we take $S$ to contain a single undirected edge relation $e$. In order
to comply with our general definitions, which are based on directed relations, we consider an undirected edge
$i\bullet - \bullet j$ to be a shorthand for the conjunction $i\bullet \rightarrow \bullet j$ and
$i\bullet \leftarrow \bullet j$, and we assume that all worlds with uni-directional edges
($i\bullet \rightarrow \bullet j$ but not
$i\bullet \leftarrow \bullet j$) or self-loops ( $i\bullet \rightarrow \bullet i$) have probability zero.
Disregarding these probability zero worlds, $\Omega^{(3)}$ then contains 8 possible worlds belonging to
4 different isomorphism classes. The top row of Table~\ref{tab:wdistributions} depicts these isomorphism
classes, together with the count of worlds in each class.
Figure~\ref{fig:scatterfig} illustrates for $n=3,4,5,6$ the worldlet frequency distributions $P^{(k)}(\cdot|\omega)$ defined
by the worlds $\omega\in\Omega^{(n)}$. Each (blue) dot is the distribution defined by one world
after projecting its 8-dimensional probability vector into 2-dimensional space. Some jitter is applied to
exhibit the multiplicities of $n$-worlds defining the same distribution on worldlets of size 3.
The sets $\Delta^{(k)}_{n}$ are the convex hulls of these points. The distribution marked by the (red) + in
Table~\ref{tab:wdistributions} and Figure~\ref{fig:scatterfig} belongs to
$\Delta^{(k)}_{n}$ for $n=3,4$, but not for $n=5,6$.
\end{example}
\section{Relational Models and Distribution Families}
\label{sec:properties}
As our goal is to examine properties of relational models that are independent of a particular model syntax, we use a family of distributions as a semantic view of a parametrized model. The two key properties of families in our study are exchangeability and projectivity.
\subsection{Distribution Families: Exchangeability, and Projectivity}
\begin{definition}
A \emph{family of distributions} $\{Q^{(n)}: n \in\Nset\}$ specifies, for each finite domain size $n$, a distribution $Q^{(n)}$ on the possible world set $\Omega^{(n)}$.
\end{definition}
\begin{definition}
A probability distribution $Q^{(n)}$ on $\Omega^{(n)}$ is \emph{exchangeable}, if
$Q^{(n)}(\omega)=Q^{(n)}(\omega')$ whenever $\omega$ and $\omega'$ are isomorphic.
A family is exchangeable, if every member of the family is exchangeable.
\end{definition}
Intuitively a distribution family is projective if its members are mutually consistent, in the sense that the world distribution over a smaller domain is the marginal of the distribution over a larger one. For a precise definition, we
follow our notation for relational substructures, and for each $n$-world $\omega$, write $\omega \downarrow [m]$ for the size-$m$ subworld that results from restricting $\omega$ to the first $m$ elements.
A distribution $Q^{(n)}$ over $n$-worlds then induces a {\em marginal} probability for an $m$-world $\omega'$ as follows:
\begin{equation*}
Q^{(n)} \downarrow[m] (\omega') = \sum_{\omega \in \Omega^{(n)}: \omega \downarrow [m] = \omega'}\!\!\!\!\!\! Q^{(n)}(\omega)
\end{equation*}
Projectivity is the central concept for our investigation:
\begin{definition}
\label{def:projective}
An exchangeable family $(Q^{(n)})_{n\in\Nset}$ is \emph{projective}, if for all
$m<n$: $Q^{(n)}\downarrow [m] =Q^{(m)}$.
\end{definition}
Note that in contrast to more
general notions of projectivity found in the theory of stochastic processes,
we here define projectivity only for exchangeable families. Exchangeability implies that
the marginal distribution $Q^{(n)}\downarrow I$ is the same for all subsets $I$ of size $m$, and therefore
we only need to consider the marginal $Q^{(n)}\downarrow [m]$ as a prototype.
\begin{example}
Statistical frequency distributions $P^{(k)}(\cdot\mid\omega)$ always are
exchangeable. As a special case, if $\omega\in\Omega^{(n)}$, then $P^{(n)}(\cdot\mid\omega)$ samples a random
permutation of $\omega$, i.e., is the uniform distribution on the isomorphism class of $\omega$.
It follows that distributions
defined by Fenstad sampling (\ref{eq:twostepsampling}) also are exchangeable, for any
$Q^{(n)}$.
\end{example}
We approach the question of how to characterize and represent projective families through the more
specific question of whether a given distribution $Q^{(k)}$ can be embedded in a projective family. The
following definition provides the necessary terminology.
\begin{definition}
\label{def:extendable}
Let $Q^{(k)}$ be an exchangeable distribution on $\Omega^{(k)}$. $Q^{(k)}$ is called
\begin{itemize}
\item \emph{$n$-extendable}, if $Q^{(k)}\in\Delta^{(k)}_{n}$; any $Q^{(n)}$ that
induces $Q^{(k)}$ via (\ref{eq:twostepsampling}) is called an \emph{extension} of $Q^{(k)}$.
\item \emph{extendable}, if it is $n$-extendable for all $n>k$;
\item \emph{projective extendable} if there exists a projective family $(Q^{(n)})_n$ of extensions
of $Q^{(k)}$.
\end{itemize}
\end{definition}
\begin{table}
\centering
\begin{tabular}{cccc|p{10mm}}
\includegraphics[scale=0.2]{./w33.pdf} ($\times 1$)&
\includegraphics[scale=0.2]{./w32.pdf} ($\times 3$)&
\includegraphics[scale=0.2]{./w31.pdf} ($\times 3$)&
\includegraphics[scale=0.2]{./w30.pdf} ($\times 1$)& Name \\
\hline
1 & 0 & 0 & 0 & $\pointmass{E_3}$ \\
0 & 0 & 0 & 1 & $\pointmass{K_3}$ \\
0 & 1/3 & 0 & 0 & + \\
1/4 & 0 & 1/4 & 0 & bipart\\
\end{tabular}
\caption{Some example worldlet distributions}
\label{tab:wdistributions}
\end{table}
\begin{example}
\label{ex:running1}
The rows in Table~\ref{tab:wdistributions} specify several exchangeable distributions on $\Omega^{(3)}$ (in the
undirected graph setting, as described in Example~\ref{ex:Deltakn}). The numbers in the table specify the probabilities
of each world in a given isomorphism class, not the total probability of the isomorphism class.
The first two are the point masses on the
empty graph (denoted $E_3$) and complete graph ($K_3$), respectively. If $\pointmass{E_n}$ denotes the point mass
on the empty graph of size $n$, then $( \pointmass{E_n})_n$ is a projective family. Similarly for the
family $(\pointmass{K_n})_n$, and the family of mixtures $(0.5\cdot\pointmass{E_n}+0.5\cdot\pointmass{K_n})_n$.
The row labeled + is the distribution marked by the (red) + in the plots of Figure~\ref{fig:scatterfig}.
If $\omega\in\Omega^{(4)}$ is the graph that contains the two edges $1\bullet\!\!-\!\!\bullet 2$ and
$3\bullet\!\!-\!\!\bullet 4$, then this
distribution is equal to $P^{(3)}(\cdot|\omega)$. Thus, it is 4-extendable, which is also visible in the top right
panel of Figure~\ref{fig:scatterfig} showing that '+' coincides with sampling distributions induced by
4-worlds.
However, '+' is not $n$-extendable for any $n\geq 5$.
This is visible in Figure~\ref{fig:scatterfig} as for $n=5,6$ '+' lies outside the convex hull
of the worldlet frequency distributions. Proposition~\ref{prop:modularity} below will provide a simple tool
for proving the non-extendability of '+'.
The last row in the table describes the distribution that in the limit for $n\rightarrow \infty$ is the
worldlet frequency distribution defined by complete, balanced bipartite graphs, i.e., graphs whose edge set
is equal to $\{i\bullet\!\!-\!\!\bullet j: 1\leq i\leq \lfloor n/2 \rfloor; \lfloor n/2 \rfloor +1\leq j \leq n \}$. It will follow
from our main theorem that this distribution is projective extendable.
\end{example}
\subsection{Domain Sampling Distributions}
Extendable distributions $Q^{(k)}$ in the sense of Definition~\ref{def:extendable} are mixtures of worldlet frequency
distributions. An important special case is when $Q^{(k)}$ is a pure worldlet frequency distribution
$P^{(k)}(\cdot|\omega)$ defined by a single world $\omega$. In that case, however, one cannot expect that $Q^{(k)}$
can be represented in this form with suitable $\omega$ for all $n$, because the sets
$\{P^{(k)}(\cdot|\omega): \omega\in\Omega^{(n)} \}$ for different $n$ are concentrated on different
grids of rational numbers, and therefore are largely disjoint (cf. Figure~\ref{fig:scatterfig}).
Following the approach already taken by Bacchus et al. to give semantics to statistical probability terms
in the random worlds approach~\cite{BaGroHalKol92,halpern2017reasoning}
we therefore only require that $Q^{(k)}$ is approximately equal to some $ P^{(k)}(\cdot|\omega)$, with an increasing
accuracy in the approximation as the size of $\omega$ increases.
\begin{definition}
\label{def:dsrealizable}
Let {$Q^{(k)}$} be a probability distribution on
$\Omega^{(k)}$. We say that {$Q^{(k)}$} is a \emph{domain sampling distribution}
if the following holds:
for every $\epsilon >0$ there exists $n\in\Nset$, such that for every $n'\geq n$: there exists
a possible $n'$-world $\omega$, so that for all $\omega'\in\Omega^{(k)}$:
\begin{equation}
\label{eq:realizable}
|P^{(k)}(\omega'\mid \omega)-Q^{(k)}(\omega')|<\epsilon.
\end{equation}
\end{definition}
Thus, the property of being a domain sampling distribution strengthens the property
of extendability in that in the representation (\ref{eq:twostepsampling})
only point masses
$Q^{(n)}=\pointmass{\omega}$ are allowed, but weakens it in that (\ref{eq:realizable}) only
requires approximate equality.
\begin{example}
For the worldlet distributions of Table~\ref{tab:wdistributions} we have
$\pointmass{E_3}=P^{(3)}(\cdot| E_n)$ for all $n\geq 3$,
so that $\pointmass{E_3}$ is a domain sampling distribution
(with zero approximation error). Similarly for $\pointmass{K_3}$. The mixture
$0.5\cdot \pointmass{E_3}+0.5\cdot \pointmass{K_3}$ is projective extendable, but not a domain
sampling distribution. The distribution '$+$' is not a domain sampling distribution. This is
indicated by Figure~\ref{fig:scatterfig}, because already for $n=6$ the distribution is separated
by a distance $\epsilon>0$ from the set $\Delta^{(3)}_6$. Because of the nested structure of the
$\Delta^{(3)}_n$ there then also cannot be better approximations for larger $n>6$. The last
'bipart' distribution in Table~\ref{tab:wdistributions} again is a domain sampling distribution
with a non-zero approximation error that only vanishes as $n\rightarrow\infty$.
\end{example}
\section{A Representation Theorem}
\label{sec:reptheo}
We now proceed to derive our main result, which is a comprehensive characterization of
families $(Q^{(n)})_n$ and worldlet marginals $Q^{(k)}$ with the structural properties
described in Section~\ref{sec:properties}.
We introduce a representation for projective families that is based on the analysis and
representation theorems for infinite exchangeable arrays developed by
Aldous~\shortcite{aldous1981representations} and Hoover~\shortcite{hoover1979relations}. The definitive treatment is given
by Kallenberg~\shortcite{kallenberg2006probabilistic}. We therefore call the following an AHK model.
\begin{definition}
\label{def:ahkmodel}
Let $S$ be a signature with maximal $\emph{arity}(S)=a\geq 1$. An AHK model for $S$ is given by
\begin{itemize}
\item A family of i.i.d. random variables $\{U_{\boldi}| \boldi\in \langle \Nset \rangle^m, m=0,\ldots,a\}$,
where each $U_{\boldi}$ is uniformly distributed on $[0,1]$.
\item A family of random variables $\{D_{\boldi}| \boldi\in \langle \Nset \rangle^m, m=1,\ldots,a\}$.
For $\boldi\in \langle \Nset \rangle^m$ the variable $D_{\boldi}$ takes values in ${\cal T}_m$.
\item For each $m=1,\ldots,a$ a measurable function
\begin{equation}
\label{eq:ffunction}
f^m: [0,1]^{2^m}\rightarrow {\cal T}_m
\end{equation}
so that
\begin{itemize}
\item for $\boldi=(i_1,\ldots,i_m)\in\langle \Nset \rangle^m$ the value of $D_{\boldi}$ is defined as
$f^m(\boldU_{\boldi})$, where
\begin{multline}
\label{eq:Tfromf}
\boldU_{\boldi}=(U_{\emptyset},U_{i_1},\ldots,U_{i_m},U_{( i_1,i_2)},\ldots,\\
U_{( i_{m-1},i_m)},\ldots
\ldots, U_{( i_1,\ldots,i_m) }),
\end{multline}
is the vector containing all $U_{\boldi'}$-variables with $\boldi'\subseteq\boldi$ in
lexicographic order.
\item $f^m$ is permutation equivariant, in the sense that for any permutation $\pi$ of $[m]$
\begin{displaymath}
f^m(\pi\boldU_{\boldi})=\pi f^m(\boldU)
\end{displaymath}
where $\pi\boldU_{\boldi}$ is the permutation of $\boldU_{\boldi}$ that in the place of
$U_{\boldi'}$ contains $U_{\pi\boldi'}$ with $\pi\boldi'$ the ordered tuple of the elements
$\{\pi(i): i\in\boldi'\}$.
\end{itemize}
\end{itemize}
An AHK model that does not contain the $U_{\emptyset}$ variable is called an AHK$^-$ model.
\end{definition}
\begin{figure}[tb]
\centering
\includegraphics{./ahk-plates-portrait.pdf}
\caption{Plate representation of AHK model with $a=3$}
\label{fig:ahkplates}
\end{figure}
Figure~\ref{fig:ahkplates} gives an illustration of the structure of an AHK model in plate notation.
An AHK model is fully determined by the functions
$\boldf:=(f^m)_{m=1\ldots,a}$, and we therefore write $\boldf$ to refer to an AHK model.
By a slight abuse of notation, we also use $\boldf$ to denote the distribution
defined by the model on the possible worlds over the infinite domain $\Nset$, and
write $\boldf\downarrow [n]$ for the marginal on the induced sub-world over
the domain $[n]$.
The following example gives a simple illustration of how the permutation equivariance condition for the
functions $f^m$ ensures exchangeability.
\begin{example}
We encode a version of the Erd\H{o}s-R\'enyi random graph model in which any pair of nodes is
connected with probability 1/2 by an edge, and that edge is given a random direction. Thus, the target distribution on worldlets
of size 2 is $P(1\bullet \leftarrow \bullet 2)=P(1\bullet \rightarrow \bullet 2)=0.25$,
$P(1\bullet \hspace{3mm} \bullet 2)=0.5$.
The state space ${\cal T}_1$ contains the
two states ``self-loop'' and ``no self-loop''. Since self-loops have probability zero, we simply
let $f^1$ be the constant function that returns ``no self-loop'' regardless of the input $U$-variables.
The state space ${\cal T}_2$ contains the four states
$1\bullet\!\! {\white\rightarrow} \!\! \bullet 2$,
$1\bullet\!\! \rightarrow\!\! \bullet 2$, $1\bullet\!\! \leftarrow \!\! \bullet 2$,
and $1\bullet\!\! \leftrightarrow \!\! \bullet 2$, of which only the first three have non-zero probability.
Let
\begin{multline*}
f^2(x_0,x_1,x_2,x_3):= \\
\left\{
\begin{array}{ll}
1\bullet \rightarrow \bullet 2 & \mbox{if}\ x_1 < x_2 \ \mbox{and}\ x_3 <0.5 \\
1\bullet \leftarrow \bullet 2 & \mbox{if}\ x_2 < x_1 \ \mbox{and}\ x_3 <0.5 \\
1\bullet \hspace{3mm} \bullet 2 & \mbox{otherwise.}
\end{array}
\right.
\end{multline*}
For clarity we here use a notation that makes it clear that the functions $f^m$ are defined on
arrays of length $2^m$, and their definition distinguishes arguments by their position in the input
array, not by their semantic nature as a variable $U_{\boldi'}$.
For $\pi: 1\mapsto 2, 2\mapsto 1$ we then have $\pi(1\bullet \rightarrow \bullet 2 )= 1\bullet \leftarrow \bullet 2$,
and $f^2(\pi\boldU_{(1,2)})=f^2(U_{\emptyset},U_2,U_1,U_{(1,2)})= \pi f^2(\boldU_{(1,2)})$. Together with the fact that
the tuples $\boldU_{(1,2)}$ and $\pi\boldU_{(1,2)}$ have identical distribution, this implies that
the two values $ 1\bullet \rightarrow \bullet 2$, $ 1\bullet \leftarrow \bullet 2$ of $D_{(1,2)}$ have the same
probability.
\end{example}
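Since the example's $f^2$ never uses $U_{\emptyset}$, it is effectively an AHK$^-$ model and is straightforward to simulate; a minimal sketch:
\begin{verbatim}
# Sketch: sampling omega|[n] from the AHK model of the example.
# Edge i->j iff U_i < U_j and U_(i,j) < 0.5 (f^1 never puts
# self-loops; U_emptyset does not influence the output, so we omit it).
import random

def sample_world(n, seed=None):
    rng = random.Random(seed)
    U = [rng.random() for _ in range(n + 1)]      # U[1..n] node variables
    edges = set()
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            if rng.random() < 0.5:                # U_(i,j) < 0.5
                edges.add((i, j) if U[i] < U[j] else (j, i))
    return edges

E = sample_world(500, seed=1)
print(len(E) / (500 * 499 / 2))   # ~0.5: P(some edge between a pair)
\end{verbatim}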
Generalizing from this example, and also noting that the plate representation of the AHK models
directly implies that marginals $\boldf\downarrow [n]$ simply are given by
instantiating the plate model only for $\boldi\subset [n]$, we can note the following proposition.
\begin{proposition}
\label{prop:ahkprojective}
Let $\boldf$ be an AHK model.
The marginals $\boldf\downarrow [n]$ are exchangeable, and the family
$(\boldf\downarrow [n])_n$ is projective.
\end{proposition}
For a given worldlet distribution $Q^{(k)}$ with $k\geq \emph{arity}(S)$ we say that $Q^{(k)}$
\emph{has an AHK representation}, if there exists an $\boldf$ with $\boldf\downarrow [k]=Q^{(k)}$.
We can now formulate our main result.
\begin{theorem}
\label{theo:extendable2}
Let $Q^{(k)}$ be an exchangeable distribution on $\Omega^{(k)}$ with $k\geq \emph{arity}(S)$. For the statements
\begin{description}
\item[(A)] $Q^{(k)}$ is a domain sampling distribution.
\item[(B)] $Q^{(k)}$ has an AHK$^-$ representation.
\item[(C)] $Q^{(k)}$ is a finite mixture of domain sampling distributions.
\item[(D)] $Q^{(k)}$ is extendable.
\item[(E)] $Q^{(k)}$ is projective extendable.
\item[(F)] $Q^{(k)}$ has an AHK representation.
\end{description}
the following implications hold:
\begin{displaymath}
\mbox{\bf (A)}\Leftrightarrow \mbox{\bf (B)}
\Rightarrow \mbox{\bf (C)}
\Leftrightarrow \mbox{\bf (D)}
\Leftrightarrow \mbox{\bf (E)}
\Leftrightarrow \mbox{\bf (F)}
\end{displaymath}
\end{theorem}
The full proof of the theorem is given in the extended online version of this paper (\url{http://arxiv.org/abs/2004.10984}).
\section{Discussion}
In this section we consider some of the trade-offs between limitations in expressivity of
projective models on the one hand, and
gain in algorithmic and statistical tractability on the other hand.
Limitations in expressivity can be considered in terms of what distributions $Q^{(n)}$, for a fixed $n$, can
be represented, and in terms of the limitations for the family $\{Q^{(n)}|n\in\Nset\}$ as a whole.
Considering a single distribution $Q^{(n)}$, we can observe a \emph{modularity} property as described
by the following proposition, and illustrated in Figure~\ref{fig:modular}.
\begin{figure}
\centering
\includegraphics{./worlds-extend.pdf}
\caption{Modularity of AHK models: if $\omega$ on the left has nonzero probability, then
the set of worlds $O$ on the right also has nonzero probability.}
\label{fig:modular}
\end{figure}
\begin{proposition}
\label{prop:modularity}
Let $\boldf$ be an AHK model, $\omega\in\Omega^{(n)}$ with $\boldf\downarrow[n](\omega)>0$.
Let $O\subset\Omega^{(n+1)}$ be the set of $(n+1)$-worlds $\omega'$ for which
$\omega'\downarrow [n]=\omega'\downarrow \{1,\ldots,n-1,n+1 \} =\omega$.
Then $\boldf\downarrow[n+1](O)>0$. Moreover, if $\boldf$ is an AHK$^-$ model, then
$\omega'\downarrow [n]=\omega$ and $\omega'\downarrow \{1,\ldots,n-1,n+1 \} =\omega$ are
independent events given $\omega'\downarrow [n-1]$.
\end{proposition}
Figure~\ref{fig:modular} illustrates the proposition with $n=3$: if the world $\omega$ on the left
has nonzero probability, then also the set of 4-worlds $O$ on the right has nonzero probability.
$O$ is the set of 4-worlds for which the substructures induced by
$\{1,2,3\}$ and $\{1,2,4\}$ are both isomorphic to $\omega$. The dashed
arc connecting nodes 3 and 4 on the right indicates that the value of $D_{(3,4)}$ determining the
relations between nodes 3 and 4 can vary for
different elements of $O$.
As an application of Proposition~\ref{prop:modularity} we can see that the '+' distribution of Table~\ref{tab:wdistributions}
does not have an AHK representation, and therefore cannot be extendable (cf. Example~\ref{ex:running1}):
letting $n=2$ and $\omega= 1\bullet\!\!-\!\!\bullet2$, we obtain from the proposition that also 3-worlds with
two edges $1\bullet\!\!-\!\!\bullet2$ and $2\bullet\!\!-\!\!\bullet3$ must have nonzero probability, which is not
the case for '+'.
We now turn to structural limitations of the whole family $\{Q^{(n)}|n\in\Nset\}$ implied by an AHK representation.
As already
mentioned in the introduction, projective families generate structures that are ``dense'' in the limit.
More precisely, if $\omega\in\Omega^{(k)}$ is a worldlet with $\boldf\downarrow [k] (\omega)>0$, then
the expected number of $k$-tuples in worlds of size $n$ which induce sub-worlds isomorphic to $\omega$ grows
linearly in $n^k$. Specifically, if graph edges have a nonzero probability at all, then the expected
number of edges grows linearly in $n^2$. It must be emphasized, though, that this only imposes limits on
modeling the asymptotic behavior of evolving graphs. For any fixed domain size, an AHK model can fit
any observed degree distribution:
\begin{example}
Let $n^*\in\Nset$, and let $f(d)$ ($d=0,1,\ldots n^*$) denote an out-degree distribution for
directed graphs on $[n^*]$.
For arbitrary $n$ we can normalize out-degrees in graphs of size $n$ via $d\mapsto d/n$.
Let $F(\delta)$ ($\delta\in[0,1]$) be the cumulative distribution function obtained from $f()$
for the normalized degrees $d\mapsto d/n^*$. We now define
\begin{multline*}
f^2(U_i,U_j,U_{(i,j)}):= \\
\left\{
\begin{array}{ll}
i\bullet\!\! \rightarrow \!\! \bullet j & \mbox{if}\ U_i\geq F(U_{(i,j)})\ \mbox{and}\ U_j < F(U_{(i,j)}) \\
i\bullet\!\! \leftarrow \!\! \bullet j & \mbox{if}\ U_j\geq F(U_{(i,j)})\ \mbox{and}\ U_i < F(U_{(i,j)}) \\
i\bullet\!\! \leftrightarrow \!\! \bullet j & \mbox{if}\ U_i\geq F(U_{(i,j)})\ \mbox{and}\ U_j \geq F(U_{(i,j)}) \\
i\bullet\!\! {\white\leftrightarrow} \!\! \bullet j & \mbox{otherwise}
\end{array}
\right.
\end{multline*}
Let $\delta_i$ denote the normalized out-degree of node $i$.
Then for all $u\in[0,1]$ we obtain the expected normalized out-degree:
\begin{equation}
\label{eq:expoutdeg}
E[\delta_i|U_i=u]=F^{-1}(u).
\end{equation}
$U_i$ being uniformly distributed, the right-hand side of (\ref{eq:expoutdeg}) is distributed with cdf
$F()$, and so the expected normalized degree distribution follows $F()$. In the special case $n=n^*$,
the expected absolute degree distribution is the original $f()$.
\end{example}
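A Monte-Carlo check of this construction is given below; the step-function implementation of $F()$ from a discrete pmf, and the comparison of mean normalized out-degrees, are our own illustrative choices:
\begin{verbatim}
# Sketch: degree-matching f^2 of the example. Out-edge i->j iff
# U_i >= F(U_(i,j)); F is the step cdf of normalized degrees d/n_star.
import random

def sample_outdegrees(n, f, seed=0):
    rng, n_star = random.Random(seed), len(f) - 1
    cdf, s = [], 0.0
    for p in f:
        s += p
        cdf.append(s)
    F = lambda d: cdf[min(int(d * n_star), n_star)]
    U = [rng.random() for _ in range(n)]
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            t = F(rng.random())                  # threshold F(U_(i,j))
            if U[i] >= t: deg[i] += 1
            if U[j] >= t: deg[j] += 1
    return deg

f = [0.4, 0.3, 0.2, 0.1]                  # target pmf on degrees 0..3
deg = sample_outdegrees(600, f, seed=2)
print(sum(deg) / (600 * 599),             # mean normalized out-degree
      sum(d * p for d, p in enumerate(f)) / 3)   # target mean / n_star
\end{verbatim}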
On the positive side, we obtain significant computational and robustness advantages from the use of projective
models: inference is \emph{lifted} in the strongest possible sense that the complexity of computing a query
probability for a query involving $k$ named entities is independent of the size of the domain in which
the entities are embedded. For learning, projectivity is a necessary condition for consistent estimation from
substructures randomly sampled from domains of unknown size. However, further conditions beyond projectivity
are required to formulate and derive precise consistency guarantees~\cite{Jaeger2018}.
Statistical consistency and robustness results can therefore not be directly given for AHK models in general
without first identifying a suitable effectively representable and parameterizable class of functions
from which the $f^m$ can be constructed. Identifying rich and tractable such classes, and evaluating their
learning capabilities empirically and theoretically is future work.
When evaluating the trade-offs of AHK models for a particular application, it must always be borne in mind
that the strengths of generative, projective models only come to bear when one needs to deal with diverse
types of queries (so that a discriminative model for a fixed prediction task would be inadequate), and
when one has to deal with data from domains of different and/or uncertain sizes. We note that this is basically
the opposite side of the task spectrum from where many current popular node classification and link
prediction problems are situated, in which both learning and inference is conducted for a fixed task on
a single given graph, e.g., \cite{wu2020comprehensive}.
\section{Conclusion}
In this paper we have laid theoretical foundations for the study and application of
rich classes of projective families.
Bringing together research strands in statistical graph theory and statistical relational learning
we have derived an explicit characterization of projective families in the form of a directed
graphical (plate) model. We have shown that closely linked to projectivity is the (approximate)
realizability as a statistical frequency distribution of worldlet samples drawn from a large domain.
These results give us a characterization of the form of statistical knowledge to which
the random worlds approach of Bacchus et al.~\shortcite{BaGroHalKol92} can be applied.
Interestingly, the structure of
AHK models has much in common with the ``independent choice logic'' family of SRL
frameworks~\cite{Sato95,Poole97,KimDemDeRSanRoc11} that also generate random relational structures as
deterministic functions of a set of a-priori independent random variables. However, the continuous
nature of the $U_{\boldi}$ variables in the AHK model, and the potential need for functions $f^m$ not readily
expressible in existing SRL languages pose significant challenges for the direct application of existing
SRL techniques.
On the theoretical side, many interesting questions remain regarding statistical principles of model
selection, and unbiasedness and consistency of estimation: for a given worldlet distribution $Q^{(k)}$ there
will often be multiple AHK models that precisely fit $Q^{(k)}$ and therefore are indistinguishable based on
likelihood scores. What invariance, parsimony, or plain parameter regularization principles are
then most useful for model selection?
\section*{Acknowledgments} Oliver Schulte's contribution was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada.
\vspace{5mm}
The description of the dynamics at high density parton regime is one of the main open
questions of the strong interactions theory. While in the region of moderate Bjorken $x$
($x \ge 10^{-2}$) the well-established methods of
operator product expansion and renormalization group equations have been applied successfully,
the small $x$ region still lacks a consistent theoretical framework (For a review see \cite{cooper}).
Basically, its is questionable the use of
the DGLAP equations \cite{dglap}, which reflects the dynamics at moderate $x$, in
the region of small values of $x$. The traditional procedure of using the DGLAP equations to
calculate the gluon distribution at small $x$ and large momentum transfer $Q^2$ is by summing
the leading powers of $\alpha_s\,ln\,Q^2\,ln(\frac{1}{x})$, where $\alpha_s$ is the strong coupling constant, known as the double-leading-logarithm
approximation (DLLA). In axial gauges, these leading double logarithms are generated by
ladder diagrams in which the emitted gluons have strongly ordered transverse
momenta, as well as strongly ordered longitudinal momenta.
Therefore the DGLAP must breakdown at small values of $x$, firstly because this
framework does not account for the contributions to the cross section
which are leading in $\alpha_s \, ln (\frac{1}{x})$ \cite{bfkl}. Secondly, because the
parton densities become large and there is need to develop a high density
formulation of QCD \cite{100}.
There has been intense debate on to which extent non-conventional QCD evolution is
required by the deep inelastic $ep$ HERA data \cite{cooper,hera96}. Good fits to the $F_2$ data
for $Q^2 \ge 1\,GeV^2$ can be obtained from
distinct approaches, which consider DGLAP and/or BFKL evolution equations
\cite{ball,martin}. In particular, the conventional
perturbative QCD approach is very successful in describing the main features
of HERA data and, hence, the signal of non-conventional QCD dynamics is
hidden or mimicked by a strong background of conventional QCD evolution.
Our goal in this paper is the role of the shadowing corrections (SC) in $F_2$ and its slope. In the last twenty years,
several authors (see \cite{sha} for some phenomenological analysis) have performed a detailed study of the shadowing
effect although without a strong
experimental evidence of this effect in the data, mainly since the main observable,
the $F_2$ structure function, is inclusive to the effects in the gluon distribution. Recently
we have estimated the shadowing corrections to the $F_2^c$ and $F_L$ at
HERA kinematic region using the eikonal
approach \cite{prd}. These observables are directly dependent on the behavior
of the gluon distribution. We have shown that the shadowing corrections to these
observables are important, however the experimental errors in these
observables are still large to allow a discrimination between our predictions
and the DGLAP predictions.
Here we estimate the shadowing corrections to the scaling violations
of the proton structure function. Basically, there are two possibilities to estimate
the SC using the eikonal approach. We can calculate damping factors, which
represent the ratio between the observable with and without shadowing, and subsequently
apply these factors in the conventional DGLAP predictions. This procedure was
used in refs. \cite{glmn1,glmn}, also considering a two radius model for the nucleon. In this paper we propose a second procedure to estimate the SC in DIS,
where the observables are directly calculated in the eikonal approach and the distinct
contributions to the SC are analysed in the same approach, reducing the number
of free parameters. A larger discussion about the distinct procedures is made in section II.
The recent HERA data on the slope of the
$F_2$ structure function \cite{zeus} present at small values of $x$ and $Q^2$ a different
behavior than predicted by the standard DGLAP framework. Basically, the HERA
data present a `turn over' of the slope around $x \approx 10^{-4}$, which cannot be
described using the GRV94 parametrization \cite{grv95} and the DGLAP evolution
equations. We show that this behavior is predicted
by the eikonal approach considering the shadowing corrections for the gluon and quark sectors.
The value of the shadowing corrections depends crucially on the size of the target $R$.
The value of the effective radius $R$ depends on how the gluon ladders couple to
the proton; {\it i.e.}, on
how the gluons are distributed within the proton \cite{hotspot}. In this paper we estimate
the $R$ dependence of the SC. We show that the HERA data on the $F_2$
and its slope can be described consistently using $R^2 = 5\,GeV^{-2}$. This value
agrees with the HERA results on the diffractive $J/\Psi$ photoproduction \cite{zeusjpsi,h1jpsi}.
The steep increase of the gluon distribution predicted by DGLAP and BFKL equations at
high energies would
eventually violate the Froissart bound \cite{froi}, which restricts the rate of
growth of the total cross section to $ln^2(\frac{1}{x})$. This bound may not be
applicable in the case of particles off-mass shell \cite{yndu}, but in this paper we present an
approach to this problem. Basically, we estimate a limit below which the unitarity
corrections may be disregarded and show that the recent HERA data surpass this
boundary, as predicted in \cite{plb}, at small values of $x$ and $Q^2$.
This paper is organized as follows. In section II, the eikonal approach and the shadowing
corrections for $F_2$ and its slope are considered. We estimate the distinct contributions
for the SC and demonstrate that the $\frac{dF_2(x,Q^2)}{dlogQ^2} $ data may be
described considering the shadowing in the gluon and quark sectors. In section III, we
estimate the $R$ dependence of the shadowing corrections. In section IV, we present
a boundary related to unitarity for $F_2$ and $\frac{dF_2(x,Q^2)}{dlogQ^2} $ and show
that the actual HERA data for small $x$ and $Q^2$ overcomes this boundary. Therefore,
the shadowing corrections should be considered in the calculation of the observables in
this kinematic region.
Finally, in section V, we present a summary of our results.
\section{The Shadowing Corrections in pQCD}
The deep inelastic scattering (DIS) is usually described in a frame where the proton is
going very fast. In this case the shadowing effect is a result of an overlap of the parton
clouds in the longitudinal direction. Another interpretation of DIS is the intuitive view
proposed by V. N. Gribov many years ago for
the DIS on nuclear targets \cite{gribov}. Gribov's assumption is that at small values of $x$ the virtual
photon fluctuates into a $q\overline{q}$ pair well before the interaction
with the target, and this system interacts with
the target. This formalism has been established as an useful tool for
calculating deep inelastic and related diffractive cross section for
$\gamma^*\,p$ scattering in the last years \cite{nik,buch}.
The Gribov factorization follows from the fact that the lifetime of the $q\overline{q}$
fluctuation is much larger than the time of the interactions with partons. According to
the uncertainty principle, the fluctuation time is $\approx \frac{1}{m\,x}$,
where $m$ denotes the target mass.
The space-time picture of the DIS in the target
rest frame can be viewed as the decay of the virtual photon at high energy
(small $x$) into a quark-antiquark pair long before the
interaction with the target. The $q\overline{q}$ pair subsequently interacts
with the target. In the small $x$ region, where
$x \ll \frac{1}{2mR}$ ($R$ is the size of the target), the $q\overline{q}$ pair
crosses the target with fixed
transverse distance $r_t$ between the quarks. This allows one to factorize the total
cross section into the wave function of the photon and the interaction
cross section of the quark-antiquark pair with the target. The photon wave function
is calculable and the interaction cross section is modeled. Therefore we have that the
proton structure function is given by \cite{nik}
\begin{eqnarray}
F_2(x,Q^2) = \frac{Q^2}{4 \pi \alpha_{em}} \int dz \int d^2r_t |\Psi(z,r_t)|^2 \, \sigma^{q\overline{q}}(z,r_t)\,\,,
\label{f2target}
\end{eqnarray}
where
\begin{eqnarray}
|\Psi(z,r_t)|^2 = \frac{6 \alpha_{em}}{(2 \pi)^2} \sum^{n_f}_i e_f^2 \{[z^2
+ (1-z)^2] \epsilon^2\, K_1^2(\epsilon r_t) + m_f^2\, K_0^2(\epsilon r_t)\}\,\,,
\label{wave}
\end{eqnarray}
$\alpha_{em}$ is the electromagnetic coupling constant,
$\epsilon^2 = z(1-z)Q^2 + m_f^2$, $m_f$ is the quark mass, $n_f$ is the number
of active flavors, $e_f^2$ is the square of the parton charge (in units of $e$), $K_{0,1}$
are the modified Bessel functions and $z$ is the fraction of the photon's light-cone
momentum carried by one of the quarks of the pair. In the
leading log$(1/x)$ approximation we can neglect the change of $z$ during the
interaction and describe the cross section $\sigma^{q\overline{q}}(z,r_t^2)$ as
a function of the variable $x$. Considering only light quarks ($i=u,\,d,\,s$) $F_2$
can be expressed by \cite{plb}
\begin{eqnarray}
F_2(x,Q^2) = \frac{1}{4 \pi^3} \sum_{u,d,s} e_f^2 \int_{\frac{1}
{Q^2}}^{\frac{1}{Q_0^2}} \frac{ d^2r_t}{r_t^4}\,\sigma^{q\overline{q}}(x,r_t) \,\,.
\label{f2sim}
\end{eqnarray}
We have introduced a cutoff in the upper limit of the integration in order to eliminate
the long distance (non-perturbative) contribution in our calculations.
In this paper we assume $Q_0^2 = 0.4\,GeV^2$ as in our previous works in this subject.
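As an aside, the wave function (\ref{wave}) is simple to evaluate numerically. The following sketch is not part of the original analysis; the values of $z$, $r_t$, $Q^2$ and the light-quark mass are illustrative assumptions, and SciPy's modified Bessel functions are used for $K_{0,1}$:
\begin{verbatim}
# Minimal sketch: evaluate |Psi(z,r_t)|^2 from Eq. (wave) for light quarks.
# The kinematic values and the light-quark mass below are illustrative only.
import numpy as np
from scipy.special import k0, k1

alpha_em = 1.0 / 137.0
e_f2 = np.array([4.0, 1.0, 1.0]) / 9.0     # u, d, s squared charges
m_f = 0.3                                  # light-quark mass in GeV (assumption)

def psi2(z, r_t, Q2):
    eps = np.sqrt(z * (1.0 - z) * Q2 + m_f**2)   # epsilon^2 = z(1-z)Q^2 + m_f^2
    pref = 6.0 * alpha_em / (2.0 * np.pi)**2
    body = ((z**2 + (1.0 - z)**2) * eps**2 * k1(eps * r_t)**2
            + m_f**2 * k0(eps * r_t)**2)
    return pref * e_f2.sum() * body

print(psi2(z=0.5, r_t=1.0, Q2=2.0))        # r_t in GeV^-1
\end{verbatim}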
We estimate the shadowing corrections using the eikonal approach \cite{chou},
which is formulated in impact parameter space. Here we review its main assumptions.
In the impact parameter representation, the amplitude $a(s,b_t)$ is obtained from the
scattering amplitude $A(s,t)$, where $t= - q_t^2$ is the momentum transfer squared, as
\begin{eqnarray}
a(s,b_t) = \frac{1}{2\pi} \int d^2q_t \, e^{-i\vec{q_t}.\vec{b_t}} A(s,t) \,\,.
\end{eqnarray}
The total cross section is written as
\begin{eqnarray}
\sigma_{tot}(s) = 2 \int d^2b_t \, Im \,a(s,b_t)\,\,,
\label{tot}
\end{eqnarray}
and the unitarity constraint stands as
\begin{eqnarray}
2\, Im \,a(s,b_t) = |a(s,b_t)|^2 + G_{in}(s,b_t)
\label{uni}
\end{eqnarray}
at fixed $b_t$, where $G_{in}$ is the sum of all inelastic channels. For high energies
the general solution of Eq. (\ref{uni}) is:
\begin{eqnarray}
a(s,b_t) = i \left[1 - e^{-\frac{\Omega(s,b_t)}{2}}\right]\,\,,
\label{soluni}
\end{eqnarray}
where the opacity $\Omega(s,b_t)$ is a real arbitrary function, which is modeled in the
eikonal approach.
Using the $s$-channel unitarity constraint (\ref{soluni}) in the expression (\ref{f2sim}),
the $F_2$ structure function can be written in the eikonal approach as \cite{ayala2}
\begin{eqnarray}
F_2(x,Q^2) = \frac{1}{2\pi^3} \sum_{u,d,s} e_f^2 \int_{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}} \frac{d^2r_t}{r_t^4} \int d^2b_t
\{1 - e^{-\frac{1}{2}\Omega_{q\overline{q}}(x,r_t,b_t)}\}\,\,,
\label{f2eik}
\end{eqnarray}
where the opacity $\Omega_{q\overline{q}}(x,r_t,b_t)$ describes the interaction
of the $q\overline{q}$ pair with the target.
In the region where $\Omega_{q\overline{q}}$ is small $(\Omega_{q\overline{q}} \ll 1)$ the
$b_t$ dependence can be factorized as $\Omega_{q\overline{q}} = \overline{\Omega_{q\overline{q}}} S(b_t)$ \cite{100},
with the normalization $\int d^2b_t\, S(b_t) = 1$. The eikonal approach assumes that
this factorization, strictly valid only where $\Omega_{q\overline{q}}$ is small,
holds in the whole kinematical region.
The main assumption of the eikonal approach in pQCD is the identification of opacity
$\overline{\Omega_{q\overline{q}}}$ with the gluon distribution.
In \cite{plb} the opacity is given by
\begin{eqnarray}
\overline{\Omega_{q\overline{q}}} = \frac{ \alpha_s}{3}\,\pi^2\,r_t^2\,
xG(x, Q^2)\,\,,
\label{omega}
\end{eqnarray}
where $xG(x,Q^2)$ is the gluon
distribution. Therefore the behavior of the $F_2$ structure function
(\ref{f2eik}) in the small-$x$ region is mainly determined by the behavior of
the gluon distribution in this region.
The use of the Gaussian parametrization for
the nucleon profile function $S(b_t) = \frac{1}{\pi R^2} e^{-\frac{b^2}{R^2}}$,
where $R$ is a free parameter, simplifies the calculations.
In general this parameter is identified with the proton radius. However, $R$ is associated
with the spatial gluon distribution within the proton, which may be smaller than the
proton radius (see discussion in the next section).
Using the expression (\ref{omega}) in (\ref{f2eik}) and
doing the integral over $b_t$, the master equation for $F_2$ is obtained \cite{ayala2}
\begin{eqnarray}
F_2(x,Q^2) = \frac{2R^2}{3\pi^2} \sum_{u,d,s} e_f^2 \int_{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}} \frac{d^2r_t}{\pi r_t^4} \{C + ln(\kappa_q(x, r_t^2)) + E_1(\kappa_q(x, r_t^2))\}\,\,,
\label{diseik2}
\end{eqnarray}
where $C$ is the Euler constant, $E_1$ is the exponential integral function, and the function
$\kappa_q(x, r_t^2) = \frac{ \alpha_s}{3 R^2}\,\pi\,r_t^2\,
xG(x,\frac{1}{r_t^2})$. Expanding the equation (\ref{diseik2}) for small $\kappa_q$,
the first term (Born term) will correspond to the usual DGLAP equation in the small $x$
region, while the other terms will take into account the shadowing corrections.
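The structure $C + ln(\kappa) + E_1(\kappa)$ comes entirely from the Gaussian $b_t$ integration. As a numerical check (not from the original analysis; the values of $\kappa$ and $R^2$ are arbitrary, and the combination $\overline{\Omega}/(2\pi R^2)$ is absorbed into $\kappa$), the identity can be verified with SciPy:
\begin{verbatim}
# Numerical check of the b_t integral behind Eq. (diseik2):
# int d^2b [1 - exp(-kappa exp(-b^2/R^2))] = pi R^2 [C + ln(kappa) + E_1(kappa)],
# with C the Euler constant. kappa and R^2 are illustrative values.
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

kappa, R2 = 0.7, 5.0     # R^2 in GeV^-2 (illustrative)

lhs, _ = quad(lambda b: 2.0 * np.pi * b *
              (1.0 - np.exp(-kappa * np.exp(-b**2 / R2))), 0.0, np.inf)
rhs = np.pi * R2 * (np.euler_gamma + np.log(kappa) + exp1(kappa))
print(lhs, rhs)          # the two numbers agree to quadrature accuracy
\end{verbatim}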
The slope of the $F_2$ structure function in the eikonal approach follows directly from the expression (\ref{diseik2}). We obtain
\begin{eqnarray}
\frac{dF_2(x,Q^2)}{dlogQ^2} = \frac{2R^2 Q^2}{3\pi^2} \sum_{u,d,s} e_f^2
\{C + ln(\kappa_q(x, 1/Q^2)) + E_1(\kappa_q(x, 1/Q^2))\}\,\,.
\label{df2eik}
\end{eqnarray}
The expressions (\ref{diseik2}) and (\ref{df2eik}) predict the behavior of the shadowing
corrections to $F_2$ and its slope considering the eikonal approach for the interaction
of the $q\overline{q}$ with the target. In this case we are calculating the SC associated
with the passage of the $q\overline{q}$ pair through the target. Following \cite{glmn}
we will denote this contribution as the quark sector contribution to the SC.
The behavior of $F_2$ and its slope are associated with the behavior of the gluon
distribution used as input in (\ref{diseik2}) and (\ref{df2eik}). In general, it is assumed
that the gluon distribution is described by a parametrization of the parton distributions
(for example: GRV, MRS, CTEQ) \cite{grv95,mrs,cteq}. In this case the shadowing
in the gluon distribution is not included explicitly.
In a general case we must also estimate the shadowing corrections for the gluon
distribution, {\it i.e.} in the quark and the gluon sectors. In this case we must estimate
the SC for the gluon distribution using the eikonal approach, similarly to the $F_2$ case.
This was done in \cite{ayala1} and here we only present the main steps of the approach.
The gluon distribution can be obtained in the target
rest frame considering the decay of a virtual gluon at high energy
(small $x$) into a gluon-gluon pair long before the
interaction with the target. The $gg$ pair subsequently interacts
with the target, with the transverse distance $r_t$ between the gluons assumed fixed.
In this case the cross section of the absorption of a gluon $g^*$ with virtuality $Q^2$
can be written as
\begin{eqnarray}
\sigma^{g^* + \rm nucleon}(x,Q^2) = \int_0^1 dz \int \frac{d^2r_t}{\pi}
|\Psi_t^{g^*}(Q^2,r_t,x,z)|^2 \sigma^{gg+\rm nucleon}(z,r_t^2)\,\,,
\label{sec1}
\end{eqnarray}
where $z$ is the fraction of energy carried by the gluon and $\Psi_t^{g^*}$ is the
wave function of the transverse polarized
gluon in the virtual probe. Furthermore, $\sigma^{gg+\rm nucleon}(z,r_t^2)$ is the
cross section of the interaction of the $gg$ pair with the nucleon.
Considering the $s$-channel unitarity and the eikonal model, equation (\ref{sec1})
can be written as
\begin{eqnarray}
\sigma^{g^* + \rm nucleon}(x,Q^2) = \int_0^1 dz \int \frac{d^2r_t}{\pi}
\int \frac{d^2b_t}{\pi} |\Psi_t^{g^*}(Q^2,r_t,x,z)|^2
\,\left(1 - e^{-\frac{1}{2} \overline{\Omega_{gg}} S(b_t)}\right)\,\,, \nonumber
\label{diseik}
\end{eqnarray}
where the factorization of the $b_t$ dependence in the opacity $\Omega_{gg}
(x,r_t,b_t)$ was assumed.
Using the relation $\sigma^{g^* + \rm nucleon}(x,Q^2) = \frac{4\pi^2 \alpha_s}{Q^2}xG(x,Q^2)$
and the expression of the wave function $\Psi^{g^*}$ calculated
in \cite{mueller,ayala1}, the
Glauber-Mueller formula for the gluon distribution is obtained as
\begin{eqnarray}
xG(x,Q^2) = \frac{4}{\pi^2} \int_x^1 \frac{dx^{\prime}}{x^{\prime}}
\int_{\frac{4}{Q^2}}^{\infty} \frac{d^2r_t}{\pi r_t^4} \int_0^{\infty}
\frac{d^2b_t}{\pi}\,2\,\left[1 - e^{-\frac{1}{2}\sigma_N^{gg}(x^{\prime}
,\frac{r_t^2}{4})S(b_t)}\right]\,\,,
\label{gluon}
\end{eqnarray}
where
$ \overline{\Omega_{gg}} = \sigma_N^{gg}$ describes the interaction of the $gg$ pair with the target.
Using the Gaussian parametrization for
the nucleon profile function and doing the integral over $b_t$, the master equation
for the gluon distribution is obtained as
\begin{eqnarray}
xG(x,Q^2) = \frac{2R^2}{\pi^2}\int_x^1 \frac{dx^{\prime}}{x^{\prime}}
\int_{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}} \frac{d^2r_t}{\pi r_t^4} \{C
+ ln(\kappa_G(x^{\prime}, r_t^2)) + E_1(\kappa_G(x^{\prime}, r_t^2))\} \,\,,
\label{masterg}
\end{eqnarray}
where the function
$\kappa_G(x, r_t^2) = \frac{3 \alpha_s}{2R^2}\,\pi\,r_t^2\,
xG(x,\frac{1}{r_t^2})$. Again, if equation (\ref{masterg}) is expanded for small $\kappa_G$,
the first term (Born term) will correspond to
the usual DGLAP equation in the small $x$ region, while
the other terms will take into account the shadowing corrections.
The expressions (\ref{diseik2}), (\ref{df2eik}) and (\ref{masterg}) are correct
in the double leading logarithmic approximation (DLLA). As shown in \cite{ayala2}
the DLLA does not work well in the accessible kinematic region ($Q^2 >
0.4 \,GeV^2$ and $x > 10^{-6}$). Consequently, a more realistic approach must
be considered to calculate the observables. In \cite{ayala2} the subtraction of the
Born term and the addition of the GRV parametrization were proposed for the $F_2$
and $xG$ cases. In these cases we have
\begin{eqnarray}
F_2(x,Q^2) = F_2(x,Q^2) \mbox{[Eq. (\ref{diseik2})]} - F_2(x,Q^2)
\mbox{[Born]} + F_2(x,Q^2) \mbox{[GRV]} \,\,\, ,
\label{f2}
\end{eqnarray}
and
\begin{eqnarray}
xG(x,Q^2) = xG(x,Q^2) \mbox{[Eq. (\ref{masterg})]} - xG(x,Q^2)
\mbox{[Born]} + xG(x,Q^2) \mbox{[GRV]} \,\,\, ,
\label{xg}
\end{eqnarray}
where the Born term is the first term in the expansion in $\kappa_q$ and $\kappa_G$
of the equations (\ref{diseik2}) and (\ref{masterg}), respectively (see \cite{prd}
for more details). Here we present this procedure for the $F_2$ slope. In this case
\begin{eqnarray}
\frac{dF_2(x,Q^2)}{dlogQ^2} = \frac{dF_2(x,Q^2)}{dlogQ^2} \mbox{[Eq.
(\ref{df2eik})]} - \frac{dF_2(x,Q^2)}{dlogQ^2} \mbox{[Born]} +
\frac{dF_2(x,Q^2)}{dlogQ^2} \mbox{[GRV]} \,\,\, ,
\label{df2}
\end{eqnarray}
where the Born term is the first term in the expansion in $\kappa_q$ of the
equation (\ref{df2eik}). The last term is associated with
the traditional DGLAP framework, which
at small values of $x$ predicts
\begin{eqnarray}
\frac{dF_2(x,Q^2)}{dlogQ^2} = \frac{10 \alpha_s(Q^2)}{9 \pi} \int_0^{1-x}
dz \, P_{qg}(z) \, \frac{x}{1-z}g\left(\frac{x}{1-z},Q^2\right)\,\,,
\label{df2glap}
\end{eqnarray}
where $\alpha_s(Q^2)$ is the running coupling constant and the splitting function
$P_{qg}(x)$ gives the probability to find a quark with momentum fraction $x$
inside a gluon. This equation describes the scaling violations of the proton
structure function in terms of the gluon distribution. We use the GRV parametrization
as input in the expression (\ref{df2glap}).
In the general approach proposed in this paper we will use the solution of the
equation (\ref{xg}) as input in the first terms of (\ref{f2}) and (\ref{df2}).
As the expression (\ref{xg}) estimates the gluon shadowing, the use of this distribution in
the expressions (\ref{f2}) and (\ref{df2}), which consider the contribution to the SC
associated with the passage of the $q\overline{q}$ pair through the target, allows us to
estimate the SC in both sectors (quark + gluon) of the observables. Our goal is
the discrimination of the distinct contributions to the SC in $F_2$ and
$\frac{dF_2(x,Q^2)}{dlogQ^2}$.
In Fig. \ref{fig1} we present our results for the $F_2$ structure function as a function
of the variable $ln\,(\frac{1}{x})$ for different virtualities. We have used $R^2
= 5\,GeV^{-2}$ in these calculations. In the next section the $R$ dependence
of our results is analysed.
We present our results using the expression (\ref{f2}) (quark sector) and using the
solution of the equation (\ref{xg}) as input in the first term of (\ref{f2}) (quark
+ gluon sector). The predictions of the GRV parametrization are also shown.
We consider the HERA data at low $Q^2$ since for $Q^2 > 6\,GeV^2$ the
SC start to decrease (for a discussion of the SC to $F_2$ considering the quark sector see \cite{ayala2,ayala3}). We can see that at small values of $Q^2$ the predictions
for $F_2$ considering the quark and the quark-gluon sector are approximately
identical. However, for larger values of $Q^2$ the predictions of the quark-gluon
sector disagree with the H1 data \cite{h1}. Therefore,
including the contribution of the gluon shadowing to $F_2$ in the eikonal approach
overestimates the shadowing corrections at large $Q^2$ values.
In Fig. \ref{fig2} we present our results for the SC in the
$\frac{dF_2(x,Q^2)}{dlogQ^2}$ as a function of $x$.
The ZEUS data points \cite{zeus} correspond to different $x$ and $Q^2$ values. The
$(x,Q^2)$ points are averaged values obtained from each of the experimental
data distribution bins. Only the data points with $<Q^2> \, \ge 0.52\,GeV^2$
and $x < 10^{-1}$ were used here.
The SC are estimated considering the expression (\ref{df2}) (quark sector) and
using the solution of the equation (\ref{xg}) as input in the first term of
(\ref{df2}) (quark + gluon sector). Moreover, the predictions of the
traditional DGLAP framework, which
at small values of $x$ is given by the expression (\ref{df2glap}) are also presented.
We can see that the DGLAP predictions fail to describe the ZEUS data at small
values of $x$ and $Q^2$.
However, we see that in the traditional framework (DGLAP + GRV94) a 'turn over'
is also present at small values of $x$ and $Q^2$. Basically, this occurs because the
smallest $Q^2$ value used ($<Q^2> = 0.52\,GeV^2$) is very near the initial virtuality
of the GRV parametrization, where the gluon distribution is 'valence like'. Therefore the
gluon distribution and the $F_2$ slope are approximately flat in this region. For the
second smallest value of $Q^2$ used ($<Q^2> = 1.1\,GeV^2$) the evolution length is
larger, which implies that the gluon distribution (and the $F_2$ slope) already presents
a steep behavior. Connecting these points produces the 'turn over' seen
in Fig. \ref{fig2}. The main problem is that this 'turn over' is higher than observed in the ZEUS data.
This implies that $xG(x,Q^2)$ differs
from the previous standard expectations in the limit of small $x$ and $Q^2$.
This effect is not observed in the $F_2$ structure function since it is relatively insensitive to the
behavior of the gluon distribution, which can be verified by analysing the predictions of the
distinct parametrizations. The gluon distributions predicted by these parametrizations
differ by approximately 50\%.
The prediction of the gluon sector, which is
obtained using the solution of the expression (\ref{xg}) as input in (\ref{df2glap}) is also presented. We can see that at
larger values of $Q^2$ and $x$ all predictions are approximately identical. However,
at small values of $x$ and $Q^2$, the ZEUS data
is not well described considering only the quark or the gluon sector to the SC. The
contribution of the gluon shadowing is essential in the region of small values of $x$
and $Q^2$, {\it i.e.} a shadowed gluon distribution should be used as input in the
eikonalized expression (\ref{df2}) in this kinematic region.
Our conclusion is that
at small values of $x$ and $Q^2$ the contribution of the gluon shadowing should be
considered when estimating the SC to $F_2$ and its slope in the eikonal approach.
While for $F_2$ the contribution of the gluon shadowing may be disregarded, it is
essential for the $F_2$ slope.
The $\frac{dF_2(x,Q^2)}{dlogQ^2}$ data show that a consistent approach
should consider both contributions at small $x$ and $Q^2$.
Before we conclude this section some comments are in order.
We have shown that the $\frac{dF_2(x,Q^2)}{dlogQ^2}$ data can be successfully
described considering the shadowing corrections in the quark and gluon sectors.
A similar conclusion was obtained in \cite{glmn1,glmn}, where the eikonal approach was
also used to estimate the SC in the quark and gluon sectors, but a distinct procedure
was used to estimate the SC for the $F_2$ slope. In \cite{glmn} damping factors
are calculated separately for both sectors and applied to the standard DGLAP predictions.
The behavior of the gluon distribution at small values of $Q^2$ was modeled separately, since the gluon distribution (\ref{masterg}) vanishes for $Q^2 = Q_0^2$.
This procedure introduces a free parameter $\mu^2$,
beyond the usual ones used in the eikonal approach ($Q_0^2, \, R^2$).
The distinct procedure
proposed here estimates the observables directly within the eikonal approach and the
shadowing corrections in the different sectors are calculated within the same approach.
In our calculations there are only two free parameters: (i) the cutoff ($Q_0^2 = 0.4
\, GeV^2$) in order to eliminate the long distance contribution, and (ii) the radius
$R$ ($R^2 = 5 \, GeV^{-2}$). The choice of these parameters is associated with
the initial virtuality of the GRV parametrization used in our calculations,
and the estimates
obtained using the HERA data on diffractive photoproduction of $J/\Psi$
vector meson
(see discussion in the next section) respectively \cite{zeusjpsi,h1jpsi}.
In our procedure the region of
small values of $Q^2 \approx Q_0^2$ is determined by the behavior of the
GRV parameterization in this region, since we are using
the eq. (\ref{xg}) to calculate the gluon distribution.
For $Q^2 = Q_0^2$
the first two terms of (\ref{xg}) vanish and the gluon distribution is described by the GRV parameterization, {\it i.e.} $xG(x,Q_0^2) = xG(x,Q_0^2) \mbox{[GRV]}$.
The eikonal approach describes the ZEUS data
as well as the DGLAP evolution equations with modified parton distributions do.
Recently, the MRST group \cite{mrst} has proposed a different set of parton
parametrizations which considers an initial 'valence-like' gluon distribution.
This
parametrization makes it possible to describe the $F_2$ slope data without any unconventional effect.
This occurs because there is a large freedom in the initial parton distributions and the
initial virtuality used in these parametrizations. We believe that only
a comprehensive
analysis of distinct observables ($F_L, \, F_2^c, \, \frac{dF_2(x,Q^2)}{dlogQ^2}$)
will allow a more careful evaluation of the shadowing corrections at
small $x$ \cite{prd,ayala3}.
\section{The radius dependence of the shadowing corrections}
The value of SC crucially depends on the size of the target \cite{hotspot}. In pQCD
the value of $R$ is associated with the coupling of the gluon ladders to the
proton, or, to put it another way, with how the gluons are distributed within
the proton. $R$ may be of the order of the proton radius if the gluons are distributed
uniformly in the whole proton disc or much smaller
if the gluons are concentrated, {\it i.e.} if the gluons in the proton are confined in a
disc with smaller radius than the size of the proton.
Considering the expression (\ref{f2eik}), assuming $\Omega_{q\overline{q}} < 1$
and expanding the expression to ${\cal{O}}(\Omega^2)$ we obtain
\begin{eqnarray}
F_2(x,Q^2) = \frac{1}{2\pi^3} \sum_{u,d,s} e_f^2 \int_{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}} \frac{d^2r_t}{r_t^4} \int d^2b_t
\left\{\frac{1}{2}\Omega_{q\overline{q}} - \frac{1}{8}\Omega^2_{q\overline{q}}\right\}\,\,.
\label{f2eikexp}
\end{eqnarray}
Using the factorization of the opacity and the normalization of the profile function we
can write $F_2$ as
\begin{eqnarray}
F_2(x,Q^2) = \frac{1}{2\pi^3} \sum_{u,d,s} e_f^2 \int_{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}} \frac{d^2r_t}{r_t^4}
\left\{\frac{1}{2}\overline{\Omega_{q\overline{q}}} - \frac{1}{8}\overline
{\Omega_{q\overline{q}}}^2\int d^2b_t S^2(b_t)\right\}\,\,.
\label{f2eikexp2}
\end{eqnarray}
The second term of the above equation represents the first shadowing correction to
the $F_2$ structure function. Assuming a Gaussian parametrization
for the profile function we obtain that the screening term is proportional to $1/R^2$.
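This scaling can be checked directly: for the Gaussian profile, the integral $\int d^2b_t\,S^2(b_t)$ equals $1/(2\pi R^2)$. A short numerical verification (with an illustrative value of $R^2$) reads:
\begin{verbatim}
# Check that int d^2b S^2(b) = 1/(2 pi R^2) for the Gaussian profile,
# so that the first screening term scales as 1/R^2.
import numpy as np
from scipy.integrate import quad

R2 = 5.0   # GeV^-2, illustrative
S = lambda b: np.exp(-b**2 / R2) / (np.pi * R2)
val, _ = quad(lambda b: 2.0 * np.pi * b * S(b)**2, 0.0, np.inf)
print(val, 1.0 / (2.0 * np.pi * R2))
\end{verbatim}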
Therefore the shadowing corrections are strongly associated with the distributions of the
gluons within the proton. In this section we estimate the radius dependence of the
shadowing corrections, considering the $F_2$ and $\frac{dF_2(x,Q^2)}{dlogQ^2}$
data. First we explain why the radius is expected to be smaller than the proton radius.
Consider the first order contribution to the shadowing corrections, where two ladders
couple to the proton. The ladders may be attached to different constituents of the
proton or to the same constituent. In the first case the shadowing corrections are
controlled by the proton radius, while in the second case these corrections are controlled
by the constituent radius, which is smaller than the proton radius. Therefore, on the
average, we expect that the radius will be smaller than the proton radius. Theoretically,
$R^2$
reflects the integration over $b_t$ in the first diagrams for the SC.
In Fig. \ref{fig3}
we present the ratio
\begin{eqnarray}
R_2 = \frac{F_2(x,Q^2)\mbox{[Eq. (\ref{f2})]}}{F_2(x,Q^2)\mbox{[GRV]}}\,\,,
\label{r2}
\end{eqnarray}
where
$F_2(x,Q^2) [\mbox{GRV}] = \sum_{u,d,s} e_f^2 \,[xq(x,Q^2) + x
\overline{q}(x,Q^2)] + F_2^c(x,Q^2)$
is calculated using the GRV parametrization.
For the treatment of the charm component of the structure function we consider the charm
production via boson-gluon fusion \cite{grv95}. In this paper we assume $m_c = 1.5\,GeV$.
In Fig. \ref{fig4} we present the ratio
\begin{eqnarray}
R_s = \frac{\frac{dF_2(x,Q^2)}{dlogQ^2}\mbox{[Eq. (\ref{df2})]}}{\frac{dF_2(x,Q^2)}{dlogQ^2}\mbox{[GRV]}}\,\,.
\label{rs}
\end{eqnarray}
The function $\frac{dF_2(x,Q^2)}{dlogQ^2}\mbox{[GRV]}$ was calculated using the
expression (\ref{df2glap}) and the GRV parametrization.
Our results are presented as a function of $ln(\frac{1}{x})$ at different virtualities.
We can see that the SC are larger in the ratio $R_s$ and that our predictions of the SC
depend strongly on the radius $R$. Moreover, we clearly see that the SC scale
inversely with $R^2$.
In Fig. \ref{fig5} we compare our predictions for the SC in the $F_2$ structure
function and the H1 data \cite{h1} as a function of $ln(\frac{1}{x})$ at different
virtualities and some values of the radius. Our goal is not a best fit of the radius, but
to eliminate some values of the radius by comparing the predictions of the eikonal approach
with the HERA data. We consider only the quark sector in the calculation of the SC, which
is a good approximation in this observable, as shown in the previous section. The
choice $R^2 = 1.5 \, GeV^{-2}$ does not describe the data, {\it i.e.} the data discard
the possibility of very large SC in the HERA kinematic region. However, there are still
two possibilities for the radius which reasonably describe the $F_2$ data. To discriminate
between these possibilities we must consider the behavior of the $F_2$ slope.
In Fig. \ref{fig6} we present our results for $\frac{dF_2(x,Q^2)}{dlogQ^2}$ considering
the SC only in the quark sector. Although in the previous section we demonstrated that
the contributions of the quark and gluon sectors should be considered,
here we test another possibility to describe the data: the dependence on the radius $R$.
Our results show that the best fit of the data occurs at small values of $R^2$, which are
discarded by the $F_2$ data. Therefore, in agreement with our previous conclusions, we
must consider a general approach to describe consistently the $F_2$ and
$\frac{dF_2(x,Q^2)}{dlogQ^2}$ data. In Fig. \ref{fig7} we present our results for $\frac{dF_2(x,Q^2)}{dlogQ^2}$ considering the SC in the gluon and quark sector for
different values of $R^2$, calculated using the general approach proposed in the previous section.
The best result occurs for $R^2 = 5\,GeV^{-2}$, which also describes the $F_2$ data.
The value for the squared radius $R^2 = 5\,GeV^{-2}$ obtained in our analysis agrees
with the estimates obtained using the
HERA data on diffractive photoproduction of $J/\Psi$ meson \cite{zeusjpsi,h1jpsi}.
Indeed, the experimental values for the slope are
$B_{el} = 4 \, GeV^{-2}$ and $B_{in} = 1.66\,GeV^{-2}$, and the cross sections
for $J/\Psi$ diffractive production with and without proton dissociation
are equal. Neglecting the $t$ dependence of the pomeron-vector meson
coupling the value of $R^2$ can be estimated \cite{plb}. It turns out that
$R^2 \approx 5\,GeV^{-2}$, {\it i.e.}, approximately 2 times smaller than
the radius of the proton.
As an
additional comment let us say that the SC to $F_2$ and
its slope may also be analysed using a two radii model for the
proton \cite{glmn1}. This analysis is motivated by the large difference
between the measured slopes in elastic and inelastic diffractive
leptoproduction of vector mesons in DIS. An analysis using the two radii
model for the proton is not a goal of this paper, since a definite
conclusion on the correct model is still under debate.
The summary of this point is that the analysis of the $F_2$ and $\frac{dF_2(x,Q^2)}
{dlogQ^2}$ data using the eikonal model implies that the gluons are not distributed
uniformly in the whole proton disc, but are concentrated in smaller regions.
This conclusion motivates an analysis of the jet production, which probes smaller
regions within the proton, using an approach which considers the shadowing corrections.
\section{A screening boundary}
The common feature of the BFKL and DGLAP equations is the steep increase of the
cross sections as $x$ decreases. This steep increase cannot persist down to arbitrary
low values of $x$ since it violates a fundamental principle of quantum theory, {\it i.e.}
unitarity. In the context of the relativistic quantum field theory of the strong interactions, unitarity
implies that the cross section of a hadronic scattering reaction cannot increase with the
energy $s$ faster than $log^2 \,s$: Froissart's theorem \cite{froi}. The Froissart
bound cannot be proven for off-mass-shell amplitudes \cite{yndu}, which is the case for
deep inelastic scattering \cite{glmuni}.
Our goal in this section is to use the $s$-channel unitarity (\ref{uni}) and the eikonal approach
to estimate an upper limit beyond which the shadowing corrections cannot be disregarded in $F_2$
and its slope.
Considering the expression (\ref{f2eik}) for the $F_2$ structure function,
we can write a $b_t$ dependent structure function given by
\begin{eqnarray}
F_2(x,Q^2,b_t) = \frac{1}{2\pi^3} \sum_{u,d,s} e_f^2 \int_{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}} \frac{dr^2_t}{r_t^4}
\left\{1 - e^{-\frac{1}{2}\Omega_{q\overline{q}}(x,r_t,b_t)}\right\}\,\,.
\label{f2eikuni}
\end{eqnarray}
The relation between the opacity and the gluon distribution (\ref{omega}), obtained in
\cite{plb}, is valid in the kinematical region where $\Omega \ll 1$. In the eikonal
approach for pQCD we make the assumption that the relation (\ref{omega}) is valid in
the whole kinematic region.
To obtain an estimate of the region where the SC are important we consider
an upper limit for the expression (\ref{f2eikuni}), which occurs for $\Omega \gg 1$. In this limit
the exponential term in the above equation can be disregarded. As the shadowing
terms are negative and reduce the growth of the $F_2$ structure function,
by disregarding them we are estimating an upper limit for the
region where these terms are not important, {\it i.e.} a screening boundary
which establishes the region where the shadowing corrections are required to calculate the observables.
The $b_t$ dependent structure function in the limit $\Omega \gg 1$ is such that
\begin{eqnarray}
F_2(x,Q^2,b_t) < \frac{1}{2\pi^3} \sum_{u,d,s} e_f^2 \int_{\frac{1}
{Q^2}}^{\frac{1}{Q_0^2}} \frac{dr^2_t}{r_t^4} \,\,.
\label{f2eikuni3}
\end{eqnarray}
Making the assumption that the $b_t$ dependence of the structure function is factorized \cite{plb}:
\begin{eqnarray}
F_2(x,Q^2,b_t) = F_2(x,Q^2) \, S(b_t) \nonumber \,\,,
\end{eqnarray}
and considering a Gaussian parametrization for the profile function and its value
for $b_t = 0$ we get ($n_f = 3$)
\begin{eqnarray}
F_2(x,Q^2) < \frac{R^2}{3\pi^2} \int_{\frac{1}{Q^2}}^{\frac{1}{Q_0^2}}
\frac{dr^2_t}{r_t^4} \,\,.
\label{f2eikuni4}
\end{eqnarray}
As a result
\begin{eqnarray}
F_2(x,Q^2) & < & \frac{R^2}{3\pi^2} (Q^2 - Q_0^2) \nonumber \\
& < & \frac{R^2 Q^2}{3\pi^2} \,\,.
\label{unif2}
\end{eqnarray}
The above limit is our estimate of the screening boundary for the $F_2$ structure function.
The screening boundary for the $F_2$ slope follows directly from the
expression (\ref{f2eikuni3}). We get
\begin{eqnarray}
\frac{dF_2(x,Q^2)}{dlogQ^2} < \frac{R^2 Q^2}{3\pi^2} \,\,.
\label{unidf2}
\end{eqnarray}
This expression agrees with the expression obtained in \cite{plb}.
Clearly expressions (\ref{unif2}) and (\ref{unidf2}) serve only as a rough prescription
for estimating the region where the corrections required by unitarity cannot be disregarded. A more
rigorous treatment would be desirable, but remains to be developed.
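For orientation, the boundary is trivial to evaluate; the sketch below (the set of $Q^2$ values is illustrative) tabulates $R^2 Q^2/(3\pi^2)$ for $R^2 = 5\,GeV^{-2}$:
\begin{verbatim}
# Evaluate the screening boundary of Eqs. (unif2)/(unidf2):
# F_2 and dF_2/dlogQ^2 should stay below R^2 Q^2 / (3 pi^2).
import numpy as np

R2 = 5.0                           # GeV^-2, as used in the text
for Q2 in [0.5, 1.1, 2.5, 8.5]:    # GeV^2, illustrative values
    print(Q2, R2 * Q2 / (3.0 * np.pi**2))
\end{verbatim}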
Using the above expressions we can make an analysis of HERA data. We use $R^2 = 5
\, GeV^{-2}$ in the calculations. In Fig. \ref{fig8} we compare our predictions
with the $F_2$ data from the H1 collaboration. We can see that data at larger values of
$Q^2$ ($Q^2 \ge 8.5 \, GeV^2$) do not violate the limit (\ref{unif2}). However, the
data at smaller values of $Q^2$ and $x$ violate this limit. This indicates
that we should consider the SC for this kinematical region. In Fig. \ref{fig9} we present
our results for the $F_2$ slope. We see that the data for small $Q^2$
and $x$ ($Q^2 \le 2.5 \,GeV^2$, $x \le 10^{-4}$) violate
the limit (\ref{unidf2}), stressing the need for the shadowing corrections.
Therefore for small values of $x$ and $Q^2$ the observables must be calculated using
an approach which takes them into account.
\section{Summary}
In this paper we have presented our analysis of the shadowing corrections in the scaling
violations using the eikonal approach. We have shown that the $\frac{dF_2(x,Q^2)}{dlogQ^2}$
data can be described successfully considering the shadowing corrections in the quark and gluon sectors.
Furthermore, we have considered the radius dependence of these corrections and a
unitarity boundary. From the analysis of the $R$ dependence of the SC in the eikonal
approach we have shown that the value $R^2 = 5\, GeV^{-2}$ allows us to describe
the HERA data. This value agrees with the estimate obtained independently from
diffractive $J/\Psi$ photoproduction. Using the eikonal approach and the assumption of $b_t$ factorization
of the $F_2$ structure function, a screening boundary was analysed.
This boundary constrains the region where the corrections required by unitarity may be disregarded;
in other words, it gives a limit for the applicability of the standard perturbative QCD framework.
We have shown that the HERA data at small $x$ and $Q^2$
violate this limit, which implies that the shadowing corrections are important
in the HERA kinematic region.
Our conclusion is that the shadowing effect is already important in the HERA kinematic region.
We believe that the analysis of distinct observables
($F_L, \, F_2^c, \, \frac{dF_2(x,Q^2)}{dlogQ^2}$) at small values of $x$
and $Q^2$ will make it possible to clearly establish the shadowing corrections.
\section*{Acknowledgments}
MBGD acknowledges enlightening discussions with F. Halzen at University of Wisconsin,
S. J. Brodsky at SLAC and E. M. Levin during the completion of this work.
|
2,869,038,155,123 | arxiv | \section{Introduction}
\label{S:1}
High-fidelity simulations of systems characterized by nonlinear partial differential equations (PDEs) represent large compute costs and are prohibitive for decision-making tasks for many fast-query applications. In order to reduce costs, there has recently been significant interest in the reduced-order modeling (ROM) of such systems \cite{carlberg2011efficient,wang2012proper,san2015principal,ballarin2015supremizer,san2018extreme,wang2019non,choi2019space}. As such, this field finds extensive application in control \cite{proctor2016dynamic,peitz2019multiobjective,noack2011reduced,rowley2017model}, multi-fidelity optimization \cite{peherstorfer2016optimal}, uncertainty quantification \cite{sapsis2013statistically,zahr2018efficient} and data-assimilation \cite{arcucci2019optimal} among others. However, ROMs are limited in how they handle nonlinear dependence and perform poorly for complex physical phenomena, which are inherently multiscale in space and time \cite{wells2017evolve,xie2018data,san2018neural,san2019artificial}. Researchers continue to search for efficient and reliable ROM techniques for such transient nonlinear systems \cite{hess2019localized,kramer2019nonlinear,swischuk2019projection,hamzi2019local,rahman2019dynamic,wang2019non}. The identification of a reduced basis to ensure a compressed representation that is minimally \emph{lossy} is a core component of most ROM development strategies (some examples include \cite{san2014basis,korda2018data,kalb2007intrinsic}). Once this basis is identified, we need a cost-effective strategy for accurate nonlinear dynamical system evolution to reproduce the full-order spatiotemporal complexity of the problem in the reduced basis. For example, we could use intrusive methods (which project the governing equations onto the reduced-basis), as seen in \cite{kalashnikova2010stability,mohebujjaman2019physically}, which use a Galerkin projection or \cite{carlberg2011efficient,xiao2013non,fang2013non}, which use the Petrov-Galerkin method (see \cite{carlberg2017galerkin} for the comparison of these two methods). Finally, reconstruction of the compressed representation is required for full-order space post-processing and visualization. In this study, we utilize convolutional autoencoders (CAE) and long short-term memory neural networks (LSTM) \cite{hochreiter1997long} for parametric ROMs of advection-dominated and inviscid systems. The former are used to identify reduced-representations of the full-order fields and the latter are used for the temporal evolution of these compressed representations. LSTMs have recently become popular for the non-intrusive characterization of dynamical systems \cite{vlachas2018data,ahmed2019long,maulik2019time,mohan2019compressed,mohan2018deep,wang2019recurrent} although most such studies perform latent space embedding through the use of linear embeddings such as the proper orthogonal decomposition (POD). Additionally, we propose a parametric extension of the CAE-LSTM for exploring parametric search spaces through training on multiple offline simulations. In contrast with studies outlined in \cite{lusch2018deep,erichson2019physics}, we deploy our framework on problems requiring shock capturing mechanisms as well as for fully inviscid system simulations. Ref. \cite{lee2020model} also uses a CAE to nonlinearly embed states, but solves the governing equations on the
nonlinear manifold defined by the autoencoder. We reduce computation by evolving with an LSTM network instead. An important difference with another similar study outlined in \cite{gonzalez2018learning} is that our framework allows for the explicit embedding of control parameters such as viscosity or a parameterization of the initial condition into the LSTM. This allows for the independent training of our CAE and LSTM neural networks. A similar implementation has been demonstrated in \cite{xu2019multi} where another neural network is used to link global parameters with the latent space representations. We simplify this by directly embedding the parameters into the latent space. In \citep{maulik2020latent}, the latent space was obtained using a CAE, after which a Gaussian process regressor was utilized for a continuous representation of the temporal evolution of the state. However, this method is limited when the number of snapshots is very large due to the inherent limitations of most Gaussian process regression techniques.
Our forward models for the purpose of data generation and subsequent testing are given by parametric partial differential equations. For the rest of this article, we shall represent a generic partial differential equation using the following notation:
\begin{linenomath*}
\begin{align}
\label{gen1}
\dot{\mathbf{q}}(\mathbf{x},t,\mathbf{p}) + \mathcal{N}[\mathbf{q}(\mathbf{x},t,\mathbf{p})] + \mathcal{L}[\mathbf{q}(\mathbf{x},t,\mathbf{p}); \mathbf{p}] = 0, \quad (\mathbf{x},t,\mathbf{p}) \in \Omega \times \mathcal{T} \times \mathcal{P},
\end{align}
\end{linenomath*}
where $\Omega \subset \mathbb{R}^i, \mathcal{T} = [0,T], \mathcal{P} \subset \mathbb{R}^p$, and $\mathcal{N}$, $\mathcal{L}$ are non-linear and linear operators, respectively. Our system is characterized by a solution field $\mathbf{q} : \Omega \times \mathcal{T} \times \mathcal{P} \rightarrow \mathbb{R}^d$ and appropriately chosen initial and boundary conditions, where $i$ is the number of spatial dimensions, $d$ is the number of dependent variables of the PDE, and $p$ is the number of control parameters in the problem. We assume that our system of equations can be solved in space-time on a discrete grid resulting in the following system of parameterized ODEs:
\begin{linenomath*}
\begin{align}
\dot{\mathbf{q}_h}(t,\mathbf{p}) + \mathbf{N}_{h}[\mathbf{q}_h(t,\mathbf{p})] + \mathbf{L}_h[\mathbf{q}_h(t,\mathbf{p}); \mathbf{p}] = 0 \quad(t,\mathbf{p}) \in \mathcal{T} \times \mathcal{P},
\end{align}
\end{linenomath*}
where $\mathbf{q}_h : \mathcal{T} \times \mathcal{P} \rightarrow \mathbb{R}^{N_h}$ is a discrete solution and $N_h$ is the number of spatial degrees of freedom. In this problem, our goal is to bypass the solution of Equation \ref{gen1} by constructing a compression manifold and a time advancement technique on this manifold solely from training data. Such ROMs hold great promise for characterizing the spatiotemporal dynamics of systems for which observations may be available, but little knowledge of the governing equations exists.
{To summarize, the contributions of this article are:
\begin{itemize}
\item We propose a deep learning based emulation strategy for nonlinear partial differential equations.
\item We introduce a convolutional autoencoder architecture that obtains nonlinear embeddings with high compression ratios.
\item We propose the use of long short-term memory networks for the evolution of the state in embedded space.
\item We make our emulator parametric by passing the parameters into the latent space. This allows generalization across, for example, a range of viscosities.
\item We demonstrate the performance of the proposed formulation for non-intrusive modeling of advection dominated physics obtained from the viscous Burgers and inviscid shallow water equations.
\end{itemize}}
\section{Proper orthogonal decomposition}
\label{S:2}
In this section, we review the POD technique for the construction of a reduced basis \cite{kosambi1943statistics,berkooz1993proper}. The interested reader may also find an excellent explanation of POD and its relationship with other dimension-reduction techniques in \cite{taira2019modal}. The POD procedure is tasked with identifying a space
\begin{linenomath*}
\begin{align}
\mathbf{X}^{f}=\operatorname{span}\left\{\boldsymbol{\vartheta}^{1}, \dots, \boldsymbol{\vartheta}^{f}\right\},
\end{align}
\end{linenomath*}
which approximates snapshots optimally with respect to the $L^2-$norm. The process of $\boldsymbol{\vartheta}$ generation commences with the collection of snapshots in the \emph{snapshot matrix}
\begin{linenomath*}
\begin{align}
\mathbf{S} = [\begin{array}{c|c|c|c}{\hat{\mathbf{q}}^{1}_h} & {\hat{\mathbf{q}}^{2}_h} & {\cdots} & {\hat{\mathbf{q}}^{N_{s}}_h}\end{array}] \in \mathbb{R}^{N_{h} \times N_{s}},
\end{align}
\end{linenomath*}
where $N_s$ is the number of snapshots, and $\hat{\mathbf{q}}^i_h \in \mathbb{R}^{N_h}$ corresponds to an individual snapshot in time of the discrete solution domain. Our POD bases can then be extracted efficiently through the method of snapshots where we solve the eigenvalue problem on the correlation matrix $\mathbf{C} = \mathbf{S}^T \mathbf{S} \in \mathbb{R}^{N_s \times N_s}$. Then
\begin{linenomath*}
\begin{align}
\begin{gathered}
\mathbf{C} \mathbf{W} = \mathbf{W} \Lambda,
\end{gathered}
\end{align}
\end{linenomath*}
where $\Lambda = \operatorname{diag}\left\{\lambda_{1}, \lambda_{2}, \cdots, \lambda_{N_{s}}\right\} \in \mathbb{R}^{N_{s} \times N_{s}}$ is the diagonal matrix of eigenvalues and $\mathbf{W} \in \mathbb{R}^{N_{s} \times N_{s}}$ is the eigenvector matrix. Our POD basis matrix can then be obtained by
\begin{linenomath*}
\begin{align}
\begin{gathered}
\boldsymbol{\vartheta} = \mathbf{S} \mathbf{W} \in \mathbb{R}^{N_h \times N_s}.
\end{gathered}
\end{align}
\end{linenomath*}
In practice a reduced basis $\boldsymbol{\psi} \in \mathbb{R}^{N_h \times N_r}$ is built by choosing the first $N_r$ columns of $\boldsymbol{\vartheta}$ for the purpose of efficient ROMs, where $N_r \ll N_s$. This reduced basis spans a space given by
\begin{linenomath*}
\begin{align}
\mathbf{X}^{r}=\operatorname{span}\left\{\boldsymbol{\psi}^{1}, \dots, \boldsymbol{\psi}^{N_r}\right\}.
\end{align}
\end{linenomath*}
The coefficients of this reduced basis (which capture the underlying temporal effects) may be extracted as
\begin{linenomath*}
\begin{align}
\begin{gathered}
\mathbf{A} = \boldsymbol{\psi}^{T} \mathbf{S} \in \mathbb{R}^{N_r \times N_s}.
\end{gathered}
\end{align}
\end{linenomath*}
The POD approximation of our solution is then obtained via
\begin{linenomath*}
\begin{align}
\hat{\mathbf{S}} = [\begin{array}{c|c|c|c}{\tilde{\mathbf{q}}^{1}_h} & {\tilde{\mathbf{q}}^{2}_h} & {\cdots} & {\tilde{\mathbf{q}}^{N_{s}}_h}\end{array}] \approx \boldsymbol{\psi} \mathbf{A} \in \mathbb{R}^{N_h \times N_s},
\end{align}
\end{linenomath*}
where $\tilde{\mathbf{q}}_h^i \in \mathbb{R}^{N_h}$ corresponds to the POD approximation to $\hat{\mathbf{q}}_h^i$. The optimal nature of reconstruction may be understood by defining the relative projection error
\begin{linenomath*}
\begin{align}
\frac{\sum_{i=1}^{N_{s}}\left\|\hat{\mathbf{q}}^i_h-\tilde{\mathbf{q}}^i_h \right\|_{\mathbb{R}^{N_{h}}}^{2}}{\sum_{i=1}^{N_{s}}\left\|\hat{\mathbf{q}}^i_h\right\|_{\mathbb{R}^{N_{h}}}^{2}}=\frac{\sum_{i=N_r+1}^{N_{s}} \lambda_{i}^{2}}{\sum_{i=1}^{N_{s}} \lambda_{i}^{2}},
\end{align}
\end{linenomath*}
which exhibits that with increasing retention of POD bases, increasing reconstruction accuracy may be obtained. We remark that for dimension $d>1$, the solution variables may be stacked to obtain this set of bases that are utilized for the reduction of each PDE within the coupled system. Another approach may be to obtain reduced bases for each dependent variable within the coupled system and evolve each PDE on a different manifold. Each dependent variable is projected onto bases constructed from its snapshots alone. This affects the computation of $\mathcal{N}$ for computing the updates for each dimension in $\mathbf{q}$. In practice, this operation manifests itself in the concatenation of reduced bases to obtain one linear operation for reconstruction of all field quantities.
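A minimal NumPy sketch of the method of snapshots described above is given below; the synthetic snapshot matrix and the explicit normalization of the modes (left implicit in the equations) are our own assumptions.
\begin{verbatim}
# Sketch of POD via the method of snapshots: S is N_h x N_s.
import numpy as np

def pod_basis(S, N_r):
    C = S.T @ S                          # N_s x N_s correlation matrix
    lam, W = np.linalg.eigh(C)           # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1]          # reorder to descending
    lam, W = lam[idx], W[:, idx]
    Phi = S @ W                          # unscaled POD modes
    Phi /= np.linalg.norm(Phi, axis=0)   # normalize each column
    return Phi[:, :N_r], lam

S = np.random.rand(256, 40)              # synthetic snapshots (illustrative)
psi, lam = pod_basis(S, N_r=4)
A = psi.T @ S                            # temporal coefficients
S_hat = psi @ A                          # rank-N_r reconstruction
\end{verbatim}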
\subsection{The POD Galerkin projection}
The POD basis may be leveraged for a Galerkin projection of each partial differential equation forming the coupled system onto its corresponding reduced basis. We start by revisiting Equation (\ref{gen1}) written in the form of an evolution equation for fluctuation components i.e.,
\begin{linenomath*}
\begin{align}
\dot{\hat{\mathbf{q}}}_h(\mathbf{x},t,\mathbf{p}) + \mathcal{N}_h[\hat{\mathbf{q}}_h(\mathbf{x},t,\mathbf{p})] + \mathcal{L}_h[\hat{\mathbf{q}}_h(\mathbf{x},t,\mathbf{p}); \mathbf{p}] = 0,
\end{align}
\end{linenomath*}
which can be expressed in the reduced basis as
\begin{linenomath*}
\begin{align}
\boldsymbol{\psi} \dot{\mathbf{q}_r}(t,\mathbf{p}) + \mathcal{N}_h[\boldsymbol{\psi} \mathbf{q}_r(t,\mathbf{p})] + \mathcal{L}_h[\boldsymbol{\psi} \mathbf{q}_r(t,\mathbf{p}); \mathbf{p}] = 0,
\end{align}
\end{linenomath*}
where $\mathbf{q}_r \in \mathbb{R}^{N_r}$ corresponds to the temporal coefficients at one time instant of the system evolution (i.e., equivalent to a particular column of $\mathbf{A}$). The orthogonal nature of the reduced basis can be leveraged to obtain
\begin{linenomath*}
\begin{align}
\dot{\mathbf{q}_r}(t,\mathbf{p}) + \mathcal{N}_h[\boldsymbol{\psi} \mathbf{q}_r(t,\mathbf{p})] + \mathcal{L}_r[\mathbf{q}_r(t,\mathbf{p}); \mathbf{p}] = 0,
\end{align}
\end{linenomath*}
where $\mathcal{L}_r$ is a precomputed Laplacian operator in reduced space. This equation is denoted the POD Galerkin-projection formulation (POD-GP). We have assumed that the residuals generated by the truncated representation of the full-order model are orthogonal to the reduced basis. A significant source of error in the forward evolution of this system of equations is due to the absence of higher-basis nonlinear interactions, as shown in Section \ref{SS:6}. Also, POD-GP essentially consists of $N_r$ coupled ODEs and is solved by a standard fourth-order accurate Runge-Kutta method. The reduced degrees of freedom lead to very efficient forward solves of the problem even though accuracy is limited. This transformed problem has initial conditions given by
\begin{linenomath*}
\begin{align}
\mathbf{q}_r(t=0)=\left(\boldsymbol{\psi}^T \hat{\mathbf{q}}_h(t=0) \right).
\end{align}
\end{linenomath*}
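A sketch of the resulting latent-space integration is shown below; the right-hand side is a placeholder for the projected operators $-\mathbf{N}_{h}[\boldsymbol{\psi}\mathbf{q}_r]-\mathcal{L}_r[\mathbf{q}_r]$, which depend on the specific PDE.
\begin{verbatim}
# Sketch of advancing the POD-GP system dq_r/dt = rhs(q_r) with classical RK4.
import numpy as np

def rk4_step(rhs, q, dt):
    k1 = rhs(q)
    k2 = rhs(q + 0.5 * dt * k1)
    k3 = rhs(q + 0.5 * dt * k2)
    k4 = rhs(q + dt * k3)
    return q + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

rhs = lambda q: -0.1 * q    # placeholder for the projected operators
q_r = np.ones(4)            # q_r(t=0) = psi^T q_hat(t=0)
for _ in range(100):
    q_r = rk4_step(rhs, q_r, dt=0.01)
\end{verbatim}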
\section{Deep neural networks}
In the following section, we introduce our deep neural network architectures for establishing a viable emulation strategy for data obtained from nonlinear partial differential equations.
\subsection{Convolutional autoencoders}
\label{S:3}
Autoencoders are neural networks that learn a new representation of the input data, usually with lower dimensionality. The initial layers, called the \emph{encoder}, map the input $\mathbf{x}\in \mathbb{R}^m$ to a new representation $\mathbf{z} \in \mathbb{R}^k$ with $k << m$. The remaining layers, called the \emph{decoder}, map $\mathbf{z}$ back to $\mathbb{R}^m$ with the goal of reconstructing $\mathbf{x}$. The objective is to minimize the reconstruction error. Autoencoders are unsupervised; the data $\mathbf{x}$ is given, but the representation $\mathbf{z}$ must be learned.
More specifically, we use autoencoders that have some convolutional layers. In a convolutional layer, instead of learning a matrix that connects all $m$ neurons of layer's input to all $n$ neurons of the layer's output, we learn a set of filters. Each filter $\mathbf{f_i}$ is convolved with patches of the layer's input. Suppose a 1-d convolutional layer has filters of length $m_{f_i}$. Then each of the layer's output neurons corresponding to filter $\mathbf{f_i}$ is connected to a patch of $m_{f_i}$ of the layer's input neurons. In particular, a 1-d convolution of filter $\mathbf{f}$ and patch $\mathbf{p}$ is defined as $\mathbf{f} \ast \mathbf{p} = \sum_j f_j p_j$ (For neural networks, convolutions are usually technically implemented as cross-correlations). Then, for a typical 1-d convolutional layer, the layer's output neuron $y_{ij} = \varphi (\mathbf{f_i} \ast \mathbf{p_j} +B_{i})$ where $\varphi$ is an activation function, and $B_i$ are the entries of a bias term. As $j$ increases, patches are shifted by stride $s$. For example, a 1-d convolutional layer with a filter $f_0$ of length $m_f = 3$ and stride $s=1$ could be defined so that $y_{0j}$ involves the convolution of $f_0$ and inputs $j-1, j$, and $j+1$. To calculate the convolution, it is common to add zeros around the inputs to a layer, which is called \emph{zero padding}. In the decoder, we use deconvolutional layers to return to the original dimension. These layers upsample with nearest-neighbor interpolation.
Two-dimensional convolutions are defined similarly, but each filter and each patch are two-dimensional. A 2-d convolution sums over both dimensions, and patches are shifted both ways. For a typical 2-d convolutional layer, the output neuron $y_{ijk} = \varphi (\mathbf{f_i} \ast \mathbf{p_{jk}} +B_{i})$. Input data can also have a ``channel'' dimension, such as RGB for images. The convolutional operator sums over channel dimensions, but each patch contains all of the channels. The filters remain the same size as patches, so they can have different weights for different channels. It is common to follow a convolutional layer with a \emph{pooling} layer, which outputs a sub-sampled version of the input. In this paper, we specifically use max-pooling layers. Each output of a max-pooling layer is connected to a patch of the input, and it returns the maximum value in the patch.
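The arithmetic of the 1-d layer described above can be made concrete with a few lines of NumPy; the filter count and input size below are illustrative.
\begin{verbatim}
# Sketch of a 1-d convolutional layer: y_ij = relu(f_i * p_j + B_i),
# with stride 1 and zero padding of one cell on each side.
import numpy as np

def conv1d(x, filters, bias):
    xp = np.pad(x, 1)                        # zero padding
    n, m_f = len(x), filters.shape[1]
    y = np.zeros((filters.shape[0], n))
    for i, f in enumerate(filters):
        for j in range(n):
            y[i, j] = np.dot(f, xp[j:j + m_f]) + bias[i]
    return np.maximum(y, 0.0)                # ReLU activation

x = np.random.rand(16)
filters = np.random.randn(4, 3)              # 4 filters of length 3
y = conv1d(x, filters, bias=np.zeros(4))     # y has shape (4, 16)
\end{verbatim}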
\subsection{Long short-term memory networks}
\label{S:4}
The LSTM network is a specialization of the recurrent neural network and was introduced to consider time-delayed processes where events further back in the past may potentially affect predictions for the current location in the sequence. The basic equations of the LSTM in our context for an arbitrary input variable $\mathbf{a}$ are given by
\begin{linenomath*}
\begin{align}
\begin{split}
\text{input gate: }& \boldsymbol{G}_{i}=\boldsymbol{\varphi}_{S} \circ \mathcal{F}_{i}^{N_{c}}(\mathbf{a}), \\
\text{forget gate: }& \boldsymbol{G}_{f}=\boldsymbol{\varphi}_{S} \circ \mathcal{F}_{f}^{N_{c}}(\mathbf{a}), \\
\text{output gate: }& \boldsymbol{G}_{o}=\boldsymbol{\varphi}_{S} \circ \mathcal{F}_{o}^{N_{c}}(\mathbf{a}), \\
\text{internal state: }& \boldsymbol{s}_{t}=\boldsymbol{G}_{f} \odot \boldsymbol{s}_{t-1}+\boldsymbol{G}_{i} \odot\left(\boldsymbol{\varphi}_{T} \circ \mathcal{F}_{\mathbf{a}}^{N_{c}}(\mathbf{a})\right), \\
\text{output: }& \mathbf{h}_t = \boldsymbol{G}_{o} \circ \boldsymbol{\varphi}_{T}\left(\boldsymbol{s}_{t}\right),
\end{split}
\end{align}
\end{linenomath*}
where $\mathbf{a}$ is a vector of inputs comprising a snapshot of information in time. Within this study, this vector is generally the encoded representation after either the POD or CAE embedding. Also, $\boldsymbol{\varphi}_{S}$ and $\boldsymbol{\varphi}_{T}$ refer to sigmoid and hyperbolic tangent activation functions respectively, and $N_c$ is the number of hidden layer units in the LSTM network. Here, $\mathcal{F}^{n}$ refers to a linear operation given by a matrix multiplication and subsequent bias addition, i.e.,
\begin{linenomath*}
\begin{align}
\mathcal{F}^{n}(\boldsymbol{x})=\boldsymbol{W} \boldsymbol{x}+\boldsymbol{B},
\end{align}
\end{linenomath*}
where $\boldsymbol{W} \in \mathbb{R}^{n \times m}$ and $\boldsymbol{B} \in \mathbb{R}^{n}$ for $\mathbf{x} \in \mathbb{R}^m$ and where $\mathbf{a} \odot \mathbf{b}$ refers to a Hadamard product of two vectors. The LSTM implementation is used to advance $\mathbf{a}$ as a function of time. The LSTM network's primary utility is the ability to control information flow through time with the use of the gating mechanisms. A quantity that preserves information of past inputs and predictions is the internal state $\mathbf{s}_t$, which is updated using the result of the input and forget gates every time the LSTM operations are performed. A greater value of the forget gate (post sigmoidal activation) allows for a greater preservation of past state information through the sequential inference of the LSTM, whereas a smaller value suppresses the influence of the past. Details of our LSTM deployments for the different experiments utilized in this article are provided in Section \ref{S:5}.
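A single step of the gating logic above can be sketched as follows; we follow the equations exactly as written (the gates act on the input $\mathbf{a}$ alone), and the random weights stand in for the trained operators $\mathcal{F}^{N_c}$.
\begin{verbatim}
# Sketch of one LSTM step following the stated gate equations.
import numpy as np

sig = lambda x: 1.0 / (1.0 + np.exp(-x))

def lstm_step(a, s_prev, W, B):
    G_i = sig(W['i'] @ a + B['i'])            # input gate
    G_f = sig(W['f'] @ a + B['f'])            # forget gate
    G_o = sig(W['o'] @ a + B['o'])            # output gate
    s = G_f * s_prev + G_i * np.tanh(W['a'] @ a + B['a'])
    return G_o * np.tanh(s), s                # output h_t and state s_t

N_c, m = 20, 2
W = {k: np.random.randn(N_c, m) for k in 'ifoa'}
B = {k: np.zeros(N_c) for k in 'ifoa'}
h, s = lstm_step(np.random.rand(m), np.zeros(N_c), W, B)
\end{verbatim}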
\subsection{Combining CAE and LSTM for surrogate modeling}
{Our data-driven emulation strategy shall rely on the use of CAE for dimensionality reduction and LSTM for latent space temporal evolution of the state. The benefit of this formulation, in comparison to POD-GP, is the improved compression ratios obtained by the nonlinear embedding of the CAE and the equation-free evolution of the state using the LSTM. The basic schematic for this formulation is shown in Figure \ref{1D_Schematic} where a one-dimensional field is compressed to a low-dimensional latent space and then evolved non-intrusively. Our training framework is \emph{separate} in that the snapshot data of the flow-field is first used to obtain a low dimensional embedding before a data-driven time-series forecast technique is used for evolving the state in this space. As mentioned previously, this is in contrast to previous studies where latent space embedding and temporal evolution have been performed in a simultaneous optimization \cite{lusch2018deep,gonzalez2018learning,erichson2019physics}. We pursue this direction for our emulation strategy to allow for greater flexibility in modeling the evolution of the latent space. In particular, the choice for a novel state evolution mechanism will not require retraining a nonlinear embedding. In addition, uneven samples of snapshot data (for instance when new snapshots become available at arbitrary locations in time) may be deployed with time-series methods that are customized for irregular data without retraining an embedding \cite{rubanova2019latent}. The deployment of a concurrent optimization for an embedding and a time series forecast strategy usually relies on the construction of a loss-function that penalizes reconstruction and forecast accuracy together. The joint optimization can result in slower training and requires deciding how to weight the two loss functions. Specific details of the CAE and LSTM combinations for our test cases shall be described in Section \ref{S:5}.}
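The deployment stage of this separate strategy can be summarized in a few lines. In the sketch below, \texttt{encoder}, \texttt{lstm} and \texttt{decoder} are hypothetical handles to the three trained networks, and the window length is illustrative:
\begin{verbatim}
# Sketch of the CAE-LSTM rollout: encode, advance autoregressively, decode.
# encoder, lstm and decoder stand for trained Keras models (hypothetical names).
import numpy as np

def rollout(encoder, lstm, decoder, snapshots, n_steps, window=10):
    z = encoder.predict(snapshots)                    # latent trajectory so far
    for _ in range(n_steps):
        z_next = lstm.predict(z[None, -window:, :])   # windowed latent input
        z = np.vstack([z, z_next])                    # feed prediction back
    return decoder.predict(z)                         # full-order reconstruction
\end{verbatim}
Keeping the three networks separate means that the LSTM in this loop can be swapped for another time-series model without retraining the encoder and decoder.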
\section{Experiments}
\label{S:5}
In the following, we introduce the two representative problems used to assess the proposed framework. We demonstrate framework performance for the viscous Burgers equation, which is characterized by an advecting shock and the conservative inviscid shallow water equations with varying initial conditions. While the first problem requires that our framework is able to capture the advection of a shock profile accurately in time, the second problem requires interpolation in initial condition space. These varying initial conditions are given by different locations of a Gaussian blob at the starting time. The specific details of the ML framework used in the following experiments may be found in our supporting source code available at \texttt{https://github.com/Romit-Maulik/CAE\_LSTM\_ROMS}.
\subsection{Burgers}
\label{SS:6}
Our first problem is given by the one-dimensional viscous Burgers' equation with Dirichlet boundary conditions which can be represented as
\begin{linenomath*}
\begin{align}
\begin{gathered}
\label{gen3}
\dot{u} + u\frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}, \\
u(x,0) = u_0, \quad x \in [0,L], \quad u(0,t) = u(L,t) = 0.
\end{gathered}
\end{align}
\end{linenomath*}
It is well known that the above equation is capable of generating discontinuous solutions, even if the initial conditions are smooth, when $\nu$ is sufficiently small, due to its advection-dominated behavior. We specifically consider the initial condition
\begin{linenomath*}
\begin{align}
u(x, 0) &=\frac{x}{1+\sqrt{\frac{1}{t_{0}}} \exp \left(R e \frac{x^{2}}{4}\right)},
\end{align}
\end{linenomath*}
and we set $L=1$ and maximum time $t_{max}=2$. An analytical solution exists and is given by
\begin{linenomath*}
\begin{align}
\label{Burgers_Sol}
u(x, t)=\frac{\frac{x}{t+1}}{1+\sqrt{\frac{t+1}{t_{0}}} \exp \left(R e \frac{x^{2}}{4 t+4}\right)},
\end{align}
\end{linenomath*}
where $t_0=\text{exp}(Re/8)$ and $Re = 1/\nu$.
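Snapshot generation from this closed form is straightforward; the sketch below uses the 400 uniformly spaced snapshots adopted later, while the spatial resolution is an illustrative choice.
\begin{verbatim}
# Generate snapshots of the analytical Burgers solution, for Re = 1000.
import numpy as np

Re, L, t_max, N_x, N_t = 1000.0, 1.0, 2.0, 1024, 400
t0 = np.exp(Re / 8.0)

def u_exact(x, t):
    return (x / (t + 1.0)) / (1.0 + np.sqrt((t + 1.0) / t0)
                              * np.exp(Re * x**2 / (4.0 * t + 4.0)))

x = np.linspace(0.0, L, N_x)
snapshots = np.stack([u_exact(x, t) for t in np.linspace(0.0, t_max, N_t)])
\end{verbatim}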
\subsubsection{Convolutional autoencoder} \label{Burgers_CAE}
We proceed by detailing the architecture of our CAE for effective compression of the full-order solution field. We use a one-dimensional convolutional framework with multiple strided filters to obtain a low-dimensional representation of the solution field. Figure \ref{1D_Schematic} is a schematic of the architecture. We utilize several pairs of convolutional and max-pooling layers to reduce dimensionality of the input image to a size of \emph{solely} two degrees of freedom in the encoded space. Following this, the two-dimensional state is convolved and upsampled several times to return to the dimensionality of the full-order field. Each layer consists of rectified linear (ReLU) activations and utilizes a zero-padding at the edges of the domain for the purpose of convolution. The dynamics studied in this test case are not critically affected by the absence of accurate padding at the boundaries. Our network is trained by using a standard mean-squared error loss with a batch size of 10, a learning rate of 0.001 and the Adam optimizer. The choice of hyperparameters for this architecture (i.e., the number of layers, channels, latent-space dimension, learning rate and batch-size) were manually tuned to obtain the current performance. Also, each convolutional layer in the autoencoder utilized a ReLU activation function, with the exception of the output layer and the final layer of the encoder. No regularization was used in the process of training this model and approximately 10\% of the total (non-test) data was set aside for the purpose of validation (i.e., for preventing overfitting through an early-stopping criterion).
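A Keras-style sketch of such an encoder-decoder pair is given below; the channel counts and layer sizes are illustrative stand-ins rather than the exact architecture used here, but the structure (stacked convolution/max-pooling down to a two-dimensional latent space, followed by convolution/upsampling) mirrors the description above.
\begin{verbatim}
# Sketch of a 1-d convolutional autoencoder with a 2-d latent space.
from tensorflow.keras import layers, models

inp = layers.Input(shape=(1024, 1))
x = layers.Conv1D(16, 3, activation='relu', padding='same')(inp)
x = layers.MaxPooling1D(4)(x)
x = layers.Conv1D(8, 3, activation='relu', padding='same')(x)
x = layers.MaxPooling1D(4)(x)
x = layers.Flatten()(x)
latent = layers.Dense(2)(x)                     # 2 degrees of freedom
x = layers.Dense(64 * 8, activation='relu')(latent)
x = layers.Reshape((64, 8))(x)
x = layers.UpSampling1D(4)(x)
x = layers.Conv1D(16, 3, activation='relu', padding='same')(x)
x = layers.UpSampling1D(4)(x)
out = layers.Conv1D(1, 3, padding='same')(x)    # linear output layer

cae = models.Model(inp, out)
cae.compile(optimizer='adam', loss='mse')
\end{verbatim}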
\begin{figure}
\centering
\fbox{\includegraphics[width=0.95\textwidth]{1D_Schematic.pdf}}
\caption{A schematic of the one-dimensional CAE-LSTM for the viscous Burgers equation. The nonlinear autoencoder embeds the data into latent space, and then the recurrent network can be used for time-series advancement.}
\label{1D_Schematic}
\end{figure}
\subsubsection{LSTM}
In this section, we introduce architectural details of the LSTM used to advance latent space representations obtained by the CAE for the Burgers problem. We shall be outlining results from two different LSTM architectures: one that is valid for only one choice of $\nu$ and one that is valid for parametric interpolation. We observe that, in general, the latter requires more complex models.
Our basic LSTM architecture for this test case consists of two LSTM cells stacked on top of a windowed input of latent space representations. This leads to a windowed-input advancement of the dynamics, with the output being the prediction of the latent space representation at the next time step. This prediction is then fed back into the framework in an autoregressive manner. Our learning rate for the LSTM is the default 0.001, and we use the Adam optimizer for training. As in the case of the CAE, our cost function is the mean-squared error between predictions and targets. The LSTM hidden cells contain 20 neurons, and the batch size is set to 64 samples. As for the CAE, we do not employ any regularization, and 10\% of the snapshot data is set aside for validation. The trained LSTM is deployed to emulate the evolution of the same data in a recursive fashion (i.e., outputs from the LSTM are used as inputs at the next time step).
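A minimal Keras sketch of this latent-space model, together with the construction of windowed training pairs, is given below; we assume the 10-step input window used in our deployments, while all other settings follow the description above.
\begin{verbatim}
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_latent_lstm(window=10, latent_dim=2):
    # Two stacked LSTM cells of 20 neurons each, mapping a window of
    # latent states to the latent state at the next time step.
    inp = layers.Input(shape=(window, latent_dim))
    x = layers.LSTM(20, return_sequences=True)(inp)
    x = layers.LSTM(20)(x)
    out = layers.Dense(latent_dim)(x)
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
    return model

def make_windows(z, window=10):
    # Split a latent trajectory z of shape (nt, latent_dim) into
    # (input window, next-step target) training pairs.
    X = np.stack([z[i:i + window] for i in range(len(z) - window)])
    y = z[window:]
    return X, y
\end{verbatim}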
\subsubsection{CAE-LSTM modeling}
We assess the proposed framework on multiple datasets, each with a single value of $\nu$. Solution fields that vary in time are generated using the analytical solution described in Equation \ref{Burgers_Sol}. In this set of tests, we check the accuracy for different physics, ranging from more dissipative behavior at high values of viscosity to more advective behavior at lower values. Error metrics and latent space visualizations are provided to evaluate whether any trends emerge that generalize across the physics. We select four values of $Re=1000,2000,3000,4000$, each with 400 snapshots of the solution field uniformly distributed in time. For the purpose of comparison, we also provide results from the POD-Galerkin projection methodology.
Figure \ref{Burgers_1000_Rec} shows the performance of the CAE deployment for $Re=1000$. This parameter choice leads to viscous effects damping the shock profile as it is advected in the positive $x$ direction. The latent space, consisting of two variables, has a consistent trend in time which is repeated for the other parameters. We draw attention to the difference in magnitude of the latent space variables at the final snapshot. Empirically, this difference is correlated with the dominance of advection over dissipation in the physics of the Burgers equation. Figure \ref{Burgers_2000_Rec} shows similar results for a higher value of $Re=2000$. We remark here that the training for this particular case was completely independent of the other values of $Re$. A good performance in capturing the (now sharper) shock profile is observed. The profile of the latent space evolution is very similar to the previous test case, although the magnitudes of the representation differ. This could possibly be due to scaling through the bias terms of the CAE. Results in Figure \ref{Burgers_3000_Rec} for $Re=3000$ and Figure \ref{Burgers_4000_Rec} for $Re=4000$ show similar behavior in the latent space trends as well, indicating that there may be a universality in the compressed representation of this particular problem. This also has implications for the \emph{generation} of new advection-dissipation profiles. We also observe that the final-time magnitudes of the two dimensions of the compressed representation appear to become closer to each other with increasing $Re$, perhaps allowing for some interpretability of the latent space. Recall that the different values of $Re$ selected for assessment essentially control the \emph{sharpness} of the shock and have a limited effect on its location. A thorough investigation of interpretability, however, is beyond the scope of this article. At this point, we have not deployed any latent space model, and these assessments are purely related to the CAE. In the following, we incorporate a latent space time-series model to obtain a two-degree-of-freedom dynamical model of the advecting shock profile.
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{Re1000_ti.png}
\includegraphics[width=0.32\textwidth]{Re1000_tf.png}
\includegraphics[width=0.32\textwidth]{Re1000_ls.png}
\caption{Reconstruction ability of the CAE for the initial condition (left) and the final field (middle). Evolution of the latent space (right) for $Re=1000$.}
\label{Burgers_1000_Rec}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{Re2000_ti.png}
\includegraphics[width=0.32\textwidth]{Re2000_tf.png}
\includegraphics[width=0.32\textwidth]{Re2000_ls.png}
\caption{Reconstruction ability of the CAE for the initial condition (left) and the final field (middle). Evolution of the latent space (right) for $Re=2000$.}
\label{Burgers_2000_Rec}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{Re3000_ti.png}
\includegraphics[width=0.32\textwidth]{Re3000_tf.png}
\includegraphics[width=0.32\textwidth]{Re3000_ls.png}
\caption{Reconstruction ability of the CAE for the initial condition (left) and the final field (middle). Evolution of the latent space (right) for $Re=3000$.}
\label{Burgers_3000_Rec}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{Re4000_ti.png}
\includegraphics[width=0.32\textwidth]{Re4000_tf.png}
\includegraphics[width=0.32\textwidth]{Re4000_ls.png}
\caption{Reconstruction ability of the CAE for the initial condition (left) and the final field (middle). Evolution of the latent space (right) for $Re=4000$.}
\label{Burgers_4000_Rec}
\end{figure}
We now assess the ability of the proposed framework to function as a standard reduced-order model. We start with an assessment of the POD-GP implementations at different values of $Re$, as shown in Figure \ref{POD_GP_Limitation}. The linear encoding leads to slow convergence of the ROM representations to the shock profile. In addition, we observe high-frequency instabilities as the number of retained POD modes is increased for higher values of $Re$. This is due to the use of schemes that are not shock capturing, which causes Gibbs oscillations near the advecting discontinuity. This manifests itself in a solution that diverges at $Re=4000$ for 30 retained modes and highlights a critical issue with the reduced-order modeling of advection-dominated problems. Each POD-GP deployment utilized basis vectors from its respective full-order model. In comparison, Figure \ref{Burgers_4000_ROM} shows results from the CAE-LSTM implementation, demonstrating the ability of the proposed framework to capture the sharp advecting profile with only two degrees of freedom. Figure \ref{Burgers_4000_LSTM} shows the prediction of the latent-space model in comparison to the latent space representation obtained by compressing each of the true snapshots. The evolution in the encoded space is recursive, in that the outputs of the LSTM are fed back into the input layer through a windowed input to obtain a single-time-step output. The window is initialized with the true values of the first 10 time steps, which implies that, in practice, a short duration of the simulation must be deployed with a full-order model, after which the CAE-LSTM can take over non-intrusively. Research is underway to bypass this limitation by appending ghost points in time to the training data in latent space to mimic a \emph{burn-in} for the windowed input.
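The recursive deployment described above may be summarized by the following sketch, in which the first 10 true latent states serve as the burn-in window and every subsequent input is the model's own prediction.
\begin{verbatim}
import numpy as np

def rollout(lstm, z_seed, n_steps, window=10):
    # z_seed holds the first `window` true latent states (the burn-in);
    # predictions are fed back autoregressively thereafter.
    history = list(z_seed[:window])
    for _ in range(n_steps - window):
        inp = np.array(history[-window:])[None, ...]  # shape (1, window, dim)
        history.append(lstm.predict(inp, verbose=0)[0])
    return np.array(history)
\end{verbatim}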
\begin{figure}
\centering
\mbox{
\subfigure[$Re=1000$]{\includegraphics[width=0.48\textwidth]{POD_GP_Burgers_1000.png}}
\subfigure[$Re=2000$]{\includegraphics[width=0.48\textwidth]{POD_GP_Burgers_2000.png}}
}
\\
\mbox{
\subfigure[$Re=3000$]{\includegraphics[width=0.48\textwidth]{POD_GP_Burgers_3000.png}}
\subfigure[$Re=4000$]{\includegraphics[width=0.48\textwidth]{POD_GP_Burgers_4000.png}}
}
\caption{A demonstration of the limitations of POD-Galerkin methods for building surrogates of advection-dominated partial differential equations. Convergence to the true solution is slow and often limited by numerical instability.}
\label{POD_GP_Limitation}
\end{figure}
\begin{figure}
\centering
\mbox{
\subfigure[$t=0.5$]{\includegraphics[width=0.48\textwidth]{4000_LSTM_t0.png}}
\subfigure[$t=1.0$]{\includegraphics[width=0.48\textwidth]{4000_LSTM_t1.png}}
}
\\
\mbox{
\subfigure[$t=1.5$]{\includegraphics[width=0.48\textwidth]{4000_LSTM_t2.png}}
\subfigure[$t=2.0$]{\includegraphics[width=0.48\textwidth]{4000_LSTM_t3.png}}
}
\caption{Reduced-order modeling capability of the CAE-LSTM for $Re=4000$ showing evolution in physical space. We remind the reader that the system evolution has been performed using an LSTM in latent space, and these images are reconstructed from two-degree-of-freedom representations.}
\label{Burgers_4000_ROM}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{4000_LSTM.png}
\caption{Learning dynamics in latent space obtained using CAE for $Re=4000$. The $y$-axes indicate the magnitudes of the first (left) and second (right) latent space encoding.}
\label{Burgers_4000_LSTM}
\end{figure}
We perform assessments of the CAE-LSTM (as outlined for $Re=4000$ above) for the other parameter choices and report error metrics (given by $L_2$-norms at the final time step) in Table \ref{Table1}. These show the accuracy of the framework when compared to the POD-GP method for different POD mode retentions. In general, when the dynamics are more advective, the CAE-LSTM has lower errors due to the self-similarity of the advecting shock profile. In comparison, the POD-GP method shows an order of magnitude greater error at a comparable compression of 2 modes and struggles with the strongly advective physics at $Re=4000$. Also, the CAE-LSTM, while unable to match POD-GP accuracies at greater mode retentions and lower $Re$, obtains an order of magnitude lower error across different $Re$ for the same latent space dimension (two degrees of freedom only). This establishes, empirically, that advective physics benefits from nonlinear encoding in space and nonlinear modeling in time for effective surrogates. Table \ref{Table1} is complementary to Figure \ref{POD_GP_Limitation}: while POD-GP shows greater oscillations even at high modal retention, its overall $L_2$-error metrics are comparable to (if not better than) those of the proposed framework. This is elaborated in Figure \ref{GP_CAE_Plot}, which shows that the CAE introduces noise into the reconstructed fields even though the oscillations seen in the POD-GP implementation are stabilized.
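For reference, the error metric reported in Table \ref{Table1} may be computed as in the following sketch; the grid-averaged normalization shown here is an assumption of this sketch rather than a confirmed detail of our post-processing.
\begin{verbatim}
import numpy as np

def final_time_error(u_pred, u_true):
    # Discrete L2-norm error at the final snapshot; the grid-averaged
    # normalization is an assumption of this sketch.
    return np.linalg.norm(u_pred - u_true) / np.sqrt(u_true.size)
\end{verbatim}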
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{GP_CAE_Plot.png}
\caption{A direct comparison of the POD-GP and CAE-LSTM methods for $Re=4000$, where one can observe noise in the ML predictions even though the oscillations are stabilized.}
\label{GP_CAE_Plot}
\end{figure}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& Re = 1000 & Re = 2000 & Re = 3000 & Re = 4000 \\ \hline
GP 2 modes & 4.197e-3 & 5.558e-3 & 6.12e-3 & 6.418e-3 \\ \hline
GP 5 modes & 1.57e-3 & 7e-3 & 1.244e-2 & 1.65e-2 \\ \hline
GP 10 modes & 1.497e-4 & 5.047e-4 & 1.063e-3 & 1.525e-3 \\ \hline
GP 20 modes & 4.607e-5 & 1.679e-4 & 4.099e-4 & 7.336e-4 \\ \hline
GP 30 modes & 4.938e-5 & 1.102e-4 & 8.333e-5 & NaN \\ \hline
CAE LSTM & 4.181e-4 & 3.912e-4 & 1.409e-4 & 1.551e-4 \\
\hline
\end{tabular}
\caption{$L_2$-norm error metrics for the final-time reconstructions of the CAE-LSTM compared against POD-GP. This table outlines results where the CAE-LSTM and POD-GP deployments are trained anew for each $Re$. The CAE-LSTM error is lower for comparable compression (two degrees of freedom).}
\label{Table1}
\end{table}
We now extend the CAE-LSTM to parametric interpolation. By training the framework on full-order datasets generated for different $Re$, our framework can interpolate within a physical regime for quick generation of full-order dynamics at novel parameter choices. We achieve this by appending another scalar component, the viscosity, to the latent space representation. For training, we obtain snapshots from 19 simulations (i.e., with uniformly varying values of $Re$) and train a common CAE on all of the simulations. This lets us obtain a sequence of latent space representations for each full-order model, concatenated with their respective viscosities. We then train an LSTM, also common across all of the simulations, on these sequences in the same manner as in the previous experiments. Inference can then be performed at a novel parameter choice with ease.
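A sketch of this parameter augmentation is given below. Whether the parameter channel is itself predicted by the LSTM or simply re-appended to each predicted latent state is an implementation choice; re-appending is the simpler option assumed here.
\begin{verbatim}
import numpy as np

def augment_with_parameter(z, nu):
    # Append the control parameter (here, the viscosity) to every latent
    # state of a trajectory z of shape (nt, latent_dim).
    p = np.full((len(z), 1), nu)
    return np.concatenate([z, p], axis=1)  # shape (nt, latent_dim + 1)

# Windowed training pairs are then built per simulation and pooled
# across all 19 training viscosities before fitting a common LSTM.
\end{verbatim}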
Our parametric LSTM has a similar architecture to the one used for single-parameter data in the previous sections. The differences include 40 neurons in the hidden cells and a smaller batch size of 32. We remark that the CAE is identical to the one used previously. The ability of the CAE to reconstruct fields whose shock profiles exhibit varying dissipation is shown in Figure \ref{MP_Reconstruction_Burgers}. The latent space representation of 2 degrees of freedom is expressive enough to capture the difference in the sharpness of the discontinuity for different viscosities. A parametric LSTM is then trained on these compressed representations, with results as shown in Figure \ref{MP_Burgers_LSTM}. We observe that the trends are reproduced appropriately for parameters that were not a part of the training data set. Finally, in Figure \ref{Burgers_CAE_LSTM_ROM}, we demonstrate that the reconstructed full-order dynamics for a novel testing parameter adhere accurately to the true solution over time. The final-time reconstruction mean-squared error, averaged across the different testing viscosities, was found to be $1.17\times10^{-4}$, which is comparable to the cases where training was performed for a single viscosity.
\begin{figure}
\centering
\mbox{
\subfigure[$Re=250$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_1.png}}
\subfigure[$Re=450$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_2.png}}
} \\
\mbox{
\subfigure[$Re=650$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_5.png}}
\subfigure[$Re=850$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_6.png}}
} \\
\mbox{
\subfigure[$Re=1050$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_10.png}}
\subfigure[$Re=1250$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_11.png}}
}
\caption{The ability of a CAE to reconstruct fields sampled from different parameters (Reynolds numbers) exhibiting shock profiles of different sharpness. These snapshots correspond to parameters that were not included in the training dataset and are obtained by evolving only in the latent space.}
\label{MP_Reconstruction_Burgers}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{MP_LS.png}
\caption{The ability of the parametric LSTM to learn latent space trends for different parameters that are not a part of the training data set. The $y$-axes indicate the magnitudes of the latent space encoding.}
\label{MP_Burgers_LSTM}
\end{figure}
\begin{figure}
\centering
\mbox{
\subfigure[$t=0.2$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_10.png}}
\subfigure[$t=0.4$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_20.png}}
\subfigure[$t=0.6$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_30.png}}
} \\
\mbox{
\subfigure[$t=0.8$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_40.png}}
\subfigure[$t=1.0$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_50.png}}
\subfigure[$t=1.2$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_60.png}}
} \\
\mbox{
\subfigure[$t=1.4$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_70.png}}
\subfigure[$t=1.6$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_80.png}}
\subfigure[$t=1.8$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_90.png}}
}
\caption{An example ROM characterized by the CAE-LSTM compared to the full-order solution. This parameter was not a part of the training dataset for either the CAE or the parametric LSTM.}
\label{Burgers_CAE_LSTM_ROM}
\end{figure}
\subsection{Shallow water equations}
\label{SS:7}
Our two-dimensional assessments utilize the inviscid shallow water equations, which are a prototypical system of equations for geophysical flows. The governing equations are hyperbolic in nature and are given by
\begin{align}
\begin{gathered}
\frac{\partial(\rho \eta)}{\partial t}+\frac{\partial(\rho \eta u)}{\partial x}+\frac{\partial(\rho \eta v)}{\partial y} =0, \\
\frac{\partial(\rho \eta u)}{\partial t}+\frac{\partial}{\partial x}\left(\rho \eta u^{2}+\frac{1}{2} \rho g \eta^{2}\right)+\frac{\partial(\rho \eta u v)}{\partial y} = 0, \\
\frac{\partial(\rho \eta v)}{\partial t}+\frac{\partial(\rho \eta u v)}{\partial x}+\frac{\partial}{\partial y}\left(\rho \eta v^{2}+\frac{1}{2} \rho g \eta^{2}\right) = 0.
\end{gathered}
\end{align}
In the above set of equations, $\eta$ corresponds to the total fluid column height, and $(u,v)$ is the fluid's horizontal flow velocity averaged across the vertical column. Further, $g$ is the acceleration due to gravity, and $\rho$ is the fluid density, which we fix at 1.0. The first equation captures the conservation of mass, whereas the latter two denote the conservation of momentum. Our initial conditions are
\begin{align}
\rho \eta (x,y,t=0) &= e^{-\left(\frac{(x-\bar{x})^2}{2(5\times 10^{4})^2} + \frac{(y-\bar{y})^2}{2(5\times 10^{4})^2}\right)}, \\
\rho \eta u(x,y,t=0) &= 0, \\
\rho \eta v(x,y,t=0) &= 0,
\end{align}
while our two-dimensional domain is a square with periodic boundary conditions. We generate data with full-order solves of the above system of equations until $t=0.5$ with a time step of 0.001. Our full-order model uses a fourth-order accurate Runge-Kutta temporal integration scheme and a fifth-order accurate weighted essentially non-oscillatory (WENO) scheme \cite{liu1994weighted} for computing state reconstructions at cell faces. The Rusanov Riemann solver is utilized for flux reconstruction after the cell-face quantities are calculated. The reader is directed to \cite{hairer1991solving} for a detailed discussion of the temporal integration scheme and to \cite{maulik2017resolution} for details on the WENO and Riemann solver implementations in two-dimensional problems. For ease of notation, we denote $\rho \eta$ as $q_1$, $\rho \eta u$ as $q_2$ and $\rho \eta v$ as $q_3$ in our subsequent discussions. The control parameters in the case of the shallow water equations are $\bar{x}$ and $\bar{y}$, which control the initial location of the Gaussian pulse in the domain. Our goal is to obtain a reduced-basis evolution for a new choice of these control parameters, given \emph{a priori} snapshots from full-order forward solves at pre-selected control parameter choices. We use 90 full-order simulations for training and validation and 10 test simulations for \emph{a posteriori} assessments. One hundred snapshots are utilized for each simulation, i.e., a snapshot is saved every five steps of the time integrator.
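For illustration, the Rusanov flux for this system takes the form sketched below. This is the standard local Lax-Friedrichs construction and a simplification of our solver: the WENO reconstruction that supplies the cell-face states and the value of $g$ are abstracted away, and the assumed helper names are illustrative.
\begin{verbatim}
import numpy as np

g = 9.8  # gravitational acceleration (illustrative value)

def rusanov_flux_x(qL, qR):
    # q = (rho*eta, rho*eta*u, rho*eta*v) with rho = 1; qL and qR are the
    # reconstructed states on either side of a cell face normal to x.
    def phys_flux(q):
        h, hu, hv = q
        u = hu / h
        return np.array([hu, hu * u + 0.5 * g * h ** 2, hv * u])
    def max_speed(q):
        h, hu, _ = q
        return abs(hu / h) + np.sqrt(g * h)
    lam = max(max_speed(qL), max_speed(qR))
    return 0.5 * (phys_flux(qL) + phys_flux(qR)) - 0.5 * lam * (qR - qL)
\end{verbatim}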
\subsubsection{Convolutional autoencoder} \label{Shallow_CAE}
For the nonlinear encoding of the shallow water equations, we use the two-dimensional CAE detailed in the schematic in Figure \ref{2D_Schematic}. Our three conserved variables are encoded using three input and output channels in our autoencoder. We scale the data to zero mean and unit variance to ensure that losses due to inaccurate reconstruction are weighted fairly across the different variables. We use an architecture that is similar to the Burgers' example in that a bottlenecked framework ensures the compression of the full-order field. A key difference is that the ``bottleneck" layers are supplemented with fully connected layers to allow for an arbitrary latent dimensionality. We choose a latent space of 6 degrees of freedom for this problem, which represents an approximate compression ratio of 680. A batch size of 24 and a learning rate of 0.001 were utilized to train the framework. Each two-dimensional convolutional layer and each densely connected bottleneck layer utilized the swish activation function, which has been shown to be superior to ReLU for significantly deep networks \cite{ramachandran2017searching}; in contrast, the output layer of the network is linear. A total of 9000 snapshots are randomly partitioned into 8100 for training and 900 for validation, with the latter used for an early-stopping criterion. The trained CAE is tested on 1000 snapshots from 10 held-out simulations. Specific details of the architecture, such as the number of channels in each layer of the CAE and the size of the pooling, may be found in our supporting code. Figure \ref{SWE_CAE_Reconstruction} shows the ability of the decoder to reconstruct from the latent space.
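The dense-bottleneck variant of the CAE is sketched below in Keras. The grid size, channel counts and number of pooling levels are illustrative assumptions; the exact values may be found in the supporting code.
\begin{verbatim}
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cae_2d(nx=64, ny=64, n_vars=3, latent_dim=6):
    inp = layers.Input(shape=(nx, ny, n_vars))
    # Encoder: swish-activated convolutions with max-pooling.
    x = layers.Conv2D(16, 3, padding="same", activation="swish")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(8, 3, padding="same", activation="swish")(x)
    x = layers.MaxPooling2D(2)(x)
    shape = tuple(x.shape[1:])  # pre-flatten shape, reused by the decoder
    # Dense bottleneck allowing an arbitrary latent dimensionality.
    code = layers.Dense(latent_dim, activation="swish",
                        name="latent")(layers.Flatten()(x))
    # Decoder: dense expansion, then convolution/upsampling to the field.
    x = layers.Dense(int(np.prod(shape)), activation="swish")(code)
    x = layers.Reshape(shape)(x)
    x = layers.Conv2D(8, 3, padding="same", activation="swish")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(16, 3, padding="same", activation="swish")(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(n_vars, 3, padding="same")(x)  # linear output layer
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
    return model
\end{verbatim}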
\begin{figure}
\centering
\fbox{\includegraphics[trim={2.5cm 8.5cm 2.5cm 2.5cm},clip,width=0.95\textwidth]{2D_Schematic.pdf}}
\caption{A schematic of the two-dimensional CAE-LSTM for the shallow water equations. The nonlinear autoencoder embeds the data into latent space, and then the recurrent network can be used for time-series advancement of a flattened representation of the multidimensional system.}
\label{2D_Schematic}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={0cm 3cm 0 0},clip,width=\textwidth]{SWE_Reconstruction_Test.png}
\caption{The reconstruction ability of the CAE for the conserved variables in the shallow water equations. This snapshot is from a representative test simulation starting from an unseen initial condition.}
\label{SWE_CAE_Reconstruction}
\end{figure}
\subsubsection{LSTM}
We again couple the CAE with an LSTM that is conditioned on the control parameters. In this set of experiments, our control parameters set the location of the Gaussian pulse applied to $\rho \eta$ at $t=0$. Our goal is to replicate the trends of the field evolution for a novel initial condition, given examples of full-order forward solves to train on. To create training data for the LSTM, we apply the trained CAE to compress the data and then concatenate the parameter information. Our LSTM architecture consists of 3 cells with 50 neurons each. A batch size of 64 is used with the default learning rate of 0.001 for the Adam optimizer. As outlined previously, 10\% of the total non-test data is set aside for validation and early stopping. A time window of 10 points, utilized for the LSTM forecasts, provided adequately accurate trajectory predictions in the latent space.
\subsubsection{CAE-LSTM modeling}
Figure \ref{SWE_LSTM_Testing} shows the ability of the LSTM module to reconstruct dynamical trends in the latent space for a sample test simulation. The reference truth for these curves has been obtained by compressing (with use of the CAE) full-order solutions for a test control parameter that was not utilized during training. One can observe that the dynamical trends are replicated by the parameterized LSTM. Evolutionary trends towards the end of the dynamics suggest that the dissipation of energy in the system by the numerical method is captured adequately. Figure \ref{SWE_ROM_1} shows the ability of the CAE-LSTM surrogate model to identify coherent spatial features in a sample test simulation. For comparison, we show results from benchmark POD-GP deployments with 6 and 40 retained modes. At an equivalent compression ratio, the CAE-LSTM is able to represent the solution well. Even at 40 retained modes, the severe truncation of the dynamics in POD space still leads to Gibbs phenomena in POD-GP, which demonstrates the robustness of our proposed method. Contour plots at two representative times are shown in Figures \ref{SWE_Contours_1} and \ref{SWE_Contours_2}, where one can clearly observe that the coherent structures in the flow fields are adequately recovered by the CAE-LSTM in comparison to both the 6- and 40-mode POD-GP deployments. However, one can also discern that the POD-GP method gradually converges to the true dynamics with increasing modal retention.
In terms of computational costs, the CAE-LSTM was able to provide an LSTM-based latent space forecast in 1.746 seconds per simulation. Reconstruction from latent space for a 100-snapshot simulation required 0.167 seconds. In comparison, a POD-GP ROM deployment (using either 6 or 40 retained modes) required an average of 24.67 seconds per simulation. The primary cost in POD-GP deployments is the reconstruction of the nonlinear term for the numerical calculation of fluxes, which is independent of the number of latent degrees of freedom. The nonlinear term computation for this test case was performed using a fifth-order WENO scheme, just like its full-order counterpart, and is thus a memory and compute cost that the machine-learned model bypasses. In terms of quantitative error metrics, the $q_1$ mean-squared error over all the testing data was $4.8\times10^{-4}$ for the CAE-LSTM, $5.6\times10^{-4}$ for POD-GP (6 modes) and $1.7\times10^{-4}$ for POD-GP (40 modes). Similar trends were observed for $q_2$ ($4.8\times10^{-4}$, $7.8\times10^{-4}$, $2.6\times10^{-4}$) and $q_3$ ($3\times10^{-3}$, $3.3\times10^{-3}$, $1.1\times10^{-3}$). Although the mean-squared error metrics favor POD-GP at 40 retained modes, coherent structures are reproduced more accurately by the CAE-LSTM, as demonstrated in the contour plots above. The mean-squared error metrics were affected by the greater amount of fine-scale noise in the CAE-LSTM reconstructions. A possible avenue for addressing this limitation is to use intelligent loss functions or to embed physics-inspired regularization in the optimization problem.
\begin{figure}
\centering
\mbox{
\subfigure[Latent Dimension 1]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_0.png}}
\subfigure[Latent Dimension 2]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_1.png}}
} \\
\mbox{
\subfigure[Latent Dimension 3]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_2.png}}
\subfigure[Latent Dimension 4]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_3.png}}
} \\
\mbox{
\subfigure[Latent Dimension 5]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_4.png}}
\subfigure[Latent Dimension 6]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_5.png}}
}
\caption{Hidden space evolution of a testing simulation using a parametric LSTM. The curves here indicate the individual degrees of freedom of a 6-dimensional latent space, with the $y$-axes indicating their magnitudes.}
\label{SWE_LSTM_Testing}
\end{figure}
\begin{figure}
\centering
\mbox{
\subfigure[True $q_1$]{\includegraphics[width=0.32\textwidth]{True_q1_Time_1.png}}
\subfigure[True $q_2$]{\includegraphics[width=0.32\textwidth]{True_q2_Time_1.png}}
\subfigure[True $q_3$]{\includegraphics[width=0.32\textwidth]{True_q3_Time_1.png}}
} \\
\mbox{
\subfigure[CAE-LSTM]{\includegraphics[width=0.32\textwidth]{CAE_q1_Time_1.png}}
\subfigure[CAE-LSTM]{\includegraphics[width=0.32\textwidth]{CAE_q2_Time_1.png}}
\subfigure[CAE-LSTM]{\includegraphics[width=0.32\textwidth]{CAE_q3_Time_1.png}}
} \\
\mbox{
\subfigure[POD-GP (6 modes)]{\includegraphics[width=0.32\textwidth]{GP_6_q1_Time_1.png}}
\subfigure[POD-GP (6 modes)]{\includegraphics[width=0.32\textwidth]{GP_6_q2_Time_1.png}}
\subfigure[POD-GP (6 modes)]{\includegraphics[width=0.32\textwidth]{GP_6_q3_Time_1.png}}
}
\\
\mbox{
\subfigure[POD-GP (40 modes)]{\includegraphics[width=0.32\textwidth]{GP_40_q1_Time_1.png}}
\subfigure[POD-GP (40 modes)]{\includegraphics[width=0.32\textwidth]{GP_40_q2_Time_1.png}}
\subfigure[POD-GP (40 modes)]{\includegraphics[width=0.32\textwidth]{GP_40_q3_Time_1.png}}
}
\caption{A qualitative assessment of reconstructed dynamics using the Galerkin projection methodology for a test simulation. The superiority of the CAE reconstruction over POD-GP at the same compression ratio (latent dimension 6) is evident. POD-GP performance improves as we capture more of the variance in the data set by increasing the number of modes.}
\label{SWE_ROM_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={7.5cm 0cm 0 0},clip,width=\textwidth]{CAE_GP_Comparison_0.png}
\caption{Contour plots showing true, CAE-LSTM and GP obtained results for the three conserved variables at time $t=0.005$. This corresponds to an early stage (1\%) of the simulation. The CAE-LSTM is seen to capture full-order spatial structures accurately in comparison to the POD-GP method (at 6 latent space dimensions).}
\label{SWE_Contours_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={7.5cm 0cm 0 0},clip,width=\textwidth]{CAE_GP_Comparison_1.png}
\caption{Contour plots showing true, CAE-LSTM and GP obtained results for the three conserved variables at time $t=0.15$. This corresponds to 30\% of the simulation completed. The CAE-LSTM is seen to capture full-order spatial structures accurately in comparison to the POD-GP method (at 6 latent space dimensions).}
\label{SWE_Contours_2}
\end{figure}
\section{Discussion and Conclusions}
\label{S:8}
In this study, we propose using a recurrent CAE framework for the reduced-order modeling of systems that are inherently advective and, therefore, high-dimensional. These systems suffer from slow convergence and instability in a linear reduced-basis space given by the POD and a Galerkin projection of the governing equations onto this space. In contrast, we demonstrate that the nonlinear embedding obtained by the CAE and the equation-free dynamics characterization by the LSTM network lead to stable reconstructions of high-dimensional physics in both space and time. We extend our machine learning framework to a parametric formulation where we concatenate the low-dimensional embedding with control parameter information to interpolate between full-order sample points in the data generation phase. Our results indicate that the proposed framework can be used for rapid exploration of a design space conditioned on a set of control parameters. Our framework utilizes a \emph{burn-in} period for the LSTM, which requires a short full-order computation amounting to less than 10\% of the total simulation in order to create the windowed input to the LSTM network. Results on test datasets show a good ability to recover physical trends for unseen control parameter choices. We are currently extending the framework by exploring couplings with active learning, wherein we adaptively select control parameters during training in order to characterize parametric variations optimally. In addition, we are also exploring data-augmentation strategies to preclude the initial compute required for the initial LSTM window in latent space. The latter will rely on the generation of so-called \emph{ghost} points to serve as a burn-in for the ROM. Key challenges also include the ability to incorporate unstructured grid information, particularly for problems where there is significant anisotropy in the spatial field. There is some promising work in this direction using generalized moving least squares methods \cite{trask2019gmls} and point-cloud networks \cite{kashefi2020point}. The final goal will be to incorporate these surrogate models in design frameworks that may utilize derivative-based or derivative-free optimization.
\section*{Acknowledgments}
The authors acknowledge helpful comments from Dr. Sandeep Madireddy and Dr. Arvind Mohan. This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. This research was funded in part and used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. DOE or the United States Government.
\bibliographystyle{elsarticle-num-names}
Our first problem is given by the one-dimensional viscous Burgers' equation with Dirichlet boundary conditions which can be represented as
\begin{linenomath*}
\begin{align}
\begin{gathered}
\label{gen3}
\dot{u} + u\frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}, \\
u(x,0) = u_0, \quad x \in [0,L], \quad u(0,t) = u(L,t) = 0.
\end{gathered}
\end{align}
\end{linenomath*}
It is well known that the above equation is capable of generating discontinuous solutions even if initial conditions are smooth and $\nu$ is sufficiently small due to advection-dominated behavior. We specifically consider the initial condition
\begin{linenomath*}
\begin{align}
u(x, 0) &=\frac{x}{1+\sqrt{\frac{1}{t_{0}}} \exp \left(R e \frac{x^{2}}{4}\right)},
\end{align}
\end{linenomath*}
and we set $L=1$ and maximum time $t_{max}=2$. An analytical solution exists and is given by
\begin{linenomath*}
\begin{align}
\label{Burgers_Sol}
u(x, t)=\frac{\frac{x}{t+1}}{1+\sqrt{\frac{t+1}{t_{0}}} \exp \left(R e \frac{x^{2}}{4 t+4}\right)},
\end{align}
\end{linenomath*}
where $t_0=\text{exp}(Re/8)$ and $Re = 1/\nu$.
\subsubsection{Convolutional autoencoder} \label{Burgers_CAE}
We proceed by detailing the architecture of our CAE for effective compression of the full-order solution field. We use a one-dimensional convolutional framework with multiple strided filters to obtain a low-dimensional representation of the solution field. Figure \ref{1D_Schematic} is a schematic of the architecture. We utilize several pairs of convolutional and max-pooling layers to reduce dimensionality of the input image to a size of \emph{solely} two degrees of freedom in the encoded space. Following this, the two-dimensional state is convolved and upsampled several times to return to the dimensionality of the full-order field. Each layer consists of rectified linear (ReLU) activations and utilizes a zero-padding at the edges of the domain for the purpose of convolution. The dynamics studied in this test case are not critically affected by the absence of accurate padding at the boundaries. Our network is trained by using a standard mean-squared error loss with a batch size of 10, a learning rate of 0.001 and the Adam optimizer. The choice of hyperparameters for this architecture (i.e., the number of layers, channels, latent-space dimension, learning rate and batch-size) were manually tuned to obtain the current performance. Also, each convolutional layer in the autoencoder utilized a ReLU activation function, with the exception of the output layer and the final layer of the encoder. No regularization was used in the process of training this model and approximately 10\% of the total (non-test) data was set aside for the purpose of validation (i.e., for preventing overfitting through an early-stopping criterion).
\begin{figure}
\centering
\fbox{\includegraphics[width=0.95\textwidth]{1D_Schematic.pdf}}
\caption{A schematic of the one-dimensional CAE-LSTM for the viscous Burgers equation. The nonlinear autoencoder embeds the data into latent space, and then the recurrent network can be used for time-series advancement.}
\label{1D_Schematic}
\end{figure}
\subsubsection{LSTM}
In this section, we introduce architectural details of the LSTM used to advance latent space representations obtained by the CAE for the Burgers problem. We shall be outlining results from two different LSTM architectures: one that is valid for only one choice of $\nu$ and one that is valid for parametric interpolation. We observe that, in general, the latter requires more complex models.
Our basic LSTM architecture for this test case consists of two cells stacked on top of a windowed input of latent space representations. This leads to a windowed-input advancement of the dynamics, with the output being the prediction of the latent space representation at the next time step. This prediction is then fed back into the framework in an autoregressive manner. Our learning rate for the LSTM is the default 0.001, and we use the Adam optimizer for training. As in the case of the CAE, our cost function is the mean-squared error between predictions and targets. The LSTM hidden cells contain 20 neurons, and the batch size is set to 64 samples. As in the case of the CAE, we do not employ any regularization, and 10\% of the snapshot data is set aside for the purpose of validation. The trained LSTM is deployed for emulating the evolution of the same data in a recursive fashion (i.e., outputs from the LSTM are used as inputs at the next time step).
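A minimal sketch of this stacked configuration follows; the dense output head mapping the final LSTM state to the next latent vector is our assumption, and the window length of 10 is taken from the deployment described below.
\begin{verbatim}
window, latent_dim = 10, 2

def build_lstm():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, latent_dim)),
        tf.keras.layers.LSTM(20, return_sequences=True),  # first stacked cell
        tf.keras.layers.LSTM(20),                         # second stacked cell
        tf.keras.layers.Dense(latent_dim),                # next-step latent state
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mse')
    return model   # trained with batch_size=64
\end{verbatim}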
\subsubsection{CAE-LSTM modeling}
We assess the proposed framework on multiple datasets, each with a single value of $\nu$. Solution fields that vary in time are generated using the analytical solution described in Equation \ref{Burgers_Sol}. In this set of tests, we check the accuracy for different physics, ranging from more dissipative at high values of viscosity to more advective at lower values. Error metrics and latent space visualizations are provided to evaluate whether any trends emerge that generalize across the different physics. We select four values of $Re=1000, 2000, 3000, 4000$, each with 400 snapshots of the solution field uniformly distributed in time. For the purpose of comparison we also provide results from the POD-Galerkin projection methodology.
Figure \ref{Burgers_1000_Rec} shows the performance of CAE deployment for $Re=1000$. This parameter choice leads to viscous effects damping the shock profile as it is advected in the positive $x$ direction. The latent space consisting of two variables has a consistent trend in time which is repeated for other parameters. We draw attention to the difference in magnitude of the latent space variables at the final snapshot. Empirically, this difference is correlated with the dominance of advection to dissipation in the physics of the Burgers equation. Figure \ref{Burgers_2000_Rec} shows similar results for a higher value of $Re=2000$. We remark here that the training for this particular case was completely independent of the other values of $Re$. A good performance in capturing the (now sharper) shock profile is observed. The profile of the latent space evolution is very similar to the previous test case, although the magnitudes of the representation seem to be different. This could possibly be due to scaling through the bias terms of the CAE. Results in Figure \ref{Burgers_3000_Rec} for $Re=3000$ and Figure \ref{Burgers_4000_Rec} for $Re=4000$ show similar behavior in latent space trends as well, indicating that there may be a universality in the compressed representation of this particular problem. This also has implications for the \emph{generation} of new advection-dissipation profiles. We also observe that final time magnitudes of each dimension of the two-dimensional compressed representations appear to be closer to each other with increasing $Re$ perhaps allowing for some interpretability of the latent space. Recall that the different values of $Re$ selected for assessment essentially control the \emph{sharpness} of the shock and have limited effect on the location of the shock. A thorough investigation of interpretability, however, is beyond the scope of this article. At this point, we have not deployed any latent space model and these assessments are purely related to the CAE. In the following, we incorporate a latent space time-series model to obtain a 2 degree-of-freedom dynamical model of the advecting shock profile.
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{Re1000_ti.png}
\includegraphics[width=0.32\textwidth]{Re1000_tf.png}
\includegraphics[width=0.32\textwidth]{Re1000_ls.png}
\caption{Reconstruction ability of the CAE for initial condition (left) and the final field (middle). Evolution of the latent space (right) for $Re=1000$.}
\label{Burgers_1000_Rec}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{Re2000_ti.png}
\includegraphics[width=0.32\textwidth]{Re2000_tf.png}
\includegraphics[width=0.32\textwidth]{Re2000_ls.png}
\caption{Reconstruction ability of the CAE for initial condition (left) and the final field (middle). Evolution of the latent space (right) for $Re=2000$.}
\label{Burgers_2000_Rec}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{Re3000_ti.png}
\includegraphics[width=0.32\textwidth]{Re3000_tf.png}
\includegraphics[width=0.32\textwidth]{Re3000_ls.png}
\caption{Reconstruction ability of the CAE for initial condition (left) and the final field (middle). Evolution of the latent space (right) for $Re=3000$.}
\label{Burgers_3000_Rec}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{Re4000_ti.png}
\includegraphics[width=0.32\textwidth]{Re4000_tf.png}
\includegraphics[width=0.32\textwidth]{Re4000_ls.png}
\caption{Reconstruction ability of the CAE for initial condition (left) and the final field (middle). Evolution of the latent space (right) for $Re=4000$.}
\label{Burgers_4000_Rec}
\end{figure}
We now assess the ability of the proposed framework to mimic a standard reduced-order model. We start with an assessment of the POD-GP implementations at different values of $Re$ as shown in Figure \ref{POD_GP_Limitation}. The linear encoding leads to slow convergence of ROM representations to the shock profile. In addition, we also observe high frequency instabilities as the number of retained POD modes is increased for higher values of $Re$. This is due to the use of schemes which are not shock capturing, which causes Gibbs oscillations near the advecting discontinuity. This manifests itself in a solution that fails to converge at $Re=4000$ for 30 retained modes and highlights a critical issue with the reduced-order modeling of advection-dominated problems. Each POD-GP deployment utilized basis vectors from its respective full-order model. In comparison, we show results from the CAE-LSTM implementation in Figure \ref{Burgers_4000_ROM}, which demonstrates the ability of the proposed framework to capture the sharp profile advection with only two degrees of freedom. Figure \ref{Burgers_4000_LSTM} shows the prediction of the latent-space model in comparison to the latent space representation obtained by compressing each of the true snapshots. The evolution in the encoded space is recursive, in that the outputs of the LSTM are fed back into the input layer through a windowed input to obtain a single-time-step output. The window is initialized with the true values of the first 10 time steps, which implies that, in practice, a short duration of the simulation must be computed with a full-order model, following which the CAE-LSTM can take over non-intrusively. Research is underway to bypass this limitation by appending ghost-points in time to the training data in latent space to mimic a \emph{burn-in} for the windowed input.
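For clarity, the recursive (autoregressive) deployment can be sketched as follows; the helper below assumes the LSTM of the previous section and is initialized with the first \texttt{window} true encoded snapshots (the burn-in).
\begin{verbatim}
import numpy as np

def rollout(lstm, z_burnin, n_steps):
    # z_burnin: array of shape (window, latent_dim) of true encoded states.
    buf = [z for z in z_burnin]
    preds = []
    for _ in range(n_steps):
        z_next = lstm.predict(np.asarray(buf)[None, ...], verbose=0)[0]
        preds.append(z_next)
        buf = buf[1:] + [z_next]   # slide the window forward on predictions
    return np.asarray(preds)       # decode with the CAE to recover fields
\end{verbatim}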
\begin{figure}
\centering
\mbox{
\subfigure[$Re=1000$]{\includegraphics[width=0.48\textwidth]{POD_GP_Burgers_1000.png}}
\subfigure[$Re=2000$]{\includegraphics[width=0.48\textwidth]{POD_GP_Burgers_2000.png}}
}
\\
\mbox{
\subfigure[$Re=3000$]{\includegraphics[width=0.48\textwidth]{POD_GP_Burgers_3000.png}}
\subfigure[$Re=4000$]{\includegraphics[width=0.48\textwidth]{POD_GP_Burgers_4000.png}}
}
\caption{A demonstration of the limitations of the POD-Galerkin methods for building surrogates of advection dominated partial differential equations. Convergence to the true solution is slow and often limited by numerical instability.}
\label{POD_GP_Limitation}
\end{figure}
\begin{figure}
\centering
\mbox{
\subfigure[$t=0.5$]{\includegraphics[width=0.48\textwidth]{4000_LSTM_t0.png}}
\subfigure[$t=1.0$]{\includegraphics[width=0.48\textwidth]{4000_LSTM_t1.png}}
}
\\
\mbox{
\subfigure[$t=1.5$]{\includegraphics[width=0.48\textwidth]{4000_LSTM_t2.png}}
\subfigure[$t=2.0$]{\includegraphics[width=0.48\textwidth]{4000_LSTM_t3.png}}
}
\caption{Reduced-order modeling capability of the CAE for $Re=4000$ showing evolution in physical space. We remind the reader that the system evolution has been performed using an LSTM in latent space, and these images are reconstructed from two degrees of freedom representations.}
\label{Burgers_4000_ROM}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{4000_LSTM.png}
\caption{Learning dynamics in latent space obtained using CAE for $Re=4000$. The $y$-axes indicate the magnitudes of the first (left) and second (right) latent space encoding.}
\label{Burgers_4000_LSTM}
\end{figure}
We perform assessments for the CAE-LSTM (as outlined for $Re=4000$ above) for other parameter choices and report error metrics (given by $L_2$-norms at the final time step) in Table \ref{Table1}. These show the accuracy of the framework when compared to the POD-GP method for different POD mode retentions. In general, when the dynamics are more advective, the CAE-LSTM has lower errors due to the self-similarity in the advecting shock profile. In comparison, the POD-GP method shows an order of magnitude greater errors at a comparable compression of 2 modes and struggles with the strongly advective physics at $Re=4000$. Also, the CAE-LSTM, while unable to match POD-GP accuracies at greater mode retentions and lower $Re$, obtains an order of magnitude lower error across different $Re$ for the same latent space dimension (two degrees of freedom only). This establishes, empirically, that advective physics benefits from nonlinear encoding in space and nonlinear modeling in time for effective surrogates. Table \ref{Table1} is complementary to Figure \ref{POD_GP_Limitation}. While POD-GP shows greater oscillations even at high modal coefficient retention, its overall $L_2$-error metrics are comparable (if not superior) to those of the proposed framework. This is elaborated in Figure \ref{GP_CAE_Plot}, which shows that the CAE results in noise in the reconstructed fields even if the oscillations due to the POD-GP implementation are stabilized.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{GP_CAE_Plot.png}
\caption{A direct comparison of the POD-GP and CAE-LSTM methods for $Re=4000$ where one can observe noise in the ML predictions even if oscillations are stabilized.}
\label{GP_CAE_Plot}
\end{figure}
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& Re = 1000 & Re = 2000 & Re = 3000 & Re = 4000 \\ \hline
GP 2 modes & 4.197e-3 & 5.558e-3 & 6.12e-3 & 6.418e-3 \\ \hline
GP 5 modes & 1.57e-3 & 7e-3 & 1.244e-2 & 1.65e-2 \\ \hline
GP 10 modes & 1.497e-4 & 5.047e-4 & 1.063e-3 & 1.525e-3 \\ \hline
GP 20 modes & 4.607e-5 & 1.679e-4 & 4.099e-4 & 7.336e-4 \\ \hline
GP 30 modes & 4.938e-5 & 1.102e-4 & 8.333e-5 & NaN \\ \hline
CAE LSTM & 4.181e-4 & 3.912e-4 & 1.409e-4 & 1.551e-4 \\
\hline
\end{tabular}
\caption{$L_2$-norm error metrics for the final time reconstructions of the CAE-LSTM compared against POD-GP. This table outlines results where the CAE-LSTM and POD-GP deployments are trained anew for each $Re$. The CAE-LSTM error is lower for comparable compression (two degrees of freedom).}
\label{Table1}
\end{table}
We now extend the CAE-LSTM to parametric interpolation. By training the framework on full-order datasets generated for different $Re$, our framework can interpolate within a physical regime for quick generation of full-order dynamics at novel parameter choices. We achieve this by appending another scalar component, the viscosity, to the latent-space representation. For training, we obtain snapshots from 19 simulations (i.e., with uniformly varying values of $Re$) and train a common CAE for all of the simulations. This lets us obtain a sequence of latent space representations for each full-order model, concatenated with their respective viscosities. We then train an LSTM, also common across all of the simulations, on these sequences in the same manner as in the previous experiments. Inferences can then be performed at a novel parameter choice with ease.
Our parametric LSTM has a similar architecture to the one we used for single-parameter data in the previous sections. The differences include 40 neurons in the hidden cells and a smaller batch size of 32. We remark that the CAE is identical to the one used previously. The performance of the CAE in reconstructing fields with varying dissipation on the shocked profiles is shown in Figure \ref{MP_Reconstruction_Burgers}. The latent space representation of 2 degrees of freedom is expressive enough to capture the difference in the sharpness of the discontinuity for different viscosities. A parametric LSTM is then trained on these compressed representations, with results as shown in Figure \ref{MP_Burgers_LSTM}. We observe that the trends are reproduced appropriately for parameters that were not a part of the training data set. Finally, in Figure \ref{Burgers_CAE_LSTM_ROM}, we demonstrate that the reconstructed full-order dynamics for a novel testing parameter accurately adhere to the true solution over time. The final-time reconstruction mean-squared error averaged across different testing viscosities was found to be $1.17\times 10^{-4}$, which is comparable to the cases where training was performed for solely one viscosity.
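A sketch of this data-preparation step is shown below; whether the parameter channel is also predicted by the LSTM is an implementation detail, and here we assume only the latent state is forecast.
\begin{verbatim}
def make_parametric_sequences(z, nu, window=10):
    # z: (T, latent_dim) encoded snapshots of one simulation; nu: its viscosity.
    zp = np.concatenate([z, np.full((z.shape[0], 1), nu)], axis=1)
    X = np.stack([zp[i:i + window] for i in range(len(z) - window)])
    y = z[window:]                 # target: next latent state (parameter fixed)
    return X, y
\end{verbatim}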
\begin{figure}
\centering
\mbox{
\subfigure[$Re=250$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_1.png}}
\subfigure[$Re=450$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_2.png}}
} \\
\mbox{
\subfigure[$Re=650$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_5.png}}
\subfigure[$Re=850$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_6.png}}
} \\
\mbox{
\subfigure[$Re=1050$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_10.png}}
\subfigure[$Re=1250$]{\includegraphics[width=0.42\textwidth]{MP_Reconstruction_tf_11.png}}
}
\caption{The ability of a CAE to reconstruct fields sampled from different parameters (Reynolds numbers) showing different sharpness in shock profiles. These snapshots are for parameters that were not included in the training dataset and are obtained by evolving only in the latent space.}
\label{MP_Reconstruction_Burgers}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{MP_LS.png}
\caption{The ability for the parametric LSTM to learn latent space trends for different parameters that are not a part of the training data set. The $y$-axes indicate the magnitudes of the latent space encoding.}
\label{MP_Burgers_LSTM}
\end{figure}
\begin{figure}
\centering
\mbox{
\subfigure[$t=0.2$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_10.png}}
\subfigure[$t=0.4$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_20.png}}
\subfigure[$t=0.6$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_30.png}}
} \\
\mbox{
\subfigure[$t=0.8$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_40.png}}
\subfigure[$t=1.0$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_50.png}}
\subfigure[$t=1.2$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_60.png}}
} \\
\mbox{
\subfigure[$t=1.4$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_70.png}}
\subfigure[$t=1.6$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_80.png}}
\subfigure[$t=1.8$]{\includegraphics[width=0.33\textwidth]{MP_Reconstruction_90.png}}
}
\caption{An example ROM characterized by the CAE-LSTM compared to the full-order solution. This parameter was not a part of the training dataset for either the CAE or the parametric LSTM.}
\label{Burgers_CAE_LSTM_ROM}
\end{figure}
\subsection{Shallow water equations}
\label{SS:7}
Our two-dimensional assessments utilize the inviscid shallow water equations which are a prototypical system of equations for geophysical flows. The governing equations are hyperbolic in nature and are
\begin{align}
\begin{gathered}
\frac{\partial(\rho \eta)}{\partial t}+\frac{\partial(\rho \eta u)}{\partial x}+\frac{\partial(\rho \eta v)}{\partial y} =0, \\
\frac{\partial(\rho \eta u)}{\partial t}+\frac{\partial}{\partial x}\left(\rho \eta u^{2}+\frac{1}{2} \rho g \eta^{2}\right)+\frac{\partial(\rho \eta u v)}{\partial y} = 0, \\
\frac{\partial(\rho \eta v)}{\partial t}+\frac{\partial(\rho \eta u v)}{\partial x}+\frac{\partial}{\partial y}\left(\rho \eta v^{2}+\frac{1}{2} \rho g \eta^{2}\right) = 0.
\end{gathered}
\end{align}
In the above set of equations, $\eta$ corresponds to the total fluid column height, and $(u,v)$ is the fluid's horizontal flow velocity, averaged across the vertical column. Further, $g$ is the acceleration due to gravity, and $\rho$ is the fluid density, which we fix at 1.0. The first equation expresses the law of mass conservation whereas the other two express the conservation of momentum. Our initial conditions are
\begin{align}
\rho \eta (x,y,t=0) &= e^{-\left(\frac{(x-\bar{x})^2}{2(5\times 10^{4})^2} + \frac{(y-\bar{y})^2}{2(5\times 10^{4})^2}\right)}, \\
\rho \eta u(x,y,t=0) &= 0, \\
\rho \eta v(x,y,t=0) &= 0,
\end{align}
while our two-dimensional domain is a square with periodic boundary conditions. We generate data with full-order solves of the above system of equations until $t=0.5$ with a time step of 0.001. Our full-order model uses a fourth-order accurate Runge-Kutta temporal integration scheme and a fifth-order accurate weighted essentially non-oscillatory (WENO) scheme \cite{liu1994weighted} for computing state reconstructions at cell faces. The Rusanov Riemann solver is utilized for flux reconstruction after cell-face quantities are calculated. The reader is directed to \cite{hairer1991solving} for a detailed discussion of the temporal integration scheme and to \cite{maulik2017resolution} for details on WENO and the Riemann solver implementation in two-dimensional problems. For ease of notation we denote $\rho \eta$ as $q_1$, $\rho \eta u$ as $q_2$ and $\rho \eta v$ as $q_3$ in our subsequent discussions. The control parameters in the case of the shallow water equations are $\bar{x}$ and $\bar{y}$, which control the initial location of the Gaussian pulse in the domain. Our goal is to obtain a reduced-basis evolution for a new choice of these control parameters given \emph{a priori} snapshots from full-order forward solves at pre-selected control parameter choices. We use 90 full-order simulations for training and validation and 10 test simulations for \emph{a posteriori} assessments. One hundred snapshots are utilized for each simulation, i.e., a snapshot is saved every five steps of the time integrator.
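As an illustration, the initial fields above can be assembled as follows; the domain extent and grid resolution are assumptions for the sketch (the actual solver settings are in the supporting code).
\begin{verbatim}
import numpy as np

def swe_initial_condition(N=64, L=1.0e6, xbar=5.0e5, ybar=5.0e5, sigma=5.0e4):
    # Gaussian pulse in q1 = rho*eta; the momenta q2, q3 start at rest.
    x = np.linspace(0.0, L, N)
    X, Y = np.meshgrid(x, x, indexing='ij')
    q1 = np.exp(-((X - xbar)**2 + (Y - ybar)**2) / (2.0 * sigma**2))
    q2 = np.zeros_like(q1)
    q3 = np.zeros_like(q1)
    return q1, q2, q3
\end{verbatim}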
\subsubsection{Convolutional autoencoder} \label{Shallow_CAE}
For the nonlinear encoding of the shallow water equations, we use the two-dimensional CAE detailed in the schematic in Figure \ref{2D_Schematic}. Our three conserved variables are encoded using three input and output channels in our autoencoder. We scaled the data to zero mean and unit variance to ensure that losses due to inaccurate reconstruction were weighted fairly across the different variables. We use an architecture that is similar to the Burgers' example in that a bottlenecked framework ensures the compression of the full-order field. A key difference is that the ``bottleneck" layers are supplemented with fully connected layers to allow for an arbitrary latent dimensionality. We choose a latent space of 6 degrees of freedom for this problem, which represents an approximate compression ratio of 680. A batch size of 24 with a learning rate of 0.001 was utilized to train the framework. Each two-dimensional convolutional layer and densely connected bottleneck layer utilized the swish activation function, which has been shown to be superior to ReLU for significantly deep networks \cite{ramachandran2017searching}. In contrast, the output layer of the network is a linear layer. A total of 9000 snapshots are randomly partitioned into 8100 for training and 900 for validation, with the latter used for an early-stopping criterion. The trained CAE is tested on 1000 snapshots from 10 held-out simulations. Specific details of the architecture, such as the number of channels in each layer of the CAE and the size of the pooling, may be found in our supporting code. Figure \ref{SWE_CAE_Reconstruction} shows the ability of the decoder to reconstruct from the latent space.
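A condensed sketch of this two-dimensional architecture is given below, reusing the Keras imports from the one-dimensional sketch; the convolutional depth and channel counts are assumptions, while the three input channels, the dense bottleneck with a 6-dimensional latent space, the swish activations and the linear output follow the description above.
\begin{verbatim}
def build_cae_2d(nx=64, ny=64):
    inp = layers.Input(shape=(nx, ny, 3))        # channels: q1, q2, q3
    x = inp
    for f in (16, 8, 4):                         # channel counts assumed
        x = layers.Conv2D(f, 3, padding='same', activation='swish')(x)
        x = layers.MaxPooling2D(2)(x)
    shape = tuple(x.shape[1:])
    x = layers.Flatten()(x)
    z = layers.Dense(6, activation='swish')(x)   # 6 latent degrees of freedom
    x = layers.Dense(int(np.prod(shape)), activation='swish')(z)
    x = layers.Reshape(shape)(x)
    for f in (4, 8, 16):
        x = layers.Conv2D(f, 3, padding='same', activation='swish')(x)
        x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(3, 3, padding='same', activation=None)(x)  # linear
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss='mse')
    return model   # trained with batch_size=24 on standardized snapshots
\end{verbatim}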
\begin{figure}
\centering
\fbox{\includegraphics[trim={2.5cm 8.5cm 2.5cm 2.5cm},clip,width=0.95\textwidth]{2D_Schematic.pdf}}
\caption{A schematic of the two-dimensional CAE-LSTM for the shallow water equations. The nonlinear autoencoder embeds the data into latent space, and then the recurrent network can be used for time-series advancement of a flattened representation of the multidimensional system.}
\label{2D_Schematic}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={0cm 3cm 0 0},clip,width=\textwidth]{SWE_Reconstruction_Test.png}
\caption{The reconstruction ability of the CAE for the conserved variables in the shallow water equations. This snapshot is from a representative test simulation starting from an unseen initial condition.}
\label{SWE_CAE_Reconstruction}
\end{figure}
\subsubsection{LSTM}
We again couple the CAE with an LSTM that is conditioned on the control parameters. In this set of experiments, our control parameter affects the location of a Gaussian pulse applied to $\rho \eta$ at $t=0$. Our goal is to replicate trends of field evolution for a novel initial condition given examples of full-order forward solves to train from. To create training data for the LSTM, we apply the trained CAE to compress the data, then concatenate the parameter information. Our LSTM architecture is 3 cells with 50 neurons in each cell. A batch size of 64 is used with the default learning rate of 0.001 for the Adam optimizer. As outlined previously, 10\% of the total non-test data is set aside for the purpose of validation and early stopping. A time-window of 10 points, utilized for the LSTM forecasts, provided adequately accurate trajectory predictions in the latent space.
\subsubsection{CAE-LSTM modeling}
Figure \ref{SWE_LSTM_Testing} shows the ability of the LSTM module to reconstruct dynamical trends in the latent space for a sample test simulation. The reference truth for these curves has been obtained by reconstructing (with use of the CAE) full-order solutions for a test control parameter that was not utilized during training. One can observe that the dynamical trends are replicated by the parameterized LSTM. Evolutionary trends towards the end of the dynamics suggest that the dissipation of energy in the system by the numerical method is captured adequately. Figure \ref{SWE_ROM_1} shows the ability of the CAE-LSTM surrogate model to identify coherent spatial features in a sample test simulation. For comparison, we show results from benchmark POD-GP deployments for 6 and 40 retained modes. At an equivalent compression ratio, the CAE-LSTM is able to represent the solution well. Even at 40 retained modes, the truncation of the dynamics in POD space still leads to Gibbs phenomena in the POD-GP solution, which demonstrates the robustness of our proposed method. Contour plots at two representative times are shown in Figures \ref{SWE_Contours_1} and \ref{SWE_Contours_2}, where one can clearly observe that the coherent structures in the flow fields are adequately recovered by the CAE-LSTM in comparison to both the 6 and 40 mode POD-GP deployments. However, one can also discern that the POD-GP method gradually converges to the true dynamics with increasing modal retention.
In terms of computational costs, the CAE-LSTM was able to provide an LSTM-based latent space forecast at 1.746 seconds per simulation. Reconstruction from latent space for a 100 snapshot simulation required 0.167 seconds. In comparison, a POD-GP ROM deployment (using either 6 or 40 retained modes) required an average of 24.67 seconds per simulation. The primary cost in POD-GP deployments is the reconstruction of the nonlinear term for the numerical calculation of fluxes, which is independent of the number of latent degrees of freedom. The nonlinear term computation for this test case was performed using a fifth-order WENO scheme, just like its full-order counterpart, and is thus a memory and compute cost that the machine-learned model bypasses. In terms of quantitative error metrics, the $q_1$ mean-squared error for all the testing data was $4.8\times10^{-4}$ for CAE-LSTM, $5.6\times10^{-4}$ for POD-GP (6 modes) and $1.7\times10^{-4}$ for POD-GP (40 modes). Similar trends were observed for $q_2$ ($4.8\times10^{-4}$, $7.8\times10^{-4}$, $2.6\times10^{-4}$) and $q_3$ ($3\times10^{-3}$, $3.3\times10^{-3}$, $1.1\times10^{-3}$). Although the mean-squared error metrics support the superiority of POD-GP at 40 retained modes, coherent structure reproduction is more accurate via the CAE-LSTM, as demonstrated in the contour plots above. The mean-squared error metrics are affected by the greater amount of fine-scale noise in the CAE-LSTM reconstructions. A possible avenue for addressing this limitation is to use intelligent loss functions or to embed physics-inspired regularization in the optimization problem.
\begin{figure}
\centering
\mbox{
\subfigure[Latent Dimension 1]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_0.png}}
\subfigure[Latent Dimension 2]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_1.png}}
} \\
\mbox{
\subfigure[Latent Dimension 3]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_2.png}}
\subfigure[Latent Dimension 4]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_3.png}}
} \\
\mbox{
\subfigure[Latent Dimension 5]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_4.png}}
\subfigure[Latent Dimension 6]{\includegraphics[width=0.42\textwidth]{LSTM_Sim_4_Mode_5.png}}
}
\caption{Hidden space evolution of a testing simulation using a parametric LSTM. The curves, here, indicate the individual degrees of freedom of a 6-dimensional latent space with the $y$-axes indicating their magnitudes.}
\label{SWE_LSTM_Testing}
\end{figure}
\begin{figure}
\centering
\mbox{
\subfigure[True $q_1$]{\includegraphics[width=0.32\textwidth]{True_q1_Time_1.png}}
\subfigure[True $q_2$]{\includegraphics[width=0.32\textwidth]{True_q2_Time_1.png}}
\subfigure[True $q_3$]{\includegraphics[width=0.32\textwidth]{True_q3_Time_1.png}}
} \\
\mbox{
\subfigure[CAE-LSTM]{\includegraphics[width=0.32\textwidth]{CAE_q1_Time_1.png}}
\subfigure[CAE-LSTM]{\includegraphics[width=0.32\textwidth]{CAE_q2_Time_1.png}}
\subfigure[CAE-LSTM]{\includegraphics[width=0.32\textwidth]{CAE_q3_Time_1.png}}
} \\
\mbox{
\subfigure[POD-GP (6 modes)]{\includegraphics[width=0.32\textwidth]{GP_6_q1_Time_1.png}}
\subfigure[POD-GP (6 modes)]{\includegraphics[width=0.32\textwidth]{GP_6_q2_Time_1.png}}
\subfigure[POD-GP (6 modes)]{\includegraphics[width=0.32\textwidth]{GP_6_q3_Time_1.png}}
}
\\
\mbox{
\subfigure[POD-GP (40 modes)]{\includegraphics[width=0.32\textwidth]{GP_40_q1_Time_1.png}}
\subfigure[POD-GP (40 modes)]{\includegraphics[width=0.32\textwidth]{GP_40_q2_Time_1.png}}
\subfigure[POD-GP (40 modes)]{\includegraphics[width=0.32\textwidth]{GP_40_q3_Time_1.png}}
}
\caption{A qualitative assessment of reconstructed dynamics using the Galerkin projection methodology for a test simulation. The superiority of the CAE reconstruction over POD-GP at the same compression ratio (latent dimension 6) is evident. POD-GP performance improves as we capture more of the variance in the data set by increasing the number of modes.}
\label{SWE_ROM_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={7.5cm 0cm 0 0},clip,width=\textwidth]{CAE_GP_Comparison_0.png}
\caption{Contour plots showing true, CAE-LSTM and GP obtained results for the three conserved variables at time $t=0.125$. This corresponds to one quarter of the simulation completed. The CAE-LSTM is seen to capture full-order spatial structures accurately in comparison to the POD-GP method (at 6 latent space dimensions).}
\label{SWE_Contours_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={7.5cm 0cm 0 0},clip,width=\textwidth]{CAE_GP_Comparison_1.png}
\caption{Contour plots showing true, CAE-LSTM and GP obtained results for the three conserved variables at time $t=0.15$. This corresponds to 30\% of the simulation completed. The CAE-LSTM is seen to capture full-order spatial structures accurately in comparison to the POD-GP method (at 6 latent space dimensions).}
\label{SWE_Contours_2}
\end{figure}
\section{Discussion and Conclusions}
\label{S:8}
In this study, we propose using a recurrent CAE framework for the reduced-order modeling of systems that are inherently advective and, therefore, high-dimensional. These systems suffer from slow convergence and instability in a linear reduced-basis space given by the POD and a Galerkin projection of the governing equations onto this space. In contrast, we demonstrate that the nonlinear embedding obtained by the CAE and the equation-free dynamics characterization by the LSTM network lead to stable reconstructions of high-dimensional physics in both space and time. We extend our machine learning framework to a parametric formulation where we concatenate the low-dimensional embedding with control parameter information to interpolate between full-order sample points in the data generation phase. Our results indicate that the proposed framework can be used for rapid exploration of a design space conditioned on a set of control parameters. Our framework utilizes a \emph{burn-in} period for the LSTM that necessitates a short computation of less than 10\% of the full-order compute. This is necessary to create a windowed input to the LSTM network. Results on test datasets show a good ability to recover physical trends for unseen control parameter choices. We are currently extending the framework by exploring couplings with active learning, wherein we adaptively learn control parameters during training in order to characterize parametric variations optimally. In addition, we are also exploring data-augmentation strategies to preclude the initial compute required for the initial LSTM window in latent space. The latter will rely on the generation of so-called \emph{ghost} points to serve as a burn-in to the ROM. Some key challenges also include the ability to incorporate unstructured grid information, particularly for problems where there is significant anisotropy in the spatial field. There is some promising work in this direction using generalized moving least squares methods \cite{trask2019gmls} and point-cloud networks \cite{kashefi2020point}. The final goal will be to incorporate these surrogate models in design frameworks that may utilize derivative-based or derivative-free optimization.
\section*{Acknowledgments}
The authors acknowledge helpful comments from Dr. Sandeep Madireddy and Dr. Arvind Mohan. This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. This research was funded in part and used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. DOE or the United States Government.
\bibliographystyle{elsarticle-num-names}
The qualitative feature that
$\Gamma(K^0\to\pi^0\pi^0) \gg \Gamma(K^+\to\pi^+\pi^0)$
is one of the oldest problems in kaon decays that is not fully understood
quantitatively. This is known as the $\Delta I=1/2$ rule.
The isospin-2 final state amplitude $A_2$ is much smaller
than the isospin-0 amplitude $A_0$. Experimentally we have
$|A_0/A_2| = 22.1$; the precise definition used here can be found
in \cite{BP1}, and a review of kaon physics is given in \cite{kreview}.
More references can be found in either of these two.
The underlying standard model process is the exchange of a $W$-boson
but due to the large difference between the kaon and $W$ masses very large
corrections can come into play and even normally suppressed contributions
can be enhanced by large factors of
$\ln(m_W^2/m_K^2)\approx 10$. At the same time, at low energies the strong
interaction coupling $\alpha_S$ becomes very large, which requires us to use
non-perturbative methods at those scales.
The resummation of large logarithms at short-distance can be done
using renormalization group methods. At a high scale the exchange
of $W$-bosons is replaced by a sum over local operators. For weak decays
these start at dimension 6. The scale can then be lowered using the
renormalization group.
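To make the resummation concrete, the following is a minimal leading-order sketch (not the NLO running used below) for the coefficients $C_\pm$ of the current--current operators $Q_\pm=(Q_2\pm Q_1)/2$; the numerical inputs are illustrative and flavour thresholds are ignored.
\begin{verbatim}
import numpy as np

def alpha_s(mu, alpha_ref=0.119, mu_ref=91.19, nf=5):
    # One-loop running coupling; the reference value at m_Z is illustrative.
    beta0 = 11.0 - 2.0 * nf / 3.0
    return alpha_ref / (1.0 + alpha_ref * beta0 / (2.0 * np.pi)
                        * np.log(mu / mu_ref))

def wilson_pm(mu, mW=80.4, nf=5):
    # LO: C+- = [alpha_s(mW)/alpha_s(mu)]^(d+-), d+- = gamma+-/(2*beta0),
    # with one-loop anomalous dimensions gamma+ = 4, gamma- = -8.
    beta0 = 11.0 - 2.0 * nf / 3.0
    d_plus, d_minus = 4.0 / (2.0 * beta0), -8.0 / (2.0 * beta0)
    r = alpha_s(mW, nf=nf) / alpha_s(mu, nf=nf)
    return r**d_plus, r**d_minus

# e.g. wilson_pm(1.0) gives C+ < 1 (suppressed) and C- > 1 (enhanced).
\end{verbatim}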
The short-distance running is now known to two loops
\cite{two-loops1,two-loops2} (NLO),
which sums the $\left(\alpha_S\ln(m_W/\mu)\right)^n$ and
$\alpha_S\left(\alpha_S\ln(m_W/\mu)\right)^n$ terms. A review of this can
be found in the lectures by A. Buras \cite{Buras}.
The major remaining problem is to calculate the matrix elements
of the local operators at some low scale. I will address some progress
on this issue in this talk. The main method was originally
proposed in Ref. \cite{BBG} arguing that $1/N_c$ counting could
be used to systematically calculate the matrix elements.
Various improvements have since been
introduced. The correct momentum routing was introduced
in \cite{BBG2}. The use of the extended Nambu-Jona-Lasinio model as
an improved low energy model was introduced for weak matrix
elements in \cite{BP2} and a short discussion of its major
advantages and disadvantages can be found in \cite{BPP}.
The results obtained were encouraging but a major problem remained.
At NLO the short-distance running becomes dependent
on the precise definition of the local operators. This scheme
dependence should also be reflected in the calculation of the matrix
elements, and in addition the scale of the renormalization group
needs to be correctly identified in the matrix-element calculation.
The more precise interpretation of the scheme of \cite{BBG}
introduced in \cite{BP2} was shown there at one loop to satisfy the
second of these criteria. In the next section I present
how this method also satisfies it at NLO and how it solves the
scheme-dependence problem as well. We call
this method the $X$-boson method. The third section describes the
numerical results
we obtained in \cite{BP1}.
Other recent work on matrix elements includes
\cite{Hambye} and \cite{eduardo}, also using the $1/N_c$ method.
A more model-dependent approach is that of \cite{Trieste}.
\section{The $X$-boson method}
The basic underlying idea is that we know how to hadronize
currents or at least that this is a tractable problem. So we replace
the effect of the local operators of
$H_W(\mu) = \sum_i C_i(\mu) Q_i(\mu)$
by the exchange of a series of colourless $X$-bosons at a low scale $\mu$.
The scale $\mu$ should be such that the $1/N_c$-suppressed contributions
no longer have large logarithmic corrections.
Let me illustrate the procedure in the simpler case of only one operator,
neglecting penguin contributions.
In the more general case all coefficients become matrices.
\begin{equation}
C_1(\mu)(\bar s_L\gamma_\mu d_L)(\bar u_L\gamma^\mu u_L)
\Longleftrightarrow
X_\mu\left[g_1 (\bar s_L\gamma^\mu d_L)+g_2 (\bar u_L\gamma^\mu u_L)
\right]\,.
\end{equation}
Summation over colour indices is understood inside the brackets.
We now determine $g_1$, $g_2$ as a function of $C_1$. This is done by
equating matrix elements of $C_1 Q_1$ with the equivalent ones
of $X$-boson exchange. The matrix elements are at the scale $\mu$ chosen
such that perturbative QCD methods can still be used and thus we can use
external states of quarks and gluons.
To lowest order this is simple. The tree level diagram
from Fig. \ref{figX}(a) is set equal to that of Fig. \ref{figX}(b)
leading to
\begin{equation}
C_1 = \frac{g_1 g_2}{M_X^2}\,.
\end{equation}
At NLO diagrams
like those of Fig. \ref{figX}(c)
and \ref{figX}(d) contribute as well leading to
\begin{figure}[htb]
\includegraphics[width=\textwidth]{figX.eps}
\caption{\label{figX} The diagrams needed for the identification
of the local operator $Q$ with $X$-boson exchange in the case of
only one operator and no Penguin diagrams. The wiggly line
denotes gluons, the square the operator Q and the dashed line
the $X$-exchange. The external lines are quarks.}
\end{figure}
\begin{equation}
C_1\left(1+\alpha_S(\mu)r_1\right)
= \frac{g_1 g_2}{M_X^2}\left(1+\alpha_S(\mu)a_1+
\alpha_S(\mu)b_1\log\frac{M_X^2}{\mu^2}\right)\,.
\end{equation}
At this level the scheme dependence disappears. The left-hand side (lhs)
is scheme-independent. The right-hand side can be calculated in a
very different renormalization scheme from the lhs.
The infrared dependence of $r_1$ is
present in precisely the same
way in $a_1$ such that $g_1$ and $g_2$ are scheme-independent
and independent of the precise infrared definition of the external state
in Fig. \ref{figX}.
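To make this explicit, expanding the matching condition to first order in $\alpha_S$ gives
\begin{equation}
\frac{g_1 g_2}{M_X^2} = C_1\left[1+\alpha_S(\mu)\left(r_1-a_1
-b_1\log\frac{M_X^2}{\mu^2}\right)\right]+{\cal O}(\alpha_S^2)\,,
\end{equation}
so the scheme dependence of $r_1$ cancels against that of $C_1$ (the product $C_1(1+\alpha_S r_1)$ is the physical amplitude), while the infrared dependence of $r_1$ cancels in the difference $r_1-a_1$, leaving $g_1 g_2$ unambiguous.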
One step remains: we now have to calculate the matrix element
of $X$-boson exchange between meson external states.
The integral over $X$-boson momenta we split in two
\begin{equation}
\label{split}
\int_0^\infty dp_X\frac{1}{p_X^2-M_X^2}
\Longrightarrow
\int_0^{\mu_1}dp_X\frac{1}{p_X^2-M_X^2}
+\int_{\mu_1}^\infty dp_X\frac{1}{p_X^2-M_X^2}\,.
\end{equation}
The second term involves a high momentum that needs to flow back
through quarks or gluons and leads through diagrams like the one
of Fig. \ref{figX}(c)
to a four-quark operator with a coefficient
\begin{equation}
\frac{g_1 g_2}{M_X^2}\left(\alpha_S(\mu_1)a_2
+\alpha_S(\mu_1)b_1\log\frac{M_X^2}{\mu_1^2}\right)\,.
\end{equation}
The four-quark operator thus
needs to be evaluated only in leading order in $1/N_c$.
The first term we have to evaluate in a low-energy model with as much
QCD input as possible.
The $\mu_1$ dependence cancels between the two terms in (\ref{split})
if the low-energy model is good enough and all dependence on
$M_X^2$ cancels out to the order required as well.
Calculating the coefficients $r_1$, $a_1$ and $a_2$ gives the
required correction to the naive factorization method as used
in previous $1/N_c$ calculations.
It should be stressed that in the end all dependence on $M_X$ cancels
out. The $X$-boson is a purely technical device to correctly
identify the four-quark operators in terms of well-defined products of
nonlocal currents.
\section{Numerical results}
We now use the $X$-boson method with $r_1$ as given in \cite{two-loops1}
and $a_1=a_2=0$ (the calculation of the latter is in progress),
and we set $\mu=\mu_1$. For $B_K$ we can extrapolate to the pole
for the real case ($\hat B_K$) and in the chiral limit ($\hat B_K^\chi$),
and for $K\to\pi\pi$ we can extract the
values of the octet ($G_8$), weak mass term ($G_8^\prime$)
and 27-plet ($G_{27}$) couplings.
We obtain
\begin{equation}
\hat B_K = 0.69\pm0.10\,;~ \hat B_K^\chi= 0.25\mbox{--}0.40\,;~
G_8= 4.3\mbox{--}7.5\,;~ G_{27}=0.25\mbox{--}0.40\mbox{ and }
G_8^\prime=0.8\mbox{--}1.1\,,
\end{equation}
to be compared with the experimental values
$G_8\approx6.2$ and $G_{27}\approx0.48$ \cite{BP1,Kambor}.
In Fig. \ref{figg8} the $\mu$ dependence of $G_8$ is shown
and in Fig. \ref{figg8_comp} the contributions from the
different operators.
\begin{figure}
\begin{minipage}[t]{0.485\textwidth}
\includegraphics[width=\textwidth]{g8.eps}
\caption{\label{figg8} The octet coefficient $G_8$ as a function of
$\mu$ using the ENJL model and the one-loop Wilson coefficients,
the 2-loop ones and those including the $r_1$ (SI). In
the latter case also the factorization (SI fact)
and the approach of \cite{Hambye} (SI~quad) are shown.}
\end{minipage}
\hfill
\begin{minipage}[t]{0.485\textwidth}
\includegraphics[width=\textwidth]{g8_comp.eps}
\caption{\label{figg8_comp} The composition of $G_8$ as a function of
$\mu$. Shown are $Q_2$, $Q_1+Q_2$, $Q_1+Q_2+Q_6$ and all 6 $Q_i$.
The coefficients $r_1$ are included in the Wilson coefficients.}
\end{minipage}
\end{figure}
\section{Conclusions}
I showed how the $X$-boson method allows one to treat the NLO
scheme dependence correctly, and that using this method with the
ENJL model at low energies reproduces the $\Delta I=1/2$ rule
{\em quantitatively} without any free parameters.
\section{Introduction}
O and B type stars, with typical values of projected rotational velocities ($v \sin i$) around 100 km\,s$^{-1}$ and higher, have the largest average $v \sin i$ values among all main-sequence stars. Stellar rotation appears to be a fundamental parameter constraining the formation of these massive stars and the environments in which they are born, as well as their subsequent evolution. For instance, there is observational evidence that stars formed in denser environments tend to rotate faster than those formed in associations \citep{wolff07} and for O and B stars in the field the proportion of slow rotators seems to be even higher (see \citealt{hg06a} for open clusters and \citealt{daflon07} for the Cep OB2 association). In addition, rotation may modulate the formation of massive field stars. \cite{OL11} cite this trend, together with additional empirical evidence based on the stellar clustering law, IMF, and direct observations, as evidence that significant numbers of field massive stars form {\it in situ}, i.e., they
were not born in clusters. Also, rotation might help in understanding the origin of runaway stars. The $v\sin i$ distributions of runaway stars have not been much studied in the literature. \cite{martin06} studied the $v\sin i$ distribution of high-latitude OB runaway stars and noted the lack of slow rotators compared to a field sample. This was interpreted in that study as evidence that those runaway stars might have been ejected from OB associations.
The study of $v \sin i$ distributions of samples of OB stars born in different environments, such as clusters, OB associations or the general Galactic field, and selected without bias concerning cluster membership, can be used to probe the interplay between star formation and stellar rotation.
In this paper we analyse such a sample; we present the spectroscopic observations and a first characterization of
a sample of 350 OB stars located within $\sim$ 2 kpc from the Sun.
The goal of this study is to characterize the stars in terms of their effective temperatures and their projected rotational velocities, with emphasis on the $v\sin i$ distributions of stars in different environments.
These stars will be analysed in terms of their chemical composition in a future study.
This paper is divided as follows: Sect. \ref{observation} describes the observations and sample selection; Sect. \ref{binarity} identifies the binary or multiple stars in the observed sample; Sect. \ref{spectral} discusses the derived effective temperatures and spectral classification for the sample; and projected rotational velocities are derived in Sect. \ref{velocity}. In Sect. \ref{discussion} we discuss the $v \sin i$ distributions obtained for the studied sample, and in Sect. \ref{conclusions} we present the conclusions.
\section{Observations and the Sample}
\label{observation}
Based on the spectral type as the sole criterion, we selected 379 O9 to B4 main sequence stars from the HIPPARCOS catalogue \citep{Perryman07}.
High-resolution spectra were then obtained for these stars on January 8, 9
and April 8, 2007 with the MIKE spectrograph at the Magellan Clay 6.5 m telescope at Las Campanas Observatory in Chile. MIKE \citep{Bernstein03} is a double \'echelle spectrograph that registers the whole spectrum on two CCDs (red side $\lambda 4900 - 9500$ \AA, and blue side $\lambda 3350-5000$ \AA) in a single exposure. Here, the blue spectra are analyzed as these contain most of the diagnostic spectral lines needed for estimating $v\sin i$, spectral type, and the effective temperature ($T_{eff}$) of the star. The observed spectra have a spectral resolution of $R\sim55,000$ and were obtained using a slit width of 0.7 arcsec.
In order to minimize possible evolutionary effects on the $v\sin i$ and given that the He {\sc i} line width calibration adopted in this study (\citealt{daflon07}, Section 4) is valid for main sequence stars, we screened the observed spectra in order to exclude all evolved stars from the sample.
The Balmer lines and other spectral features which are sensitive to surface gravity, such as
the line ratios $\lambda 4686$ He {\sc ii}/$\lambda 4713$ He {\sc i} (stars with spectral types O9--B0)
and $\lambda 4552$ Si {\sc iii}/$\lambda4387$ He {\sc i}
(stars classified as B1 or later), were used as the primary luminosity criteria.
Our final sample consists of 350 stars and is expected to contain only main sequence stars and not giants or supergiants.
The observed sample of stars is displayed in Fig. \ref{fig:coords} in terms of their Galactic longitude and heliocentric distance projected onto the Galactic plane.
The stars in the sample are all nearby ($\sim80\%$ are within 700 pc) and relatively bright ($V\sim5-10$). Spectra with signal-to-noise ratios of the order of 100 were achieved with short exposure times ranging from a few seconds to a few minutes.
The spectra were reduced with the Carnegie Observatories python pipeline\footnote{Available at {\tt http://obs.carnegiescience.edu/Code/mike}} and followed standard data reduction procedures: bias subtraction, division by flat field, and wavelength
calibration. In addition, small pieces containing the lines of interest were manually normalized to a unit continuum
using the task \texttt{continuum} in IRAF\footnote{http://iraf.noao.edu/}.
Sample spectra are shown in Fig. \ref{fig:spec} in the spectral region between $\lambda\lambda4625-4665$ \AA, which contains spectral lines of C, N, O and Si. The spectra are shown for 5 target stars and these are displayed in order of increasing temperature.
\begin{figure}
\plotone{f1.eps}
\caption{Polar plot showing the positions of the sample stars projected onto the Galactic plane.
The radius is limited to 1 kpc and the concentric dotted circle represents the distance of 0.7 kpc,
within which $\sim$80\% of the stars in our sample are located.
The open red circles are spectroscopic binaries/multiple systems identified in our sample.
Distances of the stars are more uncertain beyond 0.5 kpc from the Sun.}
\label{fig:coords}
\end{figure}
\begin{figure}
\plotone{f2.eps}
\caption{Example spectra of five sample stars in the region $\lambda\lambda 4625-4665$ \AA. Some spectral lines are identified.
The spectra were arbitrarily displaced in intensity for better viewing.
\label{fig:spec}}
\end{figure}
\section{Stellar Characterization}
\label{char}
\subsection{Identification of Spectroscopic Binaries}
\label{binarity}
It is likely that most massive OB stars form in clusters or associations, with the probability of a star forming with a companion being high.
The recent study by \cite{oudmaijer10}, for example, found a binary fraction of $\sim30\%$ in their photometric survey of B and Be stars.
A first objective in this study is to identify those stars, among the 350 stars observed, that show spectral signatures of binary or multiple components.
This was done through a careful visual inspection of their spectra. Single-line spectroscopic binaries are
not detected here, as the spectra are only from single epoch observations.
Spectroscopic binaries will be discarded from further analysis in this study since the methodology here is most
appropriate for spectra showing a single component.
Some stars in our sample were identified as clearly having double, multiple or asymmetric spectral lines.
In addition, we flagged those stars in our sample which were found to be binary or multiple systems in the large survey of stellar multiplicity within the HIPPARCOS catalogue by \cite{eggleton08} and/or appeared as binaries in the study of OB star variability based on HIPPARCOS photometry by \cite{lefevre09}.
Table \ref{tab:bin} lists 78 stars culled from the sample as spectroscopic binaries or multiple systems,
representing 22\% of the stars in our sample. Column 1 has the star identification and column 2 lists the spectral types from SIMBAD\footnote{http://simbad.u-strasbg.fr}. In column 3, stars are classified as `SB' if they were found here to be a spectroscopic binary or multiple system; `asym' if they exhibited asymmetric line profiles; and `ET08' or `Lef09' if they appeared in \cite{eggleton08} or \cite{lefevre09}, respectively.
The stars in Table 1 will not be analyzed in the remainder of this paper.
\subsection{Spectral Types and Effective Temperatures}
\label{spectral}
The spectral types of the stars were determined based on the classification system
presented in the Atlas of OB stars by \cite{WaFi90}.
Relative intensities of some key absorption line ratios such as:
$\lambda 4471 $ He {\sc i} / $\lambda 4481 $ Mg {\sc ii}; $\lambda 4630 $ N {\sc ii} / $\lambda 4631 $ Si {\sc iv};
$\lambda 4641 $ N {\sc iii} / $\lambda 4643 $ N {\sc ii}, and
$\lambda 4649 $ C {\sc iii} / $\lambda 4650 $ O {\sc ii} were used to assign spectral types.
In order to map the Walborn \& Fitzpatrick spectral types into our sample,
a small grid of non-LTE synthetic spectra of two spectral regions, $\lambda\lambda$4450 -- 4490 \AA\
and $\lambda\lambda$4630 -- 4700 \AA\, were computed for $T_{eff}$'s between 15,000 -- 33,000 K, surface gravity
$\log g = 4.0$, and solar composition. The theoretical spectra were
calculated with the codes \texttt{TLUSTY} and \texttt{SYNPLOT} (\citealt{hubeny88,HubenyLanz95}).
The Walborn \& Fitzpatrick standard star spectra
were then visually matched to their closest synthetic counterpart in the grid;
spectral types assigned as O9, B0, B1, B2, B3, B4 and B5 were found to correspond to model spectra
with $T_{eff}$'s of 33,000K; 30,000K; 25,000K; 20,000K; 18,000K; 16,000K and 15,000K, respectively.
Synthetic and observed spectra were then compared by visual inspection in order to assign spectral types for the target stars.
The goal was simply to determine an appropriate spectral type to each star, and not
to match in detail the observed and theoretical spectra in a fine analysis.
Since a fraction of the stars in our sample have spectral lines somewhat blended by rotation,
synthetic spectra were convolved for $v\sin i$ (in steps of $v\sin i = 50$ km s$^{-1}$)
in order to aid in the assignment of spectral types of broad lined stars.
Spectral types for the target stars are listed in Table \ref{tab:sample} (column 2).
Effective temperatures for the stars were estimated from a calibration
of the classical reddening free parameter $Q$ (\citealt{johnson58}; $Q=(U-B)-X\cdot(B-V)$, where $X=E(U-B)/E(B-V)$).
In order to estimate $T_{eff}$ for the sample stars in this study we will adopt the $T(Q)$ calibration presented in \cite{massey89} and defined below:
\begin{equation}
\log T_{eff} = 3.994-0.267\cdot Q+0.364\cdot Q^2.
\end{equation}
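For reference, a small Python sketch of this calibration, together with the reddening-free index itself, is given below; the slope $X=0.72$ is the canonical value for a normal reddening law and is an assumption of the sketch.
\begin{verbatim}
import numpy as np

def q_index(U_B, B_V, X=0.72):
    # Reddening-free Q (Johnson 1958), Q = (U-B) - X*(B-V).
    return U_B - X * B_V

def teff_from_q(Q):
    # T(Q) calibration of Massey et al. (1989), Eq. (1).
    return 10.0**(3.994 - 0.267 * Q + 0.364 * Q**2)

# e.g. teff_from_q(-0.87) ~ 3.2e4 K and teff_from_q(-0.60) ~ 1.9e4 K
\end{verbatim}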
A $T(Q)$ calibration has also been proposed by \cite{daflon99}. However, a large number of stars in the sample studied here are much cooler than the validity range of the Daflon {\it et al.} calibration.
Figure \ref{fig:calib} shows as a solid blue line the calibration by \cite{massey89} for the $Q$-interval of the stars in this study.
The calibration by \cite{daflon99} is also shown in Fig. \ref{fig:calib} as black dashed line, for comparison.
The average differences between the two calibrations are relatively small:
$\langle\Delta T_{eff}\rangle= -380$ K and $\sigma= 177$ K, for $Q$-values ranging between $-0.62$ and $-0.87$; and $\langle\Delta T_{eff}\rangle= +583$ K and $\sigma = 405$ K for $Q$-values
between $-0.61$ to $-0.53$.
Effective temperatures for those stars with measured radii from \cite{code76} are shown by red circles in Fig. \ref{fig:calib}. The overall agreement of the \cite{code76} results with the calibrations is generally good but with significant scatter, which is indicative of the uncertainties when using the $Q$-index as a temperature indicator.
More recently, \cite{paunzen05} also presented a calibration for the $Q$-index with the effective temperature
and the $T\times Q$ relation in that study is quite similar to the one derived in \cite{massey89}.
\begin{figure}
\plotone{f3.eps}
\caption{The T(Q) calibration from \citet[solid blue line]{massey89} which was adopted in this study to estimate effective temperatures for the target stars. The Q-index calibration from \cite{daflon99} is also shown for comparison (black dashed line).
The red filled circles represent the stars with measured radius and effective temperatures in \cite{code76}. \label{fig:calib}}
\end{figure}
The Johnson color indices $(U-B)$ and $(B-V)$ for the studied stars were obtained from \cite{mermilliod87}. For those 57 stars in the sample without published Johnson photometry, UBV colors were computed from Str\"omgren photometry from \cite{HM80, HM98}, using the transformation in \cite{HB01}. In addition, there were 41 remaining stars in our sample for which there was no available photometry in the literature, and in those cases we relied on spectral types in order to obtain the intrinsic colors from the tables in \cite{fitzgerald70} and then estimate $Q$.
In columns 3, 4, and 5 of Table 2 we list the $V$ magnitudes, the $Q$ parameters, and the derived $T_{eff}$'s for 272 stars of the observed sample.
The estimated $T_{eff}$'s here are good for the purpose of a rough stellar characterization of our sample and, in particular, these suffice for a solid derivation of $v\sin i$ values since the grid of synthetic spectra used here (Sect. \ref{velocity}) has been computed for steps of 5,000 K in $T_{eff}$.
\section{Projected Rotational Velocities}
\label{velocity}
Projected rotational velocities for the targets were estimated from measurements of the full width at half maximum (FWHM) of 3 He {\sc i} lines at $\lambda4026$ \AA, $\lambda4388$ \AA\ and $\lambda4471$ \AA. The FWHMs of the He {\sc i} line profiles were measured using the IRAF package \texttt{splot}, using a procedure consistent with that adopted
in \cite{daflon07}: the continuum level was marked at the line center, and the half-width of the red wing was measured at the half-maximum and then doubled in order to derive the FWHM.
Figure \ref{fig:He_lines} shows examples of the sample He {\sc i} lines for the observed stars HIP 73624 (black continuous line) and HIP 33492 (red dashed line).
\begin{figure}
\plotone{f4.eps}
\caption{Sample spectra showing the 3 He {\sc i} lines that were used to derive the projected rotational velocities for the target stars. The bottom spectra (black) in the three panels are for the star HIP 73624 with $v\sin i = 17 $ km s$^{-1}$
and the top spectra (red) are for the star HIP 33492 with $v\sin i = 71$ km s$^{-1}$.
The spectra were arbitrarily displaced in intensity for better viewing.}
\label{fig:He_lines}
\end{figure}
The measured FWHM were converted to $v\sin i$'s via interpolating in the grid of synthetic FWHM of He {\sc i} lines presented in Table 2 of \cite{daflon07} for the adopted effective temperature of each star. The synthetic He {\sc i} profiles in that study were computed in non-LTE using the codes \texttt{DETAIL} \citep{giddings81} and \texttt{SURFACE} \citep{butler85} and were based on the helium model atom described in \cite{przybilla05}. We note that the macroturbulent velocity was kept as zero in the calculation of the synthetic profiles by \cite{daflon07} but it is likely to result in additional broadening of the line profiles.
\cite{simon-diaz10} carried out a careful analysis that disentangled the effects of macroturbulence and rotation in line profiles using the Fourier transform method, and obtained macroturbulent velocities for early B-type dwarfs that are generally lower than 20 km s$^{-1}$, with a clear decreasing trend toward later B types. In order to test the importance of neglecting macroturbulence in the synthetic FWHM of the He lines, we ran a test calculation including a Gaussian macroturbulent velocity of 20 km s$^{-1}$. The results indicate that, considering the uncertainties of the method adopted here, including macroturbulence at this level has a negligible effect on the measured FWHM of the synthetic spectra of the sample He {\sc i} lines.
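The conversion from FWHM to $v\sin i$ amounts to inverting a monotonic relation by interpolation at the adopted $T_{eff}$. The sketch below illustrates this for one line; the grid values are placeholders standing in for Table 2 of \cite{daflon07}.
\begin{verbatim}
import numpy as np

# Placeholder synthetic grid for one He I line at a fixed Teff;
# the actual values come from Table 2 of Daflon et al. (2007).
VSINI_GRID = np.array([  0.,  25.,  50., 100., 150., 200.])  # km/s
FWHM_GRID  = np.array([0.60, 0.75, 1.00, 1.70, 2.40, 3.10])  # Angstrom

def vsini_from_fwhm(fwhm):
    """Invert the monotonic FWHM(v sin i) relation."""
    return np.interp(fwhm, FWHM_GRID, VSINI_GRID)

measured = [0.82, 1.35, 2.05]   # FWHMs of the three He I lines
est = [vsini_from_fwhm(f) for f in measured]
print(np.mean(est), np.std(est))  # final v sin i and its scatter
\end{verbatim}
The final value and its uncertainty are then the mean and standard deviation over the three lines, as in the last line of the sketch.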
The measured values of FWHM for the 3 He {\sc i} lines used in the $v\sin i$ determinations are listed in Table \ref{tab:sample} (columns 6, 7, and 8); columns 9, 10 and 11 list the $v\sin i$'s derived from each He line; columns 12 and 13 give the final $v\sin i$ values for the studied stars: these represent the average values and the standard deviations in each case.
We note that $v \sin i$'s were not derived for 6 stars with $T_{eff}$'s higher than 33,700 K, as they fall outside the validity range of
the $v \sin i$ calibration from \cite{daflon07}.
Figure \ref{fig:comparison} shows a comparison of the $v\sin i$ results in this study with those from other determinations in the literature: results from \cite{ALG02} are represented as filled blue circles while results from the \cite{wolff07} study are represented as red filled triangles.
\cite{ALG02} derived $v\sin i$ for a sample of B stars of The Bright Star Catalogue
with luminosity classes between I and V, using a calibration for FWHM of He {\sc i} and
Mg {\sc ii} lines anchored on standard stars of \cite{Slettebak75}.
\cite{wolff07} obtained a relationship between FWHM and $v\sin i$ based on
results from He {\sc i} lines of \cite{hg06a}.
We note that the $v\sin i$'s for the stars in common with \cite{ALG02} are systematically lower than the ones derived here in the range between $\sim$ 0 -- 90 km s$^{-1}$ ($\langle\Delta v\sin i \rangle$(This Study -- Abt {\it et al.})$ = 9$ km s$^{-1}$ for 24 stars in common); higher than ours in the range between 90 -- 150 km s$^{-1}$ ($\langle\Delta v\sin i\rangle$(This Study -- Abt {\it et al.})$\geq -15$ km s$^{-1}$); and in rough agreement for the largest $v \sin i$'s (except for one star). The $v\sin i$'s in \cite{wolff07} are mostly higher than ours, except for the stars with the lowest $v \sin i$'s. The average $v \sin i$ difference (This Study -- Wolff {\it et al.}) is $-26$ km s$^{-1}$ for the 17 stars in common. Given the uncertainties in the determinations and the methods adopted, there is reasonable agreement between the three different studies.
\begin{figure}
\plotone{f5.eps}
\caption{A comparison between the $v\sin i$'s derived in this study and those from two other studies in the literature, for the stars in common: \cite{ALG02} (blue circles) and \cite{wolff07} (white triangles). The solid line represents the locus of equal values.
\label{fig:comparison}}
\end{figure}
\section{Discussion}
\label{discussion}
\subsection {The Entire Sample}
\begin{figure}
\plotone{f6.eps}
\caption{Histogram showing the distribution of effective temperatures for the studied sample.}
\label{fig:teff}
\end{figure}
We start our discussion by showing results for the derived effective temperatures for the stars.
A histogram showing the distribution of effective temperatures for 272 OB stars is shown in Fig. \ref{fig:teff}.
The effective temperatures of the target sample peak around 17,000 K, with most stars being cooler than 28,000 K.
Figure \ref{fig:SP} shows box plots of the $v\sin i$ values for the studied stars in each spectral type bin.
Each box extends from the lower to the upper quartile of the data, with a line at the median and a small box marking the mean. The whiskers extend from the box to show the range of the data; the crosses are outliers.
An inspection of this figure indicates that the mean $v\sin i$ for each spectral type bin is roughly consistent with a constant value across spectral types.
The average $v \sin i$ value computed for the studied sample is $98$ km s$^{-1}$.
\cite{hg06a} also found a distribution of mean $v\sin i$ for cluster stars which is basically flat over a similar spectral type range, although their study also includes giant stars.
Overall, the mean $v\sin i$'s obtained here for the spectral type bins B0--B2 and B3--B5 are in rough agreement with the average results for luminosity classes IV and V in \cite{ALG02}
(see Sect. \ref{velocity} for comparisons of the $v\sin i$ values for the stars in common between the two studies).
\begin{figure}
\plotone{f7.eps}
\caption{Box plot for the studied stars in terms of the spectral type.
The average $v\sin i$ for the stars in each spectral type bin is roughly constant, even considering the least populated bins.
\label{fig:SP}}
\end{figure}
The $v\sin i$ distribution of the current sample of 266 O and B stars is shown in the top panel of Fig. \ref{fig:Vmean}.
The distribution has a modest peak at low $v\sin i$'s ($\sim$ 0 -- 50 km s$^{-1}$) but is overall flat (a
broad distribution) for $v\sin i$'s roughly between 0 -- 150 km s$^{-1}$; the number of stars drops for higher values of $v\sin i$.
As previously mentioned, the targets in this study were selected considering only their spectral types in the HIPPARCOS catalogue. The sample studied here includes stars in
clusters and OB associations, as well as isolated stars that can represent some sort of field population.
\begin{figure}
\plotone{f8.eps}
\caption{Histogram of the $v\sin i$ distribution of our sample in the top panel. The bottom panel compares the normalized distribution of a subsample of our stars with a magnitude cut at $V=6.5$ with that of a sample of 312 field stars (spectral types O9--B4 IV/V) culled from \cite{ALG02}.
\label{fig:Vmean}}
\end{figure}
One of the difficulties in making meaningful comparisons between rotational velocity
distributions of stars in clusters versus stars in the `field' is in defining what
constitutes a `field' star sample. This discussion is, in fact, related to the question
of whether OB stars can form in isolation and whether all OB stars that are now isolated once belonged to a cluster. The initial idea was that OB stars formed only in clusters
and associations and were later ejected or dispersed into the Galactic field.
There is growing evidence, however, that at least a small fraction of the O
stars may be born in isolation (or from small molecular clouds).
For instance, \cite{krumholz09} used a 3-D hydrodynamic simulation to
show that the formation of isolated massive stars is possible; their simulation successfully formed a massive
binary (of 41.5 M$_\odot$ and 29.2 M$_\odot$) from a 100 M$_\odot$ molecular gas cloud.
Strong observational evidence that field OB stars may form {\it in situ} was presented by \cite{lamb10}, who found very low mass companions around apparently isolated field OB stars in the SMC. Indeed, \cite{OL11} cite several lines
of empirical evidence suggesting that {\it in situ} field massive stars constitute a significant, and perhaps dominant, component of the field OB star population.
Although samples of field stars are contaminated at some level with stars that are in the field now but were born in dense environments,
a comparison of the $v\sin i$'s obtained for the entire sample studied here with other samples taken as representative of the field population is of interest.
\cite{ALG02} provide the cornerstone work on the distributions
of projected rotational velocities of the so-called field OB stars.
The targets in that study were taken from The Bright Star Catalogue and also include stars that are members of clusters and associations.
For the sake of comparison with a field sample that is representative of the spectral types and luminosity classes of most of the studied stars,
we culled from the \cite{ALG02} sample those stars with spectral types O9 -- B4 and luminosity classes IV and V. The distribution
of $v \sin i$'s for this subsample is shown as the dashed line histogram in the bottom panel of Fig. \ref{fig:Vmean}.
We thus selected those stars of our sample with $V<6.5$, which is the magnitude limit of The Bright Star Catalogue (\citealt{HoffleitJaschek82}); this subsample is also presented in the bottom panel of Fig. \ref{fig:Vmean}.
A Kolmogorov-Smirnov test gives a probability of more than 90\% that both distributions are drawn from the same population.
These results suggest that the $v\sin i$ distribution obtained from \cite{ALG02} for the so-called field population is similar to the $v\sin i$ distribution of the brighter stars in our sample.
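For reference, this two-sample comparison can be carried out with the two-sample Kolmogorov-Smirnov routine in \texttt{scipy}; the sketch below uses random placeholder arrays in place of the two measured $v\sin i$ samples, and a large p-value (the text quotes $>0.9$) means that the hypothesis of a common parent population cannot be rejected.
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Placeholders for the V < 6.5 subsample and the Abt et al. (2002)
# field subsample; substitute the measured v sin i arrays.
vsini_bright = rng.uniform(0.0, 250.0, size=100)
vsini_field  = rng.uniform(0.0, 250.0, size=312)

stat, pvalue = ks_2samp(vsini_bright, vsini_field)
print(f"D = {stat:.3f}, p = {pvalue:.3f}")
\end{verbatim}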
\subsection{Stars in OB Associations and Clusters}
The idea that stellar rotation of OB stars in clusters relates to cluster density has been put forward in previous studies in the literature.
In particular, comparisons between the $v \sin i$ distributions of stars from clusters, OB associations, or the field have shown that stellar
members of dense associations or clusters rotate on average faster than member stars of unbound associations or the field (e.g. \citealt{wolff07,daflon07}).
Previous studies discussing rotational velocity distributions of stars in clusters include \cite{guthrie82,wolff82,wolff07,hg06a,hg08,hg10}. In general, these studies confirm that there seem to be real differences between the $v \sin i$ distributions of cluster members when compared to the field; there are fewer slow rotators in the clusters than in the field, or, equivalently, the stars in clusters tend to rotate faster. \cite{guthrie82}, however, found a bimodality in his $v\sin i$ distribution: the cluster distribution was double peaked, with one peak at $v\sin i<50$ km s$^{-1}$ and the other at $v\sin i\sim225$ km s$^{-1}$.
A comparative study of the $v \sin i$'s of all stars in our sample in connection with their birth environments (clusters/associations or field) is of interest, but firmly establishing membership is a difficult task, and detailed and careful membership determinations are beyond the scope of this paper.
Instead, in this study, we use literature results in order to select a subsample of stars for which there is secure information on their membership. For OB associations, this is based on the list of probable members from the census of OB associations in the Galactic disk from the HIPPARCOS catalogue by \cite{dezeeuw99}, and on the study of the stellar content of the Orion association by \cite{brown94}. In addition, we searched the target list in \cite{HM84} and found a few more targets to be association members. The stars in our sample that are members of higher density environments or clusters were identified by cross-checking the studied sample with the WEBDA open cluster database (\citealt{MP03}). In addition, we searched the open cluster member list of \cite{robichon99}. The membership information for each star can be found in column 15 of Table \ref{tab:sample}.
Histograms showing the $v\sin i$ distributions for the culled subsamples of OB association and cluster members are shown in Fig. \ref{fig:A_C} (red dashed lined histograms). The black solid histograms represent a larger sample combining our sample with the sample of O and B stars from \cite{daflon07}. In that paper, 143 OB stars that are members of open clusters and OB associations, plus 23 stars in H {\sc ii} regions, were observed in order to probe the radial metallicity gradient in the Galactic disk. Since the $v \sin i$'s in the present study were derived using the same grid and methodology as in \cite{daflon07}, the discussion beyond this point will be based on the combined sample (black solid histograms), given its better statistics. The distribution of $v\sin i$'s obtained for the stars in OB associations (top panel) has a relatively larger number of objects with $v \sin i$'s between 0 -- 50 km s$^{-1}$, and the number of stars declines smoothly with $v \sin i$. For stars in clusters (bottom panel) there is a smaller fraction of slowly rotating stars and an apparent peak at 50 -- 100 km s$^{-1}$.
The smooth distribution of $v \sin i$ values for the association members may result from a nearly single value of the equatorial rotational velocity viewed at random inclinations, while the cluster distribution may be more complex.
\begin{figure}
\plotone{f9.eps}
\caption{Distribution of $v\sin i$'s for the studied samples of OB association (top panel) and cluster members (lower panel) are shown as red dashed lined histograms. The black solid line histograms represent the combined sample: stars in this study plus
143 star members of clusters and associations from \cite{daflon07}. Both studies use the same methodology to derive $v \sin i$. \label{fig:A_C}}
\end{figure}
Figure \ref{fig:cdf} shows a comparison of the cumulative fractions for the $v \sin i$ distributions for the clusters and OB associations, as well as the field (from the subsample selected here from \cite{ALG02} as discussed above). The field sample has a higher fraction of slowly rotating stars ($v \sin i$ between 0 and 50 km s$^{-1}$) when compared to the OB associations or clusters. In addition, there is a clear excess of stars with $v \sin i$'s between roughly 70 -- 130 km s$^{-1}$ in the cluster distribution when compared to the OB associations as well as the field. In fact, there seems to be a gradation from cluster to OB association to field confirming the trend found by \cite{wolff07}. A Kolmogorov-Smirnov test between the field star sample and the association sample gives 92 percent probability that both samples are drawn from distinct populations and 88 percent probability that the cluster and the field are drawn from distinct populations. A K-S test between the OB associations and the clusters distributions, however, gives only a 50 percent probability that these are drawn from distinct populations. Thus, any differences between the distributions of clusters and associations in this study are not so clear and may not be statistically significant; larger studies are needed.
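The cumulative fractions in Fig. \ref{fig:cdf} are empirical distribution functions; a minimal sketch, with random placeholder samples standing in for the three environments, is given below.
\begin{verbatim}
import numpy as np

def ecdf(sample):
    """Empirical cumulative fraction at each sorted value."""
    x = np.sort(np.asarray(sample))
    return x, np.arange(1, len(x) + 1) / len(x)

rng = np.random.default_rng(1)
samples = {"field": rng.uniform(0, 250, 300),
           "association": rng.uniform(0, 250, 120),
           "cluster": rng.uniform(0, 250, 90)}

for name, s in samples.items():
    x, y = ecdf(s)
    i = np.searchsorted(x, 50.0)
    frac_slow = y[i - 1] if i > 0 else 0.0
    print(f"{name}: cumulative fraction at 50 km/s = {frac_slow:.2f}")
\end{verbatim}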
\begin{figure}
\plotone{f10.eps}
\caption{Cumulative fractions for the $v \sin i$ distributions for the clusters, OB associations and the field. There seems to be a gradation from cluster to OB association to field. A K-S test between the field star sample and the association sample gives 92 percent probability that both samples are drawn from distinct populations and 88 percent probability that the cluster and the field are drawn from distinct populations. A K-S test between the OB associations and the clusters distributions, however, gives only a 50 percent probability that these are drawn from distinct populations. \label{fig:cdf}}
\end{figure}
\subsection{Runaway Stars}
Few studies in the literature have investigated the distribution of rotational velocities in runaway OB stars.
\cite{martin06} studied the properties of a population of stars far from the Galactic plane and this included a sample of 21 Population I runaway stars. The $v \sin i$ distribution for the runaway stars was found in that study to be broad with no apparent peaks in the range $v\sin i =50$ to 200 km s$^{-1}$ and with a slight decline for values of $v\sin i$ below 50 km s$^{-1}$ (see \citealt{martin06}, Fig. 9b).
The interpretation was that the projected rotational velocity distribution for the runaways was more similar to that of an OB association than to the field; one of the main distinctions when comparing with the field is the absence of a larger number of slow rotators in the distribution of the runaway sample.
Runaway stars can be explained by two scenarios: the binary supernova scenario,
in which a star is ejected from the binary system when its companion turns into a supernova, and the dynamical ejection scenario,
in which a star is ejected from its parent cluster or association due to dynamical processes.
These objects are usually identified via one of three methods: spatial velocities,
tangential velocities,
or radial velocities.
\cite{tetzlaff11} combined these three methods to identify runaway stars in the HIPPARCOS catalogue.
Our study has 34 stars identified as runaways in Tetzlaff {\it et al.}'s catalogue of runaway candidates.
The $v \sin i$ distribution obtained for the runaway stars in our sample is shown as a solid line histogram in Fig. \ref{fig:run}.
Two peaks are evident from a visual inspection of our distribution: one corresponding to slowly rotating stars ($v \sin i\sim$ 0 -- 50 km s$^{-1}$) and another corresponding to higher projected rotational velocities ($v \sin i$'s between 100 -- 150 km s$^{-1}$).
We also show for comparison a histogram representing the combined sample including the runaway stars studied by \cite{martin06}.
Given that the distribution of $v\sin i$ in the \cite{martin06} runaway sample is generally flat, the two $v\sin i$ peaks observed in the solid line histogram remain in the combined sample.
A K-S test was run on the runaway $v\sin i$ distribution obtained in this study compared to the other 3 samples discussed previously: the field, the OB association and the cluster subsamples. The probabilities that both distributions are drawn from the same populations are 18 percent, 40 percent and 71 percent, respectively for the field, association and cluster. This is an indication that the runaway phenomenon may be more likely associated with dense cluster environments, as expected from a dynamical ejection scenario.
However, we note the lack of very massive and dense clusters near the Sun, which are the main sources of runaways ejected by means of the dynamical ejection scenario.
As a final note, the presence of a second peak at low $v \sin i$ ($\sim$ 0 -- 50 km s$^{-1}$) in the runaway distribution obtained in this study could be related to runaways originating from OB associations. As discussed previously, stars in associations have typically lower $v \sin i$'s when compared to cluster stars.
\begin{figure}
\plotone{f11.eps}
\caption{The $v\sin i$ distribution for the runaway stars in our sample is shown as the solid line histogram. The distribution has two peaks. A K-S test indicates that the runaway $v\sin i$ distribution is most similar to the cluster distribution. This could be an indication that the runaway stars originated from a dynamical ejection scenario. The presence of a second peak at low $v \sin i$ could be related to runaways ejected from OB associations.
A histogram representing the combined sample including the runaway stars studied by \cite{martin06} is also presented for comparison (dashed line histogram).
\label{fig:run}}
\end{figure}
\section{Conclusions}
\label{conclusions}
High resolution spectroscopic observations and a first characterization of a sample of 350 OB stars have been carried out.
Projected rotational velocities were obtained for 266 stars (after rejecting spectroscopic binaries/multiple systems) using measurements of FWHM of He {\sc i} lines and interpolation in a synthetic grid from \cite{daflon07}. The $v\sin i$ distribution obtained for the studied sample has a modest peak at low $v\sin i$'s ($\sim$ 0 -- 50 km s$^{-1}$) but it is overall flat for $v\sin i$'s roughly between 0 -- 150 km s$^{-1}$; the number of stars drops for higher values of $v\sin i$. The $v\sin i$ distribution of our brighter sample stars is similar to the one obtained from a sample of field stars picked from the work of \cite{ALG02}.
Literature results on membership were used in order to identify subsamples of stars belonging to OB associations or clusters.
We compared these two groups and found that the stars belonging to OB associations and those in clusters constitute two distinct populations.
The cluster stars tend to have higher $v\sin i$'s when compared to the OB association subsample, which could mean that the stellar rotation of a population is dictated by the density of the cloud in which it forms.
Also, when the OB association and cluster populations are compared with the field sample, it is found that the latter has a larger fraction of slow rotators, as previously shown by other works. In fact, there seems to be a gradation from cluster to OB association to field in the $v \sin i$ distribution.
The present sample has 34 stars that were identified as runaway candidates in the \cite{tetzlaff11} catalogue. The $v\sin i$ distribution of the runaway sample
presents two peaks: one for $v \sin i \sim$ 0 -- 50 km s$^{-1}$ and another for $v \sin i\sim$ 100 -- 150 km s$^{-1}$. The K-S tests run with the runaway, OB association, cluster and field samples indicate that the runaway $v \sin i$ distribution is more likely to be similar to the distribution of the denser environments,
which could suggest that these stars were ejected through the dynamical ejection mechanism.
Also, there is a possibility that the low $v \sin i$ peak is composed of stars that were ejected from OB associations.
\acknowledgments
We thank the referee for the careful reading and suggestions.
We warmly thank Marcelo Borges, Catherine Garmany, John Glaspey and Joel Lamb for fruitful discussion and comments on the manuscript. G.A.B. thanks the hospitality of University of Michigan on his visit and acknowledges Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq-Brazil) and Coordena\c c\~ao de Aperfei\c coamento de Pessoal de N\'ivel Superior (CAPES - Brazil) for his fellowship. T.B. was funded by grant No. 621-2009-3911 from the Swedish Research Council (VR). M.S.O. and T.B. were supported in part by NSF-AST0448900. M.S.O. warmly thanks NOAO for the hospitality of a sabbatical visit.
K.C. acknowledges funding from NSF grant AST-907873. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
{\it Facilities:} \facility{Magellan Observatory}
The concept of physical layer network coding has attracted a lot of attention in recent times. The idea of physical layer network coding for the two-way relay channel was first introduced in \cite{ZhLiLa}, where the multiple access interference occurring at the relay was exploited so that the communication between the end nodes could be done using a two-stage protocol. Information theoretic studies for the physical layer network coding scenario were reported in \cite{KiMiTa}, \cite{PoYo}. The design principles governing the choice of modulation schemes to be used at the nodes for uncoded transmission were studied in \cite{KoPoTa}. An extension to the case when the nodes use convolutional codes was made in \cite{KoPoTa_conv}. A multi-level coding scheme for two-way relaying was proposed in \cite{HeN}. \\
We consider the two-way wireless relaying scenario shown in Fig. 1, where two-way data transfer takes place between the nodes A and B with the help of the relay R. It is assumed that the two nodes operate in half-duplex mode, i.e., they cannot transmit and receive at the same time in the same frequency band. The relaying protocol consists of two phases: the \textit{multiple access} (MA) phase, consisting of two channel uses during which A and B each transmit to R two independent messages, one per channel use, using points from a 4-PSK constellation, and the \textit{broadcast} (BC) phase, in which R transmits to A and B. The relay node R accumulates the information sent by the user nodes in the first and second channel uses of the MA phase, and transmits in the BC phase a message that contains information about all four messages received by it in the MA phase. Network coding is employed at R in such a way that A(/B) can decode the two messages transmitted by B(/A), given that A(/B) knows its own messages. We call this strategy the accumulate-compute and forward (ACF) protocol.\\
It was observed in \cite{KoPoTa} and \cite{MNR} for 4-PSK that, for uncoded transmission, the network coding map used at the relay needs to be changed adaptively according to the channel fade coefficient, in order to minimize the impact of multiple access interference. In other words, the set of all possible channel realizations is quantized into a finite number of regions, with a specific network coding map giving the best performance in a particular region. It is shown in \cite{NMR}, for any choice of signal sets of equal cardinality used at the two users, that every such network coding map that satisfies the \textit{exclusive law} is representable as a Latin Square; conversely, this relationship can be used to obtain the network coding maps satisfying the exclusive law. \\
\begin{definition}
A Latin Square of order $M$ is an $M \times M$ array in which each cell contains a symbol from a set of $t$ different symbols such that each symbol occurs at most once in each row and column \cite{Rod}.\\
\end{definition}
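The row/column condition of the definition above is precisely the exclusive-law property discussed earlier, and is easy to check mechanically. A minimal Python sketch:
\begin{verbatim}
def is_latin_square(array):
    """True if no symbol repeats within any row or column.

    `array` is an M x M list of lists of hashable cluster labels.
    """
    m = len(array)
    for row in array:
        if len(set(row)) != len(row):
            return False
    for j in range(m):
        col = [array[i][j] for i in range(m)]
        if len(set(col)) != len(col):
            return False
    return True

# Example: the XOR table on {0,1,2,3} is a Latin Square of order 4.
xor_map = [[r ^ c for c in range(4)] for r in range(4)]
print(is_latin_square(xor_map))  # True
\end{verbatim}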
\begin{figure}[tp]
\center
\includegraphics[height=15mm]{Capture1.eps}
\centering
{\caption{A two-way ACF relay channel}}
\end{figure}
Similar to the ACF protocol, a store-and-forward protocol has been studied earlier in \cite{LXT} for the two-way relaying channel. In \cite{LXT}, the authors derive an upper bound on the ergodic sum-capacity for the two-way relaying scenario when the delay tends to infinity, and propose two alternative awaiting and broadcast (AAB) schemes which approach the new upper bound at high SNR. Using numerical results, they show that the proposed AAB schemes significantly outperform the traditional physical layer network coding methods without delay in terms of ergodic maximum sum rates. However, modulation and physical layer network coding have not been addressed in \cite{LXT}.\\
\begin{figure}[tp]
\center
\includegraphics[height=40mm]{constellation.eps}
\caption{4-PSK constellation}
\end{figure}
The remaining content is organized as follows: Section II discusses the basic concepts, definitions and a summary of the contributions of this paper. Section III demonstrates the network code, obtained using the Cartesian Product, that is utilized at the relay for two-way ACF relaying and removes the singular fade states associated with the channels. In Section IV, we show how this network code can be obtained using Singularity Removal Constraints. Section V gives results based on structural properties of Latin Squares. In Section VI the complex plane is quantized depending on which one of the obtained Latin Squares maximizes the minimum cluster distance, and Section VII gives the simulation results that demonstrate the improvement in performance using the suggested scheme. Section VIII concludes the paper.
\section{Preliminaries}
Let $\mathcal{S}$ denote the symmetric 4-PSK constellation $\left\{\pm 1\pm i\right\}$ as shown in Fig. 2, used at A and B. Assume that A(/B) wants to send two 2-bit binary tuples to B(/A). Let $ \mu : \mathbb{F}^{2}_{2} \rightarrow \mathcal{S} $ denote the mapping from bits to complex symbols used at A and B where $\mathbb{F}_{2}=\left\{0,1\right\}$. Let $ x_{A_{1}}=\mu\left(s_{A_{1}}\right), x_{B_{1}}=\mu\left(s_{B_{1}}\right) \in \mathcal{S}$ denote the complex symbols transmitted by A and B at the first channel use respectively, and $ x_{A_{2}}=\mu\left(s_{A_{2}}\right), x_{B_{2}}=\mu\left(s_{B_{2}}\right) \in \mathcal{S}$ denote the complex symbols transmitted by A and B at the second channel use respectively, where $s_{A_{1}}, s_{B_{1}}, s_{A_{2}}, s_{B_{2}} \in \mathbb{F}^{2}_{2}.$\\
\noindent \textit{Multiple Access (MA) Phase:}\\
\indent It is assumed that the channel state information is not available at the transmitting nodes A and B during the MA phase. The received signal at R at first channel use is given by
\begin{equation}
\label{yr1}
Y_{R_{1}}=H_{A}x_{A_{1}}+H_{B}x_{B_{1}}+Z_{R_{1}}
\end{equation}
and the received signal at R at the second channel use,
\begin{equation}
\label{yr2}
Y_{R_{2}}=H_{A}x_{A_{2}}+H_{B}x_{B_{2}}+Z_{R_{2}}
\end{equation}
where $H_{A}$ and $H_{B}$ are the fading coefficients associated with the A-R and B-R links respectively. Note that we take $H_{A}$ and $H_{B}$ to be the same for the two channel uses. The additive noises $Z_{R_{1}}$ and $Z_{R_{2}}$ are assumed to be $\mathcal{CN}\left(0,\sigma^2 \right)$, where $\mathcal{CN}\left(0,\sigma^2 \right)$ denotes the circularly symmetric complex Gaussian random variable with variance $\sigma^2$. We assume a block fading scenario, in which $z=\gamma e^{j\theta}=H_{B}/H_{A}$, where $\gamma \in \mathbb{R}^+$ and $-\pi \leq \theta \leq \pi$, is referred to as the \textit{fade state} for the transmissions by A and B in the two channel uses, and for simplicity can also be denoted by $\left(\gamma, \theta\right)$. Also, it is assumed that $z$ is distributed according to a continuous probability distribution. \\
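For concreteness, a minimal simulation of the MA phase is sketched below in Python. The choice $H_{A}, H_{B} \sim \mathcal{CN}(0,1)$ (Rayleigh fading) is an assumption made only for the sketch; the text requires only that $z=H_{B}/H_{A}$ have a continuous distribution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
PSK4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])

def ma_phase(sigma=0.1):
    # Fading coefficients, constant over the two channel uses;
    # CN(0,1) here is an assumption for the sketch only.
    h_a = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    h_b = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    x_a = rng.choice(PSK4, size=2)        # (x_A1, x_A2)
    x_b = rng.choice(PSK4, size=2)        # (x_B1, x_B2)
    z = sigma / np.sqrt(2) * (rng.standard_normal(2)
                              + 1j * rng.standard_normal(2))
    return h_a * x_a + h_b * x_b + z, h_a, h_b   # (Y_R1, Y_R2), fades

y_r, h_a, h_b = ma_phase()
print("fade state z =", h_b / h_a)
\end{verbatim}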
\begin{figure*}
\footnotesize
\begin{align}
\label{dist}
&
d^{2}_{min}\left(\gamma e^{j\theta}\right)=\hspace{-0.5 cm}\min_{\substack {{((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}})),((x'_{A_{1}},x'_{B_{1}}),(x'_{A_{2}},x'_{B_{2}})) \in \mathcal{S}^{4},} \\ {((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}})) \neq ((x'_{A_{1}},x'_{B_{1}}),(x'_{A_{2}},x'_{B_{2}}))}}}\hspace{-0.1 cm} \left\{ \left| \left( x_{A_{1}}-x'_{A_{1}}\right)+\gamma e^{j \theta} \left(x_{B_{1}}-x'_{B_{1}}\right) \right|^{2} + \left| \left( x_{A_{2}}-x'_{A_{2}}\right)+\gamma e^{j \theta} \left(x_{B_{2}}-x'_{B_{2}}\right) \right|^{2} \right\}\\
\hline
\label{mle}
&
\left((\hat{x}_{A_{1}}, \hat{x}_{B_{1}}),(\hat{x}_{A_{2}}, \hat{x}_{B_{2}})\right)= \arg \min_{\substack {((x'_{A_{1}}, x'_{B_{1}}),(x'_{A_{2}}, x'_{B_{2}})) \in \mathcal{S}^{4}}} \left\{ \left|Y_{R_{1}} - H_{A} x'_{A_{1}}-H_{B} x'_{B_{1}}\right|^{2} + \left|Y_{R_{2}} - H_{A} x_{A'_{2}}-H_{B} x'_{B_{2}}\right|^{2}\right\}\\
\hline
\label{mel1}
&
\mathcal{M}^{\gamma, \theta}\left(\left(x_{A_{1}},x_{A_{2}}\right),\left(x_{B_{1}},x_{B_{2}}\right)\right) \neq \mathcal{M}^{\gamma, \theta}\left(\left(x'_{A_{1}},x'_{A_{2}}\right),\left(x_{B_{1}},x_{B_{2}}\right)\right), \ whenever \left(x_{A_{1}},x_{A_{2}}\right) \neq \left(x'_{A_{1}},x'_{A_{2}}\right) ~ \forall x_{B_{1}}, x_{B_{2}} \in \mathcal{S} \\
\hline
\label{mel2}
&
\mathcal{M}^{\gamma, \theta}\left(\left(x_{A_{1}},x_{A_{2}}\right),\left(x_{B_{1}},x_{B_{2}}\right)\right) \neq \mathcal{M}^{\gamma, \theta}\left(\left(x_{A_{1}},x_{A_{2}}\right),\left(x'_{B_{1}},x'_{B_{2}}\right)\right), \ whenever \left(x_{B_{1}},x_{B_{2}}\right) \neq \left(x'_{B_{1}},x'_{B_{2}}\right) ~ \forall x_{A_{1}}, x_{A_{2}} \in \mathcal{S} \\
\hline
\label{cl1}
&
\left(d_{min}^{\mathcal{L}_{i},\mathcal{L}_{j}}\left(\gamma e^{j \theta}\right)\right)^{2}=\hspace{-0.2 cm}\min_{\substack {{((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}})) \in \mathcal{L}_{i}},\\ ((x'_{A_{1}},x'_{B_{1}}),(x'_{A_{2}},x'_{B_{2}})) \in \mathcal{L}_{j}}} \hspace{-0.2 cm} \left\{ \left| \left( x_{A_{1}}-x'_{A_{1}}\right)+\gamma e^{j \theta} \left(x_{B_{1}}-x'_{B_{1}}\right) \right|^{2} + \left| \left( x_{A_{2}}-x'_{A_{2}}\right)+\gamma e^{j \theta} \left(x_{B_{2}}-x'_{B_{2}}\right) \right|^{2} \right\}\\
\hline
\label{cl2}
&
d^{2}_{min}\left(\mathcal{C}^{\gamma, \theta}\right)=\hspace{-0.6 cm}\min_{\substack {{((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}})),((x'_{A_{1}},x'_{B_{1}}),(x'_{A_{2}},x'_{B_{2}})) \in \mathcal{S}^{4},} \\ {\mathcal{M}^{\gamma, \theta}((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}})) \neq \mathcal{M}^{\gamma, \theta}((x'_{A_{1}},x'_{B_{1}}),(x'_{A_{2}},x'_{B_{2}}))}}}\hspace{-0.6 cm} \left\{ \left| \left( x_{A_{1}}-x'_{A_{1}}\right)+\gamma e^{j \theta} \left(x_{B_{1}}-x'_{B_{1}}\right) \right|^{2} + \left| \left( x_{A_{2}}-x'_{A_{2}}\right)+\gamma e^{j \theta} \left(x_{B_{2}}-x'_{B_{2}}\right) \right|^{2} \right\}\\
\hline
\label{cl3}
&
d^{2}_{min}\left(\mathcal{C}^{h}, \gamma e^{j \theta}\right)=\hspace{-0.7 cm}\min_{\substack {{((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}})),((x'_{A_{1}},x'_{B_{1}}),(x'_{A_{2}},x'_{B_{2}})) \in \mathcal{S}^{4},} \\ {\mathcal{M}^{h}((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}})) \neq \mathcal{M}^{h}((x'_{A_{1}},x'_{B_{1}}),(x'_{A_{2}},x'_{B_{2}}))}}}\hspace{-0.4 cm} \left\{ \left| \left( x_{A_{1}}-x'_{A_{1}}\right)+ \gamma e^{j \theta} \left(x_{B_{1}}-x'_{B_{1}}\right) \right|^{2} + \left| \left( x_{A_{2}}-x'_{A_{2}}\right)+ \gamma e^{j \theta} \left(x_{B_{2}}-x'_{B_{2}}\right) \right|^{2} \right\}.\\
\hline
\nonumber
\end{align}
\end{figure*}
Let $ \mathcal{S}_{R} \left( \gamma, \theta \right)$ denote the effective constellation seen at the relay during the MA phase, i.e.,
$$ \mathcal{S}_{R} \left( \gamma, \theta \right) = \left\{(x_{i} + \gamma e^{j \theta}y_{i}, x_{j} + \gamma e^{j \theta}y_{j})| x_{i}, y_{i}, x_j, y_j \in \mathcal{S}\right\}. $$
The effective constellation remains the same over the two channel uses, since we assume $H_{A}$ and $H_{B}$ and hence the ratio $ H_{B}/H_{A}= \gamma e^{j \theta}$ to be the same during the two channel uses.\\
Let $d_{min}\left(\gamma e^{j \theta}\right)$ denote the minimum distance between the points in the constellation $ \mathcal{S}_{R} \left( \gamma, \theta \right) $ during the MA phase, whose square is given by (\ref{dist}) on the next page. From (\ref{dist}), it is clear that there exist values of $\gamma e^{j \theta}$ for which $d_{min}\left(\gamma e^{j \theta}\right)=0$. Let $$\mathcal{H}=\left\{\gamma e^{j \theta} \in \mathbb{C} ~|~ d_{min}\left(\gamma e^{j \theta}\right)=0 \right\}.$$ The elements of $\mathcal{H}$ are called singular fade states. For singular fade states, $\left|\mathcal{S}_{R} \left( \gamma, \theta \right)\right|< 4^{4}.$\\
\begin{definition}
A fade state $\gamma e^{j \theta}$ is defined to be a \textit{singular fade state} for the \textit{ACF two-way relaying}, if the cardinality of the signal set $ \mathcal{S}_{R} \left( \gamma, \theta \right)$ is less than $4^{4}$. Let $\mathcal{H} $ denote the set of singular fade states for the two-way ACF relaying. \\
\end{definition}
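The set $\mathcal{H}$ can be enumerated directly: a singular fade state arises whenever $x_{A}+ h x_{B} = x'_{A}+ h x'_{B}$ for distinct pairs, i.e., whenever $h$ is a ratio of differences of constellation points. The sketch below enumerates these ratios for 4-PSK and recovers the 12 values quoted later in the text; the degenerate values $h=0$ and $|h|\rightarrow\infty$, which also collapse the constellation, are excluded from that count.
\begin{verbatim}
from itertools import product

PSK4 = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]

states = set()
for xa, xa2, xb, xb2 in product(PSK4, repeat=4):
    num, den = xa2 - xa, xb - xb2
    if num != 0 and den != 0:    # exclude h = 0 and |h| -> infinity
        h = num / den
        states.add(complex(round(h.real, 9), round(h.imag, 9)))

for h in sorted(states, key=lambda v: (abs(v), v.real, v.imag)):
    print(h, "radius", round(abs(h), 4))
print(len(states), "singular fade states")   # 12 for 4-PSK
\end{verbatim}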
Let $\left(\hat{x}_{A_{1}}, \hat{x}_{B_{1}}\right) \text{~and~} \left(\hat{x}_{A_{2}}, \hat{x}_{B_{2}}\right) \in \mathcal{S}^{2}$ denote the Maximum Likelihood (ML) estimate of $\left(x_{A_{1}}, x_{B_{1}}\right) \text{~and~} \left(x_{A_{2}}, x_{B_{2}}\right)$ at R based on the received complex numbers $Y_{R_{1}}$ and $Y_{R_{2}}$ at the two channel uses, as given in (\ref{mle}). \\
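A brute-force implementation of this joint ML estimate is a search over the $4^4=256$ hypotheses; a sketch is shown below. Since the metric in (\ref{mle}) separates over the two channel uses, the search can equivalently be done as two independent 16-hypothesis searches.
\begin{verbatim}
import numpy as np
from itertools import product

PSK4 = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]

def ml_estimate(y_r1, y_r2, h_a, h_b):
    """Exhaustive search over the 4^4 hypotheses of the ML metric."""
    best, best_metric = None, np.inf
    for xa1, xb1, xa2, xb2 in product(PSK4, repeat=4):
        m = (abs(y_r1 - h_a * xa1 - h_b * xb1) ** 2
             + abs(y_r2 - h_a * xa2 - h_b * xb2) ** 2)
        if m < best_metric:
            best, best_metric = ((xa1, xb1), (xa2, xb2)), m
    return best
\end{verbatim}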
\noindent \textit{Broadcast (BC) Phase:}\\
\indent Depending on the value of $\gamma e^{j \theta}$, R chooses a map $\mathcal{M}^{\gamma, \theta} : \mathcal{S}^4 \rightarrow \mathcal{S}' $ where $\mathcal{S}^{'}$ is a complex signal set of size between $4^{2}$ and $4^{4}$ used by R during the \textit{BC} phase.\\
The received signals at A and B during the BC phase are respectively given by,
\begin{equation}
Y_{A}=H'_{A}X_{R}+Z_{A} \text{~and~} Y_{B}=H'_{B}X_{R}+Z_{B}
\end{equation}
\noindent where $X_{R}=\mathcal{M}^{\gamma, \theta} \left(\left(\hat{x}_{A_{1}},\hat{x}_{B_{1}}\right), \left(\hat{x}_{A_{2}},\hat{x}_{B_{2}}\right)\right) \in \mathcal{S}'$ is the complex number transmitted by R. The fading coefficients corresponding to the R-A and R-B links are given by $H'_{A}$ and $H'_{B}$ respectively and the additive noises $Z_{A}$ and $Z_{B}$ are $\mathcal{CN}\left(0,\sigma^{2}\right)$. \\
The elements in $\mathcal{S}^4 $ which are mapped to the same signal point in $\mathcal{S}'$ by the map $\mathcal{M}^{\gamma, \theta}$ are said to form a cluster. Let $\left\{\mathcal{L}_{1}, \mathcal{L}_{2},.., \mathcal{L}_{l}\right\}$ denote the set of all such clusters. The formation of clusters is called clustering, denoted by $\mathcal{C}^{\gamma e^{j \theta}}$.\\
In order to ensure that A(/B) is able to decode B's(/A's) messages, the clustering $\mathcal{C}^{\gamma e^{j \theta}}$ should satisfy the exclusive law, as given in (\ref{mel1}), (\ref{mel2}) above.\\
\begin{definition}
The cluster distance between a pair of clusters $\mathcal{L}_i$ and $\mathcal{L}_j$ is the minimum among all the distances calculated between the points $\left(\left(x_{A_{1}},x_{B_{1}}\right), \left(x_{A_{2}},x_{B_{2}}\right)\right) \in \mathcal{L}_{i}$ and $\left(\left(x'_{A_{1}},x'_{B_{1}}\right), \left(x'_{A_{2}},x'_{B_{2}}\right)\right) \in \mathcal{L}_{j}$ in the effective constellation used by the relay node R, as given in \eqref{cl1} above.\\
\end{definition}
\begin{definition}
The \textit{minimum cluster distance} of the clustering $\mathcal{C}^{\gamma e^{j \theta}}$ is the minimum among all the cluster distances, as given in \eqref{cl2} above.\\
\end{definition}
The minimum cluster distance determines the performance during the MA phase of relaying. The performance during the BC phase is determined by the minimum distance of the signal set $\mathcal{S}^{'}$. For values of $ \gamma e^{j \theta}$ in the neighborhood of the singular fade states, the value of $d_{min}\left(\mathcal{C}^{\gamma e^{j \theta}}\right)$ is greatly reduced, a phenomenon referred to as \textit{distance shortening} \cite{KoPoTa}. To avoid distance shortening, for each singular fade state, a clustering needs to be chosen such that the minimum cluster distance is nonzero. \\
A clustering $\mathcal{C}^h$ is said to remove singular fade state $h \in \mathcal{H}, $ if $d_{min}\left(\mathcal{C}^h\right)>0$. For a singular fade state $h \in \mathcal{H} $, let $\mathcal{C}^{h} $ denote the clustering which removes the singular fade state $h$ (if there are multiple clusterings which remove the same singular fade state $h$, choose any of the clusterings). Let $\mathcal{C^{H}}=\left\{\mathcal{C}^{h} :h\in \mathcal{H}\right\} $ denote the set of all such clusterings. \\
\begin{definition}
The minimum cluster distance of the clustering $\mathcal{C}^{h}, h\in \mathcal{H}$ at the fade state $\gamma e^{j \theta}$ which is not necessarily a singular fade state, denoted by $d_{min}\left(\mathcal{C}^{h},\gamma e^{j \theta}\right)$, is as given in (\ref{cl3}). Note that if $\gamma e^{j\theta} =h \in \mathcal{H},$ $d_{min}\left(\mathcal{C}^{h},h\right),$ reduces to $d_{min}\left(\mathcal{C}^{h}\right)$ given in \eqref{cl2}.
\end{definition}
In general, the channel fade state $\gamma e^{j \theta}$ need not be a singular fade state. In such a scenario, among all the clusterings which remove the singular fade states, the one which has the maximum value of the minimum cluster distance at $\gamma e^{j\theta}$ is chosen by the relay R. In other words, for $\gamma e^{j\theta} \notin \mathcal{H}, $ the clustering $\mathcal{C}^{\gamma,{\theta}} $ is chosen to be $\mathcal{C}^{h}$ by the relay R, which satisfies $ d_{min}\left(\mathcal{C}^{h},\gamma e^{j\theta}\right) \geq d_{min}\left(\mathcal{C}^{h'},\gamma e^{j\theta}\right), \forall h \neq h' \in \mathcal{H}.$ Since the clusterings which remove the singular fade states are known to all the three nodes and are finite in number, the clustering used for a particular realization of the fade state can be indicated by R to A and B using overhead bits.
In \cite{NMR}, such clusterings that remove the singular fade states for the two-way 2-stage relaying scenario were obtained with the help of Latin Squares.\\
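The relay's selection rule described above is straightforward to implement once each candidate clustering is represented as a map from $\mathcal{S}^4$ to cluster labels. A sketch follows, with a hypothetical example clustering used only to exercise the code.
\begin{verbatim}
import numpy as np
from itertools import product

PSK4 = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
TUPLES = list(product(PSK4, repeat=4))   # (xA1, xB1, xA2, xB2)

def dmin_of_clustering(label_of, z):
    """Minimum cluster distance of a clustering at fade state z."""
    best = np.inf
    for p in TUPLES:
        for q in TUPLES:
            if label_of[p] != label_of[q]:
                d = (abs((p[0] - q[0]) + z * (p[1] - q[1])) ** 2
                     + abs((p[2] - q[2]) + z * (p[3] - q[3])) ** 2)
                best = min(best, d)
    return best

def pick_clustering(clusterings, z):
    """Relay rule: maximize the minimum cluster distance at z."""
    return max(clusterings, key=lambda c: dmin_of_clustering(c, z))

# Hypothetical example clustering: label by index differences mod 4.
IDX = {s: i for i, s in enumerate(PSK4)}
example = {p: ((IDX[p[0]] - IDX[p[1]]) % 4,
               (IDX[p[2]] - IDX[p[3]]) % 4) for p in TUPLES}
print(dmin_of_clustering(example, 1j))
\end{verbatim}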
The contributions of this paper are as follows:
\begin{itemize}
\item It is shown that if the users A and B transmit points from the same 4-PSK constellation, the clusterings proposed in \cite{NMR} for the two-user case can be utilized to obtain clusterings for this case that remove the singular fade states, with the resulting constellation containing either 16 or 25 points, by introducing the notion of the Cartesian Product of Clusters. In other words, the $16 \times 16$ Latin Squares representing the ACF relaying can be obtained with the help of the $4 \times 4$ Latin Squares representing the clusterings for two-way 2-stage relaying as given in \cite{NMR}.
\item Another clustering, called Direct Clustering, is proposed for the ACF protocol in the two-way relay channel. This clustering also removes the singular fade states and reduces the number of clusters in some cases. Using this clustering, the size of the resulting constellation used by the relay node R in the BC phase is reduced to 20 for a category of cases, as compared to the Cartesian Product approach, which results in a constellation size of 25 for these cases.
\item The quantization of the complex plane that contains all the possible fade states, depending on which one of the obtained clusterings maximizes the minimum cluster distance, is proven to be the same as for the two-way 2-stage relaying scenario in \cite{MNR}.
\item Simulation results indicate that at high SNR, the schemes based on the ACF protocol perform better than the schemes proposed in \cite{KoPoTa},\cite{NMR} based on 2-stage two-way relaying. With the 4-PSK signal set used at the end nodes, the ACF protocol achieves a maximum sum throughput of 8/3 bits/s/Hz, whereas it is 2 bits/s/Hz for the schemes based on 2-stage two-way relaying.\\
\end{itemize}
\section{Exclusive Law and Latin Squares}
The nodes A and B transmit symbols from the same constellation, viz., 4-PSK. Our aim is to find the map that the relay node R should use in order to cluster the $4^4$ possibilities of $\left(\left(x_{A_{1}},x_{B_{1}}\right),\left(x_{A_{2}},x_{B_{2}}\right)\right)$ such that the exclusive law given by (\ref{mel1}), (\ref{mel2}) is satisfied. Consider the $16 \times 16$ array consisting of the 16 possibilities of $\left(x_{A_{1}},x_{A_{2}}\right) $ along the rows and the 16 possibilities of $\left(x_{B_{1}},x_{B_{2}}\right) $ along the columns. We fill this array with elements from $\mathcal{L}=\left\{ \mathcal{L}_{1}, \mathcal{L}_{2}, ..., \mathcal{L}_{t}\right\}$, where each symbol denotes a unique cluster. The constellation size used by the relay in the BC phase, i.e., the number of clusters, has to be at least 16, since each user needs 4 bits of information corresponding to the two messages sent by the other user, implying $t \geq 16$. In order to keep (\ref{mel1}) and (\ref{mel2}) satisfied, a symbol from $\mathcal{L}$ can occur at most once in each row and column. So the $16 \times 16$ array having $\left(x_{A_{1}},x_{A_{2}}\right) $ along the rows and $\left(x_{B_{1}},x_{B_{2}}\right) $ along the columns must be a Latin Square of order 16 (Definition 1). The equivalence between the network code used by the relay in the two-way relaying scenario and Latin Squares has been previously discussed in \cite{NMR}. The clusters are obtained by putting together all those $\left(\left(x_{A_{1}},x_{B_{1}}\right),\left(x_{A_{2}},x_{B_{2}}\right)\right)$ for which the corresponding entry $\left(\left(x_{A_{1}},x_{A_{2}}\right),\left(x_{B_{1}},x_{B_{2}}\right)\right)$ in the array is the same. \\
From the above, we can say that all the relay clusterings that satisfy the exclusive law form Latin Squares of order 16 with entries from $\mathcal{L}$ with $t \geq 16$, when the end nodes use PSK constellations of size 4. It therefore suffices to consider the network code used by the relay node in the BC phase to be a $16 \times 16$ array with rows(/columns) indexed by the 2-tuple consisting of the symbols sent by A(/B) during the first and second channel uses. The cells of the array must be filled with elements of $\mathcal{L}$ in such a way that the resulting array is a Latin Square of order 16 with $t \geq 16$. \\
\noindent \textit{\textbf{Removing Singular Fade States and Constrained Latin Squares}}
The relay can manage with constellations of size 16 in the BC phase, but it is observed that in some cases the relay may not be able to remove the singular fade states, which results in severe performance degradation in the MA phase. As stated in Section II, a clustering $\mathcal{C}^{h}$ is said to remove the singular fade state $h \in \mathcal{H}$ if $d_{min}\left(\mathcal{C}^h\right)>0$. Removing singular fade states for a two-way ACF relay channel can also be defined as follows:\\
\begin{definition}
A clustering $\mathcal{C}^{h}$ is said to \textit{remove the singular fade state} $h \in \mathcal{H}$, if any two possibilities of the messages sent by the users $\left(\left(x_{A_{1}},x_{B_{1}}\right),\left(x_{A_{2}},x_{B_{2}}\right)\right), \left(\left(x'_{A_{1}},x'_{B_{1}}\right),\left(x'_{A_{2}},x'_{B_{2}}\right)\right) \in \mathcal{S}^{4}$ that satisfy\\
$$h= \frac{x'_{A_{1}}-x_{A_{1}}}{x_{B_{1}}-x'_{B_{1}}}= \frac{x'_{A_{2}}-x_{A_{2}}}{x_{B_{2}}-x'_{B_{2}}}, $$
are placed together in the same cluster by $\mathcal{C}^{h}$. \\
\end{definition}
\begin{definition}
A set $\left\{\left(\left(x_{A_{1}},x_{B_{1}}\right),\left(x_{A_{2}},x_{B_{2}}\right)\right)\right\} \subseteq \mathcal{S}^4$ consisting of all the possibilities of $((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}}))$ that must be placed in the same cluster of the clustering used at relay node R in the BC phase in order to remove the singular fade state $h$ is referred to as a \textit{Singularity Removal Constraint} for the singular fade state $h$ in two-way ACF relaying scenario.\\
\end{definition}
As given in \cite{KoPoTa}, a complex number $\gamma e^{j \theta}$ is defined to be a \textit{singular fade state} for the \textit{two-way 2-stage relaying scenario} if $ x_{A}+ \gamma e^{j \theta} x_{B} = x'_{A}+ \gamma e^{j \theta} x'_{B} \text{~for some~} (x_A,x_B),(x'_A,x'_B) \in \mathcal{S}^{2}$, and the set consisting of all the possibilities of $(x_A,x_B) \in \mathcal{S}^{2}$ that must be placed in the same cluster of the clustering that removes the fade state $\gamma e^{j \theta}$ for the two-way 2-stage relaying scenario is the corresponding singularity removal constraint for the singular fade state $\gamma e^{j \theta}$. As we show in the following lemma, the singular fade states for the ACF two-way relaying scenario are the same as the singular fade states for the two-way 2-stage relaying scenario.\\
\begin{lemma} The singular fade states for the ACF two-way relaying scenario are the same as the 12 singular fade states for two-way 2-stage relaying scenario as computed in \cite{KoPoTa}.
\end{lemma}
\begin{proof} Let $\gamma e^{j \theta}$ be a singular fade state for the ACF two-way relaying scenario. By definition, $ \exists ((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}})),((x'_{A_{1}},x'_{B_{1}}),(x'_{A_{2}},x'_{B_{2}})) \in \mathcal{S}^{4} $ such that,
\begin{align}
\nonumber
&x_{A_{1}}+ \gamma e^{j \theta} x_{B_{1}} = x'_{A_{1}}+ \gamma e^{j \theta} x'_{B_{1}}, \text{~and} \\
\nonumber
&x_{A_{2}}+ \gamma e^{j \theta} x_{B_{2}} = x'_{A_{2}}+ \gamma e^{j \theta} x'_{B_{2}}.
\end{align}
where $x_{A_{1}},x_{B_{1}},x_{A_{2}},x_{B_{2}},x'_{A_{1}},x'_{B_{1}},x'_{A_{2}},x'_{B_{2}} \in \mathcal{S}$. \\
Then, by definition, $\gamma e^{j \theta}$ must be a singular fade state for the two-way 2-stage relaying.\\
Conversely, let $\gamma e^{j \theta}$ be a singular fade state for the two-way 2-stage relaying scenario. Then, $\exists (x_{A},x_{B}),~ (x'_{A},x'_{B}) \in \mathcal{S}^{2}$ such that,
$$ x_{A}+ \gamma e^{j \theta} x_{B} = x'_{A}+ \gamma e^{j \theta} x'_{B}.$$
Then, since for any $ (x,y) \in \mathcal{S},~ x+ \gamma e^{j \theta} y= x+ \gamma e^{j \theta} y, ~ \left\{ ((x_{A},x_{B}),(x,y)),~((x'_{A},x'_{B}),(x,y)) \right\}$ is a subset of a singularity removal constraint and $\gamma e^{j \theta}$ is a singular fade state for the ACF two-way relaying scenario.\\
Thus, the singular fade states for the ACF two-way relaying scenario and the singular fade states for the two-way 2-stage relaying scenario are the same.
\end{proof}
\vspace{0.5cm}
Let $\gamma e^{j \theta}$ be a fade state for the two-way ACF relaying scenario. Then $\gamma e^{j \theta}$ can be viewed as a fade state for both the first and the second channel use in the MA phase, as shown in Lemma 1. In \cite{KoPoTa} and \cite{MNR}, it is shown that for the two-way 2-stage relaying, the $4^{2}$ possible pairs of symbols from the 4-PSK constellation sent by the two users in the MA phase can be clustered into a clustering of size 4 or 5, dependent on a singular fade coefficient, in a manner so as to remove this singular fade coefficient. In the case of two-way ACF relaying, at the end of the MA phase, the relay receives two complex numbers, given by (\ref{yr1}) and (\ref{yr2}). Instead of R transmitting a point from the $4^{4}$-point constellation resulting from all the possibilities of $\left(\left(x_{A_{1}}, x_{B_{1}}\right),\left(x_{A_{2}}, x_{B_{2}}\right)\right)$, the relay R can choose to group these possibilities into clusters represented by a smaller constellation. One such clustering, for the case when $\gamma e^{j \theta}$ is a singular fade state, can be obtained by utilizing the clustering provided in \cite{NMR} for the two-way 2-stage problem that removes this fade state. Let $\mathcal{C}^{\left[h\right]}$ denote the clustering for the physical network coded two-way relaying scenario that removes the singular fade state $h \in \mathbb{C}$ for the two-way 2-stage relaying case, as given in \cite{NMR}. \\
\begin{definition} We define the \textit{Cartesian Product} of a clustering $\mathcal{C}^{\left[h\right]}=\left\{l_{1}, l_{2},...,l_{m}\right\}$ with itself denoted by $\mathcal{D}^{\left[h\right]}$, where for $i=1,2,...,m$;
$$l_{i}=\left\{\left(x_{i_{1}},y_{i_{1}}\right), \left(x_{i_{2}},y_{i_{2}}\right), ..., \left(x_{i_{s_{i}}},y_{i_{s_{i}}}\right)\right\}$$
with $x_{i_{p}}, y_{i_{p}} \in \mathbb{Z}_{4} ~ \forall~p=1,2,...,s_{i} $ as follows:
$$\mathcal{D}^{\left[h\right]}=\left\{\mathcal{C}^{\left\{l_{1},l_{1}\right\}},...,\mathcal{C}^{\left\{l_{1},l_{m}\right\}},...,\mathcal{C}^{\left\{l_{m},l_{1}\right\}},...,\mathcal{C}^{\left\{l_{m},l_{m}\right\}} \right\} $$
where,
{\footnotesize
\begin{align}
\nonumber
&\mathcal{C}^{\left\{l_{i},l_{j}\right\}}=\left\{\left((x_{i_{p}},y_{i_{p}}),(x_{j_{q}},y_{j_{q}})\right) | ~ p=1,2,..,s_{i} \text{~and~} q=1,2,..,s_{j} \right\}.
\end{align}
}
\end{definition}
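The definition above translates directly into code; a minimal sketch is given below, using the clustering of the example that follows to reproduce its 16 product clusters.
\begin{verbatim}
from itertools import product

def cartesian_product_clustering(clusters):
    """Cartesian Product of a clustering with itself.

    `clusters` is a list of clusters, each a list of (x, y) pairs
    over Z_4; one product cluster is returned for every ordered
    pair (l_i, l_j).
    """
    return [[(p, q) for p in li for q in lj]
            for li, lj in product(clusters, repeat=2)]

# Clustering of Example 1 (removes the singular fade state j):
l1 = [(0, 0), (1, 2), (2, 1), (3, 3)]
l2 = [(0, 3), (1, 1), (2, 2), (3, 0)]
l3 = [(0, 1), (1, 3), (2, 0), (3, 2)]
l4 = [(0, 2), (1, 0), (2, 3), (3, 1)]

D = cartesian_product_clustering([l1, l2, l3, l4])
print(len(D), "clusters of size", len(D[0]))   # 16 clusters of 16
\end{verbatim}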
\vspace{0.5cm}
\begin{lemma} Let $\gamma e^{j \theta} \in \mathcal{H}$. The clustering obtained by taking the Cartesian Product $\mathcal{D}^{\left[\gamma e^{j \theta}\right]}$ of $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}$ with itself removes the singular fade state $\gamma e^{j \theta}$ for the two-way ACF relaying scenario.
\end{lemma}
\begin{proof} Let $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}=\left\{l_1, l_2,...,l_m\right\}$,where for $i=1,2,...,m$,
$$ l_i=\left\{(x_{i_{1}},y_{i_{1}}),(x_{i_{2}},y_{i_{2}}),...,(x_{i_{s_{i}}},y_{i_{s_{i}}})\right\}$$
Then,
$$\mathcal{D}^{\left[\gamma e^{j \theta} \right]}=\left\{\mathcal{C}^{\left\{l_{1},l_{1}\right\}},...,\mathcal{C}^{\left\{l_{1},l_{m}\right\}},...,\mathcal{C}^{\left\{l_{m},l_{1}\right\}},...,\mathcal{C}^{\left\{l_{m},l_{m}\right\}} \right\} $$
where,
{\footnotesize
\begin{align}
\nonumber
&\mathcal{C}^{\left\{l_{i},l_{j}\right\}}=\left\{\left((x_{i_{p}},y_{i_{p}}),(x_{j_{q}},y_{j_{q}})\right) | ~ p=1,2,..,s_{i} \text{~and~} q=1,2,..,s_{j} \right\}.
\end{align}
}
By definition, a singularity removal constraint for the fade state $\gamma e^{j \theta}$ in the case of two-way ACF relaying scenario is a set $\left\{((x_{i},y_{i}),(x'_{i},y'_{i}))~|~i=1,2,...,t\right\}$ such that $\forall 1\leq i_{1},i_{2} \leq t$,
$$ \gamma e^{j \theta}=\frac{x_{i_{2}}-x_{i_{1}}}{y_{i_{1}}-y_{i_{2}}}=\frac{x'_{i_{2}}-x'_{i_{1}}}{y'_{i_{1}}-y'_{i_{2}}} $$
Now, $ \gamma e^{j \theta}=\frac{x_{i_{2}}-x_{i_{1}}}{y_{i_{1}}-y_{i_{2}}} \Rightarrow (x_{i_{1}},y_{i_{1}}) \text{~and~} (x_{i_{2}},y_{i_{2}})$ must belong to the same cluster, say $l_i$ in $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}$, for it to remove the fade state $\gamma e^{j \theta}$. Similarly,
$ \gamma e^{j \theta}=\frac{x'_{i_{2}}-x'_{i_{1}}}{y'_{i_{1}}-y'_{i_{2}}} \Rightarrow (x'_{i_{1}},y'_{i_{1}}) \text{~and~} (x'_{i_{2}},y'_{i_{2}})$ must belong to the same cluster, say $l_j$ in $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}$. This holds $\forall 1\leq i_{1},i_{2} \leq t$. Thus, the singularity removal constraint satisfies,
{
\vspace{-0.2cm}
\begin{align}
\nonumber
&\left\{((x_{i},y_{i}),(x'_{i},y'_{i}))~|~i=1,2,...,t\right\} \subseteq \mathcal{C}^{\left\{l_{i},l_{j}\right\}},
\end{align}
}for some $1\leq i,j \leq m$.
Therefore, the clustering $\mathcal{D}^{\left[\gamma e^{j \theta}\right]}$ removes the singular fade state $\gamma e^{j \theta}$.
\end{proof}
\vspace{0.5cm}
It was shown in \cite{KoPoTa} by computer search, and then in \cite{NMR} analytically, that there are 12 possible singular fade states in the complex plane for the case when the two users transmit points from a 4-PSK constellation in the MA phase. Out of these 12 fade states, 4 lie on the unit circle, 4 lie on a circle of radius $\sqrt{2}$ and 4 lie on a circle of radius $1/\sqrt{2}$. The size of the constellation used at R in the BC phase for these cases is either 4 or 5. For the two-way ACF scenario we are dealing with, we have two channel uses in the MA phase, with A and B each transmitting a message in the first and second channel uses, the two messages sent by each user being possibly different. Keeping in mind the three classes of fade states depending on the radius of the circle on which they lie, we consider the following three cases:\\\\
\textit{Case 1:} $\gamma e^{j \theta} $ lies on the unit circle.\\\\
\textit{Case 2:} $\gamma e^{j \theta} $ lies on the circle of radius $ 1/\sqrt{2}$.\\\\
\textit{Case 3:} $\gamma e^{j \theta} $ lies on the circle of radius $ \sqrt{2}$.\\\\
\textbf{\textit{Case 1:} $\gamma e^{j \theta} $ lies on the unit circle.}\\
Since each of the user nodes A and B requires $4$ bits of information from the other user, the size of the constellation that R uses must be at least $2^4=16$. The Cartesian Product of $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}$ with itself consists of $16$ clusters, since the clustering $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}$ has $4$ clusters. We illustrate this case with the following example; the remaining instances of fade states that lie on the unit circle can be obtained from this example, as we show in Lemma 6 in Section V.\\
\begin{example} Let the fade state $\gamma e^{j \theta}= j$. The clustering $\mathcal{C}^{\left[j\right]}$ for the case as given in \cite{NMR} is given by,
\begin{align}
\nonumber
\mathcal{C}^{\left[j\right]}=&\left\{l_{1},l_{2},l_{3},l_{4}\right\}
\nonumber
\text{where,}\\
\nonumber
&l_{1}=\left\{\left(0,0\right),\left(1,2\right),\left(2,1\right),\left(3,3\right)\right\}\\
\nonumber
&l_{2}=\left\{\left(0,3\right),\left(1,1\right),\left(2,2\right),\left(3,0\right)\right\}\\
\nonumber
&l_{3}=\left\{\left(0,1\right),\left(1,3\right),\left(2,0\right),\left(3,2\right)\right\}\\
\nonumber
&l_{4}=\left\{\left(0,2\right),\left(1,0\right),\left(2,3\right),\left(3,1\right)\right\}.
\vspace{-0.8cm}
\nonumber
\end{align}
The Cartesian Product of the above clustering given by $\mathcal{D}^{\left[j\right]}=\left\{ \mathcal{C}^{\left\{l_{i},l_{j}\right\}}~ |~ i,j=1,2,3,4\right\}$ contains exactly $16$ clusters:
{\footnotesize
\begin{align}
\nonumber
\vspace{-0.5cm}
\mathcal{C}^{\left\{l_{1},l_{1}\right\}}=&\left\{((0,0),(0,0)), ((0,0),(1,2)), ((0,0),(2,1)), ((0,0),(3,3)), \right.\\
\nonumber
& \left. ((1,2),(0,0)), ((1,2),(1,2)), ((1,2),(2,1)), ((1,2),(3,3)),\right.\\
\nonumber
& \left. ((2,1),(0,0)), ((2,1),(1,2)), ((2,1),(2,1)), ((2,1),(3,3)),\right.\\
\nonumber
& \left. ((3,3),(0,0)), ((3,3),(1,2)), ((3,3),(2,1)), ((3,3),(3,3))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{1},l_{2}\right\}}=&\left\{((0,0),(0,3)), ((0,0),(1,1)), ((0,0),(2,2)), ((0,0),(3,0)), \right.\\
\nonumber
& \left. ((1,2),(0,3)), ((1,2),(1,1)), ((1,2),(2,2)), ((1,2),(3,0)),\right.\\
\nonumber
& \left. ((2,1),(0,3)), ((2,1),(1,1)), ((2,1),(2,2)), ((2,1),(3,0)),\right.\\
\nonumber
& \left. ((3,3),(0,3)), ((3,3),(1,1)), ((3,3),(2,2)), ((3,3),(3,0))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{1},l_{3}\right\}}=&\left\{((0,0),(0,1)), ((0,0),(1,3)), ((0,0),(2,0)), ((0,0),(3,2)), \right.\\
\nonumber
& \left. ((1,2),(0,1)), ((1,2),(1,3)), ((1,2),(2,0)), ((1,2),(3,2)),\right.\\
\nonumber
& \left. ((2,1),(0,1)), ((2,1),(1,3)), ((2,1),(2,0)), ((2,1),(3,2)),\right.\\
\nonumber
& \left. ((3,3),(0,1)), ((3,3),(1,3)), ((3,3),(2,0)), ((3,3),(3,2))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{1},l_{4}\right\}}=&\left\{((0,0),(0,2)), ((0,0),(1,0)), ((0,0),(2,3)), ((0,0),(3,1)), \right.\\
\nonumber
& \left. ((1,2),(0,2)), ((1,2),(1,0)), ((1,2),(2,3)), ((1,2),(3,1)),\right.\\
\nonumber
& \left. ((2,1),(0,2)), ((2,1),(1,0)), ((2,1),(2,3)), ((2,1),(3,1)),\right.\\
\nonumber
& \left. ((3,3),(0,2)), ((3,3),(1,0)), ((3,3),(2,3)), ((3,3),(3,1))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{2},l_{1}\right\}}=&\left\{((0,3),(0,0)), ((0,3),(1,2)), ((0,3),(2,1)), ((0,3),(3,3)), \right.\\
\nonumber
& \left. ((1,1),(0,0)), ((1,1),(1,2)), ((1,1),(2,1)), ((1,1),(3,3)),\right.\\
\nonumber
& \left. ((2,2),(0,0)), ((2,2),(1,2)), ((2,2),(2,1)), ((2,2),(3,3)),\right.\\
\nonumber
& \left. ((3,0),(0,0)), ((3,0),(1,2)), ((3,0),(2,1)), ((3,0),(3,3))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{2},l_{2}\right\}}=&\left\{((0,3),(0,3)), ((0,3),(1,1)), ((0,3),(2,2)), ((0,3),(3,0)), \right.\\
\nonumber
& \left. ((1,1),(0,3)), ((1,1),(1,1)), ((1,1),(2,2)), ((1,1),(3,0)),\right.\\
\nonumber
& \left. ((2,2),(0,3)), ((2,2),(1,1)), ((2,2),(2,2)), ((2,2),(3,0)),\right.\\
\nonumber
& \left. ((3,0),(0,3)), ((3,0),(1,1)), ((3,0),(2,2)), ((3,0),(3,0))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{2},l_{3}\right\}}=&\left\{((0,3),(0,1)), ((0,3),(1,3)), ((0,3),(2,0)), ((0,3),(3,2)), \right.\\
\nonumber
& \left. ((1,1),(0,1)), ((1,1),(1,3)), ((1,1),(2,0)), ((1,1),(3,2)),\right.\\
\nonumber
& \left. ((2,2),(0,1)), ((2,2),(1,3)), ((2,2),(2,0)), ((2,2),(3,2)),\right.\\
\nonumber
& \left. ((3,0),(0,1)), ((3,0),(1,3)), ((3,0),(2,0)), ((3,0),(3,2))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{2},l_{4}\right\}}=&\left\{((0,3),(0,2)), ((0,3),(1,0)), ((0,3),(2,3)), ((0,3),(3,1)), \right.\\
\nonumber
& \left. ((1,1),(0,2)), ((1,1),(1,0)), ((1,1),(2,3)), ((1,1),(3,1)),\right.\\
\nonumber
& \left. ((2,2),(0,2)), ((2,2),(1,0)), ((2,2),(2,3)), ((2,2),(3,1)),\right.\\
\nonumber
& \left. ((3,0),(0,2)), ((3,0),(1,0)), ((3,0),(2,3)), ((3,0),(3,1))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{3},l_{1}\right\}}=&\left\{((0,1),(0,0)), ((0,1),(1,2)), ((0,1),(2,1)), ((0,1),(3,3)), \right.\\
\nonumber
& \left. ((1,3),(0,0)), ((1,3),(1,2)), ((1,3),(2,1)), ((1,3),(3,3)),\right.\\
\nonumber
& \left. ((2,0),(0,0)), ((2,0),(1,2)), ((2,0),(2,1)), ((2,0),(3,3)),\right.\\
\nonumber
& \left. ((3,2),(0,0)), ((3,2),(1,2)), ((3,2),(2,1)), ((3,2),(3,3))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{3},l_{2}\right\}}=&\left\{((0,1),(0,3)), ((0,1),(1,1)), ((0,1),(2,2)), ((0,1),(3,0)), \right.\\
\nonumber
& \left. ((1,3),(0,3)), ((1,3),(1,1)), ((1,3),(2,2)), ((1,3),(3,0)),\right.\\
\nonumber
& \left. ((2,0),(0,3)), ((2,0),(1,1)), ((2,0),(2,2)), ((2,0),(3,0)),\right.\\
\nonumber
& \left. ((3,2),(0,3)), ((3,2),(1,1)), ((3,2),(2,2)), ((3,2),(3,0))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{3},l_{3}\right\}}=&\left\{((0,1),(0,1)), ((0,1),(1,3)), ((0,1),(2,0)), ((0,1),(3,2)), \right.\\
\nonumber
& \left. ((1,3),(0,1)), ((1,3),(1,3)), ((1,3),(2,0)), ((1,3),(3,2)),\right.\\
\nonumber
& \left. ((2,0),(0,1)), ((2,0),(1,3)), ((2,0),(2,0)), ((2,0),(3,2)),\right.\\
\nonumber
& \left. ((3,2),(0,1)), ((3,2),(1,3)), ((3,2),(2,0)), ((3,2),(3,2))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{3},l_{4}\right\}}=&\left\{((0,1),(0,2)), ((0,1),(1,0)), ((0,1),(2,3)), ((0,1),(3,1)), \right.\\
\nonumber
& \left. ((1,3),(0,2)), ((1,3),(1,0)), ((1,3),(2,3)), ((1,3),(3,1)),\right.\\
\nonumber
& \left. ((2,0),(0,2)), ((2,0),(1,0)), ((2,0),(2,3)), ((2,0),(3,1)),\right.\\
\nonumber
& \left. ((3,2),(0,2)), ((3,2),(1,0)), ((3,2),(2,3)), ((3,2),(3,1))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{4},l_{1}\right\}}=&\left\{((0,2),(0,0)), ((0,2),(1,2)), ((0,2),(2,1)), ((0,2),(3,3)), \right.\\
\nonumber
& \left. ((1,0),(0,0)), ((1,0),(1,2)), ((1,0),(2,1)), ((1,0),(3,3)),\right.\\
\nonumber
& \left. ((2,3),(0,0)), ((2,3),(1,2)), ((2,3),(2,1)), ((2,3),(3,3)),\right.\\
\nonumber
& \left. ((3,1),(0,0)), ((3,1),(1,2)), ((3,1),(2,1)), ((3,1),(3,3))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{C}^{\left\{l_{4},l_{2}\right\}}=&\left\{((0,2),(0,3)), ((0,2),(1,1)), ((0,2),(2,2)), ((0,2),(3,0)), \right.\\
\nonumber
& \left. ((1,0),(0,3)), ((1,0),(1,1)), ((1,0),(2,2)), ((1,0),(3,0)),\right.\\
\nonumber
& \left. ((2,3),(0,3)), ((2,3),(1,1)), ((2,3),(2,2)), ((2,3),(3,0)),\right.\\
\nonumber
& \left. ((3,1),(0,3)), ((3,1),(1,1)), ((3,1),(2,2)), ((3,1),(3,0))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{4},l_{3}\right\}}=&\left\{((0,2),(0,1)), ((0,2),(1,3)), ((0,2),(2,0)), ((0,2),(3,2)), \right.\\
\nonumber
& \left. ((1,0),(0,1)), ((1,0),(1,3)), ((1,0),(2,0)), ((1,0),(3,2)),\right.\\
\nonumber
& \left. ((2,3),(0,1)), ((2,3),(1,3)), ((2,3),(2,0)), ((2,3),(3,2)),\right.\\
\nonumber
& \left. ((3,1),(0,1)), ((3,1),(1,3)), ((3,1),(2,0)), ((3,1),(3,2))\right\}\\
\nonumber
\mathcal{C}^{\left\{l_{4},l_{4}\right\}}=&\left\{((0,2),(0,2)), ((0,2),(1,0)), ((0,2),(2,3)), ((0,2),(3,1)), \right.\\
\nonumber
& \left. ((1,0),(0,2)), ((1,0),(1,0)), ((1,0),(2,3)), ((1,0),(3,1)),\right.\\
\nonumber
& \left. ((2,3),(0,2)), ((2,3),(1,0)), ((2,3),(2,3)), ((2,3),(3,1)),\right.\\
\nonumber
& \left. ((3,1),(0,2)), ((3,1),(1,0)), ((3,1),(2,3)), ((3,1),(3,1)) \right\}
\nonumber
\end{align}
}
\begin{figure*}
{\footnotesize
{
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{!{\vrule width 1pt}c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}} \noalign{\hrule height 1pt}
&$(0,0)$&$(0,1)$&$(0,2)$&$(0,3)$&$(1,0)$&$(1,1)$&$(1,2)$&$(1,3)$&$(2,0)$&$(2,1)$&$(2,2)$&$(2,3)$&$(3,0)$&$(3,1)$&$(3,2)$&$(3,3)$ \\\noalign{\hrule height 1pt}
$(0,0)$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{6}$ \\\hline
$(0,1)$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ \\\hline
$(0,2)$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ \\\hline
$(0,3)$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ \\\noalign{\hrule height 1pt}
$(1,0)$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ \\\hline
$(1,1)$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ \\\hline
$(1,2)$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ \\\hline
$(1,3)$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{9}$ \\\noalign{\hrule height 1pt}
$(2,0)$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{14}$ \\\hline
$(2,1)$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ \\\hline
$(2,2)$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{16}$ \\\hline
$(2,3)$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ \\\noalign{\hrule height 1pt}
$(3,0)$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ \\\hline
$(3,1)$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ \\\hline
$(3,2)$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ \\\hline
$(3,3)$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{1}$ \\\noalign{\hrule height 1pt}
\end{tabular}
}
}
\caption{Latin Square $L$ representing the clustering at the relay for the case $\gamma e^{j \theta}=j$, obtained using Cartesian Cluster Product, with the 4-PSK symbols that A(B) sent in the first and second channel use along the rows(columns)}
\label{ls1}
\end{figure*}
The entries of the above clusters are of the form $ ((x_{A_{1}},x_{B_{1}}),(x_{A_{2}},x_{B_{2}}))$, i.e., in the order A's transmission during the first channel use, B's transmission during the first channel use, A's transmission during the second channel use and B's transmission during the second channel use. We now represent these clusters by a Latin Square of side $16$, with $ (x_{A_{1}},x_{A_{2}})$ along the rows and $(x_{B_{1}},x_{B_{2}})$ along the columns. The $((x_{A_{1}},x_{A_{2}}),(x_{B_{1}},x_{B_{2}}))$ entries of the Latin Square, as dictated by the clusters above, are as follows:
{\footnotesize
\begin{align}
\nonumber
\vspace{-0.5cm}
\mathcal{L}_{1}:=&\left\{((0,0),(0,0)), ((0,1),(0,2)), ((0,2),(0,1)), ((0,3),(0,3)), \right.\\
\nonumber
& \left. ((1,0),(2,0)), ((1,1),(2,2)), ((1,2),(2,1)), ((1,3),(2,3)),\right.\\
\nonumber
& \left. ((2,0),(1,0)), ((2,1),(1,2)), ((2,2),(1,1)), ((2,3),(1,3)),\right.\\
\nonumber
& \left. ((3,0),(3,0)), ((3,1),(3,2)), ((3,2),(3,1)), ((3,3),(3,3))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{2}:=&\left\{((0,0),(0,3)), ((0,1),(0,1)), ((0,2),(0,2)), ((0,3),(0,0)), \right.\\
\nonumber
& \left. ((1,0),(2,3)), ((1,1),(2,1)), ((1,2),(2,2)), ((1,3),(2,0)),\right.\\
\nonumber
& \left. ((2,0),(1,3)), ((2,1),(1,1)), ((2,2),(1,2)), ((2,3),(1,0)),\right.\\
\nonumber
& \left. ((3,0),(3,3)), ((3,1),(3,1)), ((3,2),(3,2)), ((3,3),(3,0))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{3}:=&\left\{((0,0),(0,1)), ((0,1),(0,3)), ((0,2),(0,0)), ((0,3),(0,2)), \right.\\
\nonumber
& \left. ((1,0),(2,1)), ((1,1),(2,3)), ((1,2),(2,0)), ((1,3),(2,2)),\right.\\
\nonumber
& \left. ((2,0),(1,1)), ((2,1),(1,3)), ((2,2),(1,0)), ((2,3),(1,2)),\right.\\
\nonumber
& \left. ((3,0),(3,1)), ((3,1),(3,3)), ((3,2),(3,0)), ((3,3),(3,2))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{4}:=&\left\{((0,0),(0,2)), ((0,1),(0,0)), ((0,2),(0,3)), ((0,3),(0,1)), \right.\\
\nonumber
& \left. ((1,0),(2,2)), ((1,1),(2,0)), ((1,2),(2,3)), ((1,3),(2,1)),\right.\\
\nonumber
& \left. ((2,0),(1,2)), ((2,1),(1,0)), ((2,2),(1,3)), ((2,3),(1,1)),\right.\\
\nonumber
& \left. ((3,0),(3,2)), ((3,1),(3,0)), ((3,2),(3,3)), ((3,3),(3,1))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{5}:=&\left\{((0,0),(3,0)), ((0,1),(3,2)), ((0,2),(3,1)), ((0,3),(3,3)), \right.\\
\nonumber
& \left. ((1,0),(1,0)), ((1,1),(1,2)), ((1,2),(1,1)), ((1,3),(1,3)),\right.\\
\nonumber
& \left. ((2,0),(2,0)), ((2,1),(2,2)), ((2,2),(2,1)), ((2,3),(2,3)),\right.\\
\nonumber
& \left. ((3,0),(0,0)), ((3,1),(0,2)), ((3,2),(0,1)), ((3,3),(0,3))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{6}:=&\left\{((0,0),(3,3)), ((0,1),(3,1)), ((0,2),(3,2)), ((0,3),(3,0)), \right.\\
\nonumber
& \left. ((1,0),(1,3)), ((1,1),(1,1)), ((1,2),(1,2)), ((1,3),(1,0)),\right.\\
\nonumber
& \left. ((2,0),(2,3)), ((2,1),(2,1)), ((2,2),(2,2)), ((2,3),(2,0)),\right.\\
\nonumber
& \left. ((3,0),(0,3)), ((3,1),(0,1)), ((3,2),(0,2)), ((3,3),(0,0))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{7}:=&\left\{((0,0),(3,1)), ((0,1),(3,3)), ((0,2),(3,0)), ((0,3),(3,2)), \right.\\
\nonumber
& \left. ((1,0),(1,1)), ((1,1),(1,3)), ((1,2),(1,0)), ((1,3),(1,2)),\right.\\
\nonumber
& \left. ((2,0),(2,1)), ((2,1),(2,3)), ((2,2),(2,0)), ((2,3),(2,2)),\right.\\
\nonumber
& \left. ((3,0),(0,1)), ((3,1),(0,3)), ((3,2),(0,0)), ((3,3),(0,2))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{8}:=&\left\{((0,0),(3,2)), ((0,1),(3,0)), ((0,2),(3,3)), ((0,3),(3,1)), \right.\\
\nonumber
& \left. ((1,0),(1,2)), ((1,1),(1,0)), ((1,2),(1,3)), ((1,3),(1,1)),\right.\\
\nonumber
& \left. ((2,0),(2,2)), ((2,1),(2,0)), ((2,2),(2,3)), ((2,3),(2,1)),\right.\\
\nonumber
& \left. ((3,0),(0,2)), ((3,1),(0,0)), ((3,2),(0,3)), ((3,3),(0,1))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{9}:=&\left\{((0,0),(1,0)), ((0,1),(1,2)), ((0,2),(1,1)), ((0,3),(1,3)), \right.\\
\nonumber
& \left. ((1,0),(3,0)), ((1,1),(3,2)), ((1,2),(3,1)), ((1,3),(3,3)),\right.\\
\nonumber
& \left. ((2,0),(0,0)), ((2,1),(0,2)), ((2,2),(0,1)), ((2,3),(0,3)),\right.\\
\nonumber
& \left. ((3,0),(2,0)), ((3,1),(2,2)), ((3,2),(2,1)), ((3,3),(2,3))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{10}:=&\left\{((0,0),(1,3)), ((0,1),(1,1)), ((0,2),(1,2)), ((0,3),(1,0)), \right.\\
\nonumber
& \left. ((1,0),(3,3)), ((1,1),(3,1)), ((1,2),(3,2)), ((1,3),(3,0)),\right.\\
\nonumber
& \left. ((2,0),(0,3)), ((2,1),(0,1)), ((2,2),(0,2)), ((2,3),(0,0)),\right.\\
\nonumber
& \left. ((3,0),(2,3)), ((3,1),(2,1)), ((3,2),(2,2)), ((3,3),(2,0))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{11}:=&\left\{((0,0),(1,1)), ((0,1),(1,3)), ((0,2),(1,0)), ((0,3),(1,2)), \right.\\
\nonumber
& \left. ((1,0),(3,1)), ((1,1),(3,3)), ((1,2),(3,0)), ((1,3),(3,2)),\right.\\
\nonumber
& \left. ((2,0),(0,1)), ((2,1),(0,3)), ((2,2),(0,0)), ((2,3),(0,2)),\right.\\
\nonumber
& \left. ((3,0),(2,1)), ((3,1),(2,3)), ((3,2),(2,0)), ((3,3),(2,2))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{12}:=&\left\{((0,0),(1,2)), ((0,1),(1,0)), ((0,2),(1,3)), ((0,3),(1,1)), \right.\\
\nonumber
& \left. ((1,0),(3,2)), ((1,1),(3,0)), ((1,2),(3,3)), ((1,3),(3,1)),\right.\\
\nonumber
& \left. ((2,0),(0,2)), ((2,1),(0,0)), ((2,2),(0,3)), ((2,3),(0,1)),\right.\\
\nonumber
& \left. ((3,0),(2,2)), ((3,1),(2,0)), ((3,2),(2,3)), ((3,3),(2,1))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{13}:=&\left\{((0,0),(2,0)), ((0,1),(2,2)), ((0,2),(2,1)), ((0,3),(2,3)), \right.\\
\nonumber
& \left. ((1,0),(0,0)), ((1,1),(0,2)), ((1,2),(0,1)), ((1,3),(0,3)),\right.\\
\nonumber
& \left. ((2,0),(3,0)), ((2,1),(3,2)), ((2,2),(3,1)), ((2,3),(3,3)),\right.\\
\nonumber
& \left. ((3,0),(1,0)), ((3,1),(1,2)), ((3,2),(1,1)), ((3,3),(1,3))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{14}:=&\left\{((0,0),(2,3)), ((0,1),(2,1)), ((0,2),(2,2)), ((0,3),(2,0)), \right.\\
\nonumber
& \left. ((1,0),(0,3)), ((1,1),(0,1)), ((1,2),(0,2)), ((1,3),(0,0)),\right.\\
\nonumber
& \left. ((2,0),(3,3)), ((2,1),(3,1)), ((2,2),(3,2)), ((2,3),(3,0)),\right.\\
\nonumber
& \left. ((3,0),(1,3)), ((3,1),(1,1)), ((3,2),(1,2)), ((3,3),(1,0))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{15}:=&\left\{((0,0),(2,1)), ((0,1),(2,3)), ((0,2),(2,0)), ((0,3),(2,2)), \right.\\
\nonumber
& \left. ((1,0),(0,1)), ((1,1),(0,3)), ((1,2),(0,0)), ((1,3),(0,2)),\right.\\
\nonumber
& \left. ((2,0),(3,1)), ((2,1),(3,3)), ((2,2),(3,0)), ((2,3),(3,2)),\right.\\
\nonumber
& \left. ((3,0),(1,1)), ((3,1),(1,3)), ((3,2),(1,0)), ((3,3),(1,2))\right\}
\end{align}
\begin{align}
\nonumber
\mathcal{L}_{16}:=&\left\{((0,0),(2,2)), ((0,1),(2,0)), ((0,2),(2,3)), ((0,3),(2,1)), \right.\\
\nonumber
& \left. ((1,0),(0,2)), ((1,1),(0,0)), ((1,2),(0,3)), ((1,3),(0,1)),\right.\\
\nonumber
& \left. ((2,0),(3,2)), ((2,1),(3,0)), ((2,2),(3,3)), ((2,3),(3,1)),\right.\\
\nonumber
& \left. ((3,0),(1,2)), ((3,1),(1,0)), ((3,2),(1,3)), ((3,3),(1,1)) \right\}
\nonumber
\end{align}
}
\begin{figure}[h]
\centering
\subfigure[$4 \times 4$ blocks in $L$]{
{
\begin{tabular}{|c|c|c|c|c|}
\hline $\:$ & 0 & 1 & 2 & 3\\
\hline 0 & $l_{1}$ & $l_{3}$ & $l_{4}$ & $l_{2}$\\
\hline 1 & $l_{4}$ & $l_{2}$ & $l_{1}$ & $l_{3}$\\
\hline 2 & $l_{3}$ & $l_{1}$ & $l_{2}$ & $l_{4}$\\
\hline 3 & $l_{2}$ & $l_{4}$ & $l_{3}$ & $l_{1}$\\
\hline
\end{tabular}}
\label{L1}
}
\subfigure[The array $L_B$]{
$\left[ {\begin{array}{cccc}
\alpha_1 & \alpha_3 & \alpha_4 & \alpha_2 \\
\alpha_4 & \alpha_2 & \alpha_1 & \alpha_3 \\
\alpha_3 & \alpha_1 & \alpha_2 & \alpha_4 \\
\alpha_2 & \alpha_4 & \alpha_3 & \alpha_1 \\
\end{array} } \right]$
\label{fig:L_B}
}
\caption[]{Latin Square representing the clustering $\mathcal{C}^{\left[j\right]}$ with the symbol sent by A(B) along the rows(columns).}
\end{figure}
\begin{figure*}
{\footnotesize
{
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{!{\vrule width 1pt}c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}} \noalign{\hrule height 1pt}
&$(0,0)$&$(0,1)$&$(0,2)$&$(0,3)$&$(1,0)$&$(1,1)$&$(1,2)$&$(1,3)$&$(2,0)$&$(2,1)$&$(2,2)$&$(2,3)$&$(3,0)$&$(3,1)$&$(3,2)$&$(3,3)$ \\\noalign{\hrule height 1pt}
$(0,0)$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{13}$ \\\hline
$(0,1)$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ \\\hline
$(0,2)$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ \\\hline
$(0,3)$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ \\\noalign{\hrule height 1pt}
$(1,0)$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ \\\hline
$(1,1)$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ \\\hline
$(1,2)$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ \\\hline
$(1,3)$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ \\\noalign{\hrule height 1pt}
$(2,0)$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{3}$ \\\hline
$(2,1)$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ \\\hline
$(2,2)$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ \\\hline
$(2,3)$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ \\\noalign{\hrule height 1pt}
$(3,0)$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{23}$ \\\hline
$(3,1)$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{22}$ \\\hline
$(3,2)$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ \\\hline
$(3,3)$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ \\\noalign{\hrule height 1pt}
\end{tabular}
}
}
\caption{Latin Square representing the clustering at the relay for the case $\gamma e^{j \theta}=0.5+0.5j$, obtained using Cartesian Cluster Product, with the 4-PSK symbols that A(B) sent in the first and second channel use along the rows(columns)}
\label{ls2}
\end{figure*}
The resulting Latin Square representing the clusters, denoted by $L$, is shown in Fig. 3. This $16 \times 16$ array $L$ can be divided into 16 blocks of $4 \times 4$ arrays. Let $L_B=\left[L_{i,j}\right]$, where each $L_{i,j}$ is a $4\times 4$ array for $i,j=1,2,3,4$, as shown in Fig. 3. Each $L_{i,j}$ is in one-to-one correspondence with the Latin Square obtained in \cite{NMR} for removing the singular fade state $\gamma e^{j \theta}=j$ for the two-way 2-stage relaying scenario, i.e., with the Latin Square representing the clustering $\mathcal{C}^{\left[j\right]}$ given in Fig.~\ref{L1}. Also let,
\begin{align*}
\alpha_1 &:= L_{1,1}=L_{2,3}=L_{3,2}=L_{4,4},\\
\alpha_2 &:= L_{1,4}=L_{2,2}=L_{3,3}=L_{4,1}, \\
\alpha_3 &:= L_{1,2}=L_{2,4}=L_{3,1}=L_{4,3} \quad \mathrm{and} \\
\alpha_4 &:= L_{1,3}=L_{2,1}=L_{3,4}=L_{4,2}.
\end{align*}
This makes $L_B$ of the form shown in Fig. \ref{fig:L_B}, so that the block matrix $L_B$ is also consistent with the Latin Square given in Fig. 4. The reason why $L_B$ and the $L_{i,j}$ are consistent with this Latin Square is as follows: each $L_{i,j}$ corresponds to fixed values of the symbols A and B send during the first channel use, with the symbols sent by A and B during the second channel use varying along the rows and columns respectively. The Latin Square in Fig. 3 has been obtained by taking the Cartesian Product of the clustering removing the fade state $\gamma e^{j \theta}=j$ with itself. The Cartesian Product uses the clustering $\mathcal{C}^{\left[ j \right]}$, represented by the Latin Square given in Fig. 4, for both the first and the second channel use in the MA phase, which puts each $L_{i,j}$, $i,j=1,2,3,4$, as well as $L_B$ in one-to-one correspondence with this Latin Square. \\
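To make the construction concrete, the following minimal Python sketch (the helper names are ours, not from any library) forms the Cartesian Product of $\mathcal{C}^{\left[j\right]}$ with itself from the four clusters listed above, fills the $16\times 16$ array with row index $(x_{A_{1}},x_{A_{2}})$ and column index $(x_{B_{1}},x_{B_{2}})$, and verifies that the result is a Latin Square of side $16$ on $16$ symbols:
{\footnotesize
\begin{verbatim}
from itertools import product

# The four clusters of C^[j] listed above.
l = [[(0,0),(1,2),(2,1),(3,3)],
     [(0,3),(1,1),(2,2),(3,0)],
     [(0,1),(1,3),(2,0),(3,2)],
     [(0,2),(1,0),(2,3),(3,1)]]

# Cartesian Cluster Product: C^{l_i,l_j}.
clusters = {(i,j): [(p,q) for p in l[i]
                          for q in l[j]]
            for i in range(4) for j in range(4)}

# Fill the 16x16 array: row (xA1,xA2),
# column (xB1,xB2).
L = {}
for (i,j), cl in clusters.items():
    for ((xA1,xB1),(xA2,xB2)) in cl:
        L[(xA1,xA2),(xB1,xB2)] = (i,j)
assert len(L) == 256  # every cell filled once

# Every cluster label occurs exactly once in
# each row and in each column.
idx = list(product(range(4), repeat=2))
assert all(len({L[r,c] for c in idx}) == 16
           for r in idx)
assert all(len({L[r,c] for r in idx}) == 16
           for c in idx)
\end{verbatim}
}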
\end{example}
\textbf{\textit{Case 2:} $\gamma e^{j \theta} $ lies on the circle of radius $ 1/\sqrt{2}$.}\\
In this case, the Cartesian Product of $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}$ with itself consists of $25$ clusters, since the clustering $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}$ has $5$ clusters. We now give an example of this case; the remaining instances can be obtained from it, as will be shown later in Section V, Lemma 6.\\
\begin{example} Consider the case when $\gamma e^{j \theta}=0.5+0.5j.$ The clustering $\mathcal{C}^{\left[0.5+0.5j\right]}$ given in \cite{NMR} that removes this fade state for the two-way 2-stage relaying scenario is given by:
\begin{align}
\nonumber
\vspace{-0.8cm}
\mathcal{C}^{\left[0.5+0.5j\right]}=&\left\{l_1,l_2,l_3,l_4,l_5\right\},\\
\nonumber
\text{where,}\\
\nonumber
&l_{1}=\left\{\left(0,1\right),\left(1,2\right),\left(2,3\right)\right\}\\
\nonumber
&l_{2}=\left\{\left(0,2\right),\left(1,3\right),\left(3,0\right)\right\}\\
\nonumber
&l_{3}=\left\{\left(0,3\right),\left(2,0\right),\left(3,1\right)\right\}\\
\nonumber
&l_{4}=\left\{\left(1,0\right),\left(2,1\right),\left(3,2\right)\right\}\\
\nonumber
&l_{5}=\left\{\left(0,0\right),\left(1,1\right),\left(2,2\right),\left(3,3\right)\right\}.
\nonumber
\end{align}
The Cartesian Product of the above clustering, given by $\mathcal{D}^{\left[0.5+0.5j\right]}=\left\{ \mathcal{C}^{\left\{l_{i},l_{j}\right\}}~ |~ i,j=1,2,3,4,5\right\}$, contains exactly $25$ clusters. The clusters and the corresponding constraints for the Latin Square representing the clustering are listed in Appendix A. This Cartesian Product $\mathcal{D}^{\left[0.5+0.5j\right]}$ can be represented by the Latin Square given in Fig. 5.\\
\begin{figure}[ht]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{!{\vrule width 1pt}c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}}\noalign{\hrule height 1pt}
&$0$&$1$&$2$&$3$\\\noalign{\hrule height 1pt}
$0$ & $l_{5}$ & $l_{1}$ & $l_{2}$ & $l_{3}$\\\hline
$1$ & $l_{4}$ & $l_{5}$ & $l_{1}$ & $l_{2}$\\\hline
$2$ & $l_{3}$ & $l_{4}$ & $l_{5}$ & $l_{1}$\\\hline
$3$ & $l_{2}$ & $l_{3}$ & $l_{4}$ & $l_{5}$\\\noalign{\hrule height 1pt}
\end{tabular}
\caption{Latin Square $l$ representing the clustering $\mathcal{C}^{\left[0.5+0.5j\right]}$ with the symbol sent by A(B) along the rows(columns)}
\end{figure}
\begin{figure*}
{\footnotesize
{
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{!{\vrule width 1pt}c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}} \noalign{\hrule height 1pt}
&$(0,0)$&$(0,1)$&$(0,2)$&$(0,3)$&$(1,0)$&$(1,1)$&$(1,2)$&$(1,3)$&$(2,0)$&$(2,1)$&$(2,2)$&$(2,3)$&$(3,0)$&$(3,1)$&$(3,2)$&$(3,3)$ \\\noalign{\hrule height 1pt}
$(0,0)$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{7}$ \\\hline
$(0,1)$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ \\\hline
$(0,2)$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ \\\hline
$(0,3)$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{10}$ \\\noalign{\hrule height 1pt}
$(1,0)$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{17}$ \\\hline
$(1,1)$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ \\\hline
$(1,2)$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ \\\hline
$(1,3)$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{20}$ \\\noalign{\hrule height 1pt}
$(2,0)$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ \\\hline
$(2,1)$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ \\\hline
$(2,2)$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{19}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ \\\hline
$(2,3)$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{18}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{20}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ \\\noalign{\hrule height 1pt}
$(3,0)$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{22}$ \\\hline
$(3,1)$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ \\\hline
$(3,2)$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{24}$ & $\mathcal{L}_{25}$ & $\mathcal{L}_{21}$ \\\hline
$(3,3)$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{21}$ & $\mathcal{L}_{23}$ & $\mathcal{L}_{22}$ & $\mathcal{L}_{25}$ \\\noalign{\hrule height 1pt}
\end{tabular}
}
}
\caption{Latin Square representing the clustering at the relay for the case $\gamma e^{j \theta}=1+j$, obtained using Cartesian Cluster Product, with the 4-PSK symbols that A(B) sent in the first and second channel use along the rows(columns)}
\label{ls3}
\end{figure*}
As explained in the previous example, let the $16 \times 16$ Latin Square shown in Fig. 5 be denoted by $L'$, and let $L'_{B}=\left[L'_{i,j}\right]$ with $i,j=1,2,3,4$. Then both $L'_{B}$ and each $L'_{i,j}$ must be consistent with the Latin Square of side $4$ given in Fig. 6, denoted by $l$, which represents the clustering $\mathcal{C}^{\left[0.5+0.5j\right]}$. As can be seen in Fig. 5, $l$ is repeated in each block $L'_{i,j}$, with a possibly different set of five symbols amongst $\left\{\mathcal{L}_{1},\mathcal{L}_{2},..., \mathcal{L}_{25}\right\}$ standing for the five symbols $\left\{l_{1}, l_{2},...,l_{5}\right\}$ in each $L'_{i,j}$. More precisely, the blocks $ L'_{1,1}=L'_{2,2}=L'_{3,3}=L'_{4,4}$ are the same as $l$, with the symbols $\mathcal{L}_{21}, \mathcal{L}_{22}, ..., \mathcal{L}_{25}$ replacing the symbols $l_{1},l_{2},...,l_{5}$ respectively. Similarly, the blocks $L'_{1,2}=L'_{2,3}=L'_{3,4}$ are the same as $l$ with the symbols $\mathcal{L}_{1}, \mathcal{L}_{2}, ..., \mathcal{L}_{5}$, the blocks $L'_{1,3}=L'_{2,4}=L'_{4,1}$ with the symbols $\mathcal{L}_{6}, \mathcal{L}_{7}, ..., \mathcal{L}_{10}$, the blocks $L'_{1,4}=L'_{3,1}=L'_{4,2}$ with the symbols $\mathcal{L}_{11}, \mathcal{L}_{12}, ..., \mathcal{L}_{15}$, and the blocks $L'_{2,1}=L'_{3,2}=L'_{4,3}$ with the symbols $\mathcal{L}_{16}, \mathcal{L}_{17}, ..., \mathcal{L}_{20}$ replacing $l_{1},l_{2},...,l_{5}$ respectively. Thus, the array $L'$ can be obtained from $l$ by simply using a different set of five symbols to denote $l_1,l_2,...,l_5$ for every set of blocks corresponding to a symbol amongst $l_1,l_2,...,l_5$ in $l$; a small sketch of this block-substitution rule is given below. We will illustrate this construction again in the next example, by obtaining the $16 \times 16$ Latin Square from the $4\times 4$ Latin Square given in \cite{NMR} for that case.\\
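A minimal sketch of this rule, assuming $l_{1},...,l_{5}$ are coded as $1,...,5$ and $\mathcal{L}_{1},...,\mathcal{L}_{25}$ as $1,...,25$, so that a block carrying the symbol $l_{m}$ uses the alphabet $\mathcal{L}_{5(m-1)+1},...,\mathcal{L}_{5m}$ (this matches the assignment described above), is:
{\footnotesize
\begin{verbatim}
# Fig. 6: the 4x4 square l for 0.5+0.5j,
# entries m = 1..5 standing for l_1..l_5.
l = [[5,1,2,3],
     [4,5,1,2],
     [3,4,5,1],
     [2,3,4,5]]

# Block (i,j) is a copy of l relabelled with
# the alphabet {5(m-1)+1,...,5m}, m = l[i][j].
Lp = [[5*(l[i][j]-1) + l[r][c]
       for j in range(4) for c in range(4)]
      for i in range(4) for r in range(4)]

assert Lp[0][:4] == [25,21,22,23]  # Fig. 5, row (0,0)

# Each of the 25 labels occurs at most once in
# every row and every column of the 16x16 array.
for k in range(16):
    assert len(set(Lp[k])) == 16
    assert len({Lp[t][k] for t in range(16)}) == 16
\end{verbatim}
}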
\end{example}
\textbf{\textit{Case 3:} $\gamma e^{j \theta} $ lies on the circle of radius $ \sqrt{2}$.}\\
In this case, the Cartesian Product of $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}$ with itself consists of $25$ clusters, since the clustering $\mathcal{C}^{\left[\gamma e^{j \theta}\right]}$ has $5$ clusters. An instance of this case is as follows.\\
\begin{example} Consider the case when $\gamma e^{j \theta}=1+j.$ The clustering $\mathcal{C}^{\left[1+j\right]}$ given in \cite{NMR} that removes this fade state for the two-way 2-stage relaying scenario is given by:
\begin{align}
\nonumber
\vspace{-0.8cm}
\mathcal{C}^{\left[1+j\right]}=&\left\{l_{1},l_{2},l_{3},l_{4},l_{5}\right\} \\
\nonumber
\text{where,}\\
\nonumber
&l_{1}=\left\{\left(0,1\right),\left(2,3\right),\left(3,0\right)\right\}\\
\nonumber
&l_{2}=\left\{\left(0,3\right),\left(1,0\right),\left(3,2\right)\right\}\\
\nonumber
&l_{3}=\left\{\left(1,2\right),\left(2,0\right),\left(3,1\right)\right\}\\
\nonumber
&l_{4}=\left\{\left(0,2\right),\left(1,3\right),\left(2,1\right)\right\}\\
\nonumber
&l_{5}=\left\{\left(0,0\right),\left(1,1\right),\left(2,2\right),\left(3,3\right)\right\}.
\nonumber
\end{align}
This clustering can be represented by a Latin Square of side $4$, denoted by $l'$, as shown in Fig. 8. \\
\begin{figure}[ht]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{!{\vrule width 1pt}c!{\vrule width 1pt}c|c|c|c!{\vrule width 1pt}}\noalign{\hrule height 1pt}
&$0$&$1$&$2$&$3$\\\noalign{\hrule height 1pt}
$0$ & $l_{5}$ & $l_{1}$ & $l_{4}$ & $l_{2}$\\\hline
$1$ & $l_{2}$ & $l_{5}$ & $l_{3}$ & $l_{4}$\\\hline
$2$ & $l_{3}$ & $l_{4}$ & $l_{5}$ & $l_{1}$\\\hline
$3$ & $l_{1}$ & $l_{3}$ & $l_{2}$ & $l_{5}$\\\noalign{\hrule height 1pt}
\end{tabular}
\caption{Latin Square representing the clustering $\mathcal{C}^{\left[1+j\right]}$ with the symbol sent by A(B) along the rows(columns)}
\end{figure}
The Cartesian Product of the above clustering, given by $\mathcal{D}^{\left[1+j\right]}=\left\{ \mathcal{C}^{\left\{l_{i},l_{j}\right\}}~ |~ i,j=1,2,3,4,5\right\}$, contains exactly $25$ clusters, as given in Appendix B. We represent these clusters by a Latin Square of side $16$, with $ (x_{A_{1}},x_{A_{2}})$ along the rows and $(x_{B_{1}},x_{B_{2}})$ along the columns. The $((x_{A_{1}},x_{A_{2}}),(x_{B_{1}},x_{B_{2}}))$ entries of the Latin Square, as dictated by these clusters, are also listed in Appendix B.
Let the $16 \times 16$ Latin Square that represents the clustering obtained as the Cartesian Product of $\mathcal{C}^{\left[1+j\right]}$ with itself be denoted by $L''$, and let $L''_{B}=\left[L''_{i,j}\right]$ with $i,j=1,2,3,4$. Then each of $L''_{B}$ and the $L''_{i,j}$ must be consistent with the Latin Square of side $4$ given in Fig. 8 which represents the clustering $\mathcal{C}^{\left[1+j\right]}$. We denote $l_1,l_2, ...,l_5$ in the blocks $L''_{1,2}=L''_{3,4}=L''_{4,1}$ by $\mathcal{L}_{1}, \mathcal{L}_{2},...,\mathcal{L}_{5}$, in the blocks $L''_{1,4}=L''_{2,1}=L''_{4,3}$ by $\mathcal{L}_{6}, \mathcal{L}_{7},...,\mathcal{L}_{10}$, in the blocks $L''_{2,3}=L''_{3,1}=L''_{4,2}$ by $\mathcal{L}_{11}, \mathcal{L}_{12},...,\mathcal{L}_{15}$, in the blocks $L''_{1,3}=L''_{2,4}=L''_{3,2}$ by $\mathcal{L}_{16}, \mathcal{L}_{17},...,\mathcal{L}_{20}$ and in the blocks $L''_{1,1}=L''_{2,2}=L''_{3,3}=L''_{4,4}$ by $\mathcal{L}_{21}, \mathcal{L}_{22},...,\mathcal{L}_{25}$. Placing these blocks in accordance with $l'$, the Cartesian Product of the clustering $\mathcal{C}^{\left[1+j\right]}$ with itself, denoted by $\mathcal{D}^{\left[1+j\right]}$, can be represented by the Latin Square given in Fig. 7.\\
The Latin Square which removes the singular fade state $\frac{1}{\gamma}e^{-j\theta}$ can be obtained by taking the transpose of the Latin Square which removes the singular fade state ${\gamma}e^{j\theta}$. For example, the Latin Square which removes the singular fade state $\sqrt{2}e^{j\frac{\pi}{4}}$ can also be obtained by taking the transpose of the Latin Square which removes the singular fade state $\frac{1}{\sqrt{2}}e^{-j\frac{\pi}{4}}.$ The reason for this is as follows: the case when the singular fade state is $\frac{1}{\gamma}e^{-j\theta}$ can be equivalently viewed as the case when the singular fade state is ${\gamma}e^{j\theta}$ with the users A and B interchanged. Interchanging the users is equivalent to taking the transpose of the Latin Square.\\
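As a small sketch, with the Latin Square stored as a list of rows, the user interchange is just a transposition of the array:
{\footnotesize
\begin{verbatim}
# Interchanging users A and B swaps the row
# index (xA1,xA2) with the column index
# (xB1,xB2), i.e., transposes the array.
def transpose(L):
    return [list(col) for col in zip(*L)]
\end{verbatim}
}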
\end{example}
\section{Clusterings from Latin Square of Lower Size}
In this section, we deal with an arbitrary $2^ \lambda $-PSK constellation. In \cite{MNR}, it was shown that the singular fade states for the two-way 2-stage relaying scenario lie on circles centered at the origin. From Lemma 1, since the singular fade states for the ACF two-way relaying scenario are the same as those of the two-way 2-stage relaying scenario, it follows that the singular fade states for the ACF two-way relaying scenario lie on circles centered at the origin as well. In this section, it is shown that for each circle, it suffices to obtain one Latin Square which removes a single singular fade state on that circle. The Latin Squares which remove the other singular fade states on that circle can be obtained by some elementary operations on that Latin Square, which are described in the sequel.\\
For a Latin Square $L$ of order $2^{2\lambda}$, let $L_{i,j}, 0 \leq i,j \leq 2^{\lambda}-1,$ denote the Latin Sub-square of order $2^{\lambda}$ obtained by taking only the rows $2^{\lambda}i$ to $2^{\lambda}(i+1)-1$ and only the columns $2^{\lambda}j$ to $2^{\lambda}(j+1)-1$ of $L$. For example, in Fig. \ref{Latin_subsquare_ex}, the Latin Sub-squares $L_{i,j}, i,j \in \lbrace 0,1 \rbrace,$ of order 2 corresponding to the Latin Square $L$ of order 4 are shown. Let $L_B$ denote the Square of order $2^\lambda$, associated with the Latin Square $L$ of order $2^{2\lambda}$, with the $L_{i,j},0 \leq i,j \leq 2^{\lambda}-1,$ as its entries. For example, the square $L_B$ of order 2 associated with a Latin Square $L$ of order 4 is as shown in Fig. \ref{L_B_example}.\\
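In terms of arrays, the two objects just defined can be extracted as follows (a minimal Python sketch, with $L$ stored as a list of $2^{2\lambda}$ rows and $n=2^{\lambda}$; the helper names are ours):
{\footnotesize
\begin{verbatim}
def subsquare(L, i, j, n):
    # L_{i,j}: rows n*i,...,n*(i+1)-1 and
    # columns n*j,...,n*(j+1)-1 of L.
    return [row[n*j:n*(j+1)]
            for row in L[n*i:n*(i+1)]]

def block_square(L, n):
    # The Square L_B of order n whose (i,j)
    # entry is the sub-square L_{i,j}.
    return [[subsquare(L, i, j, n)
             for j in range(n)]
            for i in range(n)]
\end{verbatim}
}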
\begin{figure}[htbp]
\centering
\includegraphics[totalheight=1.25in,width=2.5in]{Latin_subsquare_ex.eps}
\caption{Obtaining the Latin Sub-squares $L_{i,j}$ from the Latin Square $L$.}
\label{Latin_subsquare_ex}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[totalheight=1in,width=1in]{L_B_example.eps}
\caption{The Square $L_B$ of order 2 corresponding to a Latin Square $L$ of order 4.}
\label{L_B_example}
\end{figure}
\begin{lemma}
Consider the two-way ACF relaying with a $2^\lambda$-PSK signal set used at nodes A and B. The Latin Square $L''$ of order $2^{2\lambda}$ which removes the singular fade state $(\gamma, \theta^{\prime})$ can be obtained from the Latin Square $L$ of order $2^{2\lambda}$ which removes the singular fade state $(\gamma, \theta)$, where $\theta'-\theta = k\frac{2\pi}{2^{\lambda}}$, as follows: Cyclically shift the columns of each one of the $2^{2\lambda}$ Latin Sub-squares $L_{i,j}, 0 \leq i,j \leq 2^{\lambda}-1,$ $k$ times to the left to get the Latin Square $L'$. Then cyclically shift the columns of the Square $L'_{B}$ associated with $L'$, $k$ times to the left, to get the Square $L''_{B}$ associated with the Latin Square $L''.$\\
\begin{proof}
For the singular fade state $\gamma e^{j\theta}$, let {\footnotesize $\left\lbrace\left((x_{A_1},x_{A_2}),(x_{B_1},x_{B_2})\right),\left((x'_{A_1},x'_{A_2}),(x'_{B_1},x'_{B_2})\right)\right\rbrace$} be a singularity removal constraint, i.e.,\\
\begin{equation}
\label{eqn_sing}
\gamma e^{j\theta}=\frac{x'_{A_1}-x_{A_1}}{x_{B_1}-x'_{B_1}}=\frac{x'_{A_2}-x_{A_2}}{x_{B_2}-x'_{B_2}}.
\end{equation}
From \eqref{eqn_sing}, it follows that
{\footnotesize
\begin{align}
\nonumber
&\left\lbrace\left((x_{A_1},x_{A_2}),(x_{B_1} e^{-\frac{jk2\pi}{2^\lambda}},x_{B_2} e^{-\frac{jk2\pi}{2^\lambda}})\right),\right.\\
\nonumber
&\left.\hspace{1.6 cm}\left((x'_{A_1},x'_{A_2}),(x'_{B_1}e^{-\frac{jk2\pi}{2^\lambda}},x'_{B_2} e^{-\frac{jk2\pi}{2^\lambda}})\right)\right\rbrace
\end{align}
} is a singularity removal constraint for the singular fade state $(\gamma, \theta^{\prime})$, where $\theta'-\theta = k\frac{2\pi}{2^\lambda}.$ In other words, the rotation in the fade state plane by an angle $\theta^{\prime}-\theta$ can be viewed equivalently as a rotation of the constellations used by B during the MA phases by an angle $\theta^\prime - \theta$. Note that the columns of the Latin Square $L$ which removes the singular fade state $\gamma e^{j\theta}$ are indexed by the symbols $\left(x_{B_1},x_{B_2}\right)$ transmitted by B during the two MA phases. Rotating the signal set used by B during the second MA phase by an angle $\frac{2k\pi}{2^\lambda}$ is equivalent to cyclically shifting the columns of the Latin Sub-squares $L_{i,j}$, $k$ times to the left. Similarly, rotating the signal set used by B during the first MA phase by an angle $\frac{2k\pi}{2^{\lambda}}$ is equivalent to cyclically shifting the columns of the square $L_B$, $k$ times to the left. This completes the proof.
\end{proof}
\end{lemma}
\begin{figure}
\centering
\subfigure[The Latin Square $L$ that removes the singular fade state $(\gamma=1,\theta=0)$]{
\includegraphics[totalheight=2.8in,width=2.8in]{ex1.eps}
\label{fig:ex1}
}
\subfigure[The Latin Square $L'$ obtained from the Latin Square $L$ using the procedure described in Lemma 4]{
\includegraphics[totalheight=2.8in,width=2.8in]{ex2.eps}
\label{fig:ex2}
}
\subfigure[The Latin Square $L''$ that removes the singular fade state $(\gamma=1,\theta=\frac{\pi}{2})$]{
\includegraphics[totalheight=2.8in,width=2.8in]{ex3.eps}
\label{fig:ex3}
}
\caption{Construction of the Latin Square $L''$ which removes $(\gamma=1,\theta=\pi/2)$ from the Latin Square $L$ which removes $(\gamma=1,\theta=0)$}
\label{fig:ex}
\end{figure}
For example, consider the Latin Square $L$ in Fig. \ref{fig:ex1} which removes the singular fade state $(\gamma=1,\theta=0).$ The Latin Square $L''$ which removes the singular fade state $(\gamma=1,\theta=\frac{\pi}{2})$ can be obtained from $L$ as follows: The columns of the Latin Sub-squares $L_{i,j}, 0 \leq i,j \leq 3,$ are cyclically shifted once to the left, to obtain the Latin Square $L'$ shown in Fig. \ref{fig:ex2}. The columns of the Square $L'_{B}$ associated with the Latin Square $L'$ are then cyclically shifted once to the left, to obtain the Square $L''_{B}$. The Latin Square $L''$ associated with the Square $L''_B$, which removes the singular fade state $(\gamma=1,\theta=\frac{\pi}{2})$, is shown in Fig. \ref{fig:ex3}.\\
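The two shift operations used above can be sketched as follows (plain Python, with $L$ stored as a list of rows and $n=2^{\lambda}$; the helper names are ours):
{\footnotesize
\begin{verbatim}
def shift_subsquare_cols(L, n, k):
    # Shift the columns of every sub-square
    # L_{i,j} k times to the left: L -> L'.
    return [sum((row[n*j:n*(j+1)][k % n:]
                 + row[n*j:n*(j+1)][:k % n]
                 for j in range(n)), [])
            for row in L]

def shift_block_cols(L, n, k):
    # Shift the columns of the associated
    # block square k times to the left:
    # L' -> L''.
    return [sum((row[n*((j+k) % n):
                     n*((j+k) % n)+n]
                 for j in range(n)), [])
            for row in L]

# L'' removing (gamma, theta + k*2*pi/2**lam):
# shift_block_cols(
#     shift_subsquare_cols(L, n, k), n, k)
\end{verbatim}
}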
For the ACF two-way relaying scenario with the 4-PSK signal set used at the end nodes, the twelve singular fade states lie on three circles with radii $1,$ $\frac{1}{\sqrt{2}},$ and $\sqrt{2}.$ The Latin Squares which remove all the twelve singular fade states can be obtained from the three Latin Squares which remove the singular fade states $j,$ $0.5+0.5j$ and $1+j$, given in Fig. 3, Fig. 5 and Fig. 7 respectively.
\section{Direct Clustering}
\begin{algorithm}[H]
\label{Alg}
\SetLine
\linesnumbered
\KwIn{The constrained $16 \times 16$ array}
\KwOut{A Latin Square representing the clustering map at the relay}
Start with the constrained $16 \times 16$ array\;
\For{$1\leq i\leq 16 $}{
\For{$1\leq j\leq 16 $}{
\If{cell $\left(i,j\right)$ of the array is empty}{
Initialize $c=1$\;
\While{$\mathcal{L}_{c}$ occurs in the $i^{th}$ row or the $j^{th}$ column of the array}{
$c=c+1$\;
}
Fill cell $\left(i,j\right)$ of the array with $\mathcal{L}_{c}$\;
}
}
}
\caption{Obtaining the $16 \times 16$ Latin Square from the $16 \times 16 $ array constrained using Singularity Removal Constraints}
\end{algorithm}
Recall that there are three classes of singular fade states, depending on the radius of the circle on which the fade state lies: {\footnotesize\textit{(Case 1:)}} $\gamma e^{j \theta} $ lies on the unit circle, {\footnotesize\textit{(Case 2:)}} $\gamma e^{j \theta} $ lies on the circle of radius $ 1/\sqrt{2}$ and {\footnotesize\textit{(Case 3:)}} $\gamma e^{j \theta} $ lies on the circle of radius $ \sqrt{2}$. The number of clusters in the clustering utilized by the relay node R during the BC phase obtained using the Cartesian Product in the three cases is $16$, $25$ and $25$ respectively. It is observed that if, instead of taking the Cartesian Product of the clusterings given in \cite{NMR}, the Cartesian Product of the \textit{Singularity Removal Constraints} corresponding to each fade state is used to fill a $16 \times 16$ array, and the resulting incomplete array is completed using Algorithm \ref{Alg} so as to form a Latin Square of side 16, then the number of clusters of the resulting clustering corresponding to this Latin Square can be reduced below 25 in both \textit{Case 2} and \textit{Case 3}. We call this the Direct Clustering. We explain this clustering with the help of examples for the second and third cases only, since for the first case the minimum number of clusters required, i.e., $16$, is already achieved using the Cartesian Product Clustering as shown in Section III.\\\\
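A direct Python rendering of Algorithm \ref{Alg} reads as follows (a sketch in which unconstrained cells are held as \texttt{None} and the labels $\mathcal{L}_{1},\mathcal{L}_{2},...$ are coded as $1,2,...$):
{\footnotesize
\begin{verbatim}
def complete_latin_square(A):
    n = len(A)  # n = 16 here
    for i in range(n):
        for j in range(n):
            if A[i][j] is None:
                c = 1
                # smallest label absent from
                # row i and column j
                while (c in A[i] or
                       any(A[r][j] == c
                           for r in range(n))):
                    c += 1
                A[i][j] = c
    return A
\end{verbatim}
}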
\noindent \textbf{\textit{Case 2:} $\gamma e^{j \theta} $ lies on the circle of radius $ 1/\sqrt{2}$.}\\
In this case, there are a total of $80$ singularity removal constraints as shown in the following lemma.\\
\begin{lemma} When $\gamma e^{j \theta} $ lies on the circle of radius $ 1/\sqrt{2}$, there are a total of $80$ singularity removal constraints.
\end{lemma}
\begin{proof} Let the singularity removal constraints for the two-way ACF relaying be the set $\left\{ \mathcal{C}_{1},\mathcal{C}_{2}, ..., \mathcal{C}_{s}\right\}$. Let, for $t=1,2,...,s$,
{\footnotesize
\vspace{-0.4cm}
\begin{align}
\nonumber
&\mathcal{C}_{t}=\left\{((x_{1_{k}},y_{1_{k}}),(x_{2_{k}},y_{2_{k}})) ~ | ~ x_{1_{k}},y_{1_{k}},x_{2_{k}},y_{2_{k}} \in \mathcal{S},~k=1,2,...,n_t \right\}.
\end{align}
\vspace{-0.4cm}
}
Then, for $1 \leq k_1, k_2 \leq n_t$,
\begin{align}
\nonumber
&x_{1_{k_{1}}}+ \gamma e^{j \theta} y_{1_{k_{1}}}=x_{1_{k_{2}}}+ \gamma e^{j \theta} y_{1_{k_{2}}} \text{~and}\\
\nonumber
&x_{2_{k_{1}}}+ \gamma e^{j \theta} y_{2_{k_{1}}}=x_{2_{k_{2}}}+ \gamma e^{j \theta} y_{2_{k_{2}}}.
\end{align}
Since in the case of two-way ACF relaying, the user nodes A and B transmit twice to the relay node R, these constraints for the ACF relaying can be obtained by taking the Cartesian Product of all sets of the form
\begin{align}
\nonumber
&\left\{(x_{A_{l}},x_{B_{l}})~|~ x_{A_{l_{1}}}+\gamma e^{j \theta} x_{B_{l_{1}}}=x_{A_{l_{2}}}+\gamma e^{j \theta} x_{B_{l_{2}}} ~\forall l_1,l_2 \right. \\
\nonumber
& \left. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~\text{~where~} x_{A_{l_{1}}},x_{B_{l_{1}}},x_{A_{l_{2}}},x_{B_{l_{2}}} \in \mathcal{S} \right\}.
\end{align}
These sets can be of two types:
\begin{enumerate}
\item The singularity removal constraints corresponding to the fade state $\gamma e^{j \theta} $ for the two-way 2-stage relaying as given in \cite{NMR}. Let us denote the 2-stage singularity removal constraints as:
\begin{align}
\nonumber
&l_{i}=\left\{(x_{i_{1}},y_{i_{1}}),(x'_{i_{1}},y'_{i_{1}})\right\} \text{~for~} i=1,2,3,4.
\end{align}
\item The sets of the form $\left\{(x_A,x_B),(x_A,x_B)\right\}$ for $(x_A,x_B)\in \mathcal{S}^2$ where $(x_A,x_B)\notin l_{i} ~ \forall ~ i=1,2,3,4$, since trivially $ x_A + \gamma e^{j \theta} x_B= x_A + \gamma e^{j \theta} x_B$. The pairs $(x_A,x_B)$ for which $(x_A,x_B) \in l_{i} ~ \text{for some} ~ i=1,2,3,4$ are not considered in this category, as they already occur in some set of the first category.
\end{enumerate}
The Cartesian Products of these sets amongst themselves must be the singularity removal constraints for the ACF relaying. Now, the constraint sets so obtained are also of two types:
\begin{enumerate}
\item For $i,j\in \lbrace 1,2,3,4 \rbrace$,
{\footnotesize
\begin{align}
\nonumber
l_{i} \times l_{j}= & \left\{((x_{i_{1}},y_{i_{1}}),(x'_{j_{1}},y'_{j_{1}})),((x'_{i_{1}},y'_{i_{1}}),(x_{j_{1}},y_{j_{1}})), \right. \\
\nonumber
& \left.((x_{i_{1}},y_{i_{1}}),(x_{j_{1}},y_{j_{1}})),((x'_{i_{1}},y'_{i_{1}}),(x'_{j_{1}},y'_{j_{1}}))\right\},
\end{align}
}
These singularity removal constraints account for 16 of the total number of constraints. \\
\item For $i \in \lbrace 1,2,3,4 \rbrace$ and the eight $m_k :=((x_{A_{k}},x_{B_{k}}),(x_{A_{k}},x_{B_{k}}))$ for $k \in \lbrace 1,2,...,8 \rbrace$ that satisfy $(x_{A_{k}},x_{B_{k}}) \notin l_{j} ~ \forall j \in \lbrace 1,2,3,4 \rbrace$;
{\footnotesize
\vspace{-0.4cm}
\begin{align}
\nonumber
l_{i} \times m_k= &\left\{((x_{i_{1}},y_{i_{1}}),(x_{A_{k}},x_{B_{k}})),((x'_{i_{1}},y'_{i_{1}}),(x_{A_{k}},x_{B_{k}})) \right\}\\
\nonumber
m_k \times l_{i}= & \left\{((x_{A_{k}},x_{B_{k}}),(x_{i_{1}},y_{i_{1}})),((x_{A_{k}},x_{B_{k}}),(x'_{i_{1}},y'_{i_{1}})) \right\}
\end{align}
}
These singularity removal constraints account for the remaining 64 constraints. \\
\end{enumerate}
Thus, the set of singularity removal constraints for two-way ACF relaying becomes,
{\footnotesize
\begin{align}
\nonumber
\left\{l_{i} \times l_{j} ~|~ i,j=1,2,3,4\right\} &\cup \left\{l_i \times m_k ~|~ i=1,2,3,4, ~k=1,2,...,8\right\} \\
\nonumber
&\cup \left\{m_k \times l_i ~|~ i=1,2,3,4, ~k=1,2,...,8\right\},
\end{align}
}where the subset $\left\{l_{i} \times l_{j} ~|~ i,j=1,2,3,4\right\}$ contains 16 constraints, and the subsets $\left\{l_i \times m_k ~|~ i=1,2,3,4, ~k=1,2,...,8\right\}$ and $\left\{m_k \times l_i ~|~ i=1,2,3,4, ~k=1,2,...,8\right\}$ contain 32 constraints each, which amount to a total of 80 singularity removal constraints.\\
\end{proof}
\vspace{0.5cm}
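The counting in the proof can also be checked mechanically; in the following sketch the $l_{i}$ and $m_{k}$ are abstract placeholders rather than the actual 4-PSK pairs, since only their numbers and sizes enter the count:
{\footnotesize
\begin{verbatim}
from itertools import product

# Four pairwise 2-stage constraints and the
# eight leftover singletons (labels only).
l = {i: ("p%d" % i, "p%d'" % i)
     for i in range(1, 5)}
m = {k: "q%d" % k for k in range(1, 9)}

cons = []
for i, j in product(l, l):   # l_i x l_j
    (p, pp), (q, qq) = l[i], l[j]
    cons.append({(p,qq),(pp,q),(p,q),(pp,qq)})
for i, k in product(l, m):   # l_i x m_k and
    (p, pp), q = l[i], m[k]  # m_k x l_i
    cons.append({(p,q), (pp,q)})
    cons.append({(q,p), (q,pp)})

assert len(cons) == 16 + 32 + 32 == 80
\end{verbatim}
}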
The $16\times 16$ Latin Square representing these constraints can be completed using 20 symbols, as we show in the following example.\\
\begin{figure*}
{\scriptsize
\begin{tabular}{||c||l|l|l||}\hline
& ~~~~Singularity Removal Constraints for $\gamma e^{j \theta}=-0.5+0.5j$ &~~~~~~~~ Latin Square Constraints for $\gamma e^{j \theta}=-0.5+0.5j$ & Cluster \\\hline \hline
(1) & $\left\{((0,0),(1,3)), ((1,3),(0,0)), ((0,0),(0,0)), ((1,3),(1,3)) \right\} $ & $\left\{((0,1),(0,3)), ((1,0),(3,0)), ((0,0),(0,0)), ((1,1),(3,3)) \right\} $ & $\mathcal{L}_{1}$ \\\hline
(2) & $\left\{((1,1),(3,2)), ((3,2),(1,1)),((1,1),(1,1)),((3,2),(3,2))\right\} $ & $\left\{((1,3),(1,2)), ((3,1),(2,1)),((1,1),(1,1)),((3,3),(2,2))\right\}$ & $\mathcal{L}_{2}$ \\\hline
(3) & $\left\{((0,1),(2,2)), ((2,2),(0,1)),((0,1),(0,1)),((2,2),(2,2))\right\} $ & $\left\{((0,2),(1,2)), ((2,0),(2,1)),((0,0),(1,1)),((2,2),(2,2))\right\} $ & $\mathcal{L}_{3}$ \\\hline
(4) & $\left\{((2,0),(3,3)), ((3,3),(2,0)),((2,0),(2,0)),((3,3),(3,3))\right\} $ & $\left\{((2,3),(0,3)), ((3,2),(3,0)),((2,2),(0,0)),((3,3),(3,3))\right\} $ & $\mathcal{L}_{4}$ \\\hline
(5) & $\left\{((0,0),(0,1)), ((1,3),(2,2)),((1,3),(0,1)),((0,0),(2,2))\right\} $ & $\left\{((0,0),(0,1)), ((1,2),(3,2)),((1,0),(3,1)),((0,2),(0,2))\right\} $ & $\mathcal{L}_{3}$ \\\hline
(6) & $\left\{((0,0),(1,1)), ((1,3),(3,2)), ((1,3),(1,1)), ((0,0),(3,2)) \right\} $ & $\left\{((0,0),(0,1)), ((1,3),(1,0)), ((1,3),(0,1)), ((0,0),(1,0)) \right\} $ & $\mathcal{L}_{2}$ \\\hline
(7) & $\left\{((0,0),(2,0)), ((1,3),(3,3)), ((1,3),(2,0)), ((0,0),(3,3)) \right\} $ & $\left\{((0,2),(0,0)), ((1,3),(3,3)), ((1,2),(3,0)), ((0,3),(0,3)) \right\}$ & $\mathcal{L}_{5}$ \\\hline
(8) & $\left\{((0,1),(0,0)), ((2,2),(1,3)), ((2,2),(0,0)), ((0,1),(1,3)) \right\} $ & $\left\{((0,0),(1,0)), ((2,1),(2,3)), ((2,0),(2,0)), ((0,1),(1,3)) \right\}$ & $\mathcal{L}_{4}$ \\\hline
(9) & $\left\{((0,1),(1,1)), ((2,2),(3,2)), ((2,2),(3,2)), ((0,1),(1,1)) \right\} $ & $\left\{((0,1),(1,1)), ((2,3),(2,2)), ((2,3),(2,2)), ((0,1),(1,1)) \right\}$ & $\mathcal{L}_{6}$ \\\hline
(10)& $\left\{((0,1),(2,0)), ((2,2),(3,3)), ((2,2),(3,3)), ((0,1),(2,0)) \right\} $ & $\left\{((0,2),(1,0)), ((2,3),(2,3)), ((2,3),(2,3)), ((0,2),(1,0)) \right\}$ & $\mathcal{L}_{1}$ \\\hline
(11) & $\left\{((1,1),(0,0)), ((3,2),(1,3)), ((3,2),(0,0)), ((1,1),(1,3)) \right\} $ & $\left\{((1,0),(1,0)), ((3,1),(2,3)), ((3,0),(2,0)), ((1,1),(1,3)) \right\}$ & $\mathcal{L}_{5}$ \\\hline
(12) & $\left\{((1,1),(0,1)), ((3,2),(2,2)), ((3,2),(0,1)), ((1,1),(2,2)) \right\} $ & $\left\{((1,0),(1,1)), ((3,2),(2,2)), ((3,0),(2,1)), ((1,2),(1,2)) \right\}$ & $\mathcal{L}_{7}$ \\\hline
(13) & $\left\{((1,1),(2,0)), ((3,2),(3,3)), ((3,2),(2,0)), ((1,1),(3,3)) \right\} $ & $\left\{((1,2),(1,0)), ((3,3),(2,3)), ((3,2),(2,0)), ((1,3),(1,3)) \right\}$ & $\mathcal{L}_{6}$ \\\hline
(14) & $\left\{((2,0),(0,0)), ((3,3),(1,3)), ((3,3),(0,0)), ((2,0),(1,3)) \right\} $ & $\left\{((2,0),(0,0)), ((3,1),(3,3)), ((3,0),(3,0)), ((2,1),(0,3)) \right\}$ & $\mathcal{L}_{8}$ \\\hline
(15) & $\left\{((2,0),(0,1)), ((3,3),(2,2)), ((3,3),(0,1)), ((2,0),(2,2)) \right\} $ & $\left\{((2,0),(0,1)), ((3,2),(3,2)), ((3,0),(3,1)), ((2,2),(0,2)) \right\}$ & $\mathcal{L}_{9}$ \\\hline
(16) & $\left\{((2,0),(1,1)), ((3,3),(3,2)), ((3,3),(1,1)), ((2,0),(3,2)) \right\} $ & $\left\{((2,1),(0,1)), ((3,3),(3,2)), ((3,1),(3,1)), ((2,3),(0,2)) \right\}$ & $\mathcal{L}_{7}$ \\\hline
(17) & $\left\{((0,0),(0,2)), ((1,3),(0,2)) \right\} $ & $\left\{((0,0),(0,2)), ((1,0),(3,2))\right\}$ & $\mathcal{L}_{6}$ \\\hline
(18) & $\left\{((0,0),(0,3)), ((1,3),(0,3)) \right\} $ & $\left\{((0,0),(0,3)), ((1,0),(3,3))\right\}$ & $\mathcal{L}_{9}$ \\\hline
(19) & $\left\{((0,0),(1,0)), ((1,3),(1,0)) \right\} $ & $\left\{((0,1),(0,0)), ((1,1),(3,0))\right\}$ & $\mathcal{L}_{7}$ \\\hline
(20) & $\left\{((0,0),(1,2)), ((1,3),(1,2)) \right\} $ & $\left\{((0,1),(0,2)), ((1,1),(3,2))\right\}$ & $\mathcal{L}_{8}$ \\\hline
(21) & $\left\{((0,0),(2,1)), ((1,3),(2,1)) \right\} $ & $\left\{((0,2),(0,1)), ((1,2),(3,1))\right\}$ & $\mathcal{L}_{4}$ \\\hline
(22) & $\left\{((0,0),(2,3)), ((1,3),(2,3)) \right\} $ & $\left\{((0,2),(0,3)), ((1,2),(3,3))\right\}$ & $\mathcal{L}_{10}$ \\\hline
(23) & $\left\{((0,0),(3,0)), ((1,3),(3,0)) \right\} $ & $\left\{((0,3),(0,0)), ((1,3),(3,0))\right\}$ & $\mathcal{L}_{9}$ \\\hline
(24) & $\left\{((0,0),(3,1)), ((1,3),(3,1)) \right\} $ & $\left\{((0,3),(0,1)), ((1,3),(3,1))\right\}$ & $\mathcal{L}_{8}$ \\\hline
(25) & $\left\{((0,2),(0,0)), ((0,2),(1,3)) \right\} $ & $\left\{((0,0),(2,0)), ((0,1),(2,3))\right\}$ & $\mathcal{L}_{10}$ \\\hline
(26) & $\left\{((0,3),(0,0)), ((0,3),(1,3)) \right\} $ & $\left\{((0,0),(3,0)), ((0,1),(3,3))\right\}$ & $\mathcal{L}_{11}$ \\\hline
(27) & $\left\{((1,0),(0,0)), ((1,0),(1,3)) \right\} $ & $\left\{((1,0),(0,0)), ((1,1),(0,3))\right\}$ & $\mathcal{L}_{11}$\\\hline
(28) & $\left\{((1,2),(0,0)), ((1,2),(1,3)) \right\} $ & $\left\{((1,0),(2,0)), ((1,1),(2,3))\right\}$ & $\mathcal{L}_{12}$\\\hline
(29) & $\left\{((2,1),(0,0)), ((2,1),(1,3)) \right\} $ & $\left\{((2,0),(1,0)), ((2,1),(1,3))\right\}$ & $\mathcal{L}_{3}$ \\\hline
(30) & $\left\{((2,3),(0,0)), ((2,3),(1,3)) \right\} $ & $\left\{((2,0),(3,0)), ((2,1),(3,3))\right\}$ & $\mathcal{L}_{12}$\\\hline
(31) & $\left\{((3,0),(0,0)), ((3,0),(1,3)) \right\} $ & $\left\{((3,0),(0,0)), ((3,1),(0,3))\right\}$ & $\mathcal{L}_{2}$ \\\hline
(32) & $\left\{((3,1),(0,0)), ((3,1),(1,3)) \right\} $ & $\left\{((3,0),(1,0)), ((3,1),(1,3))\right\}$ & $\mathcal{L}_{10}$\\\hline
(33) & $\left\{((0,1),(0,2)), ((2,2),(0,2)) \right\} $ & $\left\{((0,0),(1,2)), ((2,0),(2,2))\right\}$ & $\mathcal{L}_{5}$\\\hline
(34) & $\left\{((0,1),(0,3)), ((2,2),(0,3)) \right\} $ & $\left\{((0,0),(1,3)), ((2,0),(2,3)) \right\}$ & $\mathcal{L}_{7}$\\\hline
(35) & $\left\{((0,1),(1,0)), ((2,2),(1,0)) \right\} $ & $\left\{((0,1),(1,0)), ((2,1),(2,0)) \right\}$ & $\mathcal{L}_{9}$\\\hline
(36) & $\left\{((0,1),(1,2)), ((2,2),(1,2)) \right\} $ & $\left\{((0,1),(1,2)), ((2,1),(2,2)) \right\}$ & $\mathcal{L}_{13}$\\\hline
(37) & $\left\{((0,1),(2,1)), ((2,2),(2,1)) \right\} $ & $\left\{((0,2),(1,1)), ((2,2),(2,1)) \right\}$ & $\mathcal{L}_{8}$\\\hline
(38) & $\left\{((0,1),(2,3)), ((2,2),(2,3)) \right\} $ & $\left\{((0,2),(1,3)), ((2,2),(2,3)) \right\}$ & $\mathcal{L}_{11}$\\\hline
(39) & $\left\{((0,1),(3,0)), ((2,2),(3,0)) \right\} $ & $\left\{((0,3),(1,0)), ((2,3),(2,0)) \right\}$ & $\mathcal{L}_{11}$\\\hline
(40) & $\left\{((0,1),(3,1)), ((2,2),(3,1)) \right\} $ & $\left\{((0,3),(1,1)), ((2,3),(2,1)) \right\}$ & $\mathcal{L}_{10}$\\\hline
(41) & $\left\{((0,2),(0,1)), ((0,2),(2,2)) \right\} $ & $\left\{((0,0),(2,1)), ((0,2),(2,2)) \right\}$ & $\mathcal{L}_{12}$\\\hline
(42) & $\left\{((0,3),(0,1)), ((0,3),(2,2)) \right\} $ & $\left\{((0,0),(3,1)), ((0,2),(3,2)) \right\}$ & $\mathcal{L}_{13}$\\\hline
(43) & $\left\{((1,0),(0,1)), ((1,0),(2,2)) \right\} $ & $\left\{((1,0),(0,1)), ((1,2),(0,2)) \right\}$ & $\mathcal{L}_{13}$\\\hline
(44) & $\left\{((1,2),(0,1)), ((1,2),(2,2)) \right\} $ & $\left\{((1,0),(2,1)), ((1,2),(2,2)) \right\}$ & $\mathcal{L}_{14}$\\\hline
(45) & $\left\{((2,1),(0,1)), ((2,1),(2,2)) \right\} $ & $\left\{((2,0),(1,1)), ((2,2),(1,2)) \right\}$ & $\mathcal{L}_{14}$\\\hline
(46) & $\left\{((2,3),(0,1)), ((2,3),(2,2)) \right\} $ & $\left\{((2,0),(3,1)), ((2,2),(3,2)) \right\}$ & $\mathcal{L}_{10}$\\\hline
(47) & $\left\{((3,0),(0,1)), ((3,0),(2,2)) \right\} $ & $\left\{((3,0),(0,1)), ((3,2),(0,2)) \right\}$ & $\mathcal{L}_{1}$\\\hline
(48) & $\left\{((3,1),(0,1)), ((3,1),(2,2)) \right\} $ & $\left\{((3,0),(1,1)), ((3,2),(1,2)) \right\}$ & $\mathcal{L}_{11}$\\\hline
(49) & $\left\{((1,1),(0,2)), ((3,2),(0,2)) \right\} $ & $\left\{((1,0),(1,2)), ((3,0),(2,2)) \right\}$ & $\mathcal{L}_{4}$\\\hline
(50) & $\left\{((1,1),(0,3)), ((3,2),(0,3)) \right\} $ & $\left\{((1,0),(1,3)), ((3,0),(2,3)) \right\}$ & $\mathcal{L}_{15}$\\\hline
(51) & $\left\{((1,1),(1,0)), ((3,2),(1,0)) \right\} $ & $\left\{((1,1),(1,0)), ((3,1),(2,0)) \right\}$ & $\mathcal{L}_{13}$\\\hline
(52) & $\left\{((1,1),(1,2)), ((3,2),(1,2)) \right\} $ & $\left\{((1,1),(1,2)), ((3,1),(2,2)) \right\}$ & $\mathcal{L}_{9}$\\\hline
(53) & $\left\{((1,1),(2,1)), ((3,2),(2,1)) \right\} $ & $\left\{((1,2),(1,1)), ((3,2),(2,1)) \right\}$ & $\mathcal{L}_{15}$\\\hline
(54) & $\left\{((1,1),(2,3)), ((3,2),(2,3)) \right\} $ & $\left\{((1,2),(1,3)), ((3,2),(2,3)) \right\}$ & $\mathcal{L}_{2}$\\\hline
(55) & $\left\{((1,1),(3,0)), ((3,2),(3,0)) \right\} $ & $\left\{((1,3),(1,0)), ((3,3),(2,0)) \right\}$ & $\mathcal{L}_{14}$\\\hline
(56) & $\left\{((1,1),(3,1)), ((3,2),(3,1)) \right\} $ & $\left\{((1,3),(1,1)), ((3,3),(2,1)) \right\}$ & $\mathcal{L}_{1}$\\\hline
(57) & $\left\{((0,2),(1,1)), ((0,2),(3,2)) \right\} $ & $\left\{((0,1),(2,1)), ((0,3),(2,2)) \right\}$ & $\mathcal{L}_{16}$\\\hline
(58) & $\left\{((0,3),(1,1)), ((0,3),(3,2)) \right\} $ & $\left\{((0,1),(3,1)), ((0,3),(3,2)) \right\}$ & $\mathcal{L}_{12}$\\\hline
(59) & $\left\{((1,0),(1,1)), ((1,0),(3,2)) \right\} $ & $\left\{((1,1),(0,1)), ((1,3),(0,2)) \right\}$ & $\mathcal{L}_{10}$\\\hline
(60) & $\left\{((1,2),(1,1)), ((1,2),(3,2)) \right\} $ & $\left\{((1,1),(2,1)), ((1,3),(2,2)) \right\}$ & $\mathcal{L}_{17}$\\\hline
(61) & $\left\{((2,1),(1,1)), ((2,1),(3,2)) \right\} $ & $\left\{((2,1),(1,1)), ((2,3),(1,2)) \right\}$ & $\mathcal{L}_{16}$\\\hline
(62) & $\left\{((2,3),(1,1)), ((2,3),(3,2)) \right\} $ & $\left\{((2,1),(3,1)), ((2,3),(3,2)) \right\}$ & $\mathcal{L}_{5}$\\\hline
(63) & $\left\{((3,0),(1,1)), ((3,0),(3,2)) \right\} $ & $\left\{((3,1),(0,1)), ((3,3),(0,2)) \right\}$ & $\mathcal{L}_{11}$\\\hline
(64) & $\left\{((3,1),(1,1)), ((3,1),(3,2)) \right\} $ & $\left\{((3,1),(1,1)), ((3,3),(1,2)) \right\}$ & $\mathcal{L}_{12}$\\\hline
(65) & $\left\{((2,0),(0,2)), ((3,3),(0,2)) \right\} $ & $\left\{((2,0),(0,2)), ((3,0),(3,2)) \right\}$ & $\mathcal{L}_{16}$\\\hline
(66) & $\left\{((2,0),(0,3)), ((3,3),(0,3)) \right\} $ & $\left\{((2,0),(0,3)), ((3,0),(3,3)) \right\}$ & $\mathcal{L}_{6}$\\\hline
(67) & $\left\{((2,0),(1,0)), ((3,3),(1,0)) \right\} $ & $\left\{((2,1),(0,0)), ((3,1),(3,0)) \right\}$ & $\mathcal{L}_{14}$\\\hline
(68) & $\left\{((2,0),(1,2)), ((3,3),(1,2)) \right\} $ & $\left\{((2,1),(0,2)), ((3,1),(3,2)) \right\}$ & $\mathcal{L}_{15}$\\\hline
(69) & $\left\{((2,0),(2,1)), ((3,3),(2,1)) \right\} $ & $\left\{((2,2),(0,1)), ((3,2),(3,1)) \right\}$ & $\mathcal{L}_{16}$\\\hline
(70) & $\left\{((2,0),(2,3)), ((3,3),(2,3)) \right\} $ & $\left\{((2,2),(3,3)), ((3,2),(3,3)) \right\}$ & $\mathcal{L}_{3}$\\\hline
(71) & $\left\{((2,0),(3,0)), ((3,3),(3,0)) \right\} $ & $\left\{((2,3),(0,0)), ((3,3),(3,0)) \right\}$ & $\mathcal{L}_{13}$\\\hline
(72) & $\left\{((2,0),(3,1)), ((3,3),(3,1)) \right\} $ & $\left\{((2,3),(0,1)), ((3,3),(3,1)) \right\}$ & $\mathcal{L}_{15}$\\\hline
(73) & $\left\{((0,2),(2,0)), ((0,2),(3,3)) \right\} $ & $\left\{((0,2),(2,0)), ((0,3),(2,3)) \right\}$ & $\mathcal{L}_{17}$\\\hline
(74) & $\left\{((0,3),(2,0)), ((0,3),(3,3)) \right\} $ & $\left\{((0,2),(3,0)), ((0,3),(3,3)) \right\}$ & $\mathcal{L}_{15}$\\\hline
(75) & $\left\{((1,0),(2,0)), ((1,0),(3,3)) \right\} $ & $\left\{((1,2),(0,0)), ((1,3),(0,3)) \right\}$ & $\mathcal{L}_{12}$\\\hline
(76) & $\left\{((1,2),(2,0)), ((1,2),(3,3)) \right\} $ & $\left\{((1,2),(2,0)), ((1,3),(2,3)) \right\}$ & $\mathcal{L}_{16}$\\\hline
(77) & $\left\{((2,1),(2,0)), ((2,1),(3,3)) \right\} $ & $\left\{((2,2),(1,0)), ((2,3),(1,3)) \right\}$ & $\mathcal{L}_{12}$\\\hline
(78) & $\left\{((2,3),(2,0)), ((2,3),(3,3)) \right\} $ & $\left\{((2,2),(3,0)), ((2,3),(3,3)) \right\}$ & $\mathcal{L}_{17}$\\\hline
(79) & $\left\{((3,0),(2,0)), ((3,0),(3,3)) \right\} $ & $\left\{((3,2),(0,0)), ((3,3),(0,3)) \right\}$ & $\mathcal{L}_{17}$\\\hline
(80) & $\left\{((3,1),(2,0)), ((3,1),(3,3)) \right\} $ & $\left\{((3,2),(1,0)), ((3,3),(1,3)) \right\}$ & $\mathcal{L}_{8}$\\\hline
\end{tabular}
}
\label{case2}
\vspace{-.1 cm}
\caption{Singularity Removal Constraints for $\gamma e^{j \theta}=-0.5+0.5j$}
\end{figure*}
\begin{figure*}
{\footnotesize
{
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{||c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c||}\hline
&$(0,0)$&$(0,1)$&$(0,2)$&$(0,3)$&$(1,0)$&$(1,1)$&$(1,2)$&$(1,3)$&$(2,0)$&$(2,1)$&$(2,2)$&$(2,3)$&$(3,0)$&$(3,1)$&$(3,2)$&$(3,3)$ \\\hline \hline
$(0,0)$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{8}$} & \pmb{$\mathcal{L}_{14}$} & $\mathcal{L}_{11}$ & $\mathcal{L}_{13}$ & \pmb{$\mathcal{L}_{17}$} & \pmb{$\mathcal{L}_{18}$} \\\hline
$(0,1)$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{4}$ & \pmb{$\mathcal{L}_{3}$} & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{15}$} & $\mathcal{L}_{10}$ & \pmb{$\mathcal{L}_{18}$} & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{14}$} & $\mathcal{L}_{11}$ \\\hline
$(0,2)$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{17}$ & \pmb{$\mathcal{L}_{9}$} & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{18}$} & $\mathcal{L}_{15}$ & \pmb{$\mathcal{L}_{6}$} & $\mathcal{L}_{13}$ & \pmb{$\mathcal{L}_{7}$} \\\hline
$(0,3)$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{1}$ & \pmb{$\mathcal{L}_{7}$} & \pmb{$\mathcal{L}_{4}$} & $\mathcal{L}_{16}$ & $\mathcal{L}_{17}$ & \pmb{$\mathcal{L}_{3}$} & \pmb{$\mathcal{L}_{14}$} & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ \\\hline
$(1,0)$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{13}$ & \pmb{$\mathcal{L}_{17}$} & \pmb{$\mathcal{L}_{16}$} & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{10}$} & \pmb{$\mathcal{L}_{8}$} & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{9}$ \\\hline
$(1,1)$ & \pmb{$\mathcal{L}_{6}$} & $\mathcal{L}_{10}$ & \pmb{$\mathcal{L}_{4}$} & $\mathcal{L}_{11}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{5}$ & \pmb{$\mathcal{L}_{15}$} & $\mathcal{L}_{17}$ & \pmb{$\mathcal{L}_{18}$} & $\mathcal{L}_{12}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{1}$ \\\hline
$(1,2)$ & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{17}$} & $\mathcal{L}_{13}$ & \pmb{$\mathcal{L}_{18}$} & $\mathcal{L}_{6}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{11}$} & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{9}$} & $\mathcal{L}_{5}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{10}$ \\\hline
$(1,3)$ & \pmb{$\mathcal{L}_{15}$} & \pmb{$\mathcal{L}_{18}$} & $\mathcal{L}_{10}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{6}$ & \pmb{$\mathcal{L}_{19}$} & \pmb{$\mathcal{L}_{13}$} & $\mathcal{L}_{17}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ \\\hline
$(2,0)$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{1}$} & \pmb{$\mathcal{L}_{13}$} & $\mathcal{L}_{4}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & \pmb{$\mathcal{L}_{11}$} & \pmb{$\mathcal{L}_{19}$} \\\hline
$(2,1)$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{8}$ & \pmb{$\mathcal{L}_{2}$} & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{10}$} & $\mathcal{L}_{3}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{4}$ & \pmb{$\mathcal{L}_{19}$} & $\mathcal{L}_{5}$ & \pmb{$\mathcal{L}_{1}$} & $\mathcal{L}_{12}$ \\\hline
$(2,2)$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{5}$} & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{18}$} & $\mathcal{L}_{1}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{17}$ & \pmb{$\mathcal{L}_{19}$}& $\mathcal{L}_{10}$ & \pmb{$\mathcal{L}_{13}$} \\\hline
$(2,3)$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{4}$ & \pmb{$\mathcal{L}_{18}$} & \pmb{$\mathcal{L}_{9}$} & $\mathcal{L}_{16}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{1}$ & \pmb{$\mathcal{L}_{2}$} & \pmb{$\mathcal{L}_{20}$} & $\mathcal{L}_{5}$ & $\mathcal{L}_{17}$ \\\hline
$(3,0)$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{1}$ & \pmb{$\mathcal{L}_{12}$} & \pmb{$\mathcal{L}_{13}$} & $\mathcal{L}_{10}$ & $\mathcal{L}_{11}$ & \pmb{$\mathcal{L}_{17}$} & \pmb{$\mathcal{L}_{14}$} & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{6}$ \\\hline
$(3,1)$ & \pmb{$\mathcal{L}_{16}$} & $\mathcal{L}_{11}$ & \pmb{$\mathcal{L}_{18}$} & $\mathcal{L}_{2}$ & \pmb{$\mathcal{L}_{17}$} & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{19}$} & $\mathcal{L}_{10}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{8}$ \\\hline
$(3,2)$ & $\mathcal{L}_{17}$ & \pmb{$\mathcal{L}_{5}$} & $\mathcal{L}_{1}$ & \pmb{$\mathcal{L}_{14}$} & $\mathcal{L}_{8}$ & \pmb{$\mathcal{L}_{13}$} & $\mathcal{L}_{11}$ & \pmb{$\mathcal{L}_{19}$} & $\mathcal{L}_{6}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{16}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{3}$ \\\hline
$(3,3)$ & \pmb{$\mathcal{L}_{10}$} & \pmb{$\mathcal{L}_{19}$} & $\mathcal{L}_{11}$ & $\mathcal{L}_{17}$ & \pmb{$\mathcal{L}_{16}$} & \pmb{$\mathcal{L}_{18}$} & $\mathcal{L}_{12}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{4}$ \\\hline
\end{tabular}
}
}
\label{table2}
\caption{Latin Square representing the clustering at the relay for the case $\gamma e^{j \theta}=-0.5+0.5j$ obtained using Direct Clustering, with the 4-PSK symbols that A(B) sent in the first and second channel use along the rows(columns)}
\end{figure*}
\begin{example} Consider the case for which $\gamma e^{j \theta}=-0.5+0.5j$. The singularity removal constraints for the case $\gamma e^{j \theta}=-0.5+0.5j$ in two-way 2-stage relaying as given in \cite{NMR} are:\\
{\small $\left\{(0,0),(1,3)\right\}, \left\{(1,1),(3,2)\right\}, \left\{(0,1),(2,2)\right\} \text{~and~} \left\{(2,0),(3,3)\right\}.$}\\
As a result, the singularity removal constraints for the two-way ACF relaying are as shown in Fig. 12 and Fig. 13. The clusters, as shown in the third column of the table, are chosen such that each cluster satisfies the mutually exclusive laws given by (\ref{mel1}) and (\ref{mel2}). \\
The constraints can be represented using 17 symbols. In order to complete the Latin Square, we use Algorithm 1. A total of 20 symbols suffice to complete the array. Fig. 14 shows the Latin Square representing the clustering at the relay for the case $\gamma e^{j \theta}=-0.5+0.5j$, with the 4-PSK symbols that A(B) sent in the first and second channel use along the rows(columns).\\
\end{example}
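For illustration, the completion step can be viewed as a standard backtracking search which fills the empty cells of the constrained array so that no symbol repeats in any row or column. The following Python sketch is generic and is not Algorithm 1 itself; the cell ordering and the symbol budget \texttt{t} are illustrative assumptions.
{\footnotesize
\begin{verbatim}
def complete_array(grid, t):
    # Backtracking completion of a partially filled array with symbols
    # 0..t-1 so that no symbol repeats in a row or a column.  Empty
    # cells hold None.  Generic sketch; it stands in for Algorithm 1.
    N = len(grid)
    cells = [(r, c) for r in range(N) for c in range(N)
             if grid[r][c] is None]

    def ok(r, c, s):
        return all(grid[r][j] != s for j in range(N)) and \
               all(grid[i][c] != s for i in range(N))

    def solve(k):
        if k == len(cells):
            return True
        r, c = cells[k]
        for s in range(t):
            if ok(r, c, s):
                grid[r][c] = s
                if solve(k + 1):
                    return True
                grid[r][c] = None
        return False

    return solve(0)
\end{verbatim}
}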
\noindent \textbf{\textit{Case 3:} $\gamma e^{j \theta} $ lies on the circle of radius $\sqrt{2}$.}\\
In this case, there are a total of $80$ singularity removal constraints as stated in the following lemma:\\
\begin{lemma} When $\gamma e^{j \theta} $ lies on the circle of radius $ \sqrt{2}$, there are a total of $80$ singularity removal constraints.
\end{lemma}
\textit{We omit the proof of this lemma, as it is the same as that of Lemma 3.}
The resulting constrained $16 \times 16$ array can be completed with 20 symbols as shown in the following example:\\
\begin{example} Consider the case for which $\gamma e^{j \theta}=-1+j$. The singularity removal constraints for the case $\gamma e^{j \theta}=-1+j$ in two-way 2-stage relaying as given in \cite{NMR} are:\\
{\small $\left\{(0,0),(3,2)\right\}, \left\{(0,1),(3,3)\right\}, \left\{(1,1),(2,0)\right\} \text{~and~} \left\{(1,3),(2,2)\right\}.$}\\
As a result, the singularity removal constraints for the two-way ACF relaying can be represented by the bold entries in the $16 \times 16$ Latin Square. These constraints can be represented using 18 symbols. In order to complete the Latin Square using Algorithm 1, a total of 20 symbols suffice. Fig. 15 represents the clustering. The singularity removal constraints for this example are given in Appendix C.\\
\begin{figure*}
{\footnotesize
{
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{||c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c||}\hline
&$(0,0)$&$(0,1)$&$(0,2)$&$(0,3)$&$(1,0)$&$(1,1)$&$(1,2)$&$(1,3)$&$(2,0)$&$(2,1)$&$(2,2)$&$(2,3)$&$(3,0)$&$(3,1)$&$(3,2)$&$(3,3)$ \\\hline \hline
$(0,0)$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{15} $} & \pmb{$\mathcal{L}_{16} $} & $\mathcal{L}_{11}$ & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{13} $} & \pmb{$\mathcal{L}_{17} $} \\\hline
$(0,1)$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{2}$ & \pmb{$\mathcal{L}_{6} $} & $\mathcal{L}_{13}$ & \pmb{$\mathcal{L}_{11} $} & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{15} $} & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{18} $} & $\mathcal{L}_{16}$ \\\hline
$(0,2)$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{13}$ & \pmb{$\mathcal{L}_{18} $} & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{7} $} & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{19} $} & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{20} $} \\\hline
$(0,3)$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ & \pmb{$\mathcal{L}_{15} $} & \pmb{$\mathcal{L}_{5} $} & $\mathcal{L}_{8}$ & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{17} $} & \pmb{$\mathcal{L}_{18} $} & $\mathcal{L}_{11}$ & $\mathcal{L}_{14}$ \\\hline
$(1,0)$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{1}$ & \pmb{$\mathcal{L}_{3} $} & \pmb{$\mathcal{L}_{18} $} & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{11}$ & \pmb{$\mathcal{L}_{16} $} & \pmb{$\mathcal{L}_{19} $} & $\mathcal{L}_{5}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{15}$ \\\hline
$(1,1)$ & \pmb{$\mathcal{L}_{10} $} & $\mathcal{L}_{4}$ & \pmb{$\mathcal{L}_{12} $} & $\mathcal{L}_{17}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{8}$ & \pmb{ $\mathcal{L}_{2} $} & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{18} $} & $\mathcal{L}_{13}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{6}$ \\\hline
$(1,2)$ & $\mathcal{L}_{4}$ & \pmb{$\mathcal{L}_{11} $} & $\mathcal{L}_{17}$ & \pmb{ $\mathcal{L}_{3} $} & $\mathcal{L}_{5}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{14} $} & $\mathcal{L}_{13}$ & \pmb{$\mathcal{L}_{17} $} & $\mathcal{L}_{7}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{1}$ \\\hline
$(1,3)$ & \pmb{$\mathcal{L}_{16} $} & \pmb{$\mathcal{L}_{18} $} & $\mathcal{L}_{2}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ & \pmb{$\mathcal{L}_{4} $} & \pmb{$\mathcal{L}_{9} $} & $\mathcal{L}_{10}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{8}$ \\\hline
$(2,0)$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & \pmb{$\mathcal{L}_{1} $} & \pmb{$\mathcal{L}_{16} $} & $\mathcal{L}_{5}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ & \pmb{$\mathcal{L}_{14} $} & \pmb{$\mathcal{L}_{10} $} \\\hline
$(2,1)$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{8}$ & \pmb{$\mathcal{L}_{9} $} & $\mathcal{L}_{2}$ & \pmb{$\mathcal{L}_{10} $} & $\mathcal{L}_{18}$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{6}$ & \pmb{$\mathcal{L}_{19} $} & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{1} $} & $\mathcal{L}_{12}$ \\\hline
$(2,2)$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{2}$ & \pmb{$\mathcal{L}_{13} $} & $\mathcal{L}_{18}$ & \pmb{$\mathcal{L}_{11} $} & $\mathcal{L}_{7}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{4} $} & $\mathcal{L}_{12}$ & \pmb{$\mathcal{L}_{19} $} \\\hline
$(2,3)$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{12}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{7}$ & \pmb{$\mathcal{L}_{16} $} & \pmb{$\mathcal{L}_{18} $} & $\mathcal{L}_{3}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{14}$ & $\mathcal{L}_{17}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{8}$ & \pmb{$\mathcal{L}_{2} $} & \pmb{$\mathcal{L}_{13} $} & $\mathcal{L}_{9}$ & $\mathcal{L}_{11}$ \\\hline
$(3,0)$ & $\mathcal{L}_{11}$ & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{18} $} & \pmb{$\mathcal{L}_{6} $} & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ & \pmb{$\mathcal{L}_{16} $} & \pmb{$\mathcal{L}_{13} $} & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{5}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{9}$ \\\hline
$(3,1)$ & \pmb{$\mathcal{L}_{12} $} & $\mathcal{L}_{13}$ & \pmb{$\mathcal{L}_{5} $} & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{17} $} & $\mathcal{L}_{11}$ & \pmb{$\mathcal{L}_{19} $} & $\mathcal{L}_{14}$ & $\mathcal{L}_{8}$ & $\mathcal{L}_{3}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{7}$ & $\mathcal{L}_{2}$ \\\hline
$(3,2)$ & $\mathcal{L}_{13}$ & \pmb{$\mathcal{L}_{8} $} & $\mathcal{L}_{16}$ & \pmb{$\mathcal{L}_{15} $} & $\mathcal{L}_{11}$ & \pmb{$\mathcal{L}_{17} $} & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{19} $} & $\mathcal{L}_{3}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{4}$ \\\hline
$(3,3)$ & \pmb{$\mathcal{L}_{7} $} & \pmb{$\mathcal{L}_{16} $} & $\mathcal{L}_{11}$ & $\mathcal{L}_{14}$ & \pmb{$\mathcal{L}_{8} $} & \pmb{$\mathcal{L}_{19} $} & $\mathcal{L}_{12}$ & $\mathcal{L}_{15}$ & $\mathcal{L}_{9}$ & $\mathcal{L}_{10}$ & $\mathcal{L}_{1}$ & $\mathcal{L}_{2}$ & $\mathcal{L}_{13}$ & $\mathcal{L}_{6}$ & $\mathcal{L}_{4}$ & $\mathcal{L}_{3}$ \\\hline
\end{tabular}
}
}
\label{table3}
\caption{Latin Square representing the clustering at the relay for the case $\gamma e^{j \theta}=-1+j$, with the 4-PSK symbols that A(B) sent in the first and second channel use along the rows(columns)}
\end{figure*}
\end{example}
\section{QUANTIZATION OF THE COMPLEX FADE STATE PLANE}
In practice, $\gamma e^{j\theta}$ can take any value in the complex plane (it takes a value equal to one of the singular fade states with zero probability). As explained in Section II, one of the Latin Squares obtained, which remove the singular fade states, needs to be chosen, depending on the value of $\gamma e^{j\theta}$. For a $\gamma e^{j\theta}$ which is not a singular fade state, among all the Latin Squares which remove the singular fade states, the Latin Square $\mathcal{C}^{h}, h \in \mathcal{H}$ which has the maximum value of the minimum cluster distance at $\gamma e^{j\theta}$ is chosen. In other words, for a given $\gamma e^{j\theta} \notin \mathcal{H},$ the clustering is chosen to be the one which removes the singular fade state $h \in \mathcal{H}$ which maximizes the metric $d^{2}_{min}\left(\mathcal{C}^{h}, \gamma e^{j \theta}\right)$ given in \eqref{cl3}. In this way, the $\gamma e^{j\theta}$-plane is quantized into $\vert \mathcal{H} \vert$ regions, depending on which one of the obtained Latin Squares is chosen.\\
For $(x_A ,x_B) \neq (x'_A,x'_B) \in \mathcal{S}^2,$ let $\mathcal{D}(\gamma,\theta,x_{A},x_{B},x'_A,x'_B)$ be defined as,
{\vspace{-.3 cm}
\begin{align}
\label{decision_metric}
\mathcal{D}(\gamma,\theta,x_A,x_B,x'_A,x'_B)&=\vert (x_A-x'_A)+\gamma e^{j\theta} (x_B-x'_B)\vert.
\end{align}
}
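For a given fade state, the minimization of \eqref{decision_metric} can be carried out by exhaustive search over $\mathcal{S}^2 \times \mathcal{S}^2$. The following Python sketch illustrates this for 4-PSK; the unit-energy signal set $\{1, j, -1, -j\}$ is an assumed normalization, and pairs with $x_B = x'_B$ are skipped since they do not define a singular fade state.
{\footnotesize
\begin{verbatim}
import itertools, cmath

# assumed unit-energy 4-PSK signal set {1, j, -1, -j}
S = [cmath.exp(1j * cmath.pi * k / 2) for k in range(4)]

def decision_metric(h, xA, xB, xAp, xBp):
    # |(x_A - x'_A) + h (x_B - x'_B)|, h standing for gamma e^{j theta}
    return abs((xA - xAp) + h * (xB - xBp))

def chosen_singular_fade_state(h):
    # exhaustive minimization of the metric; the returned value
    # -(x_A - x'_A)/(x_B - x'_B) is the singular fade state whose
    # clustering the relay uses for the fade state h
    best, arg = float("inf"), None
    for xA, xB, xAp, xBp in itertools.product(S, repeat=4):
        if xB == xBp:
            continue
        d = decision_metric(h, xA, xB, xAp, xBp)
        if d < best:
            best, arg = d, -(xA - xAp) / (xB - xBp)
    return arg
\end{verbatim}
}
For instance, when $h$ is itself a singular fade state, the metric attains zero and the sketch returns $h$, consistent with the relay using the clustering that removes that fade state.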
In the following lemma, it is shown that for a given $\gamma e^{j \theta},$ choosing the clustering $\mathcal{C}^h \in \mathcal{C}^\mathcal{H},$ where $h \in \mathcal{H},$ that maximizes $d^{2}_{min}\left(\mathcal{C}^{h}, \gamma e^{j \theta}\right)$ given in \eqref{cl3}, is the same as choosing the clustering $\mathcal{C}^{\left[-\frac{\mathsf{x_A}-\mathsf{x'_A}}{\mathsf{x_B}-\mathsf{x'_B}}\right]},$ where $(\mathsf{x_A} ,\mathsf{x_B}) \neq (\mathsf{x'_A},\mathsf{x'_B}) \in \mathcal{S}^2,$ that minimizes the simpler metric given in \eqref{decision_metric}.\\
\begin{lemma}
\label{lemma_criterion}
If the complex fade state $\gamma e^{j \theta}$ and the clustering $\mathcal{C}^{\left[-\frac{\mathsf{x_A}-\mathsf{x'_A}}{\mathsf{x_B}-\mathsf{x'_B}}\right]} \in \mathcal{C}^{\mathcal{H}}$ are such that, {\footnotesize $$\arg\min_{(x_A,x_B) \neq (x'_A,x'_B) \in \mathcal{S}^2} \mathcal{D}(\gamma,\theta,x_A,x_B,x'_A,x'_B)=(\mathsf{x_A},\mathsf{x_B},\mathsf{x'_A},\mathsf{x'_B}),$$}then $\left[-\frac{\mathsf{x_A}-\mathsf{x'_A}}{\mathsf{x_B}-\mathsf{x'_B}}\right] \in \mathcal{H},$ maximizes the metric $d^{2}_{min}\left(\mathcal{C}^{h}, \gamma e^{j \theta}\right)$ given in \eqref{cl3}, among all $h \in \mathcal{H}.$
\begin{proof}
The squared minimum distance of the effective constellation at the relay $d^{2}_{min}(\gamma e^{j\theta})$ is given by \eqref{dist}.\\
\begin{figure*}
\scriptsize
\begin{align}
\label{eqn_f1}
f_1(\gamma e^{j\theta})&=\min_{(x_{A_1},x_{B_1}) \neq (x'_{A_1},x'_{B_1}),(x_{A_2},x_{B_2}) = (x'_{A_2},x'_{B_2})} \left \lbrace \vert (x_{A_1}-x'_{A_1})+\gamma e^{j\theta} (x_{B_1}-x'_{B_1})\vert^2+\vert (x_{A_2}-x'_{A_2})+\gamma e^{j\theta} (x_{B_2}-x'_{B_2})\vert^2\right\rbrace\\
\label{eqn_f2}
f_2(\gamma e^{j\theta})&=\min_{(x_{A_1},x_{B_1}) = (x'_{A_1},x'_{B_1}),(x_{A_2},x_{B_2}) \neq (x'_{A_2},x'_{B_2})} \left \lbrace \vert (x_{A_1}-x'_{A_1})+\gamma e^{j\theta} (x_{B_1}-x'_{B_1})\vert^2+\vert (x_{A_2}-x'_{A_2})+\gamma e^{j\theta} (x_{B_2}-x'_{B_2})\vert^2\right\rbrace\\
\label{eqn_f3}
f_3(\gamma e^{j\theta})&=\min_{(x_{A_1},x_{B_1}) \neq (x'_{A_1},x'_{B_1}),(x_{A_2},x_{B_2}) \neq (x'_{A_2},x'_{B_2})} \left \lbrace \vert (x_{A_1}-x'_{A_1})+\gamma e^{j\theta} (x_{B_1}-x'_{B_1})\vert^2+\vert (x_{A_2}-x'_{A_2})+\gamma e^{j\theta} (x_{B_2}-x'_{B_2})\vert^2\right\rbrace
\end{align}
\hrule
\end{figure*}
Let $f_1$, $f_2$ and $f_3$ be functions of $\gamma e^{j\theta}$ defined as in \eqref{eqn_f1}-\eqref{eqn_f3} given on the next page. We have, $$d_{min}^2(\gamma e^{j\theta})=\min\lbrace f_1(\gamma e^{j\theta}),f_2(\gamma e^{j\theta}),f_3(\gamma e^{j\theta})\rbrace.$$ From \eqref{eqn_f1} and \eqref{eqn_f2}, it follows that
{\footnotesize
\begin{equation}
\nonumber
f_1(\gamma e^{j\theta})=f_2(\gamma e^{j \theta})=\hspace{-0.5 cm}\min_{\: (x_A,x_B) \neq (x'_A,x'_B) \in \mathcal{S}^2} \hspace{-0.5 cm}\vert (x_A-x'_A)+\gamma e^{j\theta} (x_B-x'_B)\vert ^2.
\end{equation}
}
From \eqref{eqn_f3}, it can be seen that,
\begin{equation}
\nonumber
f_3(\gamma e^{j\theta})\geq\min_{(x_A,x_B) \neq (x'_A,x'_B) \in \mathcal{S}^2}\vert (x_A-x'_A)+\gamma e^{j\theta} (x_B-x'_B)\vert ^2.
\end{equation}
Hence, we have,
{\footnotesize
\begin{align}
\nonumber
d_{min}^2(\gamma e^{j\theta}) = \min_{(x_A,x_B) \neq (x'_A,x'_B) \in \mathcal{S}^2}\vert (x_A-x'_A)+\gamma e^{j\theta} (x_B-x'_B)\vert ^2.
\end{align}
}
Since {$$\arg\min_{x_A,x_B,x'_A,x'_B} \mathcal{D}(\gamma,\theta,x_A,x_B,x'_A,x'_B)=(\mathsf{x_A},\mathsf{x_B},\mathsf{x'_A},\mathsf{x'_B}),$$} we have
$d_{min}^2(\gamma e^{j\theta}) = \vert (\mathsf{x}_A-\mathsf{x}'_A)+\gamma e^{j\theta} (\mathsf{x}_B-\mathsf{x}'_B)\vert ^2.$
For the clustering $\mathcal{C}^{\left[-\frac{\mathsf{x_A}-\mathsf{x'_A}}{\mathsf{x_B}-\mathsf{x'_B}}\right]}$ which removes the singular fade state $-\frac{\mathsf{x_A}-\mathsf{x'_A}}{\mathsf{x_B}-\mathsf{x'_B}}$, the minimum cluster distance is greater than $d_{min}(\gamma e^{j \theta})$, while for all other clusterings in the set $\mathcal{C}^\mathcal{H}$, it is equal to $d_{min}(\gamma e^{j\theta}).$ This completes the proof.
\end{proof}
\end{lemma}
The decision criterion in Lemma \ref{lemma_criterion}, based on which R chooses one of the Latin Squares obtained, is the same as the decision criterion for the two-way 2-stage relaying in \cite{MNR}. Hence, the quantization of the complex fade state plane for the ACF relaying is the same as that of the two-way 2-stage relaying obtained in \cite{MNR}.\\
\section{SIMULATION RESULTS}
\begin{figure*}[htbp]
\centering
\includegraphics[totalheight=5.5in,width=7in]{tput_curves_4psk.eps}
\caption{SNR vs throughput curves for different schemes for 4-PSK signal set}
\label{fig:tput_curves_4psk}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[totalheight=5.5in,width=7in]{tput_curves_8psk.eps}
\caption{SNR vs throughput curves for different schemes for 8-PSK signal set}
\label{fig:tput_curves_8psk}
\end{figure*}
The simulation results presented are for the case when $H_A$, $H_B$, $H'_A$ and $H'_B$ are Rayleigh distributed, with the variances of all the fading links equal to 0 dB. It is assumed that the AWGN noise at each of the three nodes has variance 0 dB. By SNR, we mean the average energy of the signal sets used at the three nodes A, B and R, which are assumed to be equal. The frame length of a transmission is taken to be 256 bits.\\
Consider the case when 4-PSK signal set is used at A and B. Fig. \ref{fig:tput_curves_4psk} shows the SNR vs end-to-end sum throughput curves for the following schemes: Closest-Neighbour Clustering (CNC) Algorithm based scheme for the two-way 2-stage relaying proposed in \cite{KoPoTa}, the Scheme based on Latin Squares for two-way 2-stage relaying proposed in \cite{NMR}, the scheme in which XOR network code is used irrespective of the channel condition, the Cartesian Product based scheme for ACF (ACF-CP) relaying and the Direct Clustering based scheme for ACF (ACF-DC) relaying. It can be seen from Fig. \ref{fig:tput_curves_4psk} that the schemes based on the ACF relaying perform better than the schemes based on 2-stage relaying at high SNR. From Fig. \ref{fig:tput_curves_4psk}, it follows that when the SNR is greater than 42 dB, the ACF-DC scheme outperforms all other schemes. The maximum throughput achieved by the ACF relaying schemes is 8/3 bits/s/Hz, whereas it is 2 bits/s/Hz for the 2-stage two-way relaying schemes. Also, as seen from Fig. \ref{fig:tput_curves_4psk}, the ACF-DC scheme performs better than the ACF-CP scheme. The reason for this is that the maximum cardinality of the signal set used during the BC phase is 25 for the ACF-CP scheme whereas it is 20 for the ACF-DC scheme.\\
Consider the case when 8-PSK signal set is used at A and B. It was shown in \cite{NMR} that for the two-way 2-stage relaying scheme with 8-PSK signal set, all the clusterings which remove the singular fade states have exactly 8 clusters. Hence, the ACF-CP scheme, in which the clusterings are obtained by taking the Cartesian Product of the clusterings corresponding to the two-way 2-stage relaying scheme, has exactly 64 clusters (note that 64 is the minimum number of clusters required for conveying 6 information bits). Since the Cartesian Product itself results in the minimum number of clusters, the ACF-DC scheme is not considered for this case. Fig. \ref{fig:tput_curves_8psk} shows the SNR vs end-to-end sum throughput curves for the different schemes. Similar to the 4-PSK case, at high SNR, the ACF-CP scheme provides a larger throughput than the 2-stage relaying schemes. The maximum throughput achieved by the ACF-CP scheme is 4 bits/s/Hz, whereas it is 3 bits/s/Hz for the 2-stage relaying schemes.\\
\section{Conclusion}
We proposed a scheme based on the ACF protocol for two-way relaying that uses a total of three channel uses of the wireless two-way relaying channel, unlike the 2-stage protocol which uses four, assuming that the users A and B transmit points from the same 4-PSK constellation. The network codes used at the relay during the Broadcast Phase were obtained using two methods: by taking the Cartesian Product of the clusterings proposed in \cite{NMR} for the two-way 2-stage case, and by completing the Latin Square filled partially with the singularity removal constraints for a given fade state. Using the second method, called Direct Clustering, the maximum size of the resulting constellation used by the relay node R in the BC phase was reduced to 20, compared to the Cartesian Product based approach which results in a constellation of size 25 for these cases. Having obtained all the Latin Squares, the complex plane was quantized depending on which one of the obtained Latin Squares maximizes the minimum cluster distance. This quantization was shown to be the same as that achieved in \cite{MNR} for the two-way 2-stage relaying scenario. Simulation results showed that the ACF protocol based schemes outperform the schemes proposed in \cite{KoPoTa} and \cite{NMR} at high SNR.\\\\
\begin{center}
\textsc{Acknowledgments}
\end{center}
This work was supported partly by the DRDO-IISc program on Advanced Research in Mathematical Engineering through a research grant as well as the INAE Chair Professorship grant to B. S. Rajan.
\section{Introduction}
Given a graph $H$, let $\hat{R}_r(H)$ be the minimum $m$ for which there exists a graph $G$ with $m$ edges such that every $r$-coloring of $G$ contains a monochromatic copy of $H$. When $r=2$, we drop the subscript and just write $\hat{R}(H)$. We refer to $\hat{R}(H)$ as the \emph{size-Ramsey} number of $H$.
Let $P_n$ be the path with $n$ vertices. Erd\H{o}s \cite{Er} famously asked if $\hat{R}(P_n) / n \to \infty$ and $\hat{R}(P_n) / n^2 \to 0$. Beck \cite{B2} proved that in fact, $\hat{R}(P_n) \le 900n$ (for $n$ sufficiently large). The bound $900n$ was subsequently improved in \cite{B}, \cite{Bol}, \cite{DP1}, \cite{Let} and currently rests at $74n$ as proved by Dudek and Pra{\l}at in \cite{DP2}.
As for the lower bound, it is clear that $\hat{R}(P_n)> 2n-4$ since $P_n$ has $n-1$ edges. Beck \cite{B2} proved $\hat{R}(P_n)\geq (9/4-o(1))n$, Bielak \cite{Bie} proved $\hat{R}(P_n)\geq 9n/4-3$, Bollob\'as \cite{B} proved $\hat{R}(P_n)\geq (1+\sqrt{2}-o(1))n$, and finally Dudek and Pra{\l}at \cite{DP2} proved $\hat{R}(P_n)\geq 5n/2-15/2$.
The closest thing there is to a conjecture about the precise value of $\hat{R}(P_n)$ is Bollob\'as' \cite{B} comment, ``it would not be surprising if $\hat{R}(P_n)$ turned out to be about $8n$.'' It is not known what insight led to this comment, but together with the recent flurry of activity on the upper bound, it inspired us to make a determined effort to improve the lower bound. We prove the following.
\begin{theorem}\label{thm:main-2-col}
For all $\epsilon>0$, there exists $n_0$ such that if $n\geq n_0$ and $G$ is a graph with at most $(3.75-\epsilon) n$ edges, there exists a 2-coloring of the edges of $G$ such that every monochromatic path has order less than $n$. Thus $\hat{R}(P_n) \ge (3.75 - o(1))n$.
\end{theorem}
For the general $r$-color version of the problem, the best upper bound is due to Krivelevich \cite{K} who proved $\hat{R}_r(P_n)=O(r^2\log (r) n)$ (Dudek and Pra{\l}at \cite{DP3} later gave a different proof). In fact, both \cite{K} and \cite{DP3} prove the stronger ``density version'' of the theorem: there exists a graph $G$ (a binomial random graph) with $|E(G)| = O(r^2\log (r)n)$ such that every subgraph of $G$ with at least $e(G)/r$ many edges contains a monochromatic path of order $n$. (A recent paper of Balogh, Dudek, and Li \cite{BDL} shows that the factor $r^2\log r$ cannot be improved for this stronger density version in the setting of random graphs.)
As for the lower bound, Dudek and Pra{\l}at \cite{DP2} proved that for any $r\ge 2$, $\hat{R}_r(P_n) \ge \frac{(r+3)r}{4}n - O(r^2)$ and then Krivelevich \cite{K} proved that for any $r\ge 3$ such that $r-2$ is a prime power, $\hat{R}_r(P_n) \ge (r-2)^2n - o(n)$. We improve on each of these results by proving the following.
\begin{theorem}\label{thm:main-r-col}
Let $r \ge 2$ and let $q$ be the largest prime power such that $q\leq r-1$. Then $$\hat{R}_r(P_n)\ge \max\left\{ \of{\frac{(r-1)r}{2} + 2.75-o(1)}n, (q^2-o(1))n\right\}.$$
\end{theorem}
Note that the prime number theorem guarantees that for any $\varepsilon>0$ and $r$ sufficiently large, there is a prime between $(1-\varepsilon)r$ and $r$, so for sufficiently large $r$, the second term in the maximum will dominate and we have $\hat{R}_r(P_n) \ge (r-1 - o_r(1))^2 n$. Determining whether $\hat{R}_r(P_n)=\Theta(r^2n)$ or not is perhaps the most interesting open problem regarding the size-Ramsey number of a path.
\subsection{Outline, Notation}
Our improvement in the lower bound stems from two main ideas.
1) If we can partition the graph $G$ into sets of order at most $n-1$ such that the number of edges crossing the partition is at most $n-2$, then we can color the edges inside the sets red and the edges between the sets blue so there are no monochromatic $P_n$'s. This has some similarity to the problem of determining the bisection width of a graph -- in which case a result of Alon \cite[Proposition 3.1]{A} gives good bounds on the number of crossing edges in a balanced bipartition of graphs with bounded maximum degree and at most $2n-2$ vertices. However, in our case, $G$ may not have bounded maximum degree, $G$ may have more than $2n-2$ vertices, and we don't necessarily want the partition to be balanced. Nevertheless, with some extra work, we are able to use similar methods from the bisection width problem (e.g.\ \cite{A}, \cite{KM}) in our setting.
2) From the ordinary path Ramsey problem it is known that if $G$ has at most $\frac{3n}{2}-2$ vertices, then there exists a 2-coloring of $G$ such that every monochromatic path has order less than $n$. We show that if $G$ has between roughly $3n/2$ and $5n/3$ vertices and few enough edges, then there exists a 2-coloring of $G$ such that every monochromatic path has order less than $n$. This allows us to only consider graphs with at least $5n/3$ vertices.
In Section \ref{sec:lems} we prove a number of lemmas which we will use throughout the proof. We also show how some of these lemmas imply the previously known lower bounds on the size-Ramsey number of paths.
In Section \ref{sec:2-col} we prove Theorem \ref{thm:main-2-col}.
In Section \ref{sec:r-col} we prove Theorem \ref{thm:main-r-col}. In Section \ref{sec:concl}, we list a few observations and approaches that may helpful in trying to improve the lower bounds we have provided.
If $S$ is a subset of vertices of a graph $G=(V,E)$, then $G-S$ refers to $G[V \setminus S]$. When $G$ is a graph, we write $|G|$ for $|V(G)|$. For any other notation we defer to \cite{Die}.
All logarithms are natural (base $e$) unless otherwise stated. Throughout the paper, if we refer to an \emph{$r$-coloring} of $G$, we mean an $r$-coloring of the edges of $G$.
\section{Lemmas}\label{sec:lems}
When proving a lower bound on the $r$-color size-Ramsey number of $P_n$, we are given a graph $G=(V,E)$ and we must exhibit an $r$-coloring of the edges of $G$ so that $G$ has no monochromatic paths of order $n$. It is often useful to break this into cases depending on the number of vertices of $G$. In Section \ref{sec:Nlower} we use the examples from the ordinary path Ramsey problem to determine a lower bound on $|V|$. In Section \ref{sec:Nupper} we prove a general result which allows us, when proving a lower bound on $\hat{R}_r(P_n)$, to restrict our attention to graphs with minimum degree at least $r+1$, which in turn gives us an upper bound on $|V|$. In Section \ref{sec:prune}, we prove a lemma which we use in the proof of Theorem \ref{thm:main-r-col}. In Section \ref{sec:mainlem}, we prove the main lemma of the paper needed for the proof of Theorem \ref{thm:main-2-col}. Finally, in Section \ref{extend} we show how to deal with the case when $G$ has between roughly $3n/2$ and $5n/3$ vertices.
\subsection{Examples from the ordinary path Ramsey problem}\label{sec:Nlower}
\begin{proposition}[Gerencs\'er, \ Gy\'arf\'as \cite{GG}]\label{3n/2}
If $G$ has at most $\frac{3n}{2}-2$ vertices, then there exists a 2-coloring of $G$ such that every monochromatic path has order less than $n$.
\end{proposition}
\begin{proof}
Partition $V(G)$ into two sets $X_1, X_2$ with $|X_1|\leq \frac{n}{2}-1$ and $|X_2|\leq n-1$. Color all edges incident with $X_1$ red and all edges inside $X_2$ blue. Any pair of consecutive vertices on a red path must contain at least one vertex of $X_1$. Thus the longest red path is of order at most $2|X_1|+1\leq n-1$.
\end{proof}
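A minimal Python sketch of this coloring, assuming an arbitrary fixed ordering of $V(G)$:
{\footnotesize
\begin{verbatim}
def color_edges(vertices, edges, n):
    # red for edges meeting X1 (|X1| <= n/2 - 1), blue inside X2,
    # exactly as in the proof above
    X1 = set(vertices[: n // 2 - 1])
    return {(u, v): ("red" if u in X1 or v in X1 else "blue")
            for (u, v) in edges}
\end{verbatim}
}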
\begin{proposition}[Yongqi, Yuansheng, Feng, Bingxi \cite{YYFB}]\label{3n/2_r}
Let $r\geq 3$. If $G$ has at most $2(r-1)(\frac{n}{2}-1)=(r-1)(n-2)$ vertices, then there exists an $r$-coloring of $G$ such that every monochromatic path has order less than $n$.
\end{proposition}
\begin{proof}
Partition $V(G)$ into $2r-2$ sets $X_1, X_2, \ldots, X_{2r-2}$ each of order at most $\frac{n}{2}-1$. In the following, addition is modulo $2r-2$. For $i = 1, \ldots, r-1$, color with color $i$, the edges between $X_i$ and $X_{i+1},\ldots, X_{i+r-2}$ and the edges between $X_{i+r-1}$ and $X_{i+r},\ldots, X_{i+2r-3}$.
Use color $r$ for the edges between $X_i$ and $X_{i+r-1}$ for $i=1,\ldots, r-1$. Color arbitrarily within the $X_i$'s. This coloring has no monochromatic $P_n$ in color $i$ for $i=1,\ldots, r-1$ for the same reason as in Proposition \ref{3n/2}. There is none in color $r$ since each component of color $r$ is of order less than $n$.
\end{proof}
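The color of an edge between two parts in this construction can be computed in closed form. Below is a Python sketch with 1-indexed parts; edges inside a part may be colored arbitrarily.
{\footnotesize
\begin{verbatim}
def part_color(a, b, r):
    # color of an edge between parts X_a and X_b (a != b), indices
    # taken mod m = 2r-2; offset d = r-1 gives color r, otherwise
    # the color c is determined by whether the pair hangs off X_c
    # or X_{c+r-1}
    m = 2 * r - 2
    d = (b - a) % m
    if d > r - 1:
        a, d = b, m - d  # re-read the pair from its other endpoint
    if d == r - 1:
        return r
    return (a - 1) % (r - 1) + 1
\end{verbatim}
}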
\subsection{A reduction lemma}\label{sec:Nupper}
\begin{fact}\label{Nupper}
If $G=(V,E)$ is a graph with minimum degree at least $r+1$, then $|V|\leq \frac{2|E|}{r+1}$.
\end{fact}
The following lemma shows that in order to get a lower bound on the $r$-color size-Ramsey number of $P_{n}$, we can restrict our attention to graphs $G$ with minimum degree at least $r+1$, and consequently at most $\frac{2|E|}{r+1}$ vertices. This generalizes an observation which is implicit in the proof of Beck's lower bound \cite{B2}.
\begin{lemma}\label{mindegree}
Let $r$ and $n$ be positive integers with $n\geq r+4$. If every connected graph with at most $m$ edges and minimum degree at least $r+1$ (and consequently at most $2m/(r+1)$ vertices) has an $r$-coloring such that every monochromatic path has order less than $n-2$, then every graph with at most $m$ edges has an $r$-coloring such that every monochromatic path has order less than $n$.
\end{lemma}
A \emph{star} is a tree having at most one vertex of degree at least 2. A \emph{star forest} is a vertex disjoint collection of stars. The \emph{star arboricity} of a graph $G$, denoted $sa(G)$, is the minimum number of star forests needed to partition the edge set of $G$. In order to keep this aspect of the proof self-contained, we give a short proof of the fact that the star arboricity of a graph is at most $\Delta(G)$. We note that stronger statements are known (see \cite[Theorem 1]{BFHP} for instance), but not needed for our purposes.
\begin{fact}\label{stararbor}
$sa(G)\leq \Delta(G)$
\end{fact}
\begin{proof}
Clearly this is true for $\Delta=1$. Suppose $\Delta\geq 2$ and the statement holds for all graphs $G$ with $\Delta(G)\leq \Delta-1$. Let $G$ be a graph with $\Delta(G)=\Delta$.
\begin{claim}\label{starforest}
Every graph $G$ without isolated vertices contains a star-forest $S$ such that every vertex is incident with an edge of $S$.
\end{claim}
\begin{proof}
Let $F$ be a spanning forest consisting of a spanning tree of each component of $G$. Let $z$ be a leaf of $F$ and let $y$ be the neighbor of $z$ in $F$. Let $C$ be the star consisting of $y$ and all of the neighbors of $y$ which are leaves in $F$. Note that in $F-C$, there are no isolated vertices and thus we may repeat this process until $F-C$ is empty at which point we have the desired star forest.
\end{proof}
Apply Claim \ref{starforest} to $G$ to get a star forest $S$ such that every non-isolated vertex in $G$ is incident with an edge of $S$. After deleting the edges of $S$ we have a graph $G'$ with maximum degree at most $\Delta-1$ and we may apply induction to finish the proof.
\end{proof}
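The peeling in Claim \ref{starforest} can be sketched in Python as follows, assuming the forest is given as an adjacency dictionary with no isolated vertices.
{\footnotesize
\begin{verbatim}
def star_forest_cover(forest):
    # repeatedly take a leaf z, its neighbor y, and the star C
    # consisting of y and all leaf neighbors of y, then delete C,
    # as in the claim; returns the stars as (center, leaves) pairs
    forest = {u: set(nb) for u, nb in forest.items() if nb}
    stars = []
    while forest:
        z = next(u for u, nb in forest.items() if len(nb) == 1)
        y = next(iter(forest[z]))
        leaves = [u for u in forest[y] if len(forest[u]) == 1]
        stars.append((y, leaves))
        for u in leaves + [y]:
            for v in forest.pop(u):
                if v in forest:
                    forest[v].discard(u)
        forest = {u: nb for u, nb in forest.items() if nb}  # F - C
    return stars
\end{verbatim}
}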
\begin{proof}[Proof of Lemma \ref{mindegree}]
Suppose that every connected graph with at most $m$ edges and minimum degree at least $r+1$ has an $r$-coloring such that every monochromatic path has order less than $n-2$. Let $G$ be a graph with at most $m$ edges. Let $S=\{v\in V(G): d(v)\leq r\}$. We begin by describing how to color the edges of $G-S$ so that $G-S$ contains no monochromatic paths of order $n-2$.
If $G-S$ has fewer than $n-2$ vertices, then coloring the edges of $G-S$ arbitrarily, we have an $r$-coloring of $G-S$ with no monochromatic paths of order $n-2$. So suppose $G-S$ has at least $n-2\geq r+2$ vertices. Let $v$ be a vertex in $G-S$ and suppose that $v$ has exactly $r+1-t$ neighbors in $G-S$ for some positive $t$. This means $v$ had at least $t$ neighbors in $S$, so by making $v$ adjacent to $t$ vertices in $G-S$ (each of which was previously a non-neighbor of $v$) we make $v$ have degree at least $r+1$ and the total number of edges is still at most $m$. We repeat this process for each vertex in $G-S$ which has degree less than $r+1$, updating on each step. We end up with a graph $H$ such that $G-S\subseteq H$, $H$ has at most $m$ edges, and $\delta(H)\geq r+1$. For each connected component of $H$, color the edges according to the hypothesis so that there are no monochromatic paths of order $n-2$. This implies that $G-S$ has no monochromatic paths of order $n-2$. Note that we are now done with graph $H$.
Since $\Delta(G[S])\leq r$, by Fact \ref{stararbor} we can color the edges of $G[S]$ with $r$ colors so that every color class is a star forest. The only edges we have yet to color are the edges between $S$ and $G-S$. Let $v$ be a vertex in $S$ and suppose that $v$ sends $t$ edges to $G-S$. This means that $v$ has at most $r-t$ neighbors in $S$ and is incident with edges of at most $r-t$ different colors in $S$, which means there are at least $t$ colors which are not used on $v$. Use these $t$ colors to color each of the edges from $v$ to $G-S$ so that each such edge receives a different color. After doing this for each vertex in $S$, we have colored all of the edges of $G$. Note that any monochromatic path which only uses edges from $G-S$ has order less than $n-2$, any monochromatic path which only uses edges from $G[S]$ and $[S, V(G)-S]$ has order at most 3. If a monochromatic, say color 1, path uses an edge from $[S, V(G)-S]$, then since its endpoint in $S$ is not incident with any other edges of color 1, this edge must be a pendant edge of the path (of which there are only two) and thus the longest monochromatic path in $G$ has order less than $(n-2)+2=n$.
\end{proof}
\begin{remark}
Proposition \ref{3n/2} and Lemma \ref{mindegree} imply that $\hat{R}(P_n)> \frac{9}{4}(n-2)-3$.
\end{remark}
\begin{proof}
By Lemma \ref{mindegree} we may assume that $|V|\leq \frac{2|E|}{3}\leq \frac{3}{2}(n-2)-2$ and thus we are done by Proposition \ref{3n/2}.
\end{proof}
\begin{remark}\label{r^2-1}
Proposition \ref{3n/2_r} and Lemma \ref{mindegree} imply that for $r\geq 3$, $\hat{R}_r(P_n)> \frac{r^2-1}{2}(n-4)$.
\end{remark}
\begin{proof}
By Lemma \ref{mindegree} we may assume that $|V|\leq \frac{2|E|}{r+1}\leq (r-1)(n-4)$ and thus we are done by Proposition \ref{3n/2_r}.
\end{proof}
\subsection{Pruning a tree so that no long paths remain}\label{sec:prune}
The following is a slight generalization of the lemma used in \cite{B} and \cite{DP2} to give a lower bound on the size-Ramsey number of a path.
\begin{lemma}\label{snip}
For every tree $T$ with $|V(T)|\geq \floor{n/2}$, there exists a set $E'$ of at most $\floor{\frac{|V|}{\floor{n/2}}}-1$ edges such that $T-E'$ has no paths of order $n$.
\end{lemma}
\begin{proof}
If $T$ has no path of order $n$ we are done, so choose a path of order $n$ and delete the middle edge (or one of the two middle edges if $n$ is odd). This separates $T$ into two subtrees, each with at least $\floor{n/2}$ vertices. Now repeat on each subtree and call the set of deleted edges, $E'$. When the process stops, every component of $T-E'$ has at least $\floor{n/2}$ vertices and no paths of order $n$.
Thus $T-E'$ has at most $\floor{\frac{|V|}{\floor{n/2}}}$ components, which means $|E'| \le \floor{\frac{|V|}{\floor{n/2}}}-1$.
\end{proof}
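A Python sketch of this pruning; the \texttt{networkx} package and the double-sweep computation of a longest path in a tree are implementation choices, not part of the lemma.
{\footnotesize
\begin{verbatim}
import networkx as nx

def prune(T, n):
    # while some component has a path on n vertices, delete the
    # middle edge of such a path; T is an nx.Graph forest, and the
    # set E' of deleted edges is returned
    removed = []
    comps = [T.subgraph(c).copy() for c in nx.connected_components(T)]
    while comps:
        C = comps.pop()
        # double sweep: the farthest vertex from any vertex is an
        # endpoint of a longest path of the tree C
        d = nx.single_source_shortest_path_length(C, next(iter(C)))
        a = max(d, key=d.get)
        paths = nx.single_source_shortest_path(C, a)
        P = max(paths.values(), key=len)
        if len(P) < n:
            continue
        P = P[:n]  # a path of order exactly n
        C.remove_edge(P[n // 2 - 1], P[n // 2])  # its middle edge
        removed.append((P[n // 2 - 1], P[n // 2]))
        comps += [C.subgraph(c).copy()
                  for c in nx.connected_components(C)]
    return removed
\end{verbatim}
}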
\begin{remark}
Proposition \ref{3n/2} and Lemma \ref{snip} imply that $\hat{R}(P_n)> \frac{5}{2}n-7$.
\end{remark}
\begin{proof}
Let $G=(V,E)$ be a connected graph with at most $\frac{5n}{2}-7$ edges. We may assume $G$ is connected and by Proposition \ref{3n/2}, we have $\frac{3n}{2}-1\leq |V|\leq \frac{5n}{2}-6$. We let $T$ be a spanning tree of $G$ which contains at least $\frac{3n}{2}-2$ edges. Applying Lemma \ref{snip} to $T$, we are left with a forest $F$ with at least $\frac{3n}{2}-5$ edges and no paths of order $n$, so we may color all of the edges of $F$ red. There are at most $n-2$ edges remaining in $E(G)\setminus E(F)$, all of which we may color blue.
\end{proof}
\subsection{Main lemma}\label{sec:mainlem}
We will only use the following lemma in the case where $k=1$ or $k=2$, but we state it in general here. Note that for instance when $k=1$, this says that if $G$ is a graph on $n-3<N\leq 2(n-3)$ vertices, then there is a bipartition of $V(G)$ into sets of order $n-3$ and $N-(n-3)$ such that the number of edges crossing the partition is approximately what we would get by taking a random such partition of a graph with $|E(G)|-N$ edges.
\begin{lemma}\label{partition}
Let $n\geq 4$, let $G=(V,E)$ be a graph on $N\geq n-2$ vertices, and let $k$ be a positive integer uniquely defined by $k(n-3)<N\leq (k+1)(n-3)$ where $k\leq n^{1/64}$. Let $\alpha_1=\frac{n-3}{N}$ and $\alpha_2=\frac{N-k(n-3)}{N}$. If every component of $G$ has at least $n-2$ vertices, $\Delta(G)\leq N^{1/16}$ and $|E|\leq 100N\leq 100(k+1)n$, then there exists a partition of $V$ into $k+1$ parts $V_1, \dots, V_{k+1}$ such that $|V_1|,\dots,|V_k|, |V_{k+1}|\leq n-3$ and $|V_{k+1}|\leq N-k(n-3)+N^{15/16}$ and the number of edges crossing the partition is at most $(1-k\alpha_1^2-\alpha_2^2)(|E|-N)+N^{15/16}$.
\end{lemma}
The first tool needed to prove Lemma \ref{partition} is the following fact mentioned by Alon \cite{A}, stated in general and made explicit here.
\begin{lemma}\label{treepartition}
Let $G$ be a connected graph on $p$ vertices with maximum degree $\Delta$. For any $1\le \ell < p$, we can find a collection of connected subgraphs $S_1, \ldots, S_t$ of $G$ such that
\begin{enumerate}[label=(T\arabic*)]
\item\label{t1} $V(S_1), \ldots, V(S_t)$ form a partition of $V(G)$ with $\ell < |S_i| \le 1+\Delta \ell $ for all $i\in [t-1]$ and $|S_t| \le 1 + \Delta \ell$
\item\label{t2} $\sum_{i=1}^t |E(S_i)| \ge p - t$
\item\label{t3} if $\ell = \floor{\sqrt{p}},$ then $\frac{1}{\Delta+1} \sqrt{p} \le t \le \sqrt{p}+1$
\end{enumerate}
\end{lemma}
\begin{proof}
Let $T_0$ be a rooted spanning tree of $G$ with (arbitrary) root $r$. For a rooted tree $T$ and vertex $v$, let $s(T,v)$ denote the subtree of $T$ rooted at vertex $v$ and let $C(v)$ denote the set of children of $v$. Assume $T_i$ has been defined for some $i\ge 0$ and that $r$ is still the root of $T_i$. Traverse down $T_i$ from $r$ until encountering a vertex $v$ (if one exists) such that $|s(T_i,v)| > \ell$ and $|s(T_i,u)| \le \ell$ for all $u\in C(v)$. Then $s(T_i,v)$ satisfies
\begin{align}\label{eq:t-right-size}
\ell < |s(T_i,v)| = 1 + \sum_{u\in C(v)}|s(T_i,u)| \le 1 + \Delta \ell.
\end{align}
If $v\neq r$, let $S_{i+1} = s(T_i, v)$ and $T_{i+1} = T_i - S_{i+1}$ and repeat for $i+1$. If $v=r$ or if no such vertex $v$ exists, then set $S_{i+1}=S_t = T_i$.
Each $S_i$ is connected by construction. Property \ref{t1} is satisfied by \eqref{eq:t-right-size}. Property \ref{t2} follows since each $S_i$ is connected and thus $\sum_{i=1}^t |E(G[S_i])| \geq \sum_{i=1}^t (|S_i| -1) = p - t$.
Finally, if $\ell= \floor{\sqrt{p}}$ we have
\[ (t-1)(\floor{\sqrt{p}} + 1) \le \sum_{i=1}^t|S_i| = p \le t(1+ \Delta\sqrt{p})\]
and from each of $(t-1)(\floor{\sqrt{p}} + 1) \le p$ and $p \le t(1+ \Delta\sqrt{p})\leq t\sqrt{p}(1+\Delta)$, we derive the bounds on $t$ in \ref{t3}.
\end{proof}
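A Python sketch of this peeling, assuming the rooted spanning tree is given by a \texttt{children} map with an entry (possibly empty) for every vertex.
{\footnotesize
\begin{verbatim}
def peel_subtrees(children, root, ell):
    # descend from the root to a vertex v whose subtree has more
    # than ell vertices while each child subtree has at most ell,
    # cut s(T, v) off, and repeat; the final piece (rooted at the
    # root itself) may be small, matching S_t in the proof
    removed, pieces = set(), []

    def sub(v):  # vertices of the current subtree rooted at v
        stack, out = [v], []
        while stack:
            u = stack.pop()
            out.append(u)
            stack += [w for w in children[u] if w not in removed]
        return out

    while True:
        v = root
        while True:
            big = [u for u in children[v]
                   if u not in removed and len(sub(u)) > ell]
            if not big:
                break
            v = big[0]
        piece = sub(v)
        pieces.append(piece)
        removed.update(piece)
        if v == root:
            return pieces
\end{verbatim}
}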
The next tool we need is the following concentration inequality of McDiarmid \cite{M} (see also \cite{FK}).
\begin{lemma}[McDiarmid's inequality]\label{lem:mcd} Let $Z=Z(X_1, \ldots, X_N)$ be a random variable that depends on $N$ independent random variables $X_1, \ldots, X_N$. Suppose that
\[|Z(X_1,\ldots, X_k, \ldots, X_N ) - Z(X_1, \ldots, X_k', \ldots, X_N)| \le c_k\]
for all $k=1,\ldots, N$ and $X_1,\ldots, X_N, X'_k$. Then for any $t\ge 0$ we have
\[\mathbb{P}\sqbs{Z\ge \mathbb{E}[Z] + t} \le \exp\of{-\frac{t^2}{2\sum_{k\in [N]} c_k^2 }}.
\]
\end{lemma}
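The right-hand side is easy to evaluate; a short Python sketch, with parameters matching the application in the next proof shown as an example:
{\footnotesize
\begin{verbatim}
import math

def mcdiarmid_tail(t, cs):
    # exp(-t^2 / (2 sum_k c_k^2)), the bound in the lemma
    return math.exp(-t ** 2 / (2.0 * sum(c * c for c in cs)))

# e.g. roughly sqrt(N) independent choices, each able to move the
# count by at most N**(9/16), with deviation t = N**(7/8)
N = 10 ** 6
print(mcdiarmid_tail(N ** (7 / 8),
                     [N ** (9 / 16)] * int(math.sqrt(N))))
\end{verbatim}
}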
We are now ready to prove the main lemma.
\begin{proof}[Proof of Lemma \ref{partition}]
Apply Lemma \ref{treepartition} with $\ell = \floor{\sqrt{N}}$ to partition the components of $G$ into $\frac{\sqrt{N}}{\Delta+1} \le t\le \sqrt{N}+1$ connected subgraphs $S_1, \ldots, S_t$ each of order at most $1+\Delta\sqrt{N}$. There are at least $N-(t-1)\geq N-\sqrt{N}$ edges accounted for in these subgraphs. Define $m=|E|-(N-\sqrt{N})$ to be an upper bound on the number of edges of $G$ which are not contained in these subgraphs.
We independently at random place each such connected subgraph in one of the sets $V_1, \dots, V_k, V_{k+1}$ with probabilities $\alpha_1, \dots, \alpha_1, \alpha_2$ respectively. Let $Z_i$ represent the number of vertices which land in the set $V_i$ for all $i\in [k+1]$.
Then $\mathbb{E}\sqbs{Z_1} = \dots= \mathbb{E}\sqbs{Z_k} = \alpha_1N$ and $\mathbb{E}\sqbs{Z_{k+1}} = \alpha_2N$.
Note that changing the position of one of $S_1, \ldots, S_t$ can change any of these variables by at most $1+\Delta\sqrt{N} \le N^{9/16}$.
Thus we may apply McDiarmid's inequality (Lemma \ref{lem:mcd}) and the union bound to conclude that the probability that for some $i\in [k]$, $Z_i$ exceeds $\alpha_1 N+N^{7/8}$ or $Z_{k+1}$ exceeds $\alpha_2 N + N^{7/8}$ is at most
\[(k+1)\cdot\exp\of{- \frac12\cdot\frac{N^{7/4}}{(\sqrt{N}+1)\cdot(N^{9/16})^2} } = \exp\of{-\Omega(n^{1/8})}.\]
Thus at least $1 - e^{-\Omega(n^{1/8})}$ proportion of the partitions satisfy
\begin{align}\label{eq:parts-small}
|V_1|, \dots, |V_k| \le \alpha_1 N+N^{7/8}\,\, \textrm{ and }\,\, |V_{k+1}|\le \alpha_2 N + N^{7/8}.
\end{align}
Now, by linearity of expectation, the expected number of edges $\mu$ crossing the partition satisfies
\begin{align*}
\mu\le (1-k\alpha_1^2-\alpha_2^2)m.
\end{align*}
So there is a partition $V_1, \dots, V_{k}, V_{k+1}$ satisfying \eqref{eq:parts-small} with at most $(1-k\alpha_1^2-\alpha_2^2)m+1$ edges crossing the partition; otherwise we would have
\[ (1-k\alpha_1^2-\alpha_2^2)m\ge \mu \ge (1 - e^{-\Omega(n^{1/8})})((1-k\alpha_1^2-\alpha_2^2)m+1)>(1-k\alpha_1^2-\alpha_2^2)m,\]
a contradiction.
Let $S=\{v\in V(G): d(v)\leq 800k^3\}$ and note that $|S|>(1-\frac{1}{4k^3})N$; otherwise, there are at least $\frac{N}{4k^3}$ vertices of degree greater than $800k^3$ and we have $$200N\geq 2|E| = \sum_{v\in V(G)} d(v)> \frac{N}{4k^3}\cdot 800k^3=200N,$$ a contradiction. If $|V_{k+1}|<\frac{N}{2k^3}$, we move vertices from $S\cap (V_1\cup \dots \cup V_k)$ to $V_{k+1}$ until $|V_1|, \dots, |V_k|\leq n-3$; this moves a total of at most $kN^{7/8}$ vertices. If $|V_{k+1}|\geq \frac{N}{2k^3}$, we do the following: whenever some part is too large (that is, $|V_i|>n-3$ for some $i\in [k]$, or $|V_{k+1}|>N-k(n-3)$), some part must be too small (that is, $|V_j|<n-3$ for some $j\in [k]$, or $|V_{k+1}|<N-k(n-3)$), so we select a vertex of $S$ in an oversized part and move it to a deficient part. Because of the size of $|S|$, we can repeat this process until we have $|V_1|=\dots=|V_k|=n-3$ and $|V_{k+1}|=N-k(n-3)$. The total number of vertices moved will be at most $kN^{7/8}$. In either case, at the end of this process, the number of edges crossing the partition is at most $$(1-k\alpha_1^2-\alpha_2^2)m +1+ kN^{7/8}\cdot 800k^3 < (1-k\alpha_1^2-\alpha_2^2)(|E|-N) + N^{15/16}.$$
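For the final inequality, note that $m\le |E|-N+\sqrt{N}+1$, so $(1-k\alpha_1^2-\alpha_2^2)m\le (1-k\alpha_1^2-\alpha_2^2)(|E|-N)+\sqrt{N}+1$, and $\sqrt{N}+2+800k^4N^{7/8}<N^{15/16}$ once $N$ is sufficiently large (recall that $k$ is fixed).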
\end{proof}
\subsection{Extending Proposition \ref{3n/2}}\label{extend}
The following observations extend Proposition \ref{3n/2}. We note that there is a similarity between this observation and the concept of the \emph{integrity} of a graph (see \cite{V}).
\begin{observation}\label{n/2-path}
If $G$ has a set $S$ of at most $\frac{n}{2}-1$ vertices such that every component of $G-S$ has no path of order $n$, then there exists a 2-coloring of the edges of $G$ such that every monochromatic path has order less than $n$.
\end{observation}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{fig_obs.pdf}
\caption{Coloring the edges of $G$ in Observation \ref{n/2-path}}\label{obs31}
\end{center}
\end{figure}
\begin{proof}
We color all edges incident to $S$ red and every other edge blue. Clearly there will be no blue path of order $n$. Any pair of consecutive vertices on a red path must contain at least one vertex of $S$, so a red path has order at most $2|S|+1\leq n-1$. Thus the longest red path has order less than $n$.
\end{proof}
We also note that there is a similarity between the following observation and the concept of the \emph{edge integrity} of a graph (see \cite{AHK}).
\begin{observation}\label{edgeint}
If $G$ has a subgraph $H$ such that $H$ has no path of order $n$ (in particular, if $H$ has at most $n-2$ edges) and every component of $(V(G), E(G)\setminus E(H))$ has order less than $n$, then there exists a 2-coloring of the edges of $G$ such that every monochromatic path has order less than $n$.
\end{observation}
\begin{proof}
Color the edges of $H$ with red and color the remaining edges blue.
\end{proof}
The following lemma says that if the number of vertices is not too much more than $3n/2$ and the number of edges of $G$ is small enough, we can essentially color $G$ in a way which resembles the coloring in Proposition \ref{3n/2}.
\begin{lemma}\label{close3n/2}
Let $0<\epsilon<\frac{1}{6}$, let $n$ be sufficiently large, and let $G=(V, E)$ be a graph with $\delta(G)\geq 3$ on $\frac{3}{2}(n-2)-2+\sigma n$ vertices where $0<\sigma\leq \frac{1}{6}-\epsilon$. Let $d$ be an integer such that $4\leq d\leq \min\{\frac{1/2-3\epsilon}{\sigma}+1, 100\}$ and $q$ be an integer such that $0\leq q\leq \frac{\epsilon n}{d+1}-5$. If $H = (U,E')$ is a subgraph of $G$ with $|U| = |V| - q$ and $$|E|\leq \left(\frac{3(d+1)+6\sigma}{4}-\epsilon\right)n,$$ then there exists a partition $\{X,Y,Z\}$ of $U$ such that
\begin{enumerate}
\item every vertex in $X$ has at most one neighbor in $Z$ and
\item $|Z|\leq n-3$, $|Y|\leq \frac{n-6}{2}-q$, and $|X|+|Y|\leq n-3$.
\end{enumerate}
Consequently, there exists a 2-coloring of $G$ such that every blue path has order at most $n-3$ and every red path has order at most $n-3$.
\end{lemma}
\begin{proof}
First note that there is a value of $d$ which satisfies $4\leq d\leq \frac{1/2-3\epsilon}{\sigma}+1$ since $\sigma\leq \frac{1}{6}-\epsilon$ and there is a value of $q$ that satisfies $0\leq q\leq \frac{\epsilon n}{d+1}-5$ since $d\leq 100$ and $n$ is sufficiently large.
Let $X^*=\{v\in U: d_H(v)\leq d\}$. We first show that $|X^*|$ is significantly larger than $\sigma n$. Indeed, we have
\begin{align*}
\left(\frac{3(d+1)+6\sigma}{2}-2\epsilon\right)n\geq 2|E|\ge \sum_{v\in U}d_G(v)&\geq 3|X^*|+(d+1)(|U|-|X^*|)\\
&=(d+1)|U|-(d-2)|X^*|.
\end{align*}
Rearranging and using $|U|=\frac{3}{2}(n-2)-2-q+\sigma n$ and $q\leq \frac{\epsilon n}{d+1}-5$ gives
\begin{align*}
(d-2)|X^*|&\geq (d+1)(\frac{3}{2}(n-2)-2-q+\sigma n)-\left(\frac{3(d+1)+6\sigma}{2}-2\epsilon\right)n\\
&=(d-2)\sigma n+2\epsilon n-(d+1)(q+5)\geq (d-2)\sigma n+\epsilon n,
\end{align*}
and thus
\begin{equation}\label{X*}
|X^*|\geq (\sigma +\frac{\epsilon}{d-2})n\geq \floor{(\sigma +\frac{\epsilon}{d-1})n}\geq \sigma n+1,
\end{equation}
where the last inequality holds since $d\leq 100$ and $n$ is sufficiently large.
Now let $X\subseteq X^*$ such that $|X|=\floor{(\sigma +\frac{\epsilon}{d-1})n}$, let $Y^*=N_H(X)\setminus X$, and note that
\begin{equation}\label{Y*}
|Y^*| \le d|X|.
\end{equation}
Since $N_H(X)\subseteq X\cup Y^*$ we would be done if $|Y^*| \le \frac{n-6}{2}-q$ by taking $Y=Y^*, Z = U\setminus (X\cup Y)$ (see Figure \ref{proof2i} for the coloring). We now show that if $|Y^*| > \frac{n-6}{2}-q$, then we can move at least $|Y^*|-(\frac{n-6}{2}-q)$ vertices from $Y^*$ to $Z$. We do this by showing that there exists an induced matching in the bipartite graph $H[X, Y^*]$ of size at least $|Y^*|-(\frac{n-6}{2}-q)$.
Let $Y_1=\{v\in Y^*: d_H(v, X)=1\}$ and $Y_2=\{v\in Y^*: d_H(v, X)\geq 2\}$. We note that since every vertex in $X$ sends at most $d$ edges to $Y^*$, $H[X, Y^*]$ has an induced matching of size at least $|Y_1|/d$. We have
\begin{align*}
d|X|\geq e_H(X, Y^*)\geq |Y_1|+2(|Y^*|-|Y_1|)=2|Y^*|-|Y_1|
\end{align*}
which implies
\begin{align*}\frac{|Y_1|}{d} \geq \frac{2|Y^*|}{d}-|X|= |Y^*| - \frac{d-2}{d}|Y^*| - |X|
&\stackrel{\eqref{Y*}}{\ge} |Y^*| - (d-1)|X|\\
&= |Y^*|-(d-1)\floor{(\sigma +\frac{\epsilon}{d-1})n}\\
&\geq |Y^*|-(\frac{1}{2}-2\epsilon)n\\
&\geq |Y^*|-(\frac{n-6}{2}-q),
\end{align*} where the penultimate inequality holds by the upper bound on $d$ and the final inequality holds by the upper bound on $q$.
Let $X'=\{v\in X: N(v)\cap Y_1\neq \emptyset\}$ and choose a matching from $X'$ to $Y_1$ which saturates $X'$ (which must exist by the definition of $Y_1$ and $X'$) and let $f(X')$ be the vertices in $Y_1$ which are saturated by the matching. Set $Y'=Y^*\setminus f(X')$. By the above we have $|Y'|\leq \frac{n-6}{2}-q$.
\begin{figure}[ht]
\centering
\subfloat[Subfigure 1 list of figures text][Coloring the edges of $H$ if $Y^*$ is small enough]{
\includegraphics[scale=1]{fig_proof4i.pdf}
\label{proof2i}}
\hfill
\subfloat[Subfigure 1 list of figures text][Coloring the edges of $H$ after moving vertices from $Y^*$]{
\includegraphics[scale=1]{fig_proof4.pdf}
\label{proof2ii}}
\hfill
\subfloat[Subfigure 1 list of figures text][Coloring the edges of $G$]{
\includegraphics[scale=1]{fig_proof3i.pdf}
\label{proof2iii}}
\caption{Coloring the edges in Lemma \ref{close3n/2}}
\end{figure}
Let $Y\subseteq U\setminus X$ such that $Y'\subseteq Y$ and $|Y|=\floor{\frac{n-6}{2}-q}$. Now let $Z=U\setminus (X\cup Y)$ and note that
\begin{align*}
|Z|=|U|-|X|-|Y|&= \frac{3}{2}(n-2)-2+\sigma n -q-|X|-|Y|\\
&\stackrel{\eqref{X*}}{\leq} \frac{3}{2}(n-2)-2 +\sigma n-q-(\sigma n+1)-(\frac{n-6}{2}-q) = n-3.
\end{align*}
Color all edges inside $X\cup Y$ blue, all edges inside $Z$ blue, and color all other edges red (see Figure \ref{proof2ii}). Clearly there is no blue path of order $n-2$ since $|X|+|Y|, |Z|\leq n-3$. Since $|Y|\leq \frac{n-6}{2}-q$, the longest red path in the bipartite graph induced by $[Y,Z]$ has order at most $n-5-2q$ and since the red edges from $X$ to $Z$ form a matching, the longest red path overall in $H$ has order at most $n-3-2q$.
Finally, coloring all edges from $V_0=V\setminus U$ to $X\cup Y\cup V_0$ blue and all edges from $V_0$ to $Z$ red (see Figure \ref{proof2iii}), we have that the longest red path is at most $n-3-2q+2|V_0|=n-3$, and the longest blue path has order at most $n-3$ since $|X|+|Y|+|V_0|, |Z|\leq n-3$.
\end{proof}
\section{Two colors}\label{sec:2-col}
We are now ready to give a proof of Theorem \ref{thm:main-2-col}. We note that if $G$ had bounded maximum degree, we would be able to directly apply Lemma \ref{partition} (or Lemma \ref{close3n/2}). So dealing with the ``high degree'' vertices is the main challenge which remains. We also note that the $\epsilon$ in the following proof can be taken to be as small as $\epsilon =n^{-\Theta(1)}$; however, for the sake of readability, we didn't try to optimize the value of $\epsilon$.
\begin{proof}[Proof of Theorem \ref{thm:main-2-col}]
Let $\epsilon>0$ and let $n_0$ be a sufficiently large integer (the value of which we don't explicitly compute, but we will point out which inequalities depend on $n$ being sufficiently large).
Let $G=(V,E)$ be a connected graph on $N$ vertices with at most $(3+\gamma-\epsilon)n$ edges, where $0\leq \gamma<3/2$ is to be chosen later (ultimately, we will choose $\gamma=3/4$, but we leave it undetermined for now because it helps to see where there is slack in certain parts of the proof). By Lemma \ref{mindegree} it suffices to assume that $\delta(G)\geq 3$ and thus $N\leq 2|E|/3\leq (2+2\gamma/3-2\epsilon/3)n<3(n-2)$. Since we are using Lemma \ref{mindegree}, our goal for the rest of the proof is to exhibit a 2-coloring of $G$ which has no monochromatic paths of order $n-2$. So by Proposition \ref{3n/2} we may assume that $N\ge \frac{3}{2}(n-2)-1$.
Let $V_0=\{v\in V(G): d(v)>n^{1/32}\}$. We have
$n^{1/32}|V_0|\leq 2|E|$ and thus since $n$ is sufficiently large,
\begin{equation}\label{V0}
|V_0|\leq 2(3+\gamma-\epsilon)n^{31/32}\leq \frac{\epsilon n}{200}.
\end{equation}
We say that a component $C$ of $G-V_0$ is \emph{small} if $|C|<n-2$, \emph{medium} if $n-2\leq |C|\leq \frac{3}{2}(n-2)-|V_0|-2$, and \emph{large} if $\frac{3}{2}(n-2)-|V_0|-1\leq |C|$.
Regarding the components of $G-V_0$, suppose first that there is no large component, at most one medium component $B$, and the rest of the components are small. If there is no medium component, then every component of $G-V_0$ already has order at most $n-3$ and we are done by Observation \ref{n/2-path} (with $V_0$ playing the role of the separating set). Otherwise, select a set $S\subseteq B$ such that $|V_0|+|S|=\floor{\frac{n-2}{2}}-1$. Note that every component of $G-(S\cup V_0)$ has order at most
$$\frac{3}{2}(n-2)-2-|V_0|-|S|\leq \frac{3}{2}(n-2)-2-\left(\frac{n-2}{2}-1\right)=n-3$$
and thus we are done by Observation \ref{n/2-path}.
Since $N<3(n-2)$, if there is more than one medium component, then there are exactly two; call them $B_1$ and $B_2$, and note that the remaining components $C_1, \dots, C_t$ are small. Likewise, there is at most one large component; if there is a large component $A$, then either there is also a medium component $B$ and the remaining components $C_1, \dots, C_t$ are small, or there are no medium components and the remaining components $C_1, \dots, C_t$ are small. Let $U=A$, $U=A\cup B$, or $U=B_1\cup B_2$ depending on the case and note that in any case $|U| \ge \frac{3}{2}(n-2)-1-|V_0|$.
\noindent
\textbf{Case 1} First suppose that $\frac{3}{2}(n-2)-1-|V_0|\leq |U| \leq 2(n-3)$. We parameterize this by introducing a variable $\sigma$ such that $|U|=\frac{3}{2}(n-2)-2-|V_0|+\sigma n$ where $0<\sigma< \frac{1}{2}+\frac{|V_0|}{n}$.
\textbf{Case 1.1} Suppose $0<\sigma\leq \frac{1}{6}-\epsilon$ (in this case $U$ must consist of one large component). We apply Lemma \ref{close3n/2} to $G[U\cup V_0]$ with $\epsilon$ and $\sigma$ as given, $d=\min\{\floor{\frac{1/2-3\epsilon}{\sigma}+1}, 100\}$, and $q=|V_0|$. Note that $q=|V_0|<\frac{\epsilon n}{200}\leq \frac{\epsilon n}{d+1}-5$. Also note that in order to apply the lemma we must have $(3+\gamma-\epsilon)n\leq (3.9375-\epsilon)n\leq \left(\frac{3(d+1)+6\sigma}{4}-\epsilon\right)n$ (because of this, it would suffice to choose $d=4$ if $\sigma \ge 1/8$ and $d=5$ otherwise).
The lemma provides a partition of $U$ as $\{X,Y,Z\}$. We color the edges exactly as in Lemma \ref{close3n/2} and additionally we color all edges from $V_0$ to the small components red and all edges inside the small components blue. Since $|Y|+|V_0|\leq \frac{n-6}{2}$ the longest red path still has order at most $n-3$ (that is, the extra red edges from $V_0$ to the small components do not change the properties of the coloring from Lemma \ref{close3n/2}).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{fig_proof3.pdf}
\caption{Coloring the edges of $G$ in Case 1.1}\label{proof}
\end{center}
\end{figure}
\textbf{Case 1.2}
Now we deal with the case where $\frac{1}{6}-\epsilon< \sigma< \frac{1}{2}+\frac{|V_0|}{n}$ (in this case $U$ must consist of one large component).
Apply Lemma \ref{partition} (with $k=1$) to $G[U]$ to get a bipartition of $U$ into sets $U_1, U_2$ of order at most $n-3$ such that the number of edges crossing the partition is
\begin{equation*}
\left(1-\left(\frac{1}{1+(1/2+\sigma)}\right)^2-\left(\frac{1/2+\sigma}{1+(1/2+\sigma)}\right)^2\right)(3/2+
\gamma-\sigma-\epsilon)n+N^{15/16}<(1-\frac{\epsilon}{4})n,
\end{equation*}
where the last inequality holds provided $n$ is sufficiently large and
\begin{equation}\label{gammaupper}
\gamma\leq \frac{3/4+\sigma+3\sigma^2}{1+2\sigma}.
\end{equation}
Since in this case we have $\frac{1}{6}-\epsilon< \sigma< \frac{1}{2}+\frac{|V_0|}{n}$, we have $\frac{3/4+\sigma+3\sigma^2}{1+2\sigma}\geq 0.75$ with the minimum occurring when $\sigma= \frac{1}{6}-\epsilon$ (note that the minimum of $\frac{3/4+\sigma+3\sigma^2}{1+2\sigma}$ over the entire interval $0< \sigma< \frac{1}{2}+\frac{|V_0|}{n}$ is $\sqrt{3}-1\approx 0.732$ and occurs when $\sigma=\frac{2\sqrt{3}-3}{6}\approx 0.07735$).
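For completeness, a one-line calculus check of this claim: writing $f(\sigma)=\frac{3/4+\sigma+3\sigma^2}{1+2\sigma}$, we have
\[f'(\sigma)=\frac{(1+6\sigma)(1+2\sigma)-2\left(\tfrac34+\sigma+3\sigma^2\right)}{(1+2\sigma)^2}=\frac{6\sigma^2+6\sigma-\tfrac12}{(1+2\sigma)^2},\]
whose unique positive root is $\sigma=\frac{-6+\sqrt{48}}{12}=\frac{2\sqrt{3}-3}{6}$. In particular, $f$ is increasing to the right of this point, so over the range of Case 1.2 the minimum is attained (up to the $\epsilon$-shift at the endpoint) at $\sigma=\frac16$, where $f(\frac16)=\frac{3/4+1/6+1/12}{4/3}=\frac34$.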
Now color the edges inside the sets $U_1, U_2$ blue, the edges between the sets $U_1$, $U_2$ red, the edges inside $V_0$ blue, the remaining edges incident with $V_0$ red, and the edges inside the small components blue. The blue graph has components of order at most $n-3$ (namely subsets of $U_1$, $U_2$, $V_0$, and the small components), so there is no blue path of order $n-2$. Since $|V_0|\leq \frac{\epsilon n}{200}$ and the number of edges between the sets $U_1$, $U_2$ is at most $(1-\frac{\epsilon}{4})n$, the longest red path has order at most $(1-\frac{\epsilon}{4})n+2|V_0|+1\leq n-3$ (cf.\ Observation \ref{edgeint}, thinking of $H$ as the graph induced by the red edges).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{fig_proof.pdf}
\caption{Coloring the edges of $G$ in Case 1.2 and Case 2}\label{proof4}
\end{center}
\end{figure}
\noindent
\textbf{Case 2}
Now suppose that $|U| > 2(n-3)$. We parameterize this by introducing a variable $\tau$ and assuming that $|U| = (2+\tau)(n-3)$ where $0 < \tau \le 2\gamma/3<1$
(in this case $U$ either consists of one large component, one large and one medium component, or two medium components). Apply Lemma \ref{partition} (with $k=2$) to $G[U]$ to get a tripartition of $U$ into sets $U_1, U_2, U_3$ of order at most $n-3$ such that the number of edges crossing the partition is at most
\begin{align*}
\left(1-2\left(\frac{1}{2+\tau}\right)^2-\left(\frac{\tau}{2+\tau}\right)^2\right)(1+\gamma-\tau-\epsilon)n+N^{15/16}<(1-\frac{\epsilon}{4})n,
\end{align*}
where the last inequality holds provided $n$ is sufficiently large and
\begin{equation*}
\gamma\leq \frac{1+\tau+5\tau^2/2}{1+2\tau}.
\end{equation*}
We have $\frac{1+\tau+5\tau^2/2}{1+2\tau}\geq \frac{3}{4}(\sqrt{5}-1)\approx 0.927$
with the minimum occurring when $\tau=\frac{3\sqrt{5}-5}{10}\approx 0.1708$.
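(The same calculus check applies here: the numerator of the derivative of $g(\tau)=\frac{1+\tau+5\tau^2/2}{1+2\tau}$ is $(1+5\tau)(1+2\tau)-2(1+\tau+5\tau^2/2)=5\tau^2+5\tau-1$, whose positive root is $\tau=\frac{-5+\sqrt{45}}{10}=\frac{3\sqrt{5}-5}{10}$, and evaluating $g$ there gives $\frac{3}{4}(\sqrt{5}-1)$.)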
Now color the edges as we did at the end of Case 1.2.
\end{proof}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=6]{updatedfigure.png}
\caption{The value of $c$ on the $x$-axis represents $|U|=cn$. For a given value of $c$, the curve shows the maximum number of edges $G$ can have so that our proof gives a 2-coloring of $G$ with no monochromatic $P_n$. The blue curve corresponds to Case 1.1, the red curve to Case 1.2, and the green curve to Case 2. Note that the minimum over the entire interval is $3.75$ and occurs when $c=5/3$.}\label{3.75}
\end{center}
\end{figure}
We note two things about the previous proof. We originally dealt with the case $|U|=\frac{3}{2}(n-2)-1+\sigma n$ as a whole rather than splitting into the subcases $0< \sigma\lesssim 1/6$ and $1/6\lesssim \sigma\lesssim 1/2$. Without the subcases, the bound we obtained in \eqref{gammaupper} was $\gamma\leq \sqrt{3}-1$ which gives an overall lower bound of $\hat{R}(P_n)\geq (2+\sqrt{3}-o(1))n\approx 3.732n$. So by dealing with the subcases separately, we got an improvement of about $(0.018-o(1))n$.
If one were to attempt to improve the lower bound of $(3.75-o(1))n$, a good test case would be when $|U|\approx\frac{5n}{3}$, since this corresponds to the case where $|U|=\frac{3}{2}(n-2)-1+\sigma n$ and $\sigma\approx 1/6$ which is the bottleneck of the above proof (see Figure \ref{3.75}).
\section{More than two colors} \label{sec:r-col}
The following proposition implies the first part of Theorem \ref{thm:main-r-col}. We simply use the $r=2$ case and induction.
\begin{proposition}\label{r=3}
For all $r\geq 2$ and sufficiently large $n$, if $G$ is a graph with at most $(\frac{(r-1)r}{2}+2.75-o(1))n$ edges, then there exists an $r$-coloring of the edges of $G$ such that every monochromatic path has order less than $n$.
\end{proposition}
\begin{proof}
Let $G=(V,E)$ be a connected graph on $N$ vertices (we may assume connectivity, since each component can be colored separately). For $r=2$, this holds by Theorem \ref{thm:main-2-col}. So let $r\geq 3$ and suppose the result holds for all smaller $r$. If $N\leq (r-1)(n-2)$, then we are done by Proposition \ref{3n/2_r}; so suppose $N\geq (r-1)(n-2)+1$. Let $T$ be a spanning tree of $G$ and apply Lemma \ref{snip} to get a forest $F$ with no paths of order $n$ and at least $N-2r^2$ edges. Color the edges of the forest with color $r$. The number of remaining edges is at most $(\frac{(r-1)r}{2}+2.75-o(1))n-(N-2r^2)\leq (\frac{(r-1)r}{2}+2.75-o(1))n-(r-1)n+2(r-1)+2r^2=(\frac{(r-2)(r-1)}{2}+2.75-o(1))n$ and thus we may apply induction to color the remaining edges with the remaining $r-1$ colors.
\end{proof}
\begin{remark}
The bound in Proposition \ref{r=3} is larger than the bound in Remark \ref{r^2-1} for $2\leq r\leq 6$, and for $r\geq 7$, the bound in Remark \ref{r^2-1} is larger.
\end{remark}
\begin{definition}
An \emph{affine plane of order $q$} is a $q$-uniform hypergraph on $q^2$ vertices (called points), with $q(q+1)$ edges (called lines), such that each pair of vertices is contained in exactly one edge.
\end{definition}
It is well known that an affine plane of order $q$ exists whenever $q$ is a prime power (and it is unknown whether there exists an affine plane of non-prime power order). We collect two key properties of affine planes in the following proposition.
\begin{proposition}\label{affprop}
Let $q\geq 2$ be such that there exists an affine plane of order $q$. There exists a $(q+1)$-coloring of the edges of $K_{q^2}$ such that
\begin{enumerate}
\item\label{ap1} every color class (called a parallel class) consists of a collection of $q$ vertex disjoint $K_q$'s, and
\item\label{ap2} every vertex is contained in exactly one edge of each color and the union of these $q+1$ edges is all of $V(K_{q^2})$.
\end{enumerate}
\end{proposition}
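For example, for $q=2$ an affine plane of order $2$ is just $K_4$ viewed as a $2$-uniform hypergraph: the points are $\{1,2,3,4\}$ and the $q(q+1)=6$ lines are the edges $12,34,13,24,14,23$, which split into the $q+1=3$ parallel classes $\{12,34\}$, $\{13,24\}$, $\{14,23\}$, each a perfect matching. Each point lies on exactly one line of each class, in accordance with Proposition \ref{affprop}.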
The following proposition implies the second part of Theorem \ref{thm:main-r-col}. We modify Krivelevich's proof \cite[Theorem 8]{K} in such a way that no color is ``wasted'' on the high degree vertices. This improves the lower bound from $((r-2)^2-o(1))n$ to $((r-1)^2-o(1))n$.
\begin{proposition}\label{affine}
Let $r\ge 3$ and let $q\leq r-1$ be the largest integer such that an affine plane of order $q$ exists (effectively, let $q\leq r-1$ be the largest prime power) and suppose $n$ is sufficiently large. For all graphs $G$ with at most $q^2 n - 6q^4n^{0.9} = (q^2-o(1)) n$ edges, there exists a $(q+1)$-coloring (which is in particular an $r$-coloring) of the edges of $G$ such that every monochromatic path has order less than $n$.
\end{proposition}
\begin{proof}
Let $G=(V,E)$ be a graph with $|E|\leq q^2n-6q^4n^{0.9}$. Let $V_0:=\set{v\in V(G)}{d(v) \ge n^{0.1}}$. Then $q^2 n \ge |E(G)| \ge \frac{1}{2}|V_0|n^{0.1}$ implies that
$|V_0| \le 2q^2n^{0.9}$.
Now randomly partition $V\setminus V_0$ into $q^2$ parts $V_1, \ldots, V_{q^2}$ by placing each vertex into one of these sets independently with probability $1/q^2$.
Consider an affine plane $A_{q}$ of order $q$ on point set $[q^2]$. For each edge $e$ in $G[V\setminus V_0]$, we assign color $i$ to $e$ if the endpoints of $e$ are in sets $V_x$ and $V_y$ where the unique line containing $x$ and $y$ in $A_{q}$ is in the $i$'th parallel class of $A_{q}$. We color $e$ arbitrarily if both of its endpoints are in $V_x$ for some $x$.
For a line $L$ of $A_{q}$, define the random variable $X_L := |E\of{\bigcup_{x\in L}V_x}|$.
Then \[\mathbb{E}\sqbs{X_L}\le \frac{1}{q^2}\cdot |E(G)| \leq n - 6q^2n^{0.9}.\]
Since every vertex of $V\setminus V_0$ has degree at most $n^{0.1}$, we have that moving any one vertex from $V_x$ to $V_y$ can change $X_L$ by at most $n^{0.1}.$ Thus we may apply McDiarmid's inequality (Lemma \ref{lem:mcd}) with $c_k = n^{0.1}$ for all $k$ to conclude that
\[\mathbb{P}\sqbs{X_L \ge n-5q^2n^{0.9}} \le \exp\of{-\frac{(q^2n^{0.9})^2}{2|V\setminus V_0|\cdot (n^{0.1})^2}} = \exp\of{-\Omega(n^{0.6})},\]
where we used $|V\setminus V_0|\leq |E|\leq q^2n$ in the last step. Thus, taking a union bound over all $(q+1)q$ lines $L$, we conclude that there exists a partition of $V\setminus V_0$ in which at most $n - 5q^2n^{0.9}$ edges lie inside $\bigcup_{x\in L}V_x$ for all lines $L$.
Suppose $V_1, \ldots, V_{q^2}$ is such a partition.
Let $L_1, \dots, L_{q+1}$ be the lines of $A_q$ incident with the point $1$, one from each parallel class (which is possible by Proposition \ref{affprop}(ii)). Note that for all $j\in [q^2]\setminus \{1\}$, $V_j$ is contained in $\bigcup_{x\in L_i}V_x$ for precisely one $i\in [q+1]$ (namely, for the unique line $L_i$ through $1$ and $j$). For each $i\in [q+1]$, we color the edges from $V_0$ to $\bigcup_{x\in L_i}V_x$ with color $i$ (coloring the edges from $V_0$ to $V_1$ arbitrarily). Now every edge from $V_0$ to $V\setminus V_0$ has been colored and for each color $i\in [q+1]$, $V_0$ sends edges of color $i$ only to $\bigcup_{x\in L_i}V_x$.
Any path contained in $V_0\cup \bigcup_{x\in L_i}V_x$ has order at most \[\abs{E\of{\bigcup_{x\in L_i}V_x}} + 2|V_0| + 1 \le n - 5q^2n^{0.9} + 4q^2n^{0.9} + 1 < n.\]
The sets $\bigcup_{x\in L}V_x$ where $L$ does not contain point $1$ still contain fewer than $n-1$ edges and thus have no path of order $n$.
\end{proof}
\begin{remark}
The bound in Proposition \ref{affine} is larger than the bounds in Proposition \ref{r=3} and Remark \ref{r^2-1} for all $r\geq 4$.
\end{remark}
\section{Additional observations and conclusion}\label{sec:concl}
In this section we collect a few additional thoughts, none of which fits into the main thread of the paper. The four observations below quantify the intuitive notion that if $G$ is a graph having the property that every 2-coloring of the edges of $G$ contains a monochromatic $P_n$, then $G$ must be ``expansive'' in some sense.
For a graph $G=(V,E)$, let $S_V$ be the set of permutations of $V$. The \emph{bandwidth} $\varphi(G)$ of $G$ is defined as \[\varphi(G):=\min_{f\in S_V}\max_{uv\in E}|f(u) - f(v)|.\]
\begin{observation}
If $\varphi(G) \le \frac{n}{2}-1$, then there is a 2-coloring of the edges of $G$ such that every monochromatic path has order less than $n$.
\end{observation}
\begin{proof}
Order the vertex set to witness the minimum bandwidth, then split the vertices along this order into consecutive sets $V_1, \dots, V_t$, with $|V_1|=\dots=|V_{t-1}|=\floor{\frac{n}{2}-1}$ and $|V_t|\leq n-1$. For all odd $i\in [t]$, color the edges from $V_i$ to $V_i\cup V_{i+1}$ red, and for all even $j\in [t]$, color the edges from $V_j$ to $V_j\cup V_{j+1}$ blue. Since the bandwidth is at most $\frac{n}{2}-1$, every edge lies within some $V_i$ or between two consecutive sets, so every edge receives a color. A monochromatic path is confined to two consecutive sets, and every second vertex of it lies in a set of order at most $\floor{\frac{n}{2}-1}$ (unless the path lies entirely inside $V_t$), so it has order at most $n-1$.
\end{proof}
A \emph{depth first search} (DFS) tree (or \emph{normal} tree) $T$ rooted at $x$ in a graph $G$ is a subtree of $G$ such that for all $uv\in E(G)$ with $u,v \in V(T)$, either $u$ is on the $x-v$ path in $T$ or $v$ is on the $x-u$ path in $T$.
For a connected subgraph $H$ of a graph $G$ and vertices $u,v\in V(H)$, let $d_H(u,v)$ be the length of the shortest path between $u$ and $v$ in $H$. A \emph{breadth first search} (BFS) tree $T$ rooted at $x$ is a subtree of $G$ such that for all $v\in V(T)$, $d_T(x, v)=d_G(x, v)$. Such a tree has the property that for all $uv\in E(G)$ with $u,v\in V(T)$, $|d_T(x,u)-d_T(x,v)|\leq 1$. The vertices at each fixed distance from the root are called the \emph{levels} of $T$.
It is well known that for every connected graph $G$ and every vertex $x\in V(G)$, there exists a spanning DFS tree $T$ rooted at $x$ and a spanning BFS tree rooted at $x$.
Using the notation for rooted trees from the proof of Lemma \ref{treepartition}, we have the following observation.
\begin{observation}
Let $G$ be a connected graph. If there exists a vertex $x$ and a DFS tree $T$ rooted at $x$ so that every child $y\in C(x)$ satisfies $|s(T,y)|\le \frac{5n}{4}-2$, then there exists a 2-coloring of the edges of $G$ such that every monochromatic path has order less than $n$.
\end{observation}
\begin{proof}
For each subtree $s(T,y)$ where $y\in C(x)$, we partition the vertices of $s(T,y)$ into sets $A_y$ and $B_y$ where $|A_y|\le \frac n4 -1$, $y\in A_y$ and $|B_y|\le n-1$ (this is possible since $|s(T,y)|\le \frac{5n}{4}-2$). Let $A=\{x\}\cup \bigcup_{y\in C(x)}A_y$ and $B=\bigcup_{y\in C(x)}B_y$. We color the edges of $G$ within $B$ blue and the edges from $A$ to $A\cup B$ red. Note that this accounts for all the edges of $G$ since no edges go between $s(T,y)$ and $s(T,z)$ for $y,z\in C(x)$, $y\neq z$. Clearly there are no blue paths of order $n$. Any red path may intersect at most two of the subtrees $s(T,y)$, $s(T,z)$ for $y,z\in C(x)$, $y\neq z$, and any such path must pass through $x$. For all $y\in C(x)$, the longest possible red path in $G[A_y\cup B_y]$ is of order at most $\frac n2 -1$ (every second vertex of a red path lies in $A_y$), and so the longest red path in $G$ is of order at most $n-1$.
\end{proof}
\begin{observation}
Let $G$ be a connected graph. If there exists a vertex $x$ and a BFS tree $T$ rooted at $x$ such that every pair of consecutive levels of $T$ have fewer than $n$ vertices, then there exists a 2-coloring of the edges of $G$ such that every monochromatic path has order less than $n$.
\end{observation}
\begin{proof}
For all $i\geq 0$, let $D_i=\{v: d_T(x, v)=i\}$. For all $j\geq 0$, color the edges from $D_{2j}$ to $D_{2j}\cup D_{2j+1}$ red and the edges from $D_{2j+1}$ to $D_{2j+1}\cup D_{2j+2}$ blue. By the property of BFS trees, this accounts for every edge in $G$. Since every two consecutive levels contain fewer than $n$ vertices, there are no monochromatic paths of order $n$.
\end{proof}
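The coloring in this proof is also easy to carry out algorithmically; the following is a minimal Python sketch (our own illustration), assuming a connected graph stored as a dictionary of neighbor lists with comparable vertex labels.
\begin{verbatim}
from collections import deque

def bfs_level_coloring(adj, x):
    """2-color the edges so that red edges touch only levels (2j, 2j+1)
    and blue edges touch only levels (2j+1, 2j+2), as in the proof."""
    dist = {x: 0}                      # BFS distances from the root x
    queue = deque([x])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    color = {}
    for v in adj:
        for u in adj[v]:
            if v < u:                  # visit each undirected edge once
                lower = min(dist[v], dist[u])
                color[(v, u)] = 'red' if lower % 2 == 0 else 'blue'
    return color

# Example: on the path 0-1-2-3 rooted at 0,
# bfs_level_coloring({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, 0)
# -> {(0, 1): 'red', (1, 2): 'blue', (2, 3): 'red'}
\end{verbatim}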
The following observation was inspired by Figure 2 in both \cite{BKLL1} and \cite{BKLL2}.
\begin{observation}
If $G$ is a graph on $N$ vertices with $\alpha(G)\geq N-(n-2)$, then there exists a 2-coloring of the edges of $G$ such that every monochromatic path has order less than $n$.
\end{observation}
\begin{proof}
Let $S$ be an independent set of order at least $N-(n-2)$ and partition the vertices of $V(G)\setminus S$ into disjoint sets $X,Y$ with $|X|, |Y|\leq \frac{n}{2}-1$. Color all edges incident with $X$ red and color all edges incident with $Y$ blue (so edges between $X$ and $Y$ can be either color). The longest monochromatic path has order at most $2(\frac{n}{2}-1)+1=n-1$.
\end{proof}
Finally, we end with the following question which relates to the upper bound on the size-Ramsey number of a path.
\begin{question}
What is the order of the largest monochromatic path one can guarantee in an arbitrary 2-coloring of the edges of a $d$-regular graph on $N$ vertices?
\end{question}
For instance, suppose $d=5$ and the answer were $N/28$. This would imply that 5-regular graphs on $28n$ vertices (having $70n$ edges) have a 2-coloring with no monochromatic $P_n$. In other words, 5-regular graphs could never improve the current best upper bound $\hat{R}(P_n)\leq 74n$ of \cite{DP2}.